LLM Reference

Models on Bitdeer AI

18 models available · Bitdeer Technologies Group

| Model | Input (per 1M) | Output (per 1M) | Context |
|---|---|---|---|
| Gemma 2 27B | $0.08 | $0.24 | 8K |
| Gemma 2 9B | $0.08 | $0.24 | 8K |
| DeepSeek R1 | $0.10 | $0.30 | 128K |
| DeepSeek V3 | $0.10 | $0.30 | 64K |
| Qwen2.5 0.5B | $0.12 | $0.36 | 128K |
| Qwen2.5 1.5B | $0.12 | $0.36 | 128K |
| Qwen2.5 14B | $0.12 | $0.36 | 128K |
| Qwen2.5 32B | $0.12 | $0.36 | 128K |
| Qwen2.5 7B | $0.12 | $0.36 | 128K |
| GLM-4 9B | $0.14 | $0.42 | — |
| InternLM 20B | $0.14 | $0.42 | — |
| InternLM 7B | $0.14 | $0.42 | — |
| Llama 3.2 11B Vision Instruct | $0.15 | $0.45 | 128K |
| Llama 3.2 1B Instruct | $0.15 | $0.45 | 128K |
| Llama 3.2 90B Vision Instruct | $0.15 | $0.45 | 128K |
| Mistral NeMo (2407) | $0.18 | $0.54 | 128K |
| Mixtral 8x7B | $0.18 | $0.54 | 32K |
| Qwen2.5 72B | $0.20 | $0.60 | 128K |

Pricing Overview

Cheapest input price: $0.08 per 1M tokens
Most expensive input price: $0.20 per 1M tokens
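A quick sketch of how the per-1M-token prices above translate into per-request cost. The prices used here are from the table (Qwen2.5 72B: $0.20 input, $0.60 output); the function name and token counts are illustrative.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    """Return the USD cost of one request given per-1M-token prices."""
    return (input_tokens * input_per_m + output_tokens * output_per_m) / 1_000_000

# Example: 10K prompt tokens + 2K completion tokens on Qwen2.5 72B.
cost = request_cost(10_000, 2_000, 0.20, 0.60)
print(f"${cost:.4f}")  # → $0.0032
```

Output tokens are billed at roughly 3× the input rate across this lineup, so completion-heavy workloads dominate the bill.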

About Bitdeer AI

Bitdeer AI Cloud offers both serverless (pay-per-use) and dedicated (reserved GPU) deployment options for LLM inference. The platform provides OpenAI-compatible API endpoints and supports fine-tuning on dedicated instances. Models are hosted on NVIDIA H100/H200/A100 GPUs with automatic scaling.
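Because the endpoints are OpenAI-compatible, a request can be built as a standard chat-completions payload. The sketch below uses only the Python standard library; the base URL and model identifier are placeholders, not documented Bitdeer AI values.

```python
import json
import urllib.request

# NOTE: base URL and model id are illustrative assumptions —
# substitute your provider's actual endpoint and model name.
BASE_URL = "https://api.example.com/v1"

payload = {
    "model": "qwen2.5-72b-instruct",  # hypothetical model id
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
    "max_tokens": 256,
}

# Standard OpenAI-style POST to /chat/completions with a bearer token.
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the request; omitted here
# since the endpoint above is a placeholder.
```

Since the wire format matches OpenAI's, existing OpenAI client SDKs should also work by pointing their base URL at the provider's endpoint.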
