Gemma 2B Instruct
gemma-2b-it
About
Gemma 2B Instruct is a lightweight language model developed by Google, designed to balance performance and accessibility with its 2 billion parameters. Built from the same research and technology as the Gemini family, it handles tasks such as text generation, code interpretation, and mathematical problem-solving. It uses a transformer decoder architecture with multi-query attention, RoPE position embeddings, GeGLU activations, and RMSNorm. Trained on approximately 6 trillion tokens spanning web documents, code, and mathematical content, it is instruction-tuned with SFT and RLHF. Its lightweight design permits deployment on consumer-grade hardware, and it is released as an open model optimized for dialogue applications. Despite these capabilities, limitations include potential biases, factual inaccuracies, and difficulty with complex reasoning.
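Of the architectural components listed above, RMSNorm is simple enough to illustrate inline. The following is a minimal sketch (not Gemma's actual implementation): activations are scaled by the reciprocal of their root-mean-square and multiplied by a learned weight, with no mean subtraction or bias term.

```python
import math

def rms_norm(x, weight, eps=1e-6):
    """Minimal RMSNorm sketch: scale by reciprocal RMS, then
    apply a learned per-dimension weight. `eps` guards against
    division by zero."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms * w for v, w in zip(x, weight)]

# With unit weights, the normalized vector has mean square ~1:
out = rms_norm([3.0, 4.0], [1.0, 1.0])
```

Compared with LayerNorm, dropping the mean-centering step saves computation while preserving the scale invariance that stabilizes training.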
Gemma 2B Instruct has a 2K-token context window.
Gemma 2B Instruct is priced at $0.04 per 1M input tokens and $0.12 per 1M output tokens.
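At those rates, the cost of a request is straightforward to estimate. A small sketch, using the headline prices above (the function name and token counts are illustrative):

```python
# Headline rates: $0.04 per 1M input tokens, $0.12 per 1M output tokens.
INPUT_RATE = 0.04 / 1_000_000
OUTPUT_RATE = 0.12 / 1_000_000

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 1,500-token prompt with a 500-token completion:
print(f"${estimate_cost(1_500, 500):.6f}")  # → $0.000120
```

Since output tokens cost 3x input tokens here, completion length dominates the bill for generation-heavy workloads.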
Providers (7)

| Provider | Input (per 1M) | Output (per 1M) | Type |
|---|---|---|---|
| Together AI | $0.10 | $0.10 | Serverless |
| GCP Vertex AI | $0.04 | $0.12 | Serverless |
| Cloudflare Workers AI | — | — | Serverless |
| NVIDIA NIM | — | — | Provisioned |
| Alibaba Cloud PAI-EAS | — | — | Serverless |
| Fireworks AI | $0.10 | $0.10 | Serverless |
| Replicate API | $0.05 | $0.25 | Serverless |