LLM Reference

Gemma 2 27B Instruct

gemma-2-27b-it

Open Source

About

Gemma 2 27B Instruct is a large language model from Google that performs well on text generation, question answering, summarization, and reasoning tasks. It uses a decoder-only transformer architecture with 27 billion parameters and supports a context length of up to 8,192 tokens. The model incorporates Grouped Query Attention and Sliding Window Attention to improve efficiency when handling long inputs. The instruction-tuned variant is designed for conversational use, and the model was trained with knowledge distillation techniques to improve performance. Gemma 2 27B Instruct is openly available under the Gemma license, encouraging broader innovation in AI applications.
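A minimal sketch of the sliding-window attention idea mentioned above: some layers restrict each query position to attend only to recent key positions rather than the full context. The window size used here is illustrative (Gemma 2's technical report describes a 4,096-token local window on alternating layers, which is an assumption not stated on this page).

```python
# Sketch of a causal sliding-window attention mask. Each query position q
# may attend to key position k only when k <= q (causal) and k is within
# the last `window` positions (q - k < window). Window size is illustrative.

def sliding_window_mask(seq_len: int, window: int) -> list[list[bool]]:
    """Return a seq_len x seq_len boolean mask; mask[q][k] is True when
    query position q is allowed to attend to key position k."""
    return [
        [k <= q and q - k < window for k in range(seq_len)]
        for q in range(seq_len)
    ]

# With window=2, position 3 can attend only to positions 2 and 3.
mask = sliding_window_mask(seq_len=4, window=2)
```

Restricting attention this way keeps per-layer cost proportional to the window size rather than the full sequence length, which is why it helps with long inputs.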

Gemma 2 27B Instruct has an 8K-token context window.

Gemma 2 27B Instruct is priced at $0.25 per 1M input tokens and $0.75 per 1M output tokens.
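The listed rates ($0.25/1M input, $0.75/1M output; individual providers in the table below charge different rates) translate to per-request cost as a simple weighted sum:

```python
# Cost arithmetic for the listed rates. These constants mirror the page's
# headline pricing; swap in a specific provider's rates as needed.

INPUT_PER_M = 0.25   # USD per 1M input tokens
OUTPUT_PER_M = 0.75  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one request at the listed per-1M-token rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 6,000-token prompt with a 1,000-token completion:
cost = request_cost(6_000, 1_000)  # 0.0015 + 0.00075 = 0.00225 USD
```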

Capabilities

Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution

Providers (5)

Provider | Input (per 1M) | Output (per 1M) | Type
NVIDIA NIM | — | — | Provisioned
OpenRouter | $0.65 | $0.65 | Serverless
Fireworks AI | $0.90 | $0.90 | Serverless
Arcee AI | $0.25 | $0.75 | Serverless
Replicate API | $0.40 | $0.40 | Serverless

Benchmark Scores (1)

Benchmark | Score | Version | Source
Massive Multitask Language Understanding | 82.3 | 5-shot | Open LLM Leaderboard


Specifications

Family: Gemma 2
Released: 2024-06-27
Parameters: 27B
Context: 8K
Architecture: Decoder Only
Specialization: general
Training: finetuned

Created by

Google DeepMind

Pioneering artificial intelligence research.

London, United Kingdom
Founded 2014