Llama 4 Scout 17B-16E Instruct
llama-4-scout-17b-16e-instruct
Open Source
About
Meta's Llama 4 Scout is a mixture-of-experts model with 17 billion active parameters routed across 16 experts. It is optimized for efficient inference in edge and cloud environments and offers strong multi-turn conversation capabilities. Available on Cloudflare Workers AI.
Llama 4 Scout 17B-16E Instruct has a 328K-token context window.
Llama 4 Scout 17B-16E Instruct pricing: input tokens at $0.08/1M, output tokens at $0.30/1M.
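At these rates, a request's cost is (input tokens ÷ 1M × $0.08) + (output tokens ÷ 1M × $0.30). A minimal sketch using the rates listed above (the token counts are illustrative):

```python
# Estimate per-request cost at Llama 4 Scout's listed base rates.
INPUT_RATE = 0.08   # USD per 1M input tokens
OUTPUT_RATE = 0.30  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens / 1_000_000 * INPUT_RATE
            + output_tokens / 1_000_000 * OUTPUT_RATE)

# e.g. a job with 1M input tokens and 200K output tokens:
print(round(estimate_cost(1_000_000, 200_000), 4))  # → 0.14
```

Note that per-provider rates in the table below differ from these base rates, so substitute the rates of whichever provider you use.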
Capabilities
Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution, Prompt Caching, Batch API, Audio, Fine-tuning
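Most of the providers listed below expose the model through an OpenAI-compatible chat-completions API, so the function-calling capability is exercised by attaching a `tools` array to the request. A minimal sketch of constructing such a payload; the model id string and the `get_weather` tool are illustrative assumptions, so check your provider's docs for the exact model identifier:

```python
import json

def build_tool_call_payload(user_message: str) -> dict:
    """Build an OpenAI-style chat-completions payload that advertises one tool.

    The model id and the example tool schema below are assumptions for
    illustration, not taken from any specific provider's documentation.
    """
    return {
        "model": "llama-4-scout-17b-16e-instruct",
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }

payload = build_tool_call_payload("What's the weather in Lisbon?")
print(json.dumps(payload, indent=2))
```

When the model decides to call the tool, the response contains a `tool_calls` entry whose arguments you parse and execute before sending the result back in a follow-up message.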
Providers(8)
| Provider | Input (per 1M) | Output (per 1M) | Type |
|---|---|---|---|
| OpenRouter | $0.08 | $0.30 | Serverless |
| Together AI | — | — | Serverless |
| Fireworks AI | — | — | Serverless |
| DeepInfra | $0.08 | $0.30 | Serverless |
| GCP Vertex AI | $0.20 | $0.65 | Serverless |
| NVIDIA NIM | — | — | Serverless |
| GroqCloud | $0.11 | $0.34 | Serverless |
| AWS Bedrock | $0.17 | $0.22 | Serverless |
Benchmark Scores(1)
| Benchmark | Score | Version | Source |
|---|---|---|---|
| τ-bench | 62.3 | τ-bench | https://taubench.com/ |
Rankings
Compare
Llama 4 Scout 17B-16E Instruct vs GPT-4o Mini (07-18)
Llama 4 Scout 17B-16E Instruct vs Claude 3.5 Haiku
Llama 4 Scout 17B-16E Instruct vs Gemini 2.5 Flash
Llama 4 Scout 17B-16E Instruct vs Mistral Small 3.1 24B Instruct
Llama 4 Scout 17B-16E Instruct vs Llama 4 Maverick 17B Instruct FP8