Claude Haiku 4.5
claude-haiku-4-5
Claude Haiku 4.5 is worth evaluating for coding, RAG, and agents when its provider route and context window match the workload.
Decision context: Coding task fit, 7 tracked provider routes, and research from 2026-04-19.
Use it for
- Teams evaluating coding, RAG, and agents
- Workloads that can use a 200K context window
- Buyers comparing 7 tracked provider routes
Do not use it for
- Workloads where another current model has stronger sourced task evidence
Cheapest output
$4.00
AWS Bedrock per 1M tokens
Provider routes
7
Tracked API hosts
Quality / dollar
Grade D
Ranked by benchmark score divided by cheapest output price
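The quality/dollar ranking described above can be sketched as a simple ratio. This is a minimal illustration, not the site's actual scoring code; the choice of SWE-bench Verified (73.3) as the benchmark score is an assumption for the example.

```python
# Sketch of the quality/dollar metric: benchmark score divided by the
# cheapest output price per 1M tokens. Which benchmark feeds the grade
# is an assumption here.
def quality_per_dollar(benchmark_score: float, cheapest_output_usd_per_1m: float) -> float:
    """Higher is better: benchmark points per output dollar."""
    return benchmark_score / cheapest_output_usd_per_1m

# Using the SWE-bench Verified score (73.3) and the cheapest output
# price from the ladder ($4.00 on AWS Bedrock):
ratio = quality_per_dollar(73.3, 4.00)
print(ratio)
```

Models are then ranked on this ratio, so a cheaper route can outrank a slightly higher-scoring but pricier model.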
Freshness
2026-04-19
Researched 26d ago
Top use-case fit
Coding
Q/$ grade D; 1 relevant benchmark in the decision map.
RAG
Included by capability and metadata signals in the decision map.
Agents
Q/$ grade C; 2 relevant benchmarks in the decision map.
Provider price ladder
Compare all 7
| Provider | Input / 1M | Output / 1M | Batch in / out | Route |
|---|---|---|---|---|
| AWS Bedrock | $0.800 | $4.00 | - | Serverless |
| GCP Vertex AI | $0.800 | $4.00 | - | Serverless |
| Anthropic | $1.00 | $5.00 | $0.500 / $2.50 | Serverless |
| OpenRouter | $1.00 | $5.00 | - | Serverless |
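Picking the cheapest route from the ladder above is a one-line reduction. The dict shape below is an assumption for illustration, not the site's data model; prices are the published per-1M-token rates from the table.

```python
# Minimal sketch: select the provider with the lowest output price
# from the ladder above. Ties (Bedrock vs. Vertex AI) resolve to the
# first entry, matching the table order.
routes = {
    "AWS Bedrock":   {"input": 0.80, "output": 4.00},
    "GCP Vertex AI": {"input": 0.80, "output": 4.00},
    "Anthropic":     {"input": 1.00, "output": 5.00},
    "OpenRouter":    {"input": 1.00, "output": 5.00},
}

cheapest = min(routes, key=lambda r: routes[r]["output"])
print(cheapest, routes[cheapest]["output"])  # AWS Bedrock 4.0
```

Note the ladder ranks by output price because generation tokens dominate cost for most chat and agent workloads; input-heavy RAG workloads may weight the input column instead.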
Benchmark peer bars for Coding
Migration checks
No linked migration route is available for this model yet.
About
Claude Haiku 4.5 is available on AWS Bedrock.
Claude Haiku 4.5 has a 200K-token context window.
Claude Haiku 4.5 prices input tokens at $0.80/1M and output tokens at $4.00/1M on its cheapest route.
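At those Bedrock rates ($0.80 input / $4.00 output per 1M tokens), estimating a bill is straightforward. The token volumes below are made-up assumptions for illustration only.

```python
# Hedged cost-estimate sketch at the rates quoted above.
INPUT_USD_PER_1M = 0.80
OUTPUT_USD_PER_1M = 4.00

def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Total USD for a month of usage at the per-1M-token rates."""
    return (input_tokens / 1_000_000) * INPUT_USD_PER_1M \
         + (output_tokens / 1_000_000) * OUTPUT_USD_PER_1M

# Example workload (assumed): 50M input + 10M output tokens per month.
cost = monthly_cost(50_000_000, 10_000_000)
print(f"${cost:.2f}")  # $80.00
```

Because output is 5x the input rate here, trimming generation length usually moves the bill more than trimming prompts.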
Capabilities
Benchmark Scores (3)
| Benchmark | Score | Version | Source |
|---|---|---|---|
| BFCL | 68.7 | v4 | https://gorilla.cs.berkeley.edu/leaderboard.html |
| SWE-bench Verified | 73.3 | SWE-bench Verified | https://www.swebench.com/verified.html |
| MultiChallenge | 50.5 | MultiChallenge | https://labs.scale.com/leaderboard/multichallenge |
Rankings
Compare
All comparisons →
Specifications
Created by
Anthropic, developing safe and ethical AI systems.