Llama 3 70B Instruct
llama3-70b-instruct
About
The Llama 3 70B Instruct model is a large language model with 70 billion parameters, released by Meta on April 18, 2024. It is an instruction-tuned variant built on an auto-regressive transformer architecture and optimized for conversational applications. The model excels at following instructions and engaging in dialogue, having been trained on over 15 trillion tokens with a December 2023 knowledge cutoff. It performs strongly on industry benchmarks, scoring 82.0 on MMLU (5-shot). It also incorporates extensive safety measures and alignment optimizations, including RLHF, to improve helpfulness and reduce harmful content generation. For more details, visit the model's Hugging Face page [1].
Llama 3 70B Instruct has an 8K-token context window.
Llama 3 70B Instruct pricing starts at $0.40 per 1M input tokens and $0.40 per 1M output tokens; rates vary by provider (see the table below).
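Several of the serverless providers listed below expose this model behind OpenAI-compatible chat endpoints. As a minimal sketch (the model identifier shown is the common Hugging Face-style ID and is an assumption; the exact name and endpoint vary by provider), a request body might be built like this:

```python
import json

# Sketch of an OpenAI-compatible chat-completions request body.
# NOTE: the model ID below is an assumption (Hugging Face-style naming);
# check your provider's documentation for the exact identifier.
payload = {
    "model": "meta-llama/Meta-Llama-3-70B-Instruct",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Llama 3 release in one sentence."},
    ],
    # Completion budget: prompt + completion must fit the 8K-token context window.
    "max_tokens": 512,
    "temperature": 0.7,
}

body = json.dumps(payload)
print(len(body) > 0)  # serialized request body ready to POST
```

The same payload shape works across most OpenAI-compatible providers; only the base URL, API key, and model name change.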
Providers (18)
| Provider | Input (per 1M) | Output (per 1M) | Type |
|---|---|---|---|
| GCP Vertex AI | $1.20 | $3.60 | Serverless |
| AWS Bedrock | $2.65 | $3.50 | Serverless |
| Microsoft Foundry | $3.78 | $11.34 | Serverless, Provisioned |
| NVIDIA NIM | — | — | Provisioned |
| DeepInfra | $0.45 | $0.65 | Serverless |
| OctoAI API | $0.90 | $0.90 | Serverless |
| Databricks Foundation Model Serving | $1.00 | $3.00 | Serverless |
| Fireworks AI | $0.90 | $0.90 | Serverless |
| Baseten API | — | — | Serverless |
| Lepton AI API | $0.80 | $0.80 | Serverless |
| OCI Generative AI | — | — | Serverless |
| Together AI | $0.88 | $0.88 | Serverless |
| Perplexity Labs | $1.00 | $1.00 | Serverless |
| IBM watsonx | $1.80 | $1.80 | Serverless |
| Scale AI GenAI Platform | — | — | Serverless |
| Hyperbolic AI Inference | $0.40 | $0.40 | Serverless |
| OpenRouter | $0.51 | $0.74 | Serverless |
| Replicate API | $0.65 | $2.75 | Serverless |
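The per-1M-token rates above translate directly into per-request cost. A minimal sketch (the helper name is hypothetical; DeepInfra's listed rates are used purely as an example):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_per_m: float, output_per_m: float) -> float:
    """Estimate a single request's cost in USD from per-1M-token rates."""
    return (input_tokens * input_per_m + output_tokens * output_per_m) / 1_000_000

# Example: DeepInfra's listed rates ($0.45 in / $0.65 out per 1M tokens)
# for a request with 6,000 prompt tokens and 1,000 completion tokens.
cost = estimate_cost(6_000, 1_000, 0.45, 0.65)
print(f"${cost:.6f}")  # → $0.003350
```

Swapping in another provider's rates from the table gives its equivalent per-request cost.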
Benchmark Scores (4)
| Benchmark | Score | Setting | Source |
|---|---|---|---|
| HumanEval | 72.6 | pass@1 | Open LLM Leaderboard |
| MMLU (Massive Multitask Language Understanding) | 82.0 | 5-shot | Open LLM Leaderboard |
| IFEval (Instruction-Following Evaluation) | 77.8 | v2 | Open LLM Leaderboard |
| MMLU-Pro | 57.4 | — | MMLU-Pro Leaderboard (TIGER-Lab) |