LLM Reference

Llama 3 70B

llama3-70b

Open Source

About

The Llama 3 70B model is a state-of-the-art large language model with 70 billion parameters, released by Meta on April 18, 2024. It is based on an auto-regressive transformer architecture and has been optimized for dialogue applications using supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF). The model supports an 8,000-token context length and was trained on over 15 trillion tokens from public online sources. It excels at conversational AI, text generation, and natural language understanding, outperforming many existing open-source chat models on industry benchmarks. The model is designed with a focus on safety and helpfulness, making it suitable for both commercial and research applications, particularly in English. For more details, see its Hugging Face page.

Llama 3 70B has an 8K-token context window.
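Because the 8K-token window must hold both the prompt and the generated completion, it can help to pre-check prompt length before sending a request. A minimal sketch, using the common (but rough) 4-characters-per-token heuristic rather than the model's actual tokenizer:

```python
# Rough check that a prompt plus its expected completion fits Llama 3 70B's
# 8K-token context window. The 4-chars-per-token ratio is only a heuristic;
# for exact counts you would run the model's real tokenizer.
CONTEXT_WINDOW = 8_000

def fits_context(prompt: str, max_new_tokens: int = 512) -> bool:
    """Return True if the prompt likely fits alongside the completion."""
    approx_prompt_tokens = len(prompt) // 4  # heuristic estimate
    return approx_prompt_tokens + max_new_tokens <= CONTEXT_WINDOW

fits_context("Summarize the following article: ...")  # short prompt fits
```

For production use you would swap the heuristic for a real token count; the structure of the check stays the same.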

Llama 3 70B is priced at $0.65 per 1M input tokens and $2.75 per 1M output tokens.
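At these rates, the cost of a request is a simple linear function of input and output token counts. A minimal sketch of the arithmetic, using the prices listed above:

```python
# Estimate the USD cost of one Llama 3 70B request at the listed rates:
# $0.65 per 1M input tokens, $2.75 per 1M output tokens.
INPUT_RATE = 0.65 / 1_000_000   # USD per input token
OUTPUT_RATE = 2.75 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 2,000-token prompt with a 500-token completion
cost = estimate_cost(2_000, 500)  # 0.0013 + 0.001375 = 0.002675 USD
```

Note that output tokens cost roughly 4x input tokens, so long completions dominate the bill.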

Capabilities

Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution

Providers (1)

Provider | Input (per 1M) | Output (per 1M) | Type
Replicate API | $0.65 | $2.75 | Serverless

Benchmark Scores (4)

Benchmark | Score | Version | Source
Google-Proof Q&A | 44.1 | diamond | research
HellaSwag | 92.4 | 10-shot | research
HumanEval | 72.6 | pass@1 | research
Massive Multitask Language Understanding | 80.5 | 5-shot | research


Specifications

Family: Llama 3
Released: 2024-04-18
Parameters: 70B
Context: 8K
Architecture: Decoder Only
Specialization: General
Training: Fine-tuned

Created by

Meta

Large-scale open-source AI for social technologies.

Menlo Park, California, United States
Founded 2013
