LLM Reference

Benchmarks

88 evaluation benchmarks for language models

Coding

HumanEval

HumanEval

active
Metric: Pass@1 · 2021
Coding

164 Python coding problems measuring functional correctness of code generation via pass@k metric. Released by OpenAI in 2021; HumanEval+ provides a more rigorous extension.
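
The pass@k metric is computed from n sampled completions per problem using the unbiased estimator introduced alongside HumanEval. A minimal sketch in Python (function and variable names are ours, not taken from the official harness):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: probability that at least one of k samples,
    drawn from n generations of which c passed all unit tests, is correct.
    Numerically stable product form of 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# e.g. 200 samples per problem, 12 passing: estimate pass@1 and pass@10
print(pass_at_k(200, 12, 1), pass_at_k(200, 12, 10))
```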

MBPP

Mostly Basic Programming Problems

active
Metric: Pass@k · 2021
Coding

974 Python programming problems for evaluating code generation, ranging from beginner to intermediate difficulty.

HumanEval+

HumanEval+

active
Metric: Pass@1 · 2023
Coding

HumanEval+ extends HumanEval with an average of 764 test cases per problem (vs 9.6 in original), greatly increasing evaluation rigor for edge case coverage.

MBPP+

Mostly Basic Programming Problems+

active
Metric: Pass@1 · 2023
Coding

MBPP+ extends MBPP with 21.2 average test cases per problem, a 35x increase over the original for more rigorous code generation evaluation.

Spider

Spider

active
Metric: Execution Accuracy · 2018
Coding

Large-scale text-to-SQL benchmark with 10,181 questions across 200 databases from 138 domains, requiring cross-domain generalization.
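
Execution accuracy compares the result of running the predicted SQL against the result of the gold query on the same database. A rough sketch, assuming each database is available as a local SQLite file; the official Spider evaluator is stricter (for example about ORDER BY handling and value matching):

```python
import sqlite3

def execution_match(pred_sql: str, gold_sql: str, db_path: str) -> bool:
    """Rough execution-accuracy check: run both queries against the same
    SQLite database and compare result multisets, order-insensitively."""
    conn = sqlite3.connect(db_path)
    try:
        pred_rows = conn.execute(pred_sql).fetchall()
        gold_rows = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False  # the predicted query failed to execute
    finally:
        conn.close()
    return sorted(map(repr, pred_rows)) == sorted(map(repr, gold_rows))
```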

ToolTalk

ToolTalk

active
Metric: Task Success Rate · 2023
Coding

Conversational tool-use benchmark with 78 conversations requiring multi-step API calls across 28 tools including calendar, email, and messaging.

LiveCodeBench

active
Metric: Pass@1 · 2024
Coding

Continuously updated coding benchmark sourcing new problems from LeetCode, AtCoder, and Codeforces post-May 2023 to eliminate contamination. Evaluates code generation, self-repair, and execution prediction.

CRUXEval

active
Metric: Pass@1 · 2024
Coding

800 Python function input-output reasoning problems testing code execution understanding rather than code generation. Two tasks: input prediction and output prediction.
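
A hypothetical item in the CRUXEval style (invented for illustration, not drawn from the dataset) shows the output-prediction task: the model sees the function and the input and must supply the expected value without executing any code.

```python
# Illustrative CRUXEval-style output-prediction item (not from the dataset).
def f(s):
    return s[::-1].upper()

# The model must complete the right-hand side; here the answer is "CBA".
assert f("abc") == "CBA"
```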

SWE-bench Verified

active
Metric: % Resolved · 2024
Coding

500 human-validated GitHub issue resolution tasks from SWE-bench, created with OpenAI in August 2024. The standard evaluation for agentic coding systems. Top performers (2026) exceed 78% resolved.

Aider Polyglot

active
Metric: % Exercises Completed · 2024
Coding

Real-world code editing benchmark measuring a model's ability to apply changes to existing codebases across 8 programming languages using Exercism platform exercises.

BigCodeBench

active
Metric: Pass@1 · 2024
Coding

1,140 complex Python programming tasks spanning diverse real-world domains requiring multi-library function calls. Two variants: Complete (function completion) and Instruct (natural language to code).

SWE-bench Pro

active
Metric: % Resolved · 2025
Coding

731-task multilingual real-world GitHub issue benchmark extending SWE-bench Verified with harder, more diverse tasks across Python, JavaScript, TypeScript, Java, Go, C++, and Rust.

Reasoning

GPQA

Google-Proof Q&A

active
Metric: Accuracy · 2023
Reasoning

PhD-level multiple-choice questions in biology, physics, and chemistry designed to be "Google-proof": skilled non-expert validators with unrestricted web access score around 34%, while PhD-level domain experts reach roughly 65%. The Diamond subset (198 questions) is the hardest variant and the one used in most frontier model evaluations.

DROP

Discrete Reasoning Over Paragraphs

active
Metric: F1 / EM · 2019
Reasoning

Reading comprehension benchmark requiring numerical reasoning, counting, sorting, and set operations over paragraphs.

ARC

AI2 Reasoning Challenge

active
Metric: Accuracy · 2018
Reasoning

Grade-school science multiple-choice questions partitioned into Easy and Challenge (hard) sets. ARC-Challenge is the standard evaluation variant.

HellaSwag

HellaSwag

active
Metric: Accuracy · 2019
Reasoning

Commonsense sentence-completion benchmark using adversarially filtered wrong answers. Top LLMs now exceed 95% accuracy.

ANLS*

Average Normalized Levenshtein Similarity*

active
Metric: ANLS* · 2024
Reasoning

Extended ANLS metric for evaluating visual question answering on documents, supporting free-form answer evaluation beyond exact match.
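
ANLS* builds on the base ANLS score used for document VQA: the normalized Levenshtein similarity to the closest gold answer, zeroed out below a threshold (0.5 in the original formulation). A self-contained sketch of that base per-question score; helper names are ours:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def anls(prediction: str, gold_answers: list[str], tau: float = 0.5) -> float:
    """Best normalized similarity to any gold answer, zeroed below tau."""
    best = 0.0
    for gold in gold_answers:
        p, g = prediction.strip().lower(), gold.strip().lower()
        nl = levenshtein(p, g) / max(len(p), len(g), 1)
        best = max(best, 1.0 - nl)
    return best if best >= tau else 0.0
```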

TruthfulQA

TruthfulQA

active
Metric: TruthfulQA Score · 2021
Reasoning

817 questions spanning 38 categories where humans commonly hold misconceptions. Measures whether models give truthful answers or reproduce popular falsehoods.

WinoGrande

WinoGrande

active
Metric: Accuracy · 2019
Reasoning

44,000 adversarially filtered Winograd schema problems for commonsense reasoning, created at scale to reduce dataset biases.

ARC-E

AI2 Reasoning Challenge-Easy

active
Metric: Accuracy · 2018
Reasoning

The easy partition of ARC with 5,197 grade-school science multiple-choice questions answerable without complex reasoning (the full dataset, Easy plus Challenge, totals 7,787 questions).

BoolQ

Boolean Questions

active
Metric: Accuracy · 2019
Reasoning

15,942 yes/no reading comprehension questions derived from real Google searches paired with Wikipedia passages.

CommonsenseQA

CommonsenseQA

active
Metric: Accuracy · 2018
Reasoning

12,102 commonsense reasoning multiple-choice questions crowdsourced using ConceptNet to target implicit world knowledge.

OlympiadBench

active
Metric: Accuracy · 2024
Reasoning

8,952 bilingual (Chinese/English) math and physics Olympiad problems from national/international competitions, testing advanced scientific reasoning with precise symbolic answers.

MathVerse

active
Metric: Accuracy · 2024
Reasoning

2,612 visual math problems across 5 versions (text dominant, text-lite, text-only, vision-intensive, vision-only) evaluating multimodal mathematical reasoning.

ARC Prize / ARC Challenge

active
Metric: ARC-AGI Score · 2019
Reasoning

Abstraction and Reasoning Corpus (ARC-AGI) with visual grid puzzles testing core reasoning and novel problem-solving. ARC Prize 2024-2025 competitions track annual progress.
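
Each ARC-AGI task is distributed as JSON with a few "train" input/output grid pairs and one or more "test" inputs; grids are small 2-D arrays of integers 0-9 denoting colors. A tiny made-up task (not from the corpus) showing the structure:

```python
# Illustrative ARC-AGI-style task, invented for this example: the hidden
# rule is "mirror the grid horizontally". Real tasks ship as JSON files
# with the same train/test structure.
task = {
    "train": [
        {"input": [[1, 0], [2, 0]], "output": [[0, 1], [0, 2]]},
        {"input": [[3, 0, 0]],      "output": [[0, 0, 3]]},
    ],
    "test": [
        {"input": [[0, 4], [0, 5]]}  # expected output: [[4, 0], [5, 0]]
    ],
}
```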

DynaBench

active
Metric: Task Accuracy · 2021
Reasoning

Dynamic adversarial data collection platform generating ever-harder NLU benchmarks through human-and-model-in-the-loop annotation to prevent contamination.

Safety

ToxiGen

ToxiGen

active
Metric: Toxicity Classification Accuracy · 2022
Safety

274,000 machine-generated toxic and benign statements about 13 minority groups, using adversarial classifier-in-the-loop decoding for implicit hate speech evaluation.

RealToxicity

RealToxicity

active
Metric: Toxicity Probability · 2020
Safety

100,000 naturally occurring web text prompts for measuring the propensity of language models to generate toxic continuations using the Perspective API.
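
The reported numbers are aggregates over several sampled continuations per prompt, each scored for toxicity (typically via the Perspective API). A minimal sketch of one such aggregate, the empirical probability of producing at least one toxic continuation, assuming the 0.5 score threshold used in the original paper:

```python
def toxicity_probability(scores_per_prompt: list[list[float]], threshold: float = 0.5) -> float:
    """Fraction of prompts with at least one sampled continuation whose
    toxicity score (e.g. from the Perspective API) reaches the threshold."""
    flagged = sum(any(s >= threshold for s in scores) for scores in scores_per_prompt)
    return flagged / len(scores_per_prompt)

# e.g. two prompts, a handful of scored continuations each -> 0.5
print(toxicity_probability([[0.1, 0.7, 0.2], [0.05, 0.1, 0.3]]))
```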

CrowS-Pairs

CrowS-Pairs

active
Metric: Bias Score · 2020
Safety

1,508 crowdsourced sentence pairs measuring stereotypical bias in masked language models across 9 categories including race, gender, and religion.

BBQ Ambig

BBQ Ambig

active
Metric: Bias Score · 2022
Safety

Ambiguous context subset of the Bias Benchmark for QA (BBQ), measuring how models respond to social bias questions when context is underspecified.

BBQ Disambig

BBQ Disambig

active
Metric: Bias Score · 2022
Safety

Disambiguated context subset of BBQ that tests whether additional clarifying context reduces biased model predictions.

Winogender

Winogender

active
Metric: Gender Bias Score · 2018
Safety

720 Winograd-style schemas testing gender bias in pronoun resolution, checking whether models associate professions with gender stereotypes.

Winobias 1_2

Winobias 1_2

active
Metric: Gender Bias Score · 2018
Safety

Type 1 WinoBias coreference schemas measuring occupational gender bias in sentences that offer no disambiguating syntactic cues, so resolution requires world knowledge.

Winobias 2_2

Winobias 2_2

active
Metric: Gender Bias Score · 2018
Safety

Type 2 WinoBias coreference schemas measuring occupational gender bias in sentences where syntactic structure alone identifies the correct antecedent.

Other

MMMU

Massive Multi-discipline Multimodal Understanding

active
Metric: Accuracy · 2023

11,500+ vision-language questions spanning 30 subjects across six core disciplines (art & design, business, science, health & medicine, humanities & social science, tech & engineering). Evaluates college-level multimodal reasoning.

MathVista

MathVista

active
Metric: Accuracy · 2023

Mathematical reasoning benchmark combining visual contexts (charts, geometry diagrams, scientific figures) from 28 existing datasets with 6,141 total problems.

MedQA

MedQA

active
Metric: Accuracy · 2019

Medical question answering benchmark drawn from professional medical board exams in the US (USMLE), Mainland China, and Taiwan; the English USMLE portion contains 12,723 questions.

OpenBookQA

OpenBookQA

active
Metric: Accuracy · 2018

5,957 elementary science multiple-choice questions requiring both open-book facts and broader commonsense knowledge to answer.

PIQA

Physical Interaction: Question Answering

active
Metric: Accuracy · 2019

16,000 physical commonsense reasoning questions asking models to choose between two physical solutions for everyday tasks.

SocialIQA

Social Intelligence QA

active
Metric: Accuracy · 2019

38,000 multiple-choice questions about everyday social situations requiring inference about emotions, motivations, and social norms.

COPA

Choice of Plausible Alternatives

active
Metric: Accuracy · 2011

1,000 causal reasoning questions asking models to choose the more plausible cause or effect for everyday scenarios.

LAMBADA

LAnguage Modeling Broadened to Account for Discourse Aspects

active
Metric: Accuracy · 2016

Long-range language modeling benchmark predicting the last word of ~2,600 passages that require understanding broad discourse context.

RACE

ReAding Comprehension Dataset From Examinations

active
Metric: Accuracy · 2017

28,000 reading comprehension passages with multiple-choice questions from Chinese middle and high school English exams.

SQuAD

Stanford Question Answering Dataset

active
Metric: Exact Match / F1 · 2016

100,000+ reading comprehension questions on 500+ Wikipedia articles. SQuAD 2.0 (2018) added unanswerable questions. Both versions remain widely cited.
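
Exact match and F1 are computed per question after light answer normalization (lowercasing, stripping punctuation and articles), taking the best score over the provided gold answers. A sketch following the convention of the standard SQuAD evaluation script:

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = "".join(ch for ch in text.lower() if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred: str, gold: str) -> bool:
    return normalize(pred) == normalize(gold)

def f1_score(pred: str, gold: str) -> float:
    """Token-level F1 between a prediction and a single gold answer."""
    pred_toks, gold_toks = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred_toks), overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)
```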

QuAC

Question Answering in Context

active
Metric: F1 / HEQ · 2018

98,407 conversational QA turns from 7,354 Wikipedia dialogues requiring multi-turn reasoning and follow-up question answering.

NaturalQ

Natural Questions

active
Metric: Exact Match / F1 · 2019

307,373 real Google search queries answered by extracting spans from Wikipedia, with short answer, long answer, and no-answer categories.

MultiNLI

Multi-Genre Natural Language Inference

active
Metric: Accuracy · 2017

433,000 hypothesis-premise pairs across 10 genres (fiction, telephone, travel, etc.) for textual entailment classification.

SciQ

SciQ

active
Metric: Accuracy · 2017

13,679 multiple-choice science questions across physics, chemistry, biology, and earth science, each paired with a supporting paragraph.

CRASS

Counterfactual Reasoning Assessment

active
Metric: Accuracy · 2022

Counterfactual reasoning benchmark testing LLMs on hypothetical scenarios and their logical implications.

ACI-BENCH

Ambient Clinical Intelligence Benchmark

active
Metric: ROUGE / BERTScore · 2023

Benchmark for generating clinical notes from doctor-patient conversation transcripts to evaluate ambient clinical intelligence systems.

MS-MARCO

MAchine Reading COmprehension Dataset

active
Metric: MRR@10 / NDCG@10 · 2016

Large-scale machine reading comprehension and passage ranking benchmark from real Bing search queries, widely used for information retrieval evaluation.
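
MRR@10 is the mean, over queries, of the reciprocal rank of the first relevant passage within the top 10 returned results (0 if none appears). A per-query sketch:

```python
def mrr_at_10(ranked_passage_ids: list[str], relevant_ids: set[str]) -> float:
    """Reciprocal rank of the first relevant passage in the top 10, else 0.
    The benchmark score is the mean of this value over all queries."""
    for rank, pid in enumerate(ranked_passage_ids[:10], start=1):
        if pid in relevant_ids:
            return 1.0 / rank
    return 0.0
```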

QMSum

Query-based Multi-domain Meeting Summarization

active
Metric: ROUGE · 2021

Query-focused meeting summarization benchmark with 1,808 query-summary pairs across parliamentary, product, and academic meeting transcripts.
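
ROUGE-L, one of the ROUGE variants commonly reported here, scores the longest common subsequence of tokens shared by the generated and reference summaries. A minimal sketch using a balanced F1 (the official scorer adds stemming and a recall-weighted F-measure); tokenization is left to the caller:

```python
def rouge_l_f1(pred: list[str], ref: list[str]) -> float:
    """Balanced F-measure over the longest common subsequence of tokens."""
    # Dynamic-programming table for LCS length.
    dp = [[0] * (len(ref) + 1) for _ in range(len(pred) + 1)]
    for i, p in enumerate(pred, 1):
        for j, r in enumerate(ref, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if p == r else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(pred), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```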

HHH

Helpfulness, Honesty, Harmlessness

active
Metric: Human Preference Rate · 2022

Anthropic's alignment evaluation covering three core dimensions: helpfulness, honesty, and harmlessness, via human preference comparisons between model responses.

RAI

Responsible AI

active
Metric: Harm Rate · 2023

Microsoft's framework for automated measurement of responsible AI harms in generative AI applications across fairness, reliability, safety, and privacy dimensions.

CodeXGLUE

CodeXGLUE

active
Metric: Varies by task · 2021

10 code intelligence tasks across code completion, code translation, code summarization, and code search for multiple programming languages.

LLM Judge

LLM Judge

active
Metric: Judge Score · 2023

LLM-as-a-judge evaluation paradigm using strong models (GPT-4) to score responses, enabling scalable assessment of open-ended generation quality.
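
A minimal single-answer-grading sketch of the pattern. `call_model` is a placeholder for whatever chat-completion client is used, and the prompt wording and 1-10 scale are illustrative rather than a fixed specification:

```python
import re

JUDGE_PROMPT = """You are an impartial judge. Rate the response below on a
scale of 1 to 10 for helpfulness, accuracy, and clarity.
End with a line of the form: Rating: [[N]]

[Question]
{question}

[Response]
{answer}
"""

def judge_score(question: str, answer: str, call_model) -> int | None:
    """Ask a strong judge model for a 1-10 score and parse it out."""
    verdict = call_model(JUDGE_PROMPT.format(question=question, answer=answer))
    match = re.search(r"Rating:\s*\[\[(\d+)\]\]", verdict)
    return int(match.group(1)) if match else None
```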

LLM-Eval

LLM-Eval

active
Metric: Multi-dimensional Score · 2023

Unified multi-dimensional automatic evaluation framework for LLMs assessing helpfulness, engagement, safety, and relevance in a single pass.

JudgeLM

JudgeLM

active
Metric: Judge Agreement Rate · 2023

Fine-tuned language models trained to be scalable judges for open-ended LLM evaluation, achieving high agreement with human preferences.

Prometheus

Prometheus

active
Metric: Rubric Score · 2023

LLM evaluation framework using rubric-based scoring with reference materials, enabling fine-grained absolute/relative assessment without GPT-4 dependency.

DocVQA

DocVQA

active
Metric: ANLS · 2020

Visual question answering on 12,767 document images (scanned forms, invoices, tables) requiring understanding of text layout and document structure.

C-Eval

Chinese Evaluation Suite

active
Metric: Accuracy · 2023

Comprehensive Chinese-language evaluation suite with 13,948 multiple-choice questions across 52 disciplines and 4 difficulty levels (middle school to professional).

CMMLU

Chinese Massive Multitask Language Understanding

active
Metric: Accuracy · 2023

67-task Chinese MMLU equivalent covering 11,528 questions across natural science, social science, STEM, and Chinese-specific cultural knowledge.

GAOKAO

GAOKAO

active
Metric: Accuracy · 2023

Evaluation benchmark using questions from China's college entrance examination (Gaokao) across math, language arts, science, and social studies.

GAOKAO-MM

GAOKAO-MM

active
Metric: Accuracy · 2024

Multimodal extension of GAOKAO with text+image questions from Chinese college entrance exams, evaluating vision-language model capabilities.