LLM Reference

MetaMath

MetaMath · Apache 2.0 · Mathematics
5 models · 2023 · Up to 32K ctx

About

The MetaMath family of large language models (LLMs) is a set of fine-tuned models specializing in mathematical reasoning. Developed by researchers from several institutions, including the University of Cambridge and Huawei Noah's Ark Lab, the models are trained on MetaMathQA, a dataset created by bootstrapping mathematical questions from existing benchmarks such as GSM8K and MATH. The bootstrapping process rewrites each question from multiple perspectives to produce a richer, more diverse training set. The family spans several sizes (MetaMath-7B, MetaMath-13B, and MetaMath-70B), each of which outperforms other open-source LLMs of similar size on mathematical reasoning benchmarks. MetaMath-70B in particular reaches GSM8K accuracy comparable to GPT-3.5-Turbo. Both the models and the dataset are publicly available.
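As a minimal sketch of how one might query a MetaMath model, the snippet below builds the Alpaca-style instruction prompt that the MetaMath release documents (ending in "Let's think step by step." to elicit chain-of-thought), then shows a commented-out generation call. The Hugging Face model ID "meta-math/MetaMath-7B-V1.0" and the exact template wording are assumptions based on the public release, not taken from this page.

```python
# Sketch: wrapping a math question in the MetaMath instruction template.
# Template wording and model ID are assumptions from the public MetaMath release.

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response: Let's think step by step."
)

def build_prompt(question: str) -> str:
    """Wrap a math question in the MetaMath instruction template."""
    return PROMPT_TEMPLATE.format(instruction=question)

if __name__ == "__main__":
    prompt = build_prompt("What is 12 * 7?")
    print(prompt)
    # To actually generate an answer (requires downloading the weights):
    # from transformers import pipeline
    # pipe = pipeline("text-generation", model="meta-math/MetaMath-7B-V1.0")
    # print(pipe(prompt, max_new_tokens=256)[0]["generated_text"])
```

The trailing "Let's think step by step." is part of the prompt rather than the model output, nudging the fine-tuned model to emit its reasoning before the final answer.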

Specifications (5 models)

MetaMath model specifications comparison
Model                  Released   Context   Parameters
MetaMath 70B           2023-10    —         70B
MetaMath 13B           2023-10    —         13B
MetaMath 7B            2023-10    —         7B
MetaMath Mistral 7B    2023-10    32K       7B
MetaMath Llemma 7B     2023-10    —         7B

Frequently Asked Questions

What is MetaMath?
MetaMath is a family of open-source LLMs fine-tuned for mathematical reasoning on the MetaMathQA dataset, which bootstraps questions from benchmarks such as GSM8K and MATH; the largest model, MetaMath-70B, reaches GSM8K accuracy comparable to GPT-3.5-Turbo.
How many models are in the MetaMath family?
The MetaMath family contains 5 models.
What is the latest MetaMath model?
The latest model is MetaMath 70B, released in October 2023.

Models (5)