MetaMath
About
The MetaMath family of large language models (LLMs) consists of fine-tuned models specializing in mathematical reasoning. Developed by researchers from several institutions, including the University of Cambridge and Huawei Noah's Ark Lab, MetaMath models are trained on MetaMathQA, a dataset created by bootstrapping mathematical questions from existing benchmarks such as GSM8K and MATH. The bootstrapping process rewrites each question from multiple perspectives (for example, rephrasing and backward reasoning) to produce a richer and more diverse training set. The family includes models of several sizes, such as MetaMath-7B, MetaMath-13B, and MetaMath-70B, each outperforming other open-source LLMs of similar size on mathematical reasoning benchmarks. MetaMath-70B in particular achieves GSM8K accuracy comparable to GPT-3.5-Turbo. The models and the dataset are publicly available.
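At inference time, the MetaMath models are typically prompted with an Alpaca-style instruction template that ends with a chain-of-thought cue. The sketch below shows how such a prompt might be assembled; the template wording follows the format published with the MetaMath releases, and the sample question (taken from a GSM8K-style word problem) is illustrative only.

```python
# Sketch of the Alpaca-style prompt format recommended for MetaMath inference.
# The template text mirrors the published MetaMath prompt; the question is a
# hypothetical example, not part of this page.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response: Let's think step by step."
)

def build_prompt(question: str) -> str:
    """Wrap a math word problem in the MetaMath inference template."""
    return PROMPT_TEMPLATE.format(instruction=question)

prompt = build_prompt(
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether?"
)
print(prompt)
```

The trailing "Let's think step by step." cue encourages the model to emit its reasoning before the final answer, which is the behavior the MetaMathQA fine-tuning targets.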
Specifications (5 models)
| Model | Released | Context | Parameters |
|---|---|---|---|
| MetaMath 70B | 2023-10 | — | 70B |
| MetaMath 13B | 2023-10 | — | 13B |
| MetaMath 7B | 2023-10 | — | 7B |
| MetaMath Mistral 7B | 2023-10 | 32K | 7B |
| MetaMath Llemma 7B | 2023-10 | — | 7B |
Frequently Asked Questions
- What is MetaMath?
- MetaMath is a family of fine-tuned large language models (LLMs) specializing in mathematical reasoning, developed by researchers from several institutions, including the University of Cambridge and Huawei Noah's Ark Lab. The models are trained on MetaMathQA, a dataset built by rewriting questions from benchmarks such as GSM8K and MATH from multiple perspectives. The family spans 7B to 70B parameters, with MetaMath-70B achieving GSM8K accuracy comparable to GPT-3.5-Turbo. Both the models and the dataset are publicly available.
- How many models are in the MetaMath family?
- The MetaMath family contains 5 models.
- What is the latest MetaMath model?
- The latest model is MetaMath 70B, released in October 2023.