LLM Reference

DeepSeek Math 7B on Cloudflare Workers AI

DeepSeek Math · DeepSeek

Serverless

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · JSON Mode · Code Execution

About DeepSeek Math 7B

DeepSeek Math 7B is a family of large language models from DeepSeek AI, built for advanced mathematical reasoning. The base model starts from DeepSeek-Coder-v1.5 7B and is further pre-trained on 500 billion tokens drawn from math-focused and general data sources. It scores 51.7% on the MATH benchmark, a competitive result achieved without external tools. Instruction tuning yields DeepSeekMath-Instruct 7B, which strengthens its mathematical problem-solving, and DeepSeekMath-RL 7B refines it further with Group Relative Policy Optimization (GRPO), a reinforcement-learning algorithm introduced by the DeepSeek team. The models are available on platforms such as Hugging Face in various quantized formats suited to diverse hardware, serving applications in education, research, and productivity.

Get Started
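As a minimal sketch of calling the model over Cloudflare's Workers AI REST API: this assumes the standard `/accounts/{account_id}/ai/run/{model}` endpoint and a model ID of `@cf/deepseek-ai/deepseek-math-7b-instruct`; the account ID and API token are placeholders you must supply, and the exact model ID should be confirmed against the Workers AI catalog.

```python
import json
import urllib.request

ACCOUNT_ID = "YOUR_ACCOUNT_ID"  # placeholder: your Cloudflare account ID
API_TOKEN = "YOUR_API_TOKEN"    # placeholder: a Workers AI-scoped API token
MODEL = "@cf/deepseek-ai/deepseek-math-7b-instruct"  # assumed Workers AI model ID

def build_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) a Workers AI run request for the model."""
    url = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/{MODEL}"
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Solve 3x + 5 = 17 for x.")
# With real credentials, send the request and read the generated answer:
# resp = urllib.request.urlopen(req)
# print(json.load(resp)["result"]["response"])
```

The same endpoint is reachable from inside a Worker via the `env.AI.run()` binding; the raw HTTP form above is shown only because it works from any environment.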

Model Specs

Released: 2024-02-05
Parameters: 7B
Architecture: Decoder Only