Using DeepSeek V4 Pro on Fireworks AI
Implementation guide · DeepSeek V4 · DeepSeek
Quick Start
- Use the Fireworks AI SDK or REST API to call `deepseek-v4-pro`; see the documentation for request format.
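The step above can be sketched as a plain HTTPS request. This is a minimal sketch assuming Fireworks' OpenAI-compatible chat-completions endpoint and assuming the model is addressed as `accounts/fireworks/models/deepseek-v4-pro` (Fireworks' usual serverless naming scheme; check the model page for the exact path):

```python
import json
import urllib.request

API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
# Assumed model path: Fireworks serverless models are typically addressed
# as accounts/fireworks/models/<model-id>; verify against the model page.
MODEL = "accounts/fireworks/models/deepseek-v4-pro"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completions request for the model."""
    body = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (requires a valid key in api_key):
# with urllib.request.urlopen(build_request("Hello", api_key)) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The request body follows the OpenAI chat-completions shape, which Fireworks' inference API accepts; the Fireworks Python SDK wraps the same endpoint.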
About Fireworks AI
The Fireworks AI Platform is a comprehensive generative AI solution that enables developers and businesses to build, customize, and deploy AI models at scale. It supports a range of cutting-edge open-source models, including Meta's Llama and Stable Diffusion, for tasks such as natural language processing and image generation.

The platform's serverless architecture allows quick deployment without extensive infrastructure management, operating on a pay-as-you-go basis. Users can fine-tune models with parameter-efficient techniques, tailoring them to specific business needs while maintaining high performance. Optimized for high throughput and low latency, the platform is built to handle trillions of inferences daily.

It also provides tools for efficient model maintenance and iteration, letting businesses focus on innovation rather than model management. With its cost-efficient approach and comprehensive feature set, Fireworks AI enables organizations to integrate, customize, and scale AI-powered solutions for a competitive advantage in their markets.
Fireworks AI offers its generative AI platform as a service, focused on rapid product iteration and cost-efficient AI deployment. The platform is designed to optimize the development and serving of generative AI applications so that businesses can quickly build and scale AI-powered solutions, emphasizing a low cost to serve while making advanced AI capabilities accessible and practical for a wide range of applications.
Pricing on Fireworks AI
| Type | Price (per 1M tokens) |
|---|---|
| Input tokens | $1.74 |
| Input tokens (cache hit) | $0.145 |
| Output tokens | $3.48 |
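At these rates, the cost of a request is a linear function of token counts. A small illustrative helper using the prices listed on this page (including the $0.145/1M cache-hit input rate):

```python
# Per-million-token prices from this page.
PRICE_INPUT = 1.74          # $ per 1M input tokens (cache miss)
PRICE_INPUT_CACHED = 0.145  # $ per 1M input tokens on a cache hit
PRICE_OUTPUT = 3.48         # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int,
                 cached_input_tokens: int = 0) -> float:
    """Estimated cost in dollars for a single request."""
    uncached = input_tokens - cached_input_tokens
    return (
        uncached * PRICE_INPUT
        + cached_input_tokens * PRICE_INPUT_CACHED
        + output_tokens * PRICE_OUTPUT
    ) / 1_000_000

# e.g. 100K input tokens (half served from cache) plus 4K output tokens:
# request_cost(100_000, 4_000, cached_input_tokens=50_000)  # ≈ $0.108
```

Cache hits cut the input price by roughly 12x, so long shared prefixes (system prompts, few-shot examples) dominate the savings.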
About DeepSeek V4 Pro
DeepSeek V4 Pro is DeepSeek's flagship Mixture-of-Experts language model: 1.6T total parameters (49B activated per token) with a 1M-token context window.

- Architecture: hybrid attention (CSA + HCA) requiring only 27% of the inference FLOPs of DeepSeek-V3.2 at 1M-token context, Manifold-Constrained Hyper-Connections (mHC), and the Muon optimizer for training stability.
- Benchmarks: 93.5% on LiveCodeBench, 89.8% on IMOAnswerBench, 90.1% on MMLU.
- Reasoning modes: Non-Think, Think High, and Think Max.
- Pricing: $1.74/1M input tokens ($0.145/1M on cache hit), $3.48/1M output tokens.
- License: MIT.
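With a 1M-token context window, it is still worth a pre-flight check that prompt plus reserved output fits the window. A rough sketch using a crude ~4-characters-per-token heuristic (an assumption for illustration; real counts come from the model's tokenizer):

```python
CONTEXT_WINDOW = 1_000_000  # DeepSeek V4 Pro's 1M-token context

def fits_in_context(prompt: str, max_output_tokens: int,
                    chars_per_token: float = 4.0) -> bool:
    """Rough pre-flight check.

    chars_per_token ~= 4 is a heuristic for English text, not the
    actual tokenizer; use a real tokenizer for exact budgeting.
    """
    est_prompt_tokens = len(prompt) / chars_per_token
    return est_prompt_tokens + max_output_tokens <= CONTEXT_WINDOW
```

For precise budgeting, tokenize with the model's own tokenizer instead of estimating; the heuristic is only useful to fail fast on obviously oversized inputs.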