Why use DeepSeek V3.2 on Microsoft Foundry?
Microsoft Foundry offers DeepSeek V3.2 as a hosted model. Microsoft Foundry is a unified Azure platform-as-a-service for enterprise AI operations, model builders, and application development.
Compare DeepSeek V3.2 across 5 providers to find the best fit for your use case.

- Input / 1M: —
- Output / 1M: —
- Cache: Not sourced
- Batch: Not sourced
Setup recipe (docs fallback)

- Install: Use the provider REST API or SDK.
- Auth: Create a provider API key.
- Call: `model: deepseek-v3.2`
- Model ID: `deepseek-v3.2`
- Request example: Curated snippets for this provider are not sourced yet; use the Microsoft Foundry documentation with model ID `deepseek-v3.2`.
- Gotchas: No curated gotchas have been sourced for this exact provider/model route yet.
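Since no curated request snippet is sourced for this route, here is a minimal sketch of the docs-fallback steps as a plain REST payload. The endpoint URL, header name, and environment-variable names are assumptions (Foundry deployments commonly expose an OpenAI-compatible chat-completions route); only the model ID `deepseek-v3.2` comes from this page.

```python
import json
import os

# Assumptions: endpoint shape and env-var names are illustrative, not sourced
# from Microsoft Foundry docs. Substitute your deployment's actual values.
ENDPOINT = os.environ.get(
    "FOUNDRY_ENDPOINT",
    "https://<your-resource>.services.ai.azure.com/models/chat/completions",
)
API_KEY = os.environ.get("FOUNDRY_API_KEY", "")

def build_request(prompt: str) -> tuple[dict, dict]:
    """Build the headers and JSON body for one chat-completion call."""
    headers = {
        "Content-Type": "application/json",
        "api-key": API_KEY,  # some routes use "Authorization: Bearer" instead
    }
    body = {
        "model": "deepseek-v3.2",  # model ID from this page
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return headers, body

# To send: POST json.dumps(body) to ENDPOINT with these headers,
# e.g. via urllib.request or the requests library.
```

Verify the exact route and auth header against the Microsoft Foundry documentation before relying on this shape.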
Compare DeepSeek V3.2 Across Providers
| Provider | Input (per 1M) | Output (per 1M) |
|---|---|---|
| Fireworks AI | $0.56 | $1.68 |
| NVIDIA NIM | — | — |
| AWS Bedrock | $0.62 | $1.85 |
| OpenRouter | $0.25 | $0.38 |
| Microsoft Foundry | — | — |
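To turn the per-1M-token rates above into a per-request cost estimate, multiply each token count by its rate and divide by one million. A minimal sketch using only the sourced rows (providers listed as "—" are omitted):

```python
# Per-1M-token rates (input $/1M, output $/1M) taken from the table above.
RATES = {
    "Fireworks AI": (0.56, 1.68),
    "AWS Bedrock": (0.62, 1.85),
    "OpenRouter": (0.25, 0.38),
}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed rates."""
    in_rate, out_rate = RATES[provider]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# e.g. 10k input / 2k output tokens on OpenRouter costs about a third of a cent.
```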
Capabilities

- Structured Outputs
- Code Execution
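Structured Outputs is commonly requested through an OpenAI-compatible `response_format` field in the request body; whether this exact Foundry route accepts that field is not sourced here, so treat this as a sketch under that assumption:

```python
import json

# Assumption: the route follows the OpenAI-compatible response_format
# convention for strict JSON output; confirm against Foundry docs.
body = {
    "model": "deepseek-v3.2",
    "messages": [
        {"role": "user", "content": "Return a JSON object with keys name and year."}
    ],
    "response_format": {"type": "json_object"},  # request JSON-only output
}
payload = json.dumps(body)  # send as the POST body of the chat-completions call
```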
About DeepSeek V3.2
DeepSeek V3.2 is a 671B-parameter, decoder-only model from DeepSeek, part of the DeepSeek V3 family. Besides Microsoft Foundry, it is also available from providers including AWS Bedrock, Fireworks AI, NVIDIA NIM, and OpenRouter.
FAQ
What is the context window for DeepSeek V3.2 on Microsoft Foundry?
DeepSeek V3.2 supports a 128,000 token context window on Microsoft Foundry.
How does Microsoft Foundry compare to other DeepSeek V3.2 providers?
DeepSeek V3.2 is available from 5 providers. The cheapest input pricing is $0.252/1M tokens from OpenRouter.
What API model ID do I use for DeepSeek V3.2 on Microsoft Foundry?
Use the model ID deepseek-v3.2 when calling Microsoft Foundry's API.
Who created DeepSeek V3.2?
DeepSeek V3.2 was created by DeepSeek as part of the DeepSeek V3 model family.
Is DeepSeek V3.2 open source?
Yes. DeepSeek V3.2 is an open-source release, with its model weights published by DeepSeek.
Model Specs

- Released: 2025-01-01
- Parameters: 671B
- Context: 128K
- Architecture: Decoder-only