LLM Reference
Microsoft Foundry

DeepSeek V3.2 on Microsoft Foundry

DeepSeek V3 · DeepSeek

Serverless · Open Source

Why use DeepSeek V3.2 on Microsoft Foundry?

Microsoft Foundry offers DeepSeek V3.2 with competitive pricing. Microsoft Foundry is a unified Azure platform-as-a-service offering for enterprise AI operations, model builders, and application development.

Compare DeepSeek V3.2 across 5 providers to find the best fit for your use case.

Input / 1M: -
Output / 1M: -
Cache: not sourced
Batch: not sourced

Setup recipe

Docs fallback
Install: use the provider REST API or SDK
Auth: create a provider API key
Call: set model to deepseek-v3.2
Model ID: deepseek-v3.2

Request example

Curated snippets for this provider are not sourced yet. Use Microsoft Foundry documentation with model ID deepseek-v3.2.
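Since no curated snippet is sourced for this route, the following is a minimal sketch of assembling a chat-completions request body for DeepSeek V3.2. It assumes an OpenAI-style `/chat/completions` message format; the function name and parameters are illustrative, not taken from Microsoft Foundry documentation. Only the model ID (`deepseek-v3.2`) comes from this page.

```python
# Sketch: build a chat-completions JSON body for DeepSeek V3.2.
# Assumption (not sourced from Foundry docs): the endpoint accepts an
# OpenAI-style "messages" payload. Endpoint URL and auth are omitted.
import json

MODEL_ID = "deepseek-v3.2"  # model ID from this page


def build_request(prompt: str, max_tokens: int = 512) -> dict:
    """Assemble the JSON body for a single-turn chat request."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


payload = build_request("Summarize the DeepSeek V3.2 release in one sentence.")
print(json.dumps(payload, indent=2))
```

Serialize the returned dict as the request body and send it with your HTTP client of choice, using an API key created in the provider console.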

Gotchas

No curated gotchas have been sourced for this exact provider/model route yet.

Compare DeepSeek V3.2 Across Providers

Provider            Input (per 1M)   Output (per 1M)
Fireworks AI        $0.56            $1.68
NVIDIA NIM          not sourced      not sourced
AWS Bedrock         $0.62            $1.85
OpenRouter          $0.25            $0.38
Microsoft Foundry   not sourced      not sourced

Capabilities

Structured Outputs · Code Execution
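Since structured outputs are listed as a capability, here is a hedged sketch of what such a request body could look like, assuming the provider exposes an OpenAI-style `response_format` with a JSON schema. That parameter shape is an assumption, not confirmed by Microsoft Foundry documentation for this route; the schema itself is illustrative.

```python
# Sketch: a structured-output request body for DeepSeek V3.2.
# Assumption: an OpenAI-style "response_format" of type "json_schema"
# (not sourced from Foundry docs; field names are illustrative).
import json

payload = {
    "model": "deepseek-v3.2",
    "messages": [
        {"role": "user", "content": "Extract the city and country from: 'Paris, France'."}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "location",
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "country": {"type": "string"},
                },
                "required": ["city", "country"],
            },
        },
    },
}
print(json.dumps(payload, indent=2))
```

With a schema like this, the model is constrained to return a JSON object matching the declared properties, which saves a parsing-and-retry loop on the client side.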

About DeepSeek V3.2

DeepSeek V3.2 is also available from other providers, including AWS Bedrock; see the provider comparison above.

FAQ

What is the context window for DeepSeek V3.2 on Microsoft Foundry?

DeepSeek V3.2 supports a 128,000 token context window on Microsoft Foundry.

How does Microsoft Foundry compare to other DeepSeek V3.2 providers?

DeepSeek V3.2 is available from 5 providers. The cheapest input pricing is $0.252/1M tokens from OpenRouter.

What API model ID do I use for DeepSeek V3.2 on Microsoft Foundry?

Use the model ID deepseek-v3.2 when calling Microsoft Foundry's API.

Who created DeepSeek V3.2?

DeepSeek V3.2 was created by DeepSeek as part of the DeepSeek V3 model family.

Is DeepSeek V3.2 open source?

Yes: DeepSeek V3.2 is released as an open-source model.

Get Started

Model Specs

Released: 2025-01-01
Parameters: 671B
Context: 160K
Architecture: decoder-only

GPU-Hour Providers (1)

Related Models on Microsoft Foundry