LLM Reference
Microsoft Foundry

Using DeepSeek V4 Flash on Microsoft Foundry

Implementation guide · DeepSeek V4 · DeepSeek

Serverless · Open Source

Quick Start

  1. Create an account at Microsoft Foundry and generate an API key.
  2. Use the Microsoft Foundry SDK or REST API to call deepseek-v4-flash — see the documentation for request format.
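As a sketch of step 2, the request body for an OpenAI-style chat-completions call can be assembled as below. The endpoint URL and `api-key` header are illustrative assumptions; only the model name `deepseek-v4-flash` comes from this page — check the Foundry documentation for the actual request format.

```python
import json

# Hypothetical endpoint and key, for illustration only; consult the
# Microsoft Foundry documentation for the real URL and auth header.
ENDPOINT = "https://example-resource.services.ai.azure.com/models/chat/completions"
API_KEY = "<your-foundry-api-key>"

def build_chat_request(prompt: str, model: str = "deepseek-v4-flash") -> dict:
    """Assemble an OpenAI-style chat-completions payload for the model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

headers = {"Content-Type": "application/json", "api-key": API_KEY}
body = build_chat_request("Summarize the key ideas behind Mixture-of-Experts models.")
print(json.dumps(body, indent=2))
```

The payload could then be POSTed to the endpoint with any HTTP client once the real URL and auth scheme are confirmed.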

Code Examples

See Microsoft Foundry documentation for integration details.
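Until you consult the official docs, here is a sketch of extracting the assistant reply from a chat-completions response. The response shape shown is an assumption based on the common OpenAI-compatible format; the actual Foundry response may differ.

```python
import json

# Illustrative response in the common OpenAI-compatible shape; the actual
# Foundry response may differ -- verify against the official documentation.
raw = json.dumps({
    "id": "chatcmpl-123",
    "model": "deepseek-v4-flash",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 3, "total_tokens": 12},
})

def extract_reply(response_json: str) -> str:
    """Pull the first assistant message out of a chat-completions response."""
    data = json.loads(response_json)
    return data["choices"][0]["message"]["content"]

print(extract_reply(raw))  # -> Hello!
```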

About Microsoft Foundry

Microsoft Foundry is a unified Azure platform-as-a-service for enterprise AI operations, model builders, and application development. It provides access to over 1,900 models from Microsoft, OpenAI, Anthropic, Mistral, xAI, Meta, DeepSeek, Hugging Face, and more, and it unifies agents, models, and tools under a single management grouping with built-in enterprise-readiness capabilities including tracing, monitoring, evaluations, and customizable enterprise setup configurations.

The platform offers multiple deployment options: Serverless APIs (pay-as-you-go), Global Standard (shared managed capacity), Provisioned Throughput Units (reserved capacity), batch processing, and bring-your-own-model deployments. A unified control plane covers models, agents, tools, and observability, and the Agent Service enables building and deploying AI agents with built-in tracing, monitoring, and governance. Evaluation and monitoring tools assess model performance, safety, and groundedness. Foundry also supports non-destructive migration from Azure OpenAI, maintaining existing deployments while unlocking multi-provider model access and advanced platform capabilities.

Capabilities

Reasoning · Function Calling · Tool Use · Structured Outputs

About DeepSeek V4 Flash

DeepSeek V4 Flash is a 284B-parameter (13B activated) Mixture-of-Experts language model with a 1M-token context window. It features a hybrid attention architecture that combines Compressed Sparse Attention (CSA) and Heavily Compressed Attention (HCA) for efficient long-context inference, and it supports both thinking and non-thinking modes. The legacy API aliases deepseek-chat and deepseek-reasoner map to this model's non-thinking and thinking modes, respectively. Pricing: $0.14 per 1M input tokens, $0.28 per 1M output tokens (cache hit: $0.0028 per 1M input tokens). MIT licensed.
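The per-token rates quoted above can be turned into a quick cost estimate. The sketch below uses only the rates stated on this page; actual Foundry billing, rounding, and cache accounting may differ.

```python
# Per-1M-token rates quoted on this page (USD).
INPUT_RATE = 0.14           # input tokens, cache miss
CACHED_INPUT_RATE = 0.0028  # input tokens, cache hit
OUTPUT_RATE = 0.28          # output tokens

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_input_tokens: int = 0) -> float:
    """Estimate request cost in USD from token counts."""
    uncached = input_tokens - cached_input_tokens
    return (
        uncached * INPUT_RATE / 1_000_000
        + cached_input_tokens * CACHED_INPUT_RATE / 1_000_000
        + output_tokens * OUTPUT_RATE / 1_000_000
    )

# 100k input tokens (half of them cache hits) plus 2k output tokens:
print(round(estimate_cost(100_000, 2_000, cached_input_tokens=50_000), 6))  # -> 0.0077
```

Note how strongly the cache-hit rate matters for long-context workloads: cached input is billed at 1/50th the cache-miss rate.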

Model Specs

Released: 2026-04-24
Parameters: 284B
Context: 1M tokens
Architecture: Mixture of Experts

Provider

Microsoft Foundry

Microsoft

Redmond, Washington, United States