Get Started with Phi-3 Medium 4K on Microsoft Foundry
Microsoft Foundry offers access to Phi-3 Medium 4K with a 4K (4,096-token) context window.

Microsoft Foundry is a unified enterprise AI platform that extends well beyond Azure OpenAI. It acts as a multi-provider hosting and deployment platform for LLMs, supporting models from OpenAI, Anthropic, DeepSeek, xAI, Meta, Mistral, NVIDIA, and others, and it integrates agent services, evaluation, observability, and governance into a single Azure control plane. Key capabilities include:

- A multi-provider model catalog
- Model Router for intelligent prompt routing
- Foundry Agent Service for building and deploying AI agents with built-in tracing and monitoring
- Enterprise-grade governance with RBAC, compliance, and regional deployments

For a broader model catalog, including Claude, DeepSeek, Grok, Llama, Mistral, and NVIDIA Nemotron, Foundry is the recommended platform over Azure OpenAI.
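As a concrete starting point, the sketch below assembles a chat-completions request body for a Phi-3 Medium 4K deployment. It assumes the OpenAI-compatible chat schema commonly used by Foundry model deployments; the endpoint URL, deployment name, and authentication details are omitted and would come from your own Foundry project.

```python
import json

def build_chat_request(user_prompt,
                       system_prompt="You are a helpful assistant.",
                       max_tokens=256,
                       temperature=0.7):
    """Assemble a chat-completions request body (OpenAI-compatible schema)."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

# Build and inspect a request body; sending it requires a real
# Foundry endpoint and credential, which are not shown here.
body = build_chat_request("Summarize the benefits of small language models.")
print(json.dumps(body, indent=2))
```

The same body can then be POSTed to your deployment's chat-completions route with your preferred HTTP client or the Azure AI Inference SDK.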
Pricing
| Type | Price (USD per 1M tokens) |
|---|---|
| Input tokens | $0.45 |
| Output tokens | $1.35 |
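To make the rates concrete, here is a small sketch that estimates the cost of a single request from the listed per-million-token prices. Actual billing may vary by region and agreement.

```python
# Listed Phi-3 Medium 4K rates, in USD per 1M tokens.
INPUT_RATE = 0.45
OUTPUT_RATE = 1.35

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000 * INPUT_RATE
            + output_tokens / 1_000_000 * OUTPUT_RATE)

# Example: a 3,000-token prompt producing a 1,000-token completion.
print(f"${estimate_cost(3_000, 1_000):.6f}")  # $0.002700
```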
About Phi-3 Medium 4K
Phi-3 Medium 4K, developed by Microsoft, is a 14-billion-parameter language model engineered for efficiency across a range of tasks, with particular strength in reasoning. It supports a 4,096-token context length, allowing longer input sequences to be processed in a single request. The model uses a dense, decoder-only Transformer architecture and was aligned with human preferences and safety standards through supervised fine-tuning and direct preference optimization. Although trained primarily on English data, it includes some multilingual coverage. Its lightweight footprint allows deployment on diverse hardware platforms, making it accessible for both commercial and research use. Safety measures are built in, but additional precautions are advised for higher-risk applications.
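Because the context window is fixed at 4,096 tokens, it can be useful to pre-check that a prompt plus the requested output will fit. The sketch below uses a rough 4-characters-per-token heuristic, which is an assumption rather than the model's actual tokenizer; use a real tokenizer for exact counts.

```python
CONTEXT_WINDOW = 4096  # Phi-3 Medium 4K context length, in tokens

def fits_context(prompt: str, max_output_tokens: int = 512,
                 chars_per_token: float = 4.0) -> bool:
    """Estimate whether prompt + requested output fit in the 4K window.

    chars_per_token is a rough heuristic, not the model's tokenizer.
    """
    est_prompt_tokens = len(prompt) / chars_per_token
    return est_prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

print(fits_context("Short prompt"))  # True
print(fits_context("x" * 20_000))    # False: ~5,000 tokens won't fit
```

If the check fails, typical remedies are truncating or summarizing the input, or switching to a larger-context variant of the model.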