Get Started with Phi-3 Mini 4k on Microsoft Foundry
Microsoft Foundry offers access to Phi-3 Mini 4k with a 4K-token context window.

Microsoft Foundry is a unified enterprise AI platform that extends well beyond Azure OpenAI. It serves as a multi-provider hosting and deployment platform for LLMs, supporting models from OpenAI, Anthropic, DeepSeek, xAI, Meta, Mistral, NVIDIA, and others, and it integrates agent services, evaluation, observability, and governance into a single Azure control plane. Key capabilities include:

- A multi-provider model catalog
- Model Router for intelligent prompt routing
- Foundry Agent Service for building and deploying AI agents with built-in tracing and monitoring
- Enterprise-grade governance with RBAC, compliance, and regional deployments

For a broader model catalog (including Claude, DeepSeek, Grok, Llama, Mistral, and NVIDIA Nemotron), Foundry is the recommended platform over Azure OpenAI.
Pricing
| Type | Price (per 1M) |
|---|---|
| Input tokens | $0.28 |
| Output tokens | $0.84 |
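The per-token rates above translate directly into request costs. A minimal Python sketch, with the rates hardcoded from the table (the helper name is ours, not part of any SDK):

```python
# Rates from the pricing table above, in USD per 1M tokens.
INPUT_PRICE_PER_M = 0.28
OUTPUT_PRICE_PER_M = 0.84

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a request at Phi-3 Mini 4k rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: 100k input tokens + 20k output tokens costs about $0.0448.
```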
About Phi-3 Mini 4k
The Phi-3 Mini-4K-Instruct model from Microsoft is a lightweight language model with 3.8 billion parameters, optimized for environments with limited computational resources. It performs well across a range of natural language processing tasks, particularly reasoning, text generation, and multi-turn conversation. Trained on a mix of synthetic and high-quality curated data, the model is tuned for effective instruction following. Its small size does impose limits on factual knowledge and multilingual support, so it often benefits from external resources such as retrieval to improve accuracy. The model suits commercial and research applications that demand efficient processing, such as mobile apps and real-time systems.
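Deployed models on Foundry are typically reached through an OpenAI-style chat-completions endpoint. The sketch below uses only the Python standard library; the endpoint URL, API key, API version, and model deployment name are all placeholders you would replace with the values from your own Foundry deployment page:

```python
import json
import os
import urllib.request

# Hypothetical values -- copy the real endpoint and key from your
# Foundry deployment page before running.
ENDPOINT = os.environ.get("FOUNDRY_ENDPOINT", "https://<resource>.services.ai.azure.com")
API_KEY = os.environ.get("FOUNDRY_API_KEY", "<api-key>")
API_VERSION = "2024-05-01-preview"  # assumed; check your deployment's docs

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions payload for Phi-3 Mini 4k."""
    return {
        "model": "Phi-3-mini-4k-instruct",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
    }

def chat(prompt: str) -> str:
    """POST the payload to the chat-completions route and return the reply text."""
    req = urllib.request.Request(
        f"{ENDPOINT}/models/chat/completions?api-version={API_VERSION}",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"api-key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the model holds a 4K-token context window, keep the combined length of the message history and `max_tokens` under that limit.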