Quick Start
1. Use the Microsoft Foundry SDK or REST API to call nemotron-3-8b; see the documentation for the request format.
2. You'll be billed $0.37/1M input tokens and $1.10/1M output tokens.
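The REST call above can be sketched with the standard library alone. This is a minimal sketch, not the official SDK: the `/chat/completions` path, the `Bearer` authorization scheme, and the `FOUNDRY_ENDPOINT` / `FOUNDRY_API_KEY` environment variable names are assumptions — substitute the endpoint URL and key shown for your deployment in the Foundry portal.

```python
# Hedged sketch of calling nemotron-3-8b on a Foundry serverless endpoint.
# Endpoint path, auth header scheme, and env-var names are assumptions;
# consult the Foundry documentation for your deployment's exact values.
import json
import os
import urllib.request


def build_chat_payload(prompt: str, model: str = "nemotron-3-8b") -> dict:
    """Assemble an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }


def call_endpoint(endpoint: str, api_key: str, payload: dict) -> dict:
    """POST the payload to the (assumed) chat-completions route."""
    req = urllib.request.Request(
        f"{endpoint}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__" and "FOUNDRY_ENDPOINT" in os.environ:
    payload = build_chat_payload("Say hello in one sentence.")
    result = call_endpoint(
        os.environ["FOUNDRY_ENDPOINT"], os.environ["FOUNDRY_API_KEY"], payload
    )
    print(result["choices"][0]["message"]["content"])
```

For production use, prefer the Microsoft Foundry SDK, which handles authentication, retries, and response parsing for you.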
About Microsoft Foundry
Microsoft Foundry is a unified, enterprise-grade platform-as-a-service for AI operations that expands significantly beyond Azure OpenAI. It hosts and deploys models from multiple providers, including OpenAI, Anthropic, DeepSeek, xAI, Meta, Mistral, and NVIDIA, and offers several deployment options: Serverless APIs (pay-as-you-go), Global Standard (shared managed capacity), Provisioned Throughput Units (reserved capacity), batch processing, and bring-your-own-model deployments.
The platform brings models, agents, tools, and observability under a single Azure control plane. Key capabilities include a multi-provider model catalog, Model Router for intelligent prompt routing, the Foundry Agent Service for building and deploying AI agents with built-in tracing, monitoring, and governance, and evaluation tools that assess model performance, safety, and groundedness. Enterprise-grade governance covers RBAC, compliance, and regional deployments. Foundry supports non-destructive migration from Azure OpenAI, keeping existing deployments intact while unlocking multi-provider model access; for a broader catalog that includes Claude, DeepSeek, Grok, Llama, Mistral, and NVIDIA Nemotron, Foundry is the recommended platform over Azure OpenAI.
Pricing on Microsoft Foundry
| Type | Price (per 1M tokens, USD) |
|---|---|
| Input tokens | $0.37 |
| Output tokens | $1.10 |
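Given the per-million-token prices in the table, the cost of a single request is simple arithmetic. The helper below is a small illustration (the function name and example token counts are ours, not from the source):

```python
# Estimate a request's cost from the serverless prices in the table above.
INPUT_PRICE_PER_M = 0.37   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.10  # USD per 1M output tokens


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the cost in USD for one request."""
    return (
        input_tokens * INPUT_PRICE_PER_M
        + output_tokens * OUTPUT_PRICE_PER_M
    ) / 1_000_000


# Example: 2,000 input tokens and 500 output tokens
# 2000 * 0.37/1e6 + 500 * 1.10/1e6 = 0.00074 + 0.00055 = 0.00129 USD
print(f"${estimate_cost(2000, 500):.5f}")  # → $0.00129
```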
About Nemotron 3 8B
Nemotron-3 8B is a family of large language models from NVIDIA aimed at enterprises building custom LLMs. Built on a GPT-3-style transformer architecture, the base model has 8 billion parameters and a 4,096-token context length. It serves as the foundation for specialized variants: Nemotron-3-8B-Base-4k for customization, the Nemotron-3-8B-Chat models, which produce steerable outputs and are refined via RLHF, and Nemotron-3-8B-QA, optimized for question answering. The models are compatible with the NVIDIA NeMo framework, support parameter-efficient fine-tuning methods such as LoRA, and are designed for efficient deployment on NVIDIA GPUs. They were trained on 3.5 to 3.8 trillion tokens of multilingual data and evaluated across a diverse range of languages and benchmarks, though they may still exhibit biases and inaccuracies inherited from their training data.
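The LoRA fine-tuning mentioned above can be illustrated conceptually. This is a toy NumPy sketch of the low-rank update idea, not the NeMo API: instead of training a full weight matrix W, LoRA learns a small pair of matrices (A, B) and applies W' = W + (alpha / r) · B·A, so only 2·d·r parameters are trained instead of d². All dimensions and names here are illustrative.

```python
# Conceptual LoRA sketch (not the NeMo framework API).
import numpy as np

d, r = 8, 2                      # hidden size and LoRA rank (toy values)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))  # frozen pretrained weight
B = np.zeros((d, r))             # LoRA init: B = 0, so W' starts equal to W
A = rng.standard_normal((r, d))
alpha = 4.0                      # scaling hyperparameter


def lora_forward(x, W, A, B, alpha, r):
    """Apply the adapted weight W + (alpha/r) * B @ A to input x."""
    return x @ (W + (alpha / r) * B @ A).T


x = rng.standard_normal(d)
# With B = 0 the adapter is a no-op, so outputs match the pretrained model.
assert np.allclose(lora_forward(x, W, A, B, alpha, r), x @ W.T)

# Trainable parameters: 2*d*r = 32 for the adapter vs d*d = 64 for full
# fine-tuning; the gap widens rapidly at realistic hidden sizes.
print(A.size + B.size, "adapter params vs", W.size, "full params")
```

At Nemotron-3 8B scale the same ratio is what makes LoRA attractive: the adapter trains a tiny fraction of the 8 billion parameters while the base weights stay frozen.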