LLM Reference
Microsoft Foundry

Using Qwen1.5-110B on Microsoft Foundry

Implementation guide · Qwen1.5 · Alibaba

Provisioned

Quick Start

  1. Create an account at Microsoft Foundry and generate an API key.
  2. Use the Microsoft Foundry SDK or REST API to call qwen1.5-110b; see the documentation for the request format.
  3. You'll be billed at $1.50 per 1M input tokens and $2.50 per 1M output tokens.
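
The steps above can be sketched as a raw REST call. The endpoint URL, API key, and request shape below are placeholders and assumptions, not confirmed values; consult the Foundry documentation for your resource's actual endpoint and authentication header.

```python
import json
import urllib.request

# Placeholder values: substitute your Foundry resource's endpoint and key.
ENDPOINT = "https://YOUR-RESOURCE.example.com/chat/completions"
API_KEY = "YOUR_API_KEY"

# An OpenAI-style chat-completions payload targeting qwen1.5-110b.
payload = {
    "model": "qwen1.5-110b",
    "messages": [{"role": "user", "content": "Summarize GQA in one sentence."}],
    "max_tokens": 256,
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "api-key": API_KEY},
    method="POST",
)
# response = urllib.request.urlopen(request)  # uncomment with real credentials
```

The request is built but not sent here; with real credentials, the commented line issues the call and the JSON response carries the completion.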

About Microsoft Foundry

Microsoft Foundry is a unified enterprise AI platform-as-a-service that expands well beyond Azure OpenAI. It functions as a multi-provider hosting and deployment platform for LLMs, supporting models from OpenAI, Anthropic, DeepSeek, xAI, Meta, Mistral, NVIDIA, and others, with deployment options including Serverless APIs (pay-as-you-go), Global Standard (shared managed capacity), Provisioned Throughput Units (reserved capacity), batch processing, and bring-your-own-model deployments.

The platform provides a unified control plane for models, agents, tools, and observability. Key capabilities include a multi-provider model catalog, Model Router for intelligent prompt routing, the Foundry Agent Service for building and deploying AI agents with built-in tracing, monitoring, and governance, and evaluation tools that assess model performance, safety, and groundedness. Enterprise features cover RBAC, compliance, and regional deployments. Foundry supports non-destructive migration from Azure OpenAI, maintaining existing deployments while unlocking multi-provider model access; for the broader model catalog including Claude, DeepSeek, Grok, Llama, Mistral, and NVIDIA Nemotron, Foundry is the recommended platform over Azure OpenAI.

Pricing on Microsoft Foundry

Type            Price (per 1M tokens)
Input tokens    $1.50
Output tokens   $2.50
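
The per-token rates above translate into a simple per-request cost formula; a minimal sketch, using the listed prices:

```python
# Per-1M-token rates listed on this page for Qwen1.5-110B on Microsoft Foundry.
INPUT_PER_M = 1.50
OUTPUT_PER_M = 2.50

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion costs $0.00425.
cost = estimate_cost(2_000, 500)
```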

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

About Qwen1.5-110B

Qwen1.5-110B is a large language model created by Alibaba Cloud and the largest in the Qwen1.5 series. It is a transformer-based, decoder-only model with 110 billion parameters, optimized for efficiency with SwiGLU activation and Grouped Query Attention (GQA). Pretrained on an extensive dataset, it supports a 32K context length and multilingual use, handling languages including English and Chinese. The model excels in tasks such as text generation and dialogue, and it is noted for competitive performance and an advanced tokenizer, making it versatile across NLP tasks. Various quantized versions are available to accommodate different hardware specifications.
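
The GQA mentioned above matters mainly for inference memory: sharing one K/V head among a group of query heads shrinks the KV cache by the grouping factor. A back-of-the-envelope sketch, where the layer and head counts are illustrative assumptions rather than published Qwen1.5-110B specs (only the 32K context length comes from this page):

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   context_len: int, bytes_per_el: int = 2) -> int:
    """KV-cache size: two tensors (K and V) per layer, fp16 elements."""
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_el

# Multi-head attention: every query head has its own K/V head (64 here).
mha = kv_cache_bytes(layers=80, kv_heads=64, head_dim=128, context_len=32_768)
# GQA: one K/V head shared per group of 8 query heads -> 8 K/V heads.
gqa = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128, context_len=32_768)

ratio = mha / gqa  # the cache shrinks by the grouping factor, 8x here
```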

Model Specs

Released: 2024-04-25
Parameters: 110B
Architecture: Decoder-only

Provider

Microsoft Foundry

Microsoft

Redmond, Washington, United States