LLM Reference
AWS Bedrock

Using Mistral Small on AWS Bedrock

Implementation guide · Mistral Small · MistralAI

Serverless

Quick Start

  1. Create an account at AWS Bedrock and generate an API key (an AWS access key pair).
  2. Call Mistral Small through the AWS SDK or the Bedrock REST API — see the documentation for the request format.
  3. You'll be billed $1.00/1M input tokens and $3.00/1M output tokens. See full pricing.

Code Examples

Install: pip install boto3
API key: AWS_ACCESS_KEY_ID (plus AWS_SECRET_ACCESS_KEY)
Model ID: mistral.mistral-small-2402-v1:0

Use Amazon Bedrock model IDs, e.g. "mistral.mistral-small-2402-v1:0" for on-demand, or cross-region inference profile IDs like "us.anthropic.claude-3-5-sonnet-20241022-v2:0". These differ from the public model slug.
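To find the exact Bedrock-format ID available in your region, you can query Bedrock's control-plane API. A minimal sketch (the region, the `mistral.` prefix filter, and the helper names are illustrative, not from this page):

```python
def mistral_model_ids(model_summaries):
    """Pick the Bedrock-format Mistral model IDs out of a
    ListFoundationModels response."""
    return [m["modelId"] for m in model_summaries
            if m["modelId"].startswith("mistral.")]

def list_mistral_models(region="us-east-1"):
    """Query Bedrock for the Mistral models available in `region`.
    Requires AWS credentials to be configured."""
    import boto3  # pip install boto3

    # "bedrock" is the control-plane client; inference goes through
    # the separate "bedrock-runtime" client.
    bedrock = boto3.client("bedrock", region_name=region)
    resp = bedrock.list_foundation_models()
    return mistral_model_ids(resp["modelSummaries"])
```

Calling `list_mistral_models()` with credentials configured returns IDs such as `mistral.mistral-small-2402-v1:0`, which you can paste directly into the `modelId` parameter.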

import boto3

# Credentials are read from AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
# in the environment; the region is set explicitly below.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Converse is Bedrock's unified chat API. Note the Bedrock-format
# model ID (see the note above), not the public "mistral-small-1" slug.
response = client.converse(
    modelId="mistral.mistral-small-2402-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Hello"}]
    }]
)
print(response["output"]["message"]["content"][0]["text"])
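For longer completions, the Converse API also has a streaming variant, `converse_stream`, which yields incremental events instead of one response. A hedged sketch that joins the text deltas from the event stream (the helper names are illustrative, and the model ID is assumed to be Mistral Small's Bedrock-format ID — verify it in your region):

```python
def collect_stream_text(events):
    """Join the text deltas from a Converse streaming event sequence
    into the full completion string."""
    parts = []
    for event in events:
        delta = event.get("contentBlockDelta", {}).get("delta", {})
        if "text" in delta:
            parts.append(delta["text"])
    return "".join(parts)

def stream_hello(region="us-east-1"):
    """Stream a completion from Mistral Small. Requires AWS credentials."""
    import boto3

    client = boto3.client("bedrock-runtime", region_name=region)
    stream = client.converse_stream(
        modelId="mistral.mistral-small-2402-v1:0",  # assumed Bedrock ID
        messages=[{"role": "user", "content": [{"text": "Hello"}]}],
    )
    return collect_stream_text(stream["stream"])
```

In a real application you would print each delta as it arrives rather than joining them at the end; the joined form above just makes the event shape explicit.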

About AWS Bedrock

Amazon Bedrock is a fully managed service for building and scaling generative AI applications. It provides access to high-performing foundation models (FMs) from leading AI companies through a unified API, so users can select the most suitable model for each use case. Models can be customized with proprietary data through fine-tuning and Retrieval-Augmented Generation (RAG), improving the relevance and accuracy of outputs, and the platform supports agents that automate complex, multi-step tasks — making it suited to applications ranging from text and image generation to conversational AI.

Beyond these capabilities, Amazon Bedrock offers a serverless experience that removes infrastructure management, letting developers focus on application development. The platform prioritizes security and compliance, keeping data within the AWS ecosystem and adhering to industry standards, and its flexible pay-as-you-go pricing lets organizations manage costs while scaling their AI initiatives. This combination of features, ease of use, and cost-effectiveness makes Amazon Bedrock a strong choice for businesses innovating in generative AI.

AWS Bedrock is Amazon's fully managed foundation-model service, providing unified API access to top models from Anthropic, Meta, Mistral, and other leading AI labs with built-in tools for RAG, fine-tuning, and AI agent development.

Pricing on AWS Bedrock

Type            Price (per 1M tokens)
Input tokens    $1.00
Output tokens   $3.00
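At these rates, per-request cost is a simple linear function of the token counts. A small illustrative helper (the token counts in the example are made up):

```python
# Listed rates for Mistral Small on Bedrock (USD per 1M tokens).
INPUT_PRICE_PER_M = 1.00
OUTPUT_PRICE_PER_M = 3.00

def estimate_cost(input_tokens, output_tokens):
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${estimate_cost(2000, 500):.4f}")  # $0.0035
```

The Converse API response also includes a `usage` field with the actual `inputTokens` and `outputTokens`, which you can feed into a helper like this for per-request accounting.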

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution
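Function calling and tool use go through the Converse API's `toolConfig` parameter: you declare tools as JSON schemas, and the model may answer with a `toolUse` block instead of text. A hedged sketch — the `get_weather` tool, its schema, the helper names, and the model ID are all illustrative, not from this page:

```python
# Hypothetical tool definition in the Converse API's toolSpec format.
WEATHER_TOOL = {
    "toolSpec": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "inputSchema": {"json": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        }},
    }
}

def extract_tool_use(message):
    """Return the first toolUse block in a Converse response message,
    or None if the model answered with plain text."""
    for block in message.get("content", []):
        if "toolUse" in block:
            return block["toolUse"]
    return None

def ask_weather(region="us-east-1"):
    """Send a tool-enabled request. Requires AWS credentials."""
    import boto3

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(
        modelId="mistral.mistral-small-2402-v1:0",  # assumed Bedrock ID
        messages=[{"role": "user",
                   "content": [{"text": "What's the weather in Paris?"}]}],
        toolConfig={"tools": [WEATHER_TOOL]},
    )
    return extract_tool_use(response["output"]["message"])
```

If the model requests the tool, you would run it yourself and send the result back in a follow-up `toolResult` message; Bedrock does not execute your tools for you.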

About Mistral Small

Mistral Small is available on AWS Bedrock.

Model Specs

Released: 2024-02-26
Context: 32K
Architecture: Decoder-only

Provider

AWS Bedrock

Amazon Web Services

Seattle, Washington, United States