LLM Reference
Replicate API

Using Gemini 3.1 Pro Preview on Replicate API

Implementation guide · Gemini 3.1 · Google DeepMind

Serverless

Quick Start

  1. Create an account at Replicate API and generate an API key.
  2. Use the Replicate API SDK or REST API to call gemini-3.1-pro-preview; see the documentation for request format.
  3. You'll be billed $2.00/1M input tokens and $12.00/1M output tokens. See full pricing below.
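The steps above can be sketched against the REST endpoint directly. The owner prefix "google" below is an assumption for illustration; confirm the model's real slug on its Replicate page. The sketch only builds the request pieces, it does not send them:

```python
import os

# NOTE: "google" is an assumed owner; check the model page for the real slug
OWNER, MODEL = "google", "gemini-3.1-pro-preview"

def build_prediction_request(prompt: str) -> dict:
    """Build the pieces of a POST to Replicate's model-scoped predictions endpoint."""
    return {
        "url": f"https://api.replicate.com/v1/models/{OWNER}/{MODEL}/predictions",
        "headers": {
            # The token comes from the environment, as set up in step 1 above
            "Authorization": f"Bearer {os.environ.get('REPLICATE_API_TOKEN', '')}",
            "Content-Type": "application/json",
        },
        "json": {"input": {"prompt": prompt}},
    }

print(build_prediction_request("Hello")["url"])
```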

Code Examples

Install: pip install replicate
API key: REPLICATE_API_TOKEN
Model ID: gemini-3.1-pro-preview

Replicate uses "owner/model-name" format (e.g. "meta/meta-llama-3-8b-instruct") for the latest version, or "owner/model-name:version-sha" to pin to a specific version. The REST endpoint splits owner and model-name into the path: /v1/models/{owner}/{model-name}/predictions.
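The slug-to-endpoint mapping described above can be sketched as follows. Treat the pinned-version branch as a sketch: on Replicate, a pinned version is sent to the generic predictions endpoint with the version in the JSON body rather than in the path.

```python
def predictions_path(slug: str) -> str:
    """Map a Replicate model slug to the REST path it is called at.

    "owner/name"          -> model-scoped endpoint (latest version)
    "owner/name:version"  -> generic endpoint; the version goes in the
                             JSON body as {"version": "..."} instead
    """
    ref, _, version = slug.partition(":")
    if version:
        return "/v1/predictions"
    owner, _, name = ref.partition("/")
    return f"/v1/models/{owner}/{name}/predictions"

print(predictions_path("meta/meta-llama-3-8b-instruct"))
```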

import replicate

# Reads REPLICATE_API_TOKEN from the environment
# The model reference must be the full "owner/model-name" slug, optionally
# with ":version-sha" appended to pin a version; check the model page for
# this model's owner prefix.
output = replicate.run(
    "gemini-3.1-pro-preview",  # prefix with the owner, e.g. "owner/gemini-3.1-pro-preview"
    input={"prompt": "Hello"},
)
# Output may be a list or a generator of chunks, depending on the model
print("".join(output))
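Because the return type of replicate.run varies by model, a small helper keeps the calling code uniform. This is a sketch that assumes chunks stringify cleanly:

```python
def collect_output(output) -> str:
    """Flatten replicate.run output, which may be a string, a list of
    string chunks, or a generator, into one string."""
    if isinstance(output, str):
        return output
    return "".join(str(chunk) for chunk in output)

print(collect_output(["Hel", "lo"]))  # Hello
```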

About Replicate API

Replicate offers a cloud-based AI platform that simplifies the deployment and integration of machine learning models. The platform provides an extensive library of open-source models that users can run with minimal coding, enabling easy access to advanced AI functionalities such as text generation, image creation, and video production. With automatic API generation, users can deploy custom models on a large GPU cluster. The platform also supports the "Cog" tool, which packages models into production-ready containers, streamlining the management and scaling of AI applications.

Scalability is a key feature: resources adjust automatically based on demand to maintain performance during peak usage. Users pay only for the active time their code runs.

Replicate also fosters collaboration by allowing users to share their models publicly or keep them private, promoting innovation and knowledge sharing within the developer community. The platform's focus on accessibility and ease of use makes it a practical option for developers who want to integrate AI into their projects without the complexities typically associated with machine learning.

Replicate is a cloud-based platform that enables users to run machine learning models easily and efficiently. The company specializes in providing a streamlined environment for deploying, scaling, and managing AI models, making advanced machine learning capabilities accessible to developers and researchers. Replicate's platform allows users to run a wide variety of pre-trained models or deploy their own custom models, facilitating rapid experimentation and development in AI projects. The service is designed to handle the complexities of infrastructure management, allowing users to focus on their core AI tasks rather than worrying about the underlying technical details of model deployment and scaling. By offering a user-friendly interface and robust cloud infrastructure, Replicate aims to democratize access to cutting-edge AI technologies, enabling both individuals and organizations to leverage powerful machine learning models without the need for extensive in-house resources or expertise.

Pricing on Replicate API

Type            Price (per 1M tokens)
Input tokens    $2.00
Output tokens   $12.00
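The rates in the table translate into a simple per-request cost estimate:

```python
PRICE_PER_M = {"input": 2.00, "output": 12.00}  # USD per 1M tokens, from the table above

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate a request's cost in USD at this provider's token rates."""
    return (input_tokens * PRICE_PER_M["input"]
            + output_tokens * PRICE_PER_M["output"]) / 1_000_000

# e.g. a 10k-token prompt with a 2k-token reply:
print(f"${estimate_cost(10_000, 2_000):.4f}")  # $0.0440
```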

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

About Gemini 3.1 Pro Preview

Gemini 3.1 Pro Preview is a Google model, offered here via Replicate. Pricing: $2.00/1M input tokens, $12.00/1M output tokens.

Model Specs

Released: 2026-02-19
Context: 1M tokens
Architecture: Decoder-only
Knowledge cutoff: 2025-01
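The 1M-token context window can be checked before sending a request. Exact token counts depend on the model's tokenizer, so this is an illustrative budget check, not an exact guard:

```python
CONTEXT_WINDOW = 1_000_000  # tokens, per the spec above

def fits_in_context(prompt_tokens: int, reserved_output_tokens: int) -> bool:
    """Rough pre-flight check: prompt plus reserved output must fit the window."""
    return prompt_tokens + reserved_output_tokens <= CONTEXT_WINDOW

print(fits_in_context(900_000, 50_000))  # True
```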

Provider

Replicate API

Replicate

San Francisco, California, United States