LLM Reference
Fireworks AI

Nemotron 3 Super-120B-A12B on Fireworks AI



Last refreshed 2026-05-16. Next refresh: weekly.

Why use Nemotron 3 Super-120B-A12B on Fireworks AI?

Fireworks AI offers Nemotron 3 Super-120B-A12B with competitive pricing. Its generative AI platform as a service focuses on rapid product iteration and cost-efficient AI deployment.

Compare Nemotron 3 Super-120B-A12B across 4 providers to find the best fit for your use case.
Input / 1M: -
Output / 1M: -
Cache: Not sourced
Batch: Not sourced

Setup recipe

Python + curl
Install
pip install openai
Auth
export FIREWORKS_API_KEY=...
Call
import os
from openai import OpenAI
client = OpenAI(
    api_key=os.environ["FIREWORKS_API_KEY"],
    base_url="https://api.fireworks.ai/inference/v1"
)
Model ID
accounts/fireworks/models/nvidia-nemotron-3-super-120b-a12b-nvfp4

Request example

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["FIREWORKS_API_KEY"],
    base_url="https://api.fireworks.ai/inference/v1"
)
response = client.chat.completions.create(
    model="accounts/fireworks/models/nvidia-nemotron-3-super-120b-a12b-nvfp4",
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)
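
The setup recipe is labeled "Python + curl" but only shows Python. An equivalent raw HTTP call, sketched against the same OpenAI-compatible endpoint and model ID, would look like:

```shell
# Sketch: requires FIREWORKS_API_KEY to be exported and network access.
curl https://api.fireworks.ai/inference/v1/chat/completions \
  -H "Authorization: Bearer $FIREWORKS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "accounts/fireworks/models/nvidia-nemotron-3-super-120b-a12b-nvfp4",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```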

Gotchas

  • Use provider model ID "accounts/fireworks/models/nvidia-nemotron-3-super-120b-a12b-nvfp4", not the LLMReference slug "nemotron-3-super-120b-a12b".
  • Fireworks model IDs use "accounts/fireworks/models/{model-name}" format, e.g. "accounts/fireworks/models/llama4-scout-instruct-basic" or "accounts/fireworks/models/deepseek-r1".
  • The examples expect FIREWORKS_API_KEY; if you rename the variable, update your application config to match.
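
The "accounts/fireworks/models/{model-name}" prefix rule above can be encoded in a small helper (hypothetical, not part of any SDK) so you never pass a bare slug by mistake:

```python
# Hypothetical helper: normalize a bare model name to the Fireworks
# "accounts/fireworks/models/{model-name}" ID format. Already-prefixed
# IDs are passed through unchanged.
def fireworks_model_id(name: str) -> str:
    prefix = "accounts/fireworks/models/"
    return name if name.startswith(prefix) else prefix + name

print(fireworks_model_id("nvidia-nemotron-3-super-120b-a12b-nvfp4"))
# → accounts/fireworks/models/nvidia-nemotron-3-super-120b-a12b-nvfp4
```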

Compare Nemotron 3 Super-120B-A12B Across Providers

Provider     | Input (per 1M) | Output (per 1M)
DeepInfra    | $0.10          | $0.50
NVIDIA NIM   | -              | -
OpenRouter   | $0.09          | $0.45
Fireworks AI | -              | -

Capabilities

Structured Outputs
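
A minimal sketch of using structured outputs, assuming Fireworks' OpenAI-compatible endpoint honors the response_format={"type": "json_object"} JSON mode for this model (the prompt below is illustrative):

```python
import os

MODEL = "accounts/fireworks/models/nvidia-nemotron-3-super-120b-a12b-nvfp4"
# Assumption: JSON mode via the OpenAI-compatible response_format parameter.
response_format = {"type": "json_object"}
messages = [
    {"role": "system",
     "content": 'Reply only with a JSON object like {"sentiment": "positive"}.'},
    {"role": "user", "content": "I love this product"},
]

# The network call is guarded so the snippet can be read/run without a key.
if os.environ.get("FIREWORKS_API_KEY"):
    from openai import OpenAI  # requires: pip install openai
    client = OpenAI(
        api_key=os.environ["FIREWORKS_API_KEY"],
        base_url="https://api.fireworks.ai/inference/v1",
    )
    resp = client.chat.completions.create(
        model=MODEL, messages=messages, response_format=response_format,
    )
    print(resp.choices[0].message.content)
```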

About Nemotron 3 Super-120B-A12B

NVIDIA Nemotron 3 Super-120B-A12B is a 120B total / 12B active hybrid Latent MoE model with interleaved Mamba-2 and MoE layers for agentic, reasoning, and conversational tasks. Fireworks lists the NVFP4 variant for on-demand deployment with 262k context.

FAQ

What is the context window for Nemotron 3 Super-120B-A12B on Fireworks AI?

Nemotron 3 Super-120B-A12B supports a 262,144 token context window on Fireworks AI.
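
For long prompts, a rough client-side guard against the 262,144-token window can help. The sketch below uses the common ~4-characters-per-token heuristic, which is an assumption; use the model's tokenizer for exact counts:

```python
CONTEXT_WINDOW = 262_144  # tokens, per Fireworks' listing for this model

def fits_in_context(prompt: str, max_new_tokens: int = 1024) -> bool:
    """Rough check: ~4 chars per token, reserving room for the completion."""
    approx_prompt_tokens = len(prompt) // 4
    return approx_prompt_tokens + max_new_tokens <= CONTEXT_WINDOW

print(fits_in_context("Hello" * 100))  # → True
```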

How does Fireworks AI compare to other Nemotron 3 Super-120B-A12B providers?

Nemotron 3 Super-120B-A12B is available from 4 providers. The cheapest input pricing is $0.09/1M tokens from OpenRouter.

What API model ID do I use for Nemotron 3 Super-120B-A12B on Fireworks AI?

Use the model ID accounts/fireworks/models/nvidia-nemotron-3-super-120b-a12b-nvfp4 when calling Fireworks AI's API.

Who created Nemotron 3 Super-120B-A12B?

Nemotron 3 Super-120B-A12B was created by NVIDIA AI as part of the Nemotron 3 model family.

Is Nemotron 3 Super-120B-A12B open source?

Nemotron 3 Super-120B-A12B's open-source status has not been sourced for this page.

Model Specs

Released: 2026-03-11
Parameters: 120B
Context: 1M
Architecture: Decoder Only

GPU-Hour Providers (1)