Last refreshed 2026-05-16. Next refresh: weekly.
Why use Nemotron 3 Super-120B-A12B on Fireworks AI?
Fireworks AI offers Nemotron 3 Super-120B-A12B with competitive pricing. The platform delivers generative AI as a service, with a focus on rapid product iteration and cost-efficient deployment.
Compare Nemotron 3 Super-120B-A12B across 4 providers to find the best fit for your use case.

Setup recipe
Install the OpenAI SDK and set your API key:

```shell
pip install openai
export FIREWORKS_API_KEY=...
```

Model ID: accounts/fireworks/models/nvidia-nemotron-3-super-120b-a12b-nvfp4

Request example
```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["FIREWORKS_API_KEY"],
    base_url="https://api.fireworks.ai/inference/v1",
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/nvidia-nemotron-3-super-120b-a12b-nvfp4",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

Gotchas
- Use provider model ID "accounts/fireworks/models/nvidia-nemotron-3-super-120b-a12b-nvfp4", not the LLMReference slug "nemotron-3-super-120b-a12b".
- Fireworks model IDs use "accounts/fireworks/models/{model-name}" format, e.g. "accounts/fireworks/models/llama4-scout-instruct-basic" or "accounts/fireworks/models/deepseek-r1".
- The examples read the API key from FIREWORKS_API_KEY; if you rename the variable, update your application config to match.
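The first two gotchas can be sketched as a small helper. `to_fireworks_model_id` is a hypothetical name for illustration, not a function in the Fireworks SDK:

```python
def to_fireworks_model_id(model_name: str) -> str:
    """Build a Fireworks model ID from a bare model name.

    Fireworks model IDs follow the "accounts/fireworks/models/{model-name}"
    format; a bare slug like "nemotron-3-super-120b-a12b" will not resolve.
    """
    prefix = "accounts/fireworks/models/"
    if model_name.startswith(prefix):
        return model_name  # already fully qualified
    return prefix + model_name

# The NVFP4 variant listed on Fireworks:
print(to_fireworks_model_id("nvidia-nemotron-3-super-120b-a12b-nvfp4"))
```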
Compare Nemotron 3 Super-120B-A12B Across Providers
| Provider | Input (per 1M) | Output (per 1M) |
|---|---|---|
| DeepInfra | $0.10 | $0.50 |
| NVIDIA NIM | — | — |
| OpenRouter | $0.09 | $0.45 |
| Fireworks AI | — | — |
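Using the per-1M-token prices in the table above, a quick sketch for estimating a workload's cost per provider. Providers showing "—" are skipped, and the token counts below are illustrative:

```python
# Per-1M-token prices from the table above; None means not listed ("—").
PRICES = {
    "DeepInfra": {"input": 0.10, "output": 0.50},
    "NVIDIA NIM": None,
    "OpenRouter": {"input": 0.09, "output": 0.45},
    "Fireworks AI": None,
}

def estimate_cost(provider: str, input_tokens: int, output_tokens: int):
    """Estimated USD cost for a workload, or None if pricing is unlisted."""
    p = PRICES.get(provider)
    if p is None:
        return None
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

# Example workload: 10M input tokens, 2M output tokens.
for name in PRICES:
    cost = estimate_cost(name, 10_000_000, 2_000_000)
    print(name, "unlisted" if cost is None else f"${cost:.2f}")
```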
About Nemotron 3 Super-120B-A12B
NVIDIA Nemotron 3 Super-120B-A12B is a 120B total / 12B active hybrid Latent MoE model with interleaved Mamba-2 and MoE layers for agentic, reasoning, and conversational tasks. Fireworks lists the NVFP4 variant for on-demand deployment with 262k context.
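A rough way to see why the 12B-active MoE design matters: per-token decode compute scales with active parameters, not total parameters. A back-of-envelope sketch using the common ~2 FLOPs per active parameter per token heuristic (an assumption, not an NVIDIA or Fireworks figure):

```python
TOTAL_PARAMS = 120e9   # 120B total parameters
ACTIVE_PARAMS = 12e9   # 12B parameters active per token (MoE routing)

# ~2 FLOPs per active parameter per generated token (rough heuristic).
flops_per_token = 2 * ACTIVE_PARAMS

print(f"Active fraction: {ACTIVE_PARAMS / TOTAL_PARAMS:.0%}")  # 10%
print(f"Approx. FLOPs/token: {flops_per_token:.1e}")
```

So the model carries 120B parameters of capacity but pays per-token compute closer to a dense 12B model.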
FAQ
What is the context window for Nemotron 3 Super-120B-A12B on Fireworks AI?
Nemotron 3 Super-120B-A12B supports a 262,144 token context window on Fireworks AI.
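A minimal sketch of a pre-flight fit check against the 262,144-token window. The ~4-characters-per-token estimate is a crude heuristic; use a real tokenizer for anything precise:

```python
CONTEXT_WINDOW = 262_144  # tokens, per the Fireworks listing

def fits_in_context(prompt: str, max_output_tokens: int,
                    chars_per_token: float = 4.0) -> bool:
    """Roughly check that prompt + requested output fits the context window."""
    est_prompt_tokens = len(prompt) / chars_per_token
    return est_prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

print(fits_in_context("Hello" * 1000, max_output_tokens=1024))
```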
How does Fireworks AI compare to other Nemotron 3 Super-120B-A12B providers?
Nemotron 3 Super-120B-A12B is available from 4 providers. The cheapest listed input pricing is $0.09/1M tokens from OpenRouter; NVIDIA NIM and Fireworks AI do not list per-token pricing here.
What API model ID do I use for Nemotron 3 Super-120B-A12B on Fireworks AI?
Use the model ID accounts/fireworks/models/nvidia-nemotron-3-super-120b-a12b-nvfp4 when calling Fireworks AI's API.
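The same request can be made over raw HTTP against Fireworks' OpenAI-compatible endpoint; a sketch assuming FIREWORKS_API_KEY is exported as in the setup recipe (requires live credentials to run):

```shell
curl https://api.fireworks.ai/inference/v1/chat/completions \
  -H "Authorization: Bearer $FIREWORKS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "accounts/fireworks/models/nvidia-nemotron-3-super-120b-a12b-nvfp4",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```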
Who created Nemotron 3 Super-120B-A12B?
Nemotron 3 Super-120B-A12B was created by NVIDIA AI as part of the Nemotron 3 model family.
Is Nemotron 3 Super-120B-A12B open source?
The open source status of Nemotron 3 Super-120B-A12B is not documented here; check NVIDIA's official model card for license details.