LLM Reference

Codex Models by OpenAI

OpenAI · Proprietary
5 models · 2021–2026 · Up to 400K context · From $1.75/1M input

About

OpenAI coding models optimized for agentic software engineering in Codex and similar developer environments, including GPT-5.3-Codex.

Specifications (5 models)

Codex model specifications comparison

Model         | Released | Context | Vision | Reasoning | Fn Calling | Tool Use | Structured Outputs | Code Exec
GPT-5.3-Codex | 2026-02  | 400K    | Yes    | Yes       | Yes        | Yes      | Yes                | Yes

Available From (2 providers)

Pricing

Codex model pricing by provider

Model         | Provider   | Input / 1M | Output / 1M | Type
GPT-5.3-Codex | OpenRouter | $1.75      | $14         | Serverless
GPT-5.3-Codex | OpenAI API | $1.75      | $14         | Serverless
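At the listed rates ($1.75 per 1M input tokens and $14 per 1M output tokens, identical across both providers), a per-request cost is straightforward to estimate. The sketch below hardcodes those listed prices; the token counts in the example are hypothetical:

```python
# Estimate request cost from the listed GPT-5.3-Codex pricing
# ($1.75/1M input tokens, $14/1M output tokens on either provider).

INPUT_USD_PER_M = 1.75    # USD per 1M input tokens (listed price)
OUTPUT_USD_PER_M = 14.00  # USD per 1M output tokens (listed price)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_USD_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_USD_PER_M

# Example: a 20K-token prompt producing a 4K-token completion (hypothetical sizes).
print(round(estimate_cost(20_000, 4_000), 4))  # → 0.091
```

Note that output tokens cost eight times as much as input tokens at these rates, so long completions dominate the bill for prompt-heavy coding workloads.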

Frequently Asked Questions

What is Codex used for?
Codex is used for coding, vision and multimodal work, and reasoning. The family description and the listed model capabilities point to those workloads as the best fit.
How does Codex compare to GPT Realtime 2?
Codex by OpenAI is strongest where you need code, while GPT Realtime 2 by OpenAI is the closest related family to check for translation work. Codex has 5 listed variants and reaches up to 400K context, while GPT Realtime 2 reaches up to 131K, so compare the specs and pricing tables before choosing a production model.
Which Codex model should I use?
For the lowest listed input price, start with GPT-5.3-Codex through OpenRouter at $1.75/1M input tokens. For the most capable and latest listed choice, evaluate GPT-5.3-Codex, which offers 400K context with reasoning, tool use, function calling, structured outputs, and multimodal inputs.
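Both listed providers serve the model through an OpenAI-compatible chat completions endpoint, so a request can be sketched by building the JSON body. The model id string `gpt-5.3-codex` below is an assumed slug, not an identifier confirmed by this page; check the provider's model list for the exact name:

```python
import json

def build_codex_request(prompt: str, max_tokens: int = 1024) -> str:
    """Build a chat-completions request body for an OpenAI-compatible
    endpoint (OpenRouter or the OpenAI API)."""
    body = {
        "model": "gpt-5.3-codex",  # assumed id for GPT-5.3-Codex
        "messages": [
            {"role": "system", "content": "You are a coding assistant."},
            {"role": "user", "content": prompt},
        ],
        # Cap the completion length: output tokens are the pricier side
        # of the listed rates ($14/1M vs $1.75/1M for input).
        "max_tokens": max_tokens,
    }
    return json.dumps(body)

# The serialized body would be POSTed to the provider's chat completions
# route with an Authorization: Bearer <API key> header.
print(build_codex_request("Write a function that reverses a string."))
```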

Models (5)