LLM Reference

GPT-4o (05-13)

Proprietary
Multimodal

About

GPT-4o ("o" for "omni") is OpenAI's flagship multimodal model, introduced on May 13, 2024. It handles and generates text, audio, and image content with a single unified model, unlike its predecessors, which required separate models for different modalities. It is notable for rapid audio response times (averaging 320 milliseconds, comparable to human conversational pace) and is 50% cheaper than GPT-4 Turbo. It matches GPT-4 Turbo on standard text benchmarks while outperforming it on multilingual, audio, and vision tasks. Like other language models, it can still generate incorrect information, and its knowledge cutoff is October 2023. A smaller, cheaper variant, GPT-4o mini, is also available.

Capabilities

Multimodal, Function Calling, Tool Use, JSON Mode
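
The Function Calling and JSON Mode capabilities can be exercised together in a single request. A minimal sketch of the request payload, assuming the OpenAI Chat Completions API shape; the `get_weather` tool is a hypothetical illustration, not something built into the model:

```python
import json

# Chat Completions request payload using JSON Mode and Function Calling.
payload = {
    "model": "gpt-4o-2024-05-13",
    "messages": [
        {"role": "system", "content": "Reply in JSON."},
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
    # JSON Mode: constrains the model to emit syntactically valid JSON.
    "response_format": {"type": "json_object"},
    # Function Calling / Tool Use: declare tools the model may choose to invoke.
    # "get_weather" is a hypothetical example tool.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

print(json.dumps(payload, indent=2))
```

When the model decides to use a tool, the response carries a `tool_calls` entry with the function name and JSON-encoded arguments rather than a plain text reply.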

Providers (2)

Provider        Input (per 1M)    Output (per 1M)    Type
Azure OpenAI    $5                $15                Serverless
OpenAI API      $5                $15                Serverless
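
The per-million-token rates above translate into per-request cost by simple proportion. A minimal sketch (the token counts in the example are illustrative, not from this page):

```python
# GPT-4o serverless rates from the table above.
INPUT_PER_M = 5.00    # USD per 1M input tokens
OUTPUT_PER_M = 15.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the table's rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token reply.
print(f"${request_cost(2_000, 500):.4f}")  # $0.0175
```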

API Versions

gpt-4o-2024-05-13
gpt-4o

Specifications

Family             GPT-4o
Released           2024-05-13
Parameters         1.76T (8x222B MoE)*
Context            128K
Architecture       Mixture of Experts
Knowledge cutoff   2023-10
Specialization     General
License            Proprietary

* Unofficial estimate; OpenAI has not disclosed GPT-4o's parameter count.