LLM Reference

GPT-4o (05-13)

Proprietary · Multimodal

About

GPT-4o, or GPT-4 Omni, is OpenAI's flagship multimodal AI model, introduced on May 13, 2024. A single unified model handles and generates text, audio, and visual data, unlike its predecessors, which chained separate models for different modalities. It responds to audio input in an average of 320 milliseconds, comparable to human conversational response times, and is 50% cheaper than GPT-4 Turbo in the API. It matches GPT-4 Turbo on standard text benchmarks while outperforming it on multilingual, audio, and vision tasks. Despite these advances, it shares GPT-4 Turbo's knowledge cutoff of October 2023 and, like other language models, can generate incorrect information. A smaller, more economical version, GPT-4o mini, is also available.

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution
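As an illustration of the vision and function-calling capabilities listed above, here is a minimal sketch of a Chat Completions request body that combines an image input with a tool definition. Field names follow OpenAI's public Chat Completions API; the image URL and the `lookup_product` tool are placeholders invented for this example.

```python
# Sketch of a Chat Completions request body exercising vision + function calling.
# Field names follow OpenAI's public Chat Completions API; the URL and the
# lookup_product tool are hypothetical placeholders.
request_body = {
    "model": "gpt-4o-2024-05-13",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What product is shown in this image?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "lookup_product",  # hypothetical tool for illustration
                "description": "Look up a product by name.",
                "parameters": {
                    "type": "object",
                    "properties": {"name": {"type": "string"}},
                    "required": ["name"],
                },
            },
        }
    ],
}
```

The same body works whether sent via the official `openai` client or as raw JSON over HTTP; the model may answer directly or return a `tool_calls` entry asking the caller to run the declared function.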

Providers (5)

Provider         | Input (per 1M) | Output (per 1M) | Type
Azure OpenAI     | $5.00          | $15.00          | Serverless
OpenAI API       | $5.00          | $15.00          | Serverless
OpenRouter       | $5.00          | $15.00          | Serverless
OpenAI Batch API | $2.50          | $7.50           | Serverless
Replicate API    | $2.50          | $10.00          | Serverless
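The per-1M-token prices above translate directly into request costs. A small helper sketch, with the standard OpenAI API rates ($5.00 input / $15.00 output per 1M tokens) as defaults; the Batch API rates can be passed in to see the roughly 50% discount.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float = 5.00,
                  output_price_per_m: float = 15.00) -> float:
    """Estimate USD cost from token counts and per-1M-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# A 10K-token prompt with a 2K-token reply:
standard = estimate_cost(10_000, 2_000)              # ≈ $0.08 at standard rates
batch = estimate_cost(10_000, 2_000, 2.50, 7.50)     # ≈ $0.04 at Batch API rates
```

Input and output are billed at different rates, so output-heavy workloads (long generations from short prompts) cost disproportionately more than the headline input price suggests.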

Benchmark Scores (4)

Benchmark                                        | Score | Version | Source
HellaSwag                                        | 96.4  | 10-shot | HELM, Open LLM Leaderboard
HumanEval                                        | 90.2  | pass@1  | HELM, Open LLM Leaderboard
Massive Multitask Language Understanding (MMLU)  | 88.7  | 5-shot  | HELM, Open LLM Leaderboard
MMLU-Pro                                         | 72.5  |         | https://huggingface.co/spaces/TIGER-Lab/MMLU-Pro

API Versions

gpt-4o-2024-05-13, gpt-4o
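The dated identifier pins a specific snapshot, while the bare `gpt-4o` alias tracks whichever snapshot OpenAI currently points it at. A sketch of the distinction (the actual send step, shown in a comment, would use the official `openai` client and an API key):

```python
# Pinning a snapshot vs. tracking the floating alias.
PINNED = "gpt-4o-2024-05-13"   # fixed snapshot: behavior stays stable over time
FLOATING = "gpt-4o"            # alias: follows OpenAI's latest gpt-4o snapshot

request = {
    "model": PINNED,  # prefer the pinned id in production for reproducibility
    "messages": [{"role": "user", "content": "Hello"}],
}
# With the official client this would be sent as:
#   client.chat.completions.create(**request)
```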

Specifications

Family: GPT-4o
Released: 2024-05-13
Parameters: 1.76T (8x222B MoE)*
Context: 128K
Architecture: Mixture of Experts
Knowledge cutoff: 2023-10
Specialization: general
License: Proprietary
Training: finetuning

Created by

OpenAI
Cutting-edge research and development.

San Francisco, California, United States
Founded 2015