LLM Reference

text-embedding-3 Models by OpenAI

OpenAI · Proprietary
2 models · Released 2024 · Up to 8K context

About

OpenAI's third-generation text embedding model family with improved retrieval performance and configurable output dimensions.
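The configurable output dimensions work by truncating the full embedding vector and re-normalizing it to unit length; the API applies this server-side when you pass the `dimensions` parameter. A minimal client-side sketch of the same reduction (the vector below is an illustrative stand-in, not real model output):

```python
import math

def shorten_embedding(vec, dim):
    """Truncate an embedding to `dim` components and L2-normalize,
    mirroring what the `dimensions` parameter does server-side."""
    cut = vec[:dim]
    norm = math.sqrt(sum(x * x for x in cut))
    return [x / norm for x in cut]

# Stand-in vector; a real request would look something like:
# client.embeddings.create(model="text-embedding-3-small",
#                          input="hello world", dimensions=256)
full = [0.5, 0.5, 0.5, 0.5]
short = shorten_embedding(full, 2)  # 2 components, unit length
```

Because the shortened vector is re-normalized, downstream cosine-similarity code keeps working unchanged at the smaller dimension.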

Current Variants

2 in view. Use-when guidance is derived from each model's capabilities, context window, release, and replacement fields.

text-embedding-3-large (2024-01): embedding, 8K context. Use when the workload needs embedding and 8K context.
text-embedding-3-small (2024-01): embedding, 8K context. Use when the workload needs embedding and 8K context.

Release Timeline

1 release group: 2024-01 (2 current)

text-embedding-3-large: embedding, 8K context (Current)
text-embedding-3-small: embedding, 8K context (Current)

Specifications (2 models)

text-embedding-3 model specifications comparison:

Model                    Released   Context
text-embedding-3-large   2024-01    8K
text-embedding-3-small   2024-01    8K

Frequently Asked Questions

What is text-embedding-3 used for?
text-embedding-3 is used for embedding: converting text into numeric vectors for tasks such as retrieval and similarity search. The family description and the listed model capabilities point to those workloads as the best fit.
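A typical retrieval workload ranks documents by cosine similarity between a query embedding and document embeddings. A minimal sketch (the vectors are illustrative stand-ins for real API output):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings returned by the API.
query = [0.1, 0.9]
docs = {"doc_a": [0.2, 0.8], "doc_b": [0.9, 0.1]}

# Rank documents by similarity to the query.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
```

Since text-embedding-3 vectors are returned at unit length, cosine similarity reduces to a plain dot product, which many vector stores exploit for speed.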
How does text-embedding-3 compare to GPT Realtime 2?
text-embedding-3 by OpenAI is strongest where you need embedding; GPT Realtime 2, also by OpenAI, is the closest related family to check for translation workloads. text-embedding-3 has 2 listed variants and reaches up to 8K context, while GPT Realtime 2 reaches up to 131K context, so compare the specification and pricing tables before choosing a production model.
Which text-embedding-3 model should I use?
If price is the main constraint, start from the pricing table, since text-embedding-3 does not have complete provider pricing in the local data. For the most capable current choice in the local data, evaluate text-embedding-3-large with its 8K context window.

Models (2)