text-embedding-3 Models by OpenAI
OpenAI · Proprietary
2 models · 2024 · Up to 8K context
About
OpenAI's third-generation text embedding model family with improved retrieval performance and configurable output dimensions.
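The configurable output dimensions work because the text-embedding-3 models were trained so that the leading dimensions of each vector carry the most information: an embedding can be shortened by truncating trailing dimensions and re-normalizing, which is what the API's `dimensions` parameter does server-side. A minimal client-side sketch of the same operation, assuming you already have a full-length embedding as a list of floats:

```python
import math

def shorten_embedding(vec, dims):
    """Truncate an embedding to `dims` dimensions and L2-renormalize.

    Mirrors what the `dimensions` parameter does server-side for
    text-embedding-3 models: keep the leading dimensions, then scale
    the result back to unit length.
    """
    if not 0 < dims <= len(vec):
        raise ValueError("dims must be between 1 and len(vec)")
    head = vec[:dims]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

# Toy 4-d vector standing in for a real embedding:
v = [0.6, 0.8, 0.0, 0.0]
short = shorten_embedding(v, 2)  # [0.6, 0.8] is already unit-length
```

Shortened vectors trade a small amount of retrieval quality for lower storage and faster similarity search, which is often the right trade-off for large corpora.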
Current Variants
Use-when guidance is derived from each model's capability, context, release, and replacement fields.
| Model | Use when | Released | Signals | Status |
|---|---|---|---|---|
| text-embedding-3-large | Use when the workload needs embedding and 8K context. | 2024-01 | embedding · 8K context | Current |
| text-embedding-3-small | Use when the workload needs embedding and 8K context. | 2024-01 | embedding · 8K context | Current |
Release Timeline
1 release group: 2024-01 (2 current)
- text-embedding-3-large — Current · embedding · 8K context
- text-embedding-3-small — Current · embedding · 8K context
Specifications (2 models)
| Model | Released | Context (tokens) |
|---|---|---|
| text-embedding-3-large | 2024-01 | 8K |
| text-embedding-3-small | 2024-01 | 8K |
Frequently Asked Questions
- What is text-embedding-3 used for?
- text-embedding-3 models convert text into dense vector embeddings. The family description and listed model capabilities point to embedding workloads, such as retrieval, as the best fit.
- How does text-embedding-3 compare to GPT Realtime 2?
- text-embedding-3 by OpenAI is strongest where you need embeddings; GPT Realtime 2, also by OpenAI, is the closest related family to check for translation workloads. text-embedding-3 has 2 listed variants and reaches up to 8K context, while GPT Realtime 2 reaches up to 131K context, so compare the specs and pricing tables before choosing a production model.
- Which text-embedding-3 model should I use?
- If price is the main constraint, consult the pricing table first, since complete provider pricing for text-embedding-3 is not available in the local data. For the most capable choice listed here, evaluate text-embedding-3-large (8K context).
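For the retrieval workloads both variants target, the usual pattern is to embed documents and queries, then rank documents by cosine similarity to the query. A minimal sketch, assuming the embeddings have already been fetched (e.g. from the OpenAI embeddings endpoint) and are supplied as equal-length float lists:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_by_similarity(query_vec, doc_vecs):
    """Return document indices sorted from most to least similar to the query."""
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: scores[i], reverse=True)

# Toy 2-d vectors standing in for real embeddings:
docs = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
query = [1.0, 0.1]
order = rank_by_similarity(query, docs)  # doc 0 is closest in direction
```

Since OpenAI embeddings are returned normalized to unit length, cosine similarity reduces to a plain dot product in practice; the full formula above also handles unnormalized vectors.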


