# New and improved embedding model

The new model, `text-embedding-ada-002`, replaces five separate models for text search, text similarity, and code search, and outperforms our previous most capable model, Davinci, at most tasks, while being priced 99.8% lower.

Embeddings are numerical representations of concepts converted to number sequences, which make it easy for computers to understand the relationships between those concepts. Since the initial launch of the OpenAI `/embeddings` endpoint, many applications have incorporated embeddings to personalize, recommend, and search content.

You can query the `/embeddings` endpoint for the new model with two lines of code using our OpenAI Python Library, just like you could with previous models:

```python
import openai

response = openai.Embedding.create(
    input="porcine pals say",
    model="text-embedding-ada-002"
)
print(response)
```

## Model improvements

**Stronger performance.** `text-embedding-ada-002` outperforms all the old embedding models on text search, code search, and sentence similarity tasks and gets comparable performance on text classification. For each task category, we evaluate the models on the datasets used in the old embeddings announcement.

**Text search**

| Model | Performance |
| --- | --- |
| text-embedding-ada-002 | 53.3 |
| text-search-davinci-*-001 | 52.8 |
| text-search-curie-*-001 | 50.9 |
| text-search-babbage-*-001 | 50.4 |
| text-search-ada-*-001 | 49.0 |

Dataset: BEIR (ArguAna, ClimateFEVER, DBPedia, FEVER, FiQA2018, HotpotQA, NFCorpus, QuoraRetrieval, SciFact, TRECCOVID, Touche2020)

**Unification of capabilities.** We have significantly simplified the interface of the `/embeddings` endpoint by merging the five separate models shown above (`text-similarity`, `text-search-query`, `text-search-doc`, `code-search-text`, and `code-search-code`) into a single new model. This single representation performs better than our previous embedding models across a diverse set of text search, sentence similarity, and code search benchmarks.

**Longer context.** The context length of the new model is increased by a factor of four, from 2048 to 8192 tokens, making it more convenient to work with long documents.
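Even with the longer context, very long documents still need to be split before embedding. A minimal sketch of such a chunker, approximating token counts by whitespace-separated words (a real tokenizer would count differently, so this is an illustration only):

```python
# Sketch: split a long document into chunks that fit an 8192-token
# context window. Token counts are approximated by whitespace-separated
# words here; an actual tokenizer would give more accurate counts.

def chunk_document(text, max_tokens=8192):
    """Greedily pack words into chunks of at most max_tokens words."""
    words = text.split()
    chunks = []
    for start in range(0, len(words), max_tokens):
        chunks.append(" ".join(words[start:start + max_tokens]))
    return chunks

doc = ("word " * 20000).strip()
chunks = chunk_document(doc, max_tokens=8192)
print([len(c.split()) for c in chunks])  # -> [8192, 8192, 3616]
```

Each chunk can then be embedded separately and the vectors stored or averaged, depending on the application.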

**Smaller embedding size.** The new embeddings have only 1536 dimensions, one-eighth the size of `davinci-001` embeddings, making them more cost effective to work with in vector databases.
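Whatever their dimensionality, embedding vectors are typically compared with cosine similarity, which is how vector databases rank results. A minimal sketch, using short toy vectors in place of real 1536-dimensional embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional stand-ins for 1536-dimensional embeddings.
v1 = [0.1, 0.2, 0.7]
v2 = [0.1, 0.2, 0.7]
v3 = [0.9, 0.0, 0.1]
print(round(cosine_similarity(v1, v2), 3))  # identical vectors -> 1.0
print(cosine_similarity(v1, v3) < 1.0)      # dissimilar vectors score lower
```

Smaller vectors make this computation, and the index structures built on top of it, proportionally cheaper.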

**Reduced price.** We have reduced the price of the new embedding model by 90% compared to old models of the same size. The new model achieves performance comparable to or better than the old Davinci models at a 99.8% lower price.

Overall, the new embedding model is a much more powerful tool for natural language processing and code tasks. We are excited to see how our customers will use it to create even more capable applications in their respective fields.

The new `text-embedding-ada-002` model does not outperform `text-similarity-davinci-001` on the SentEval linear probing classification benchmark. For tasks that require training a lightweight linear layer on top of embedding vectors for classification prediction, we suggest comparing the new model to `text-similarity-davinci-001` and choosing whichever model gives optimal performance.
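Linear probing means keeping the embeddings frozen and training only a simple linear classifier on top of them. A minimal sketch of such a probe, here using the perceptron update rule and toy 2-dimensional vectors in place of real embeddings:

```python
# Sketch of a lightweight "linear probe": a single linear layer trained
# on frozen embedding vectors for binary classification. The embeddings
# and labels below are toy stand-ins, not real model outputs.

def train_linear_probe(X, y, lr=0.1, epochs=100):
    """Train weights w and bias b with the perceptron rule; y in {0, 1}."""
    dim = len(X[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            score = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = yi - (1 if score > 0 else 0)
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0

# Linearly separable toy "embeddings" and their class labels.
X = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
y = [0, 0, 1, 1]
w, b = train_linear_probe(X, y)
print([predict(w, b, x) for x in X])  # -> [0, 0, 1, 1]
```

To compare the two models on a task, one would train the same probe on each model's embeddings and keep whichever scores higher on held-out data.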

Check the Limitations & Risks section in the embeddings documentation for general limitations of our embedding models.

## Examples of the embeddings API in action

Kalendar AI is a sales outreach product that uses embeddings to match the right sales pitch to the right customers out of a dataset containing 340M profiles. This automation relies on the similarity between embeddings of customer profiles and sales pitches to rank the most suitable matches, eliminating 40–56% of unwanted targeting compared to their old approach.
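A matching workflow like the one described above can be sketched as a top-k ranking over similarity scores. The vectors and profile names below are illustrative, not Kalendar AI's actual data; the dot product stands in for cosine similarity under the assumption that the embeddings are unit-normalized:

```python
# Sketch: rank candidate customer profiles against a sales pitch by
# embedding similarity. All vectors and names here are made up.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def top_matches(pitch_vec, profiles, k=2):
    """Return the k profile names most similar to the pitch embedding.

    Assumes unit-normalized embeddings, so the dot product equals
    cosine similarity.
    """
    scored = sorted(profiles.items(),
                    key=lambda item: dot(pitch_vec, item[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

profiles = {
    "profile_a": [1.0, 0.0],
    "profile_b": [0.8, 0.6],
    "profile_c": [0.0, 1.0],
}
pitch = [0.9, 0.435]  # roughly unit length
print(top_matches(pitch, profiles, k=2))  # -> ['profile_b', 'profile_a']
```

At production scale, the sort would be replaced by an approximate nearest-neighbor index, but the ranking principle is the same.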

Notion, the online workspace company, will use OpenAI's new embeddings to improve Notion search beyond today's keyword matching systems.


Ryan Greene, Ted Sanders, Lilian Weng, Arvind Neelakantan


Originally published on OpenAI News.