OpenAI's NEW Embedding Models

James Briggs
25 Jan 2024 · 16:31

Summary

TL;DR: OpenAI released two new text embedding models, text-embedding-3-small and text-embedding-3-large, showing decent improvements in English-language embedding quality and massive gains in multilingual embedding quality. The models have the same context window and the same September 2021 knowledge cutoff as the previous generation. Most impressively, the 3072-dimensional large model can supposedly be shortened to 256 dimensions while still outperforming the previous 1536-dimensional text-embedding-ada-002 model. The new models were tested by indexing sample text and querying the vectors, with the large model returning the most relevant results.
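The indexing-and-querying test described above boils down to ranking document vectors by cosine similarity against a query vector. A minimal, self-contained sketch with made-up toy vectors (in practice the vectors would come from the embeddings API, and the document/query values here are purely illustrative):

```python
import numpy as np

def cosine_top_k(query_vec, doc_vecs, k=3):
    """Rank document vectors by cosine similarity to the query vector."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity of unit vectors
    top = np.argsort(scores)[::-1][:k]  # indices of the k best matches
    return top, scores[top]

# Toy 4-dimensional "embeddings" standing in for real model output.
docs = np.array([
    [0.1, 0.9, 0.0, 0.1],
    [0.8, 0.1, 0.1, 0.0],
    [0.1, 0.8, 0.1, 0.1],
])
query = np.array([0.0, 1.0, 0.0, 0.0])

idx, scores = cosine_top_k(query, docs, k=2)
print(idx, scores)  # most similar documents first
```

Comparing models then amounts to re-embedding the same corpus with each model and eyeballing which one ranks the genuinely relevant passages highest, which is what the video does.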

Takeaways

  • 😲 OpenAI released two new text embedding models - text-embedding-3-small and text-embedding-3-large
  • 📈 The new models show decent improvements in English language embedding quality
  • 🚀 Massive improvements in multilingual embedding quality - from 31.4 to 54.9 on MIRACL benchmark
  • ⏪ The models keep the September 2021 knowledge cutoff, so they may not perform as well on recent topics
  • 🔢 Text-embedding-3-large has higher dimensionality for better meaning compression
  • 🤯 Can compress text-embedding-3-large to 256 dims and still outperform the ada-002 model at 1536 dims
  • 🐢 Text-embedding-3-large is slower for embedding than previous models
  • 🔎 Compared retrieval results across models - text-embedding-3-large performed best overall
  • 🤔 Hard to see big performance differences between models in this test
  • 👍 New models correlate with claimed performance gains, exciting to test 256 dim version

Q & A

  • What were the two new embedding models released by OpenAI?

    -OpenAI released text-embedding-3-small and text-embedding-3-large.

  • What benchmark showed massive improvements with the new models?

    -The models showed massive improvements on the multilingual embeddings benchmark MIRACL.

  • What was the knowledge cut-off date for the new models?

    -The knowledge cut-off date is still September 2021.

  • What is the benefit of reducing the number of dimensions in embedding vectors?

    -Reducing the number of dimensions slightly reduces embedding quality, but makes the vectors much cheaper to store and faster to search.

  • How many dimensions does the Text Embedding 3 Large model use?

    -The text-embedding-3-large model uses 3072 dimensions by default.

  • What did OpenAI claim about reducing dimensions in the large model?

    -OpenAI claimed the large model could be reduced to 256 dimensions and still outperform the previous text-embedding-ada-002 model with 1536 dimensions.
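Mechanically, shortening an embedding means keeping its first N values and re-normalizing to unit length so cosine similarity still works as a plain dot product (the API's `dimensions` parameter does this server-side). A client-side sketch, using a random vector as a stand-in for real model output:

```python
import numpy as np

def shorten_embedding(vec, dims):
    """Keep the first `dims` values of an embedding and re-normalize
    to unit length, so dot products remain cosine similarities."""
    v = np.asarray(vec, dtype=float)[:dims]
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
full = rng.normal(size=3072)          # stand-in for a text-embedding-3-large vector
short = shorten_embedding(full, 256)  # the 256-dim version discussed above
print(short.shape, float(np.linalg.norm(short)))
```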

  • Which model showed the slowest embedding speed?

    -The Text Embedding 3 Large model was the slowest, taking about 24 minutes to embed the entire dataset.

  • What datatype does the output need to be in?

    -The output needs to be in JSON format.

  • What was the hardest sample question that none of the models answered well?

    -The question "Keep talking about red teaming for LLama 2" was too difficult and none of the models provided good answers.

  • Which model provided the most accurate results in the comparison?

    -The text-embedding-3-large model provided the most accurate results in the GPT-4 vs Llama comparison.
