Introduction to Embeddings at Cohere
Embeddings are a way to represent the meaning of text as a list of numbers. Using a simple comparison function, we can then calculate a similarity score for two embeddings to figure out whether two texts are talking about similar things. Common use-cases for embeddings include semantic search, clustering, and classification.
In the example below, we use the embed-english-v3.0 model to generate embeddings for three phrases and compare them using a similarity function. The two similar phrases have a high similarity score, and the embeddings for two unrelated phrases have a low similarity score:
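Here is a minimal sketch using the Cohere Python SDK. The phrases and the cosine-similarity helper are illustrative, and the exact response shape can vary between SDK versions (newer versions may also require the embedding_types parameter described later on this page):

```python
# Minimal sketch with the Cohere Python SDK (pip install cohere).
import cohere
import numpy as np

co = cohere.Client("YOUR_API_KEY")  # replace with your own API key

phrases = ["i love soup", "soup is my favorite", "london is far away"]

response = co.embed(
    texts=phrases,
    model="embed-english-v3.0",
    input_type="search_document",
)
embeddings = response.embeddings  # one vector per input phrase

def cosine_similarity(a, b):
    # Simple comparison function: close to 1.0 means very similar meaning
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings[0], embeddings[1]))  # high: both about soup
print(cosine_similarity(embeddings[0], embeddings[2]))  # low: unrelated topics
```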
The input_type parameter
Cohere embeddings are optimized for different types of inputs. For example, when using embeddings for semantic search, the search query should be embedded by setting input_type="search_query", whereas the text passages being searched over should be embedded with input_type="search_document". You can find more details and a code snippet in the Semantic Search guide. Similarly, the input type can be set to classification (example) or clustering to optimize the embeddings for those use cases.
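As a sketch of the search case (assuming the same Client setup as above, with illustrative texts), the query and the documents it searches over are embedded with matching input types:

```python
import cohere

co = cohere.Client("YOUR_API_KEY")

# Documents being searched over: input_type="search_document"
doc_response = co.embed(
    texts=["The Eiffel Tower is in Paris.", "The Colosseum is in Rome."],
    model="embed-english-v3.0",
    input_type="search_document",
)

# The user's search query: input_type="search_query"
query_response = co.embed(
    texts=["Where is the Eiffel Tower?"],
    model="embed-english-v3.0",
    input_type="search_query",
)

# The query embedding can now be compared against the document embeddings,
# e.g. with the cosine_similarity helper from the first example.
```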
Multilingual Support
In addition to embed-english-v3.0, we offer a best-in-class multilingual model, embed-multilingual-v3.0, with support for over 100 languages, including Chinese, Spanish, and French. This model can be used with the Embed API, just like its English counterpart:
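For example, a sketch with illustrative multilingual inputs:

```python
import cohere

co = cohere.Client("YOUR_API_KEY")

response = co.embed(
    texts=["Hello from Cohere!", "你好，世界", "Bonjour le monde"],
    model="embed-multilingual-v3.0",
    input_type="search_document",
)
print(len(response.embeddings))  # one embedding per input text
```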
Compression Levels
The Cohere embeddings platform supports compression. The Embed API features a required parameter, embedding_types, which allows the user to specify various ways of compressing the output.
The following embedding types are now supported:
float
int8
uint8
binary
ubinary
To specify an embedding type, pass in one of the types from the list above as a list containing a single string:
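For example, a sketch requesting int8 embeddings (the accessor on the response object may be named slightly differently across SDK versions):

```python
import cohere

co = cohere.Client("YOUR_API_KEY")

response = co.embed(
    texts=["hello world"],
    model="embed-english-v3.0",
    input_type="search_document",
    embedding_types=["int8"],  # a list containing a single type
)
print(response.embeddings.int8)  # embeddings quantized to int8
```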
Finally, you can also pass in several embedding_types as a list, in which case the endpoint will return a dictionary with each requested type available:
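A sketch requesting two types at once (again, the exact accessor names may vary by SDK version):

```python
import cohere

co = cohere.Client("YOUR_API_KEY")

response = co.embed(
    texts=["hello world"],
    model="embed-english-v3.0",
    input_type="search_document",
    embedding_types=["float", "int8"],
)
# Depending on the SDK version, the float accessor may be "float_" or "float"
print(response.embeddings.float_)  # full-precision embeddings
print(response.embeddings.int8)    # compressed int8 embeddings
```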