Unlocking the Power of Multimodal Embeddings

This guide uses the Embed API. You can find the API reference for the endpoint here.

Image capabilities are available only in the v4.0 and v3.0 embedding models; v4.0 supports features that v3.0 does not. Consult the embedding documentation for more details.

In this guide, we show you how to use the embed endpoint to embed a series of images. This guide uses a simple dataset of graphs to illustrate how semantic search can be done over images with Cohere. To see an end-to-end example of retrieval, check out this notebook.

Introduction to Multimodal Embeddings

Information is often represented in multiple modalities. A document, for instance, may contain text, images, and graphs, while a product can be described through images, its title, and a written description. This combination of elements often leads to a comprehensive semantic understanding of the subject matter. Traditional embedding models have been limited to a single modality, and even multimodal embedding models often suffer from degradation in text-to-text or text-to-image retrieval tasks. embed-v4.0 and the embed-v3.0 series of models, however, are fully multimodal, enabling them to embed both images and text effectively. We have achieved state-of-the-art performance without compromising text-to-text retrieval capabilities.

How to use Multimodal Embeddings

1. Prepare your Image for Embeddings

PYTHON
# Import the necessary packages
import os
import base64


# Define a function that converts an image to a base64 data URL
def image_to_base64_data_url(image_path):
    _, file_extension = os.path.splitext(image_path)
    file_type = file_extension[1:]

    with open(image_path, "rb") as f:
        enc_img = base64.b64encode(f.read()).decode("utf-8")
        enc_img = f"data:image/{file_type};base64,{enc_img}"
    return enc_img


image_path = "<YOUR IMAGE PATH>"
base64_url = image_to_base64_data_url(image_path)
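
One caveat with the helper above: it builds the MIME type directly from the file extension, so a .jpg file produces the non-standard type image/jpg. As a minimal alternative sketch (not part of the guide), Python's standard mimetypes module can resolve the registered MIME type instead:

PYTHON
# Alternative sketch: resolve the MIME type with the standard library
# instead of reusing the raw file extension.
import base64
import mimetypes


def image_to_data_url(image_path):
    # guess_type maps ".jpg" -> "image/jpeg", ".png" -> "image/png", etc.
    mime_type, _ = mimetypes.guess_type(image_path)
    if mime_type is None:
        raise ValueError(f"Could not determine the MIME type for {image_path}")
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{mime_type};base64,{encoded}"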

2. Call the Embed Endpoint

PYTHON
# Import the necessary packages
import cohere

co = cohere.ClientV2(api_key="<YOUR API KEY>")

# Format the image as an input object
image_input = {
    "content": [
        {"type": "image_url", "image_url": {"url": base64_url}}
    ]
}

response = co.embed(
    model="embed-v4.0",
    inputs=[image_input],
    input_type="search_document",
    embedding_types=["float"],
)
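
To run the semantic search this guide describes, you can embed a text query with input_type="search_query" and score it against the image embedding with cosine similarity. Below is a minimal sketch, assuming the co client and response from the step above; the query text and similarity code are illustrative, not part of the guide.

PYTHON
import numpy as np

# Embed a text query; note input_type="search_query" rather than "search_document"
query_response = co.embed(
    model="embed-v4.0",
    texts=["line chart of monthly revenue"],
    input_type="search_query",
    embedding_types=["float"],
)

# Pull the float vectors out of both responses
image_vec = np.array(response.embeddings.float_[0])
query_vec = np.array(query_response.embeddings.float_[0])

# Cosine similarity: higher scores indicate a closer semantic match
score = np.dot(image_vec, query_vec) / (
    np.linalg.norm(image_vec) * np.linalg.norm(query_vec)
)
print(score)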

Sample Output

Below is a sample of what the output looks like if you pass in a JPEG with original dimensions of 1080x1350 and a standard bit depth of 24.

JSON
{
  "id": "d8f2b461-79a4-44ee-82e4-be601bbb07be",
  "embeddings": {
    "float_": [[-0.025604248, 0.0154418945, ...]],
    "int8": null,
    "uint8": null,
    "binary": null,
    "ubinary": null
  },
  "texts": [],
  "meta": {
    "api_version": {"version": "2", "is_deprecated": null, "is_experimental": null},
    "billed_units": {
      "input_tokens": null,
      "output_tokens": null,
      "search_units": null,
      "classifications": null,
      "images": 1
    },
    "tokens": null,
    "warnings": null
  },
  "images": [{"width": 1080, "height": 1080, "format": "jpeg", "bit_depth": 24}],
  "response_type": "embeddings_by_type"
}
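
In the Python SDK, the vectors above are available on the response object; the float embeddings surface as the float_ attribute, matching the float_ key in the sample output. A brief sketch, assuming the response variable from step 2:

PYTHON
# Access the float embedding for the first (and only) input
vector = response.embeddings.float_[0]
print(len(vector))  # dimensionality of the embedding vector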