Multimodal Embeddings

This guide uses the Embed API.

You can find the API reference for the Embed API here.

Image capabilities are only available with our embed-v3.0 models.

In this guide, we show you how to use the embed endpoint to embed a series of images. This guide uses a simple dataset of graphs to illustrate how semantic search can be done over images with Cohere. To see an end-to-end example of retrieval, check out this notebook.

Introduction to Multimodal Embeddings

Information is often represented in multiple modalities. A document, for instance, may contain text, images, and graphs, while a product can be described through images, its title, and a written description. This combination of elements often leads to a comprehensive semantic understanding of the subject matter. Traditional embedding models have been limited to a single modality, and even multimodal embedding models often suffer from degradation in text-to-text or text-to-image retrieval tasks. The embed-v3.0 series of models, however, is fully multimodal, enabling it to embed both images and text effectively. We have achieved state-of-the-art performance without compromising text-to-text retrieval capabilities.
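Concretely, semantic search over images works by embedding images and text queries into the same vector space and ranking images by their similarity to the query. A minimal sketch of that ranking step, using toy 3-dimensional vectors in place of real (much higher-dimensional) embeddings:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: dot product of the vectors divided by their norms
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for real image embeddings (hypothetical filenames)
image_embeddings = {
    "revenue_chart.png": [0.9, 0.1, 0.2],
    "org_chart.png": [0.1, 0.8, 0.3],
}

# Toy vector standing in for the embedding of a text query
query_embedding = [0.85, 0.15, 0.25]

# Rank images by similarity to the query; the best match comes first
ranked = sorted(
    image_embeddings.items(),
    key=lambda kv: cosine_similarity(query_embedding, kv[1]),
    reverse=True,
)
print(ranked[0][0])  # the best-matching image
```

In practice you would embed the query with `input_type='search_query'` and compare it against stored image embeddings, but the similarity-and-rank step is the same.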

How to use Multimodal Embeddings

1. Prepare your Image for Embeddings

The Embed API accepts images in the following file formats: PNG, JPEG, WebP, and GIF. Each image must then be formatted as a Data URL.

PYTHON
# Import the necessary packages
import os
import base64

# Convert an image file to a base64-encoded Data URL
def image_to_base64_data_url(image_path):
    # Derive the image type from the file extension (e.g. ".png" -> "png")
    _, file_extension = os.path.splitext(image_path)
    file_type = file_extension[1:]

    # Read the image bytes and base64-encode them
    with open(image_path, "rb") as f:
        enc_img = base64.b64encode(f.read()).decode("utf-8")

    return f"data:image/{file_type};base64,{enc_img}"

image_path = "<YOUR IMAGE PATH>"
processed_image = image_to_base64_data_url(image_path)
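To sanity-check the helper, you can run it against a small throwaway file and confirm the Data URL has the expected shape. A sketch (the file contents here are arbitrary bytes used only to exercise the encoding, not a real image):

```python
import os
import base64
import tempfile

# Same helper as above
def image_to_base64_data_url(image_path):
    _, file_extension = os.path.splitext(image_path)
    file_type = file_extension[1:]
    with open(image_path, "rb") as f:
        enc_img = base64.b64encode(f.read()).decode("utf-8")
    return f"data:image/{file_type};base64,{enc_img}"

# Write a throwaway .png file and inspect the resulting Data URL
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "test.png")
    with open(path, "wb") as f:
        f.write(b"\x89PNG\r\n")  # arbitrary bytes for the shape check
    url = image_to_base64_data_url(path)

print(url.startswith("data:image/png;base64,"))  # True
```

The MIME subtype is taken directly from the file extension, so files should carry an extension matching one of the supported formats.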

2. Call the Embed Endpoint

PYTHON
# Import the necessary packages
import cohere

co = cohere.ClientV2(api_key="<YOUR API KEY>")

co.embed(
    model="embed-english-v3.0",
    images=[processed_image],
    input_type="image",
    embedding_types=["float"],
)

Sample Output

Below is a sample of what the output would look like if you passed in a JPEG with original dimensions of 1080x1350 and a standard bit depth of 24.

JSON
{
  "id": "d8f2b461-79a4-44ee-82e4-be601bbb07be",
  "embeddings": {
    "float_": [[-0.025604248, 0.0154418945, ...]],
    "int8": null,
    "uint8": null,
    "binary": null,
    "ubinary": null
  },
  "texts": [],
  "meta": {
    "api_version": {"version": "2", "is_deprecated": null, "is_experimental": null},
    "billed_units": {
      "input_tokens": null,
      "output_tokens": null,
      "search_units": null,
      "classifications": null,
      "images": 1
    },
    "tokens": null,
    "warnings": null
  },
  "images": [{"width": 1080, "height": 1080, "format": "jpeg", "bit_depth": 24}],
  "response_type": "embeddings_by_type"
}
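The embedding vectors themselves live under the embeddings field, keyed by the requested embedding type, with one vector per input image. A minimal sketch of pulling them out, using a hand-built dictionary shaped like the response above in place of a live API call (with the Python SDK you would typically use attribute access on the response object instead):

```python
# A stand-in dictionary shaped like the sample response above,
# with a truncated 2-element vector for illustration
response = {
    "id": "d8f2b461-79a4-44ee-82e4-be601bbb07be",
    "embeddings": {"float_": [[-0.025604248, 0.0154418945]]},
    "response_type": "embeddings_by_type",
}

# One embedding vector per input image, in request order
image_vectors = response["embeddings"]["float_"]
first_vector = image_vectors[0]
print(len(image_vectors))  # one image was embedded
```

Since `embedding_types=['float']` was requested, only the float embeddings are populated; the other types (`int8`, `binary`, etc.) come back null.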