
Announcing Embed Multimodal v4

We’re thrilled to announce the release of Embed v4, the most recent entrant into the Embed family of enterprise-focused large language models (LLMs).

Embed v4 is Cohere’s most performant search model to date and supports the following new features:

  1. Matryoshka embeddings in the following dimensions: [256, 512, 1024, 1536]
  2. Unified embeddings produced from mixed-modality input, i.e., a single payload combining image(s) and text(s) (see the usage sketch after this list)
  3. Context length of 128k tokens
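
As a rough illustration of how features 1 and 2 might look from the Python SDK, here is a minimal sketch. The model id (`embed-v4.0`), the `output_dimension` parameter, and the shape of the mixed-modality `inputs` payload are assumptions based on the feature list above, not a confirmed API reference; consult the Embed API documentation for the exact names.

```python
import base64
import cohere

co = cohere.ClientV2(api_key="YOUR_API_KEY")  # v2 Python SDK client

# 1) Matryoshka embeddings: request a smaller output dimension for a text input.
#    "embed-v4.0" and "output_dimension" are assumed names -- verify against the docs.
text_resp = co.embed(
    model="embed-v4.0",
    input_type="search_document",
    embedding_types=["float"],
    output_dimension=512,  # any of 256, 512, 1024, 1536
    texts=["Quarterly revenue grew 12% year over year."],
)

# 2) Unified embeddings from mixed-modality input: one payload holding text and an image.
#    The content-block structure below is an assumption about how text and images combine.
with open("report_page.png", "rb") as f:
    image_uri = "data:image/png;base64," + base64.b64encode(f.read()).decode()

mixed_resp = co.embed(
    model="embed-v4.0",
    input_type="search_document",
    embedding_types=["float"],
    inputs=[
        {
            "content": [
                {"type": "text", "text": "Figure 3: revenue by region"},
                {"type": "image_url", "image_url": {"url": image_uri}},
            ]
        }
    ],
)
```

Because Matryoshka embeddings are nested, a vector requested at 1536 dimensions can also be truncated to 1024, 512, or 256 dimensions downstream, trading a small amount of retrieval quality for lower storage and faster search.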

Embed v4 achieves state-of-the-art performance in the following areas:

  1. Text-to-text retrieval
  2. Text-to-image retrieval
  3. Text-to-mixed-modality retrieval (e.g., from PDFs)

Embed v4 is available today on the Cohere Platform, AWS SageMaker, and Azure AI Foundry. For more information, check out our dedicated blog post.