Announcing Cohere's Command A Translate Model

We’re excited to announce the release of Command A Translate, Cohere’s first machine translation model. It achieves state-of-the-art performance at producing accurate, fluent translations across 23 languages.

Key Features

  • 23 supported languages: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Chinese, Arabic, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian
  • 111 billion parameters for superior translation quality
  • 16K token context length (8K input + 8K output) for handling longer texts
  • Optimized for deployment on 1-2 GPUs (A100s/H100s)
  • Secure deployment options for sensitive data translation

Getting Started

The model is available immediately through Cohere’s Chat API endpoint. You can start translating text with simple prompts or integrate it programmatically into your applications.

from cohere import ClientV2

co = ClientV2(api_key="<YOUR API KEY>")

response = co.chat(
    model="command-a-translate-08-2025",
    messages=[
        {
            "role": "user",
            "content": "Translate this text to Spanish: Hello, how are you?",
        }
    ],
)

Availability

Command A Translate (command-a-translate-08-2025) is now available for all Cohere users through our standard API endpoints. For enterprise customers, private deployment options are available to ensure maximum security and control over your translation workflows.

For more detailed information about Command A Translate, including technical specifications and implementation examples, visit our model documentation.

Announcing Cohere's Command A Reasoning Model

We’re excited to announce the release of Command A Reasoning, a hybrid reasoning model designed to excel at complex agentic tasks, in English and 22 other languages. With 111 billion parameters and a 256K context length, this model brings advanced reasoning capabilities to your applications through the familiar Command API interface.

Key Features

  • Tool Use: Provides the strongest tool use performance out of the Command family of models.
  • Agentic Applications: Demonstrates proactive problem-solving, autonomously using tools and resources to complete highly complex tasks.
  • Multilingual: With 23 languages supported, the model solves reasoning and agentic problems in the language your business operates in.

Technical Specifications

  • Model Name: command-a-reasoning-08-2025
  • Context Length: 256K tokens
  • Maximum Output: 32K tokens
  • API Endpoint: Chat API

Getting Started

Integrating Command A Reasoning is straightforward using the Chat API. Here’s a non-streaming example:

from cohere import ClientV2

co = ClientV2("<YOUR_API_KEY>")

prompt = """
Alice has 3 brothers and she also has 2 sisters. How many sisters does Alice's brother have?
"""

response = co.chat(
    model="command-a-reasoning-08-2025",
    messages=[
        {
            "role": "user",
            "content": prompt,
        }
    ],
)

for content in response.message.content:
    if content.type == "thinking":
        print("Thinking:", content.thinking)

    if content.type == "text":
        print("Response:", content.text)

Customization Options

You can enable and disable thinking capabilities using the thinking parameter, and you can steer the model's output with a flexible, user-controlled thinking budget. For more details on token budgets, advanced configurations, and best practices, refer to our dedicated Reasoning documentation.
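
As an illustration, a request that toggles thinking and caps its budget might be assembled as in the sketch below. The shape of the thinking parameter ({"type": ..., "token_budget": ...}) is an assumption based on the Reasoning documentation; verify the exact field names there before relying on them.

```python
# A minimal sketch of assembling a chat request with a thinking budget.
# The `thinking` parameter shape is an assumption from the Reasoning docs.

def build_request(prompt, thinking_enabled, token_budget=None):
    """Assemble kwargs for co.chat(), optionally capping reasoning tokens."""
    thinking = {"type": "enabled"} if thinking_enabled else {"type": "disabled"}
    if thinking_enabled and token_budget is not None:
        thinking["token_budget"] = token_budget  # max tokens spent thinking
    return {
        "model": "command-a-reasoning-08-2025",
        "messages": [{"role": "user", "content": prompt}],
        "thinking": thinking,
    }

# Usage (assuming `co = ClientV2("<YOUR_API_KEY>")` as above):
# response = co.chat(**build_request("Plan a 3-step rollout.", True, 2000))
```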

Announcing Cohere's Command A Vision Model

We’re excited to announce the release of Command A Vision, Cohere’s first commercial model capable of understanding and interpreting visual data alongside text. This addition to our Command family brings enterprise-grade vision capabilities to your applications with the same familiar Command API interface.

Key Features

Multimodal Capabilities

  • Text + Image Processing: Combine text prompts with image inputs
  • Enterprise-Focused Use Cases: Optimized for business applications like document analysis, chart interpretation, and OCR
  • Multiple Languages: Officially supports English, Portuguese, Italian, French, German, and Spanish

Technical Specifications

  • Model Name: command-a-vision-07-2025
  • Context Length: 128K tokens
  • Maximum Output: 8K tokens
  • Image Support: Up to 20 images per request (or 20MB total)
  • API Endpoint: Chat API

What You Can Do

Command A Vision excels in enterprise use cases including:

  • 📊 Chart & Graph Analysis: Extract insights from complex visualizations
  • 📋 Table Understanding: Parse and interpret data tables within images
  • 📄 Document OCR: Optical character recognition with natural language processing
  • 🌐 Image Processing for Multiple Languages: Handle text in images across multiple languages
  • 🔍 Scene Analysis: Identify and describe objects within images

💻 Getting Started

The API structure is identical to our existing Command models, making integration straightforward:

import cohere

co = cohere.ClientV2(api_key="your-api-key")

response = co.chat(
    model="command-a-vision-07-2025",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Analyze this chart and extract the key data points",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "your-image-url"},
                },
            ],
        }
    ],
)

There’s much more to be said about working with images, various limitations, and best practices, which you can find in our dedicated Command A Vision and Image Inputs documents.
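
If your images are local files rather than hosted URLs, one common pattern is to pass them as base64-encoded data URLs in the image_url field. The helper below is a minimal sketch of that pattern, not official API code; confirm supported formats and size limits in the Image Inputs documentation.

```python
# Sketch: encode a local image file as a data: URL for the image_url field.
import base64
import mimetypes

def to_data_url(path):
    """Read an image file and return it as a base64 data: URL string."""
    mime, _ = mimetypes.guess_type(path)
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{mime or 'application/octet-stream'};base64,{payload}"

# Usage in a message part:
# {"type": "image_url", "image_url": {"url": to_data_url("chart.png")}}
```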

Announcing Cutting-Edge Cohere Models on OCI

We are thrilled to announce that the Oracle Cloud Infrastructure (OCI) Generative AI service now supports Cohere's Command A, Rerank v3.5, and multimodal Embed v3.0 models. This marks a major advancement in providing OCI's customers with enterprise-ready AI solutions.

Command A 03-2025 is the most performant Command model to date, delivering 150% higher throughput than its predecessor on only two GPUs.

Embed v3.0 is a cutting-edge AI search model enhanced with multimodal capabilities, allowing it to generate embeddings from both text and images.

Rerank 3.5, Cohere’s newest AI search foundation model, is engineered to improve the precision of enterprise search and retrieval-augmented generation (RAG) systems across a wide range of data formats (such as lengthy documents, emails, tables, JSON, and code) and in over 100 languages.

Check out Oracle’s announcement and documentation for more details.

Announcing Embed Multimodal v4

We’re thrilled to announce the release of Embed v4, the most recent entrant into the Embed family of enterprise-focused embedding models.

Embed v4 is Cohere’s most performant search model to date, and supports the following new features:

  1. Matryoshka Embeddings in the following dimensions: 256, 512, 1024, and 1536
  2. Unified Embeddings produced from mixed-modality input (i.e., a single payload of images and text)
  3. Context length of 128K
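
Matryoshka embeddings mean that shorter prefixes of a full vector remain useful embeddings on their own. As an illustration of that property (local post-processing, not Cohere API code), truncating and L2-renormalizing a vector looks like this:

```python
# Sketch: shrink a Matryoshka embedding locally by keeping a prefix
# and renormalizing. (You can also request a smaller size from the API
# directly; check the Embed API reference for the exact parameter name.)
import math

def truncate_embedding(vec, dim):
    """Keep the first `dim` components and L2-renormalize the result."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head)) or 1.0
    return [x / norm for x in head]
```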

Embed v4 achieves state-of-the-art performance in the following areas:

  1. Text-to-text retrieval
  2. Text-to-image retrieval
  3. Text-to-mixed-modality retrieval (e.g., from PDFs)

Embed v4 is available today on the Cohere Platform, AWS Sagemaker, and Azure AI Foundry. For more information, check out our dedicated blog post here.

Announcing Command A

We’re thrilled to announce the release of Command A, the most recent entrant into the Command family of enterprise-focused large language models (LLMs).

Command A is Cohere’s most performant model to date, excelling at real world enterprise tasks including tool use, retrieval augmented generation (RAG), agents, and multilingual use cases. With 111B parameters and a context length of 256K, Command A boasts a considerable increase in inference-time efficiency — 150% higher throughput compared to its predecessor Command R+ 08-2024 — and only requires two GPUs (A100s / H100s) to run.

Command A is available today on the Cohere Platform, HuggingFace, or through the SDK with command-a-03-2025. For more information, check out our dedicated blog post.

Our Groundbreaking Multimodal Model, Aya Vision, is Here!

Today, Cohere Labs, Cohere’s research arm, is proud to announce Aya Vision, a state-of-the-art multimodal large language model excelling across multiple languages and modalities. Aya Vision outperforms the leading open-weight models in critical benchmarks for multilingual text and image capabilities.

Built as a foundation for multilingual and multimodal communication, this groundbreaking model supports tasks such as image captioning, visual question answering, text generation, and translating both text and images into coherent text.

Find more information about Aya Vision here.

Cohere Releases Arabic-Optimized Command Model!

Cohere is thrilled to announce the release of Command R7B Arabic (c4ai-command-r7b-12-2024). This is an open-weights release of an advanced, 8-billion-parameter custom model optimized for the Arabic language (MSA dialect), in addition to English. As with Cohere's other Command models, this one comes with a context length of 128,000 tokens; it excels at a number of critical enterprise tasks, including instruction following, length control, retrieval-augmented generation (RAG), and minimizing code-switching, and it demonstrates excellent general-purpose knowledge and understanding of the Arabic language and culture.

Try Command R7B Arabic

If you want to try Command R7B Arabic, it’s very easy: you can use it through the Cohere playground or in our dedicated Hugging Face Space.

Alternatively, you can use the model in your own code. To do that, first install the transformers library from its source repository:

pip install 'git+https://github.com/huggingface/transformers.git'

Then, use this Python snippet to run a simple text-generation task with the model:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/c4ai-command-r7b-12-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format message with the c4ai-command-r7b-12-2024 chat template
messages = [{"role": "user", "content": "مرحبا، كيف حالك؟"}]  # "Hello, how are you?"
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)

Chat Capabilities

Command R7B Arabic can be operated in two modes: “conversational” mode and “instruct” mode.

  • Conversational mode conditions the model on interactive behaviour, meaning it is expected to reply in a conversational fashion, provide introductory statements and follow-up questions, and use Markdown as well as LaTeX where appropriate. This mode is optimized for interactive experiences, such as chatbots, where the model engages in dialogue.
  • Instruct mode conditions the model to provide concise yet comprehensive responses, and to not use Markdown or LaTeX by default. This mode is designed for non-interactive, task-focused use cases such as extracting information, summarizing text, translation, and categorization.

Multilingual RAG Capabilities

Command R7B Arabic has been trained specifically for Arabic and English tasks, such as the generation step of Retrieval Augmented Generation (RAG).

Command R7B Arabic’s RAG functionality is supported through chat templates in Transformers. Using our RAG chat template, the model takes a conversation (with an optional user-supplied system preamble) and a list of document snippets as input. The resulting output contains a response with in-line citations. Here’s what that looks like:

# Define conversation input
conversation = [
    {
        "role": "user",
        # "Suggest a dish that blends flavors from several Arab countries"
        "content": "اقترح طبقًا يمزج نكهات من عدة دول عربية",
    }
]

# Define documents for retrieval-based generation
documents = [
    {
        # "Arab cuisine: our traditional dishes"
        "heading": "المطبخ العربي: أطباقنا التقليدية",
        "body": "يشتهر المطبخ العربي بأطباقه الغنية والنكهات الفريدة. في هذا المقال، سنستكشف ...",
    },
    {
        # "Today's recipe: maqluba"
        "heading": "وصفة اليوم: مقلوبة",
        "body": "المقلوبة هي طبق فلسطيني تقليدي، يُحضر من الأرز واللحم أو الدجاج والخضروات. في وصفتنا اليوم ...",
    },
]

# Render the RAG prompt as a string (return_tensors is ignored when
# tokenize=False, so it is omitted here)
input_prompt = tokenizer.apply_chat_template(
    conversation=conversation,
    documents=documents,
    tokenize=False,
    add_generation_prompt=True,
)
# Tokenize the prompt
input_ids = tokenizer(input_prompt, return_tensors="pt")

You can then generate text from this input as normal.

Notes on Usage

We recommend that document snippets be short chunks (around 100-400 words each) rather than long documents. They should also be formatted as key-value pairs, where the keys are short descriptive strings and the values are text or semi-structured content.

You may find that simply including relevant documents directly in a user message works as well as or better than using the documents parameter to render the special RAG template (though the template is a strong default for those wanting citations). We encourage users to experiment with both approaches, and to evaluate which mode works best for their specific use case.
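
As a sketch of the first approach, here is one hypothetical way to fold snippets directly into the user message. The inline_documents helper and its layout are our own illustration, not a prescribed template; experiment with formatting that suits your data.

```python
# Sketch: inline document snippets into the user turn as plain text,
# instead of passing them via the `documents` parameter.

def inline_documents(question, documents):
    """Prepend heading/body snippets to the question as plain-text context."""
    parts = []
    for i, doc in enumerate(documents):
        parts.append(f"Document {i} ({doc['heading']}):\n{doc['body']}")
    context = "\n\n".join(parts)
    return f"{context}\n\nUsing the documents above, answer:\n{question}"

# Usage: messages = [{"role": "user", "content": inline_documents(q, docs)}]
```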

Cohere via OpenAI SDK Using Compatibility API

Today, we are releasing our Compatibility API, enabling developers to seamlessly use Cohere’s models via OpenAI’s SDK.

This API enables you to switch your existing OpenAI-based applications to use Cohere’s models without major refactoring.

It includes comprehensive support for chat completions, such as function calling and structured outputs, as well as support for text embeddings generation.

Check out our documentation on how to get started with the Compatibility API, with examples in Python, TypeScript, and cURL.
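
As a sketch, the switch typically amounts to pointing the OpenAI SDK at Cohere's endpoint with a Cohere API key. The base URL below is an assumption; confirm the exact value in the Compatibility API documentation before use.

```python
# Sketch: the only client-side changes from a stock OpenAI-SDK app are
# the base URL (assumed value; verify in the Compatibility API docs)
# and the API key.
COHERE_COMPAT = {
    "base_url": "https://api.cohere.ai/compatibility/v1",
    "api_key": "<YOUR_COHERE_API_KEY>",
}

# from openai import OpenAI
# client = OpenAI(**COHERE_COMPAT)
# response = client.chat.completions.create(
#     model="command-a-03-2025",
#     messages=[{"role": "user", "content": "Hello!"}],
# )
```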

Cohere's Rerank v3.5 Model is on Azure AI Foundry!

In December 2024, Cohere released the Rerank v3.5 model. It demonstrates state-of-the-art (SOTA) performance on multilingual retrieval, reasoning, and tasks in domains as varied as finance, eCommerce, hospitality, project management, and email/messaging retrieval.

This model has been available through the Cohere API, but today we’re pleased to announce that it can also be utilized through Microsoft Azure’s AI Foundry!

You can find more information about using Cohere's Rerank models on AI Foundry here.