Announcing the Cohere Transcribe model

We’re pleased to announce the release of Cohere Transcribe, our first transcription model. Cohere Transcribe specializes in audio-in, text-out automatic speech recognition (ASR).

Technical details

  • Model name: cohere-transcribe-03-2026
  • Input: Audio waveform
  • Output: Text
  • Languages covered: English, German, French, Italian, Spanish, Portuguese, Greek, Dutch, Polish, Vietnamese, Chinese, Arabic, Japanese, Korean
  • License: Apache 2.0
  • API endpoint: Audio Transcriptions API

Getting started

The model is available immediately through Cohere’s Audio Transcriptions API endpoint. You can start transcribing audio using the following example query:

```python
import cohere

co = cohere.ClientV2()

response = co.audio.transcriptions.create(
    model="cohere-transcribe-03-2026",
    language="en",
    file=open("./sample.wav", "rb"),
)

print(response)
```

Availability

You can access Cohere Transcribe via our API for free, low-setup experimentation subject to rate limits. See the Different Types of API Keys and Rate Limits page for usage details and integration guidance.

For production deployment without rate limits, provision a dedicated Model Vault. This enables low-latency, private cloud inference without having to manage infrastructure. Pricing is calculated per hour-instance, with discounted plans for longer-term commitments. Contact our team to discuss your requirements.


Cohere's Rerank v4.0 Model is Here!

We’re pleased to announce the release of Rerank 4.0, our newest and most performant foundation model for ranking.

Technical Details

  • Two model variants available:
    • rerank-v4.0-pro: Optimized for state-of-the-art quality and complex use-cases
    • rerank-v4.0-fast: Optimized for low latency and high throughput use-cases
  • Multilingual support: Re-rank both English and non-English documents
  • Semi-structured data support: Re-rank JSON documents
  • Extended context length: 32k token context window

Example Query

```python
import cohere

co = cohere.ClientV2()

query = "What is the capital of the United States?"
docs = [
    "Carson City is the capital city of the American state of Nevada. At the 2010 United States Census, Carson City had a population of 55,274.",
    "The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean that are a political division controlled by the United States. Its capital is Saipan.",
    "Charlotte Amalie is the capital and largest city of the United States Virgin Islands. It has about 20,000 people. The city is on the island of Saint Thomas.",
    "Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district. The President of the USA and many major national government offices are in the territory. This makes it the political center of the United States of America.",
    "Capital punishment has existed in the United States since before the United States was a country. As of 2017, capital punishment is legal in 30 of the 50 states. The federal government (including the United States military) also uses capital punishment.",
]

results = co.rerank(
    model="rerank-v4.0-pro", query=query, documents=docs, top_n=5
)
```
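The semi-structured data support noted above means JSON records can be reranked once serialized to strings. Here is a minimal sketch; the product records and the `rerank_products` helper are illustrative, not part of the API:

```python
import json

# Illustrative product records; rerank v4.0 can score JSON documents
# serialized as strings.
records = [
    {"title": "Wireless Mouse", "description": "Ergonomic 2.4 GHz mouse"},
    {"title": "USB-C Hub", "description": "7-in-1 hub with HDMI and Ethernet"},
]
docs = [json.dumps(r) for r in records]

def rerank_products(query: str, top_n: int = 2):
    # Requires an API key (e.g. via the CO_API_KEY environment variable);
    # the import is kept local so the sketch loads without the SDK installed.
    import cohere

    co = cohere.ClientV2()
    return co.rerank(
        model="rerank-v4.0-fast", query=query, documents=docs, top_n=top_n
    )
```

The fast variant is a natural fit here, since catalog search is typically latency-sensitive.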

Announcing Major Command Deprecations

As part of our ongoing commitment to delivering advanced AI solutions, we are deprecating the following models, features, and API endpoints:

Deprecated Models:

  • command-r-03-2024 (and the alias command-r)
  • command-r-plus-04-2024 (and the alias command-r-plus)
  • command-light
  • command
  • summarize (Refer to the migration guide for alternatives).

As replacements for the deprecated Command models, we recommend command-r-08-2024, command-r-plus-08-2024, or command-a-03-2025 (the strongest-performing model across domains).
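For most callers, migrating is a one-line change of the model identifier. The helper below is an illustrative sketch, not a prescribed pattern:

```python
def chat(prompt: str) -> str:
    # Same Chat API call shape as before; only the model name changes
    # from the deprecated alias to a supported replacement.
    import cohere  # local import keeps the sketch self-contained

    co = cohere.ClientV2()  # reads CO_API_KEY from the environment
    response = co.chat(
        model="command-a-03-2025",  # was: model="command-r"
        messages=[{"role": "user", "content": prompt}],
    )
    return response.message.content[0].text
```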

Retired Fine-Tuning Capabilities: All fine-tuning options via dashboard and API for models including command-light, command, command-r, classify, and rerank are being retired. Previously fine-tuned models will no longer be accessible.

Deprecated Features and API Endpoints:

  • /v1/connectors (Managed connectors for RAG)
  • /v1/chat parameters: connectors, search_queries_only
  • /v1/generate (Legacy generative endpoint)
  • /v1/summarize (Legacy summarization endpoint)
  • /v1/classify
  • Slack App integration
  • Coral Web UI (chat.cohere.com and coral.cohere.com)

For questions, reach out to support@cohere.com.


Announcing Cohere's Command A Translate Model

We’re excited to announce the release of Command A Translate, Cohere’s first machine translation model. It achieves state-of-the-art performance at producing accurate, fluent translations across 23 languages.

Key Features

  • 23 supported languages: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Chinese, Arabic, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian
  • 111 billion parameters for superior translation quality
  • 16K token context length (8K input + 8K output) for handling longer texts
  • Optimized for deployment on 1-2 GPUs (A100s/H100s)
  • Secure deployment options for sensitive data translation

Getting Started

The model is available immediately through Cohere’s Chat API endpoint. You can start translating text with simple prompts or integrate it programmatically into your applications.

```python
from cohere import ClientV2

co = ClientV2(api_key="<YOUR API KEY>")

response = co.chat(
    model="command-a-translate-08-2025",
    messages=[
        {
            "role": "user",
            "content": "Translate this text to Spanish: Hello, how are you?",
        }
    ],
)

print(response.message.content[0].text)
```

Availability

Command A Translate (command-a-translate-08-2025) is now available for all Cohere users through our standard API endpoints. For enterprise customers, private deployment options are available to ensure maximum security and control over your translation workflows.

For more detailed information about Command A Translate, including technical specifications and implementation examples, visit our model documentation.


Announcing Cohere's Command A Reasoning Model

We’re excited to announce the release of Command A Reasoning, a hybrid reasoning model designed to excel at complex agentic tasks, in English and 22 other languages. With 111 billion parameters and a 256K context length, this model brings advanced reasoning capabilities to your applications through the familiar Command API interface.

Key Features

  • Tool Use: Provides the strongest tool use performance out of the Command family of models.
  • Agentic Applications: Demonstrates proactive problem-solving, autonomously using tools and resources to complete highly complex tasks.
  • Multilingual: With 23 languages supported, the model solves reasoning and agentic problems in the language your business operates in.

Technical Specifications

  • Model Name: command-a-reasoning-08-2025
  • Context Length: 256K tokens
  • Maximum Output: 32K tokens
  • API Endpoint: Chat API

Getting Started

Integrating Command A Reasoning is straightforward using the Chat API. Here’s a non-streaming example:

```python
from cohere import ClientV2

co = ClientV2("<YOUR_API_KEY>")

prompt = """
Alice has 3 brothers and she also has 2 sisters. How many sisters does Alice's brother have?
"""

response = co.chat(
    model="command-a-reasoning-08-2025",
    messages=[
        {
            "role": "user",
            "content": prompt,
        }
    ],
)

for content in response.message.content:
    if content.type == "thinking":
        print("Thinking:", content.thinking)

    if content.type == "text":
        print("Response:", content.text)
```

Customization Options

You can enable and disable thinking capabilities using the thinking parameter, and steer the model’s output with a flexible, user-controlled thinking budget. For more details on token budgets, advanced configurations, and best practices, refer to our dedicated Reasoning documentation.


Announcing Cohere's Command A Vision Model

We’re excited to announce the release of Command A Vision, Cohere’s first commercial model capable of understanding and interpreting visual data alongside text. This addition to our Command family brings enterprise-grade vision capabilities to your applications with the same familiar Command API interface.

Key Features

Multimodal Capabilities

  • Text + Image Processing: Combine text prompts with image inputs
  • Enterprise-Focused Use Cases: Optimized for business applications like document analysis, chart interpretation, and OCR
  • Multiple Languages: Officially supports English, Portuguese, Italian, French, German, and Spanish

Technical Specifications

  • Model Name: command-a-vision-07-2025
  • Context Length: 128K tokens
  • Maximum Output: 8K tokens
  • Image Support: Up to 20 images per request (or 20MB total)
  • API Endpoint: Chat API

What You Can Do

Command A Vision excels in enterprise use cases including:

  • 📊 Chart & Graph Analysis: Extract insights from complex visualizations
  • 📋 Table Understanding: Parse and interpret data tables within images
  • 📄 Document OCR: Optical character recognition with natural language processing
  • 🌐 Image Processing for Multiple Languages: Handle text in images across multiple languages
  • 🔍 Scene Analysis: Identify and describe objects within images

💻 Getting Started

The API structure is identical to our existing Command models, making integration straightforward:

```python
import cohere

co = cohere.ClientV2("your-api-key")

response = co.chat(
    model="command-a-vision-07-2025",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Analyze this chart and extract the key data points",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "your-image-url"},
                },
            ],
        }
    ],
)
```

There’s much more to be said about working with images, various limitations, and best practices, which you can find in our dedicated Command A Vision and Image Inputs documents.
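Hosted URLs are not the only option: an image can also be passed inline as a base64 data URL. A small helper sketch (our own convenience function, not part of the SDK):

```python
import base64

def to_data_url(path: str, mime: str = "image/png") -> str:
    # Encode a local image as a data URL suitable for the "image_url"
    # content block; stay within the 20MB per-request limit noted above.
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{mime};base64,{encoded}"

# Usage in a message:
# {"type": "image_url", "image_url": {"url": to_data_url("chart.png")}}
```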


Announcing Cutting-Edge Cohere Models on OCI

We are thrilled to announce that the Oracle Cloud Infrastructure (OCI) Generative AI service now supports Cohere’s Command A, Rerank v3.5, and multimodal Embed v3.0 models. This marks a major advancement in providing OCI’s customers with enterprise-ready AI solutions.

Command A 03-2025 is the most performant Command model to date, delivering 150% of the throughput of its predecessor on only two GPUs.

Embed v3.0 is a cutting-edge AI search model enhanced with multimodal capabilities, allowing it to generate embeddings from both text and images.

Rerank 3.5, Cohere’s newest AI search foundation model, is engineered to improve the precision of enterprise search and retrieval-augmented generation (RAG) systems across a wide range of data formats (such as lengthy documents, emails, tables, JSON, and code) and in over 100 languages.

Check out Oracle’s announcement and documentation for more details.


Announcing Embed Multimodal v4

We’re thrilled to announce the release of Embed 4, the most recent entrant into the Embed family of enterprise-focused large language models (LLMs).

Embed v4 is Cohere’s most performant search model to date, and supports the following new features:

  1. Matryoshka Embeddings in the following dimensions: 256, 512, 1024, and 1536
  2. Unified Embeddings produced from mixed-modality input (i.e., a single payload of image(s) and text(s))
  3. Context length of 128K tokens

Embed v4 achieves state-of-the-art performance in the following areas:

  1. Text-to-text retrieval
  2. Text-to-image retrieval
  3. Text-to-mixed modality retrieval (from e.g. PDFs)

Embed v4 is available today on the Cohere Platform, AWS SageMaker, and Azure AI Foundry. For more information, check out our dedicated blog post.
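To pick one of the Matryoshka sizes at request time, the embeddings endpoint accepts an output dimension. A minimal sketch, with the model identifier and parameter names as we understand the current Embed API (verify against the Embed reference):

```python
MATRYOSHKA_DIMS = (256, 512, 1024, 1536)

def embed_texts(texts: list[str], dim: int = 512):
    # Guard against dimensions Embed v4 does not emit.
    if dim not in MATRYOSHKA_DIMS:
        raise ValueError(f"dim must be one of {MATRYOSHKA_DIMS}")
    import cohere  # local import keeps the sketch self-contained

    co = cohere.ClientV2()
    return co.embed(
        model="embed-v4.0",
        input_type="search_document",
        texts=texts,
        output_dimension=dim,
        embedding_types=["float"],
    )
```

Smaller dimensions trade a little retrieval quality for lower storage and faster similarity search, which is the point of Matryoshka training.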


Announcing Command A

We’re thrilled to announce the release of Command A, the most recent entrant into the Command family of enterprise-focused large language models (LLMs).

Command A is Cohere’s most performant model to date, excelling at real world enterprise tasks including tool use, retrieval augmented generation (RAG), agents, and multilingual use cases. With 111B parameters and a context length of 256K, Command A boasts a considerable increase in inference-time efficiency — 150% higher throughput compared to its predecessor Command R+ 08-2024 — and only requires two GPUs (A100s / H100s) to run.

Command A is available today on the Cohere Platform and Hugging Face, or through the SDK with command-a-03-2025. For more information, check out our dedicated blog post.


Our Groundbreaking Multimodal Model, Aya Vision, is Here!

Today, Cohere Labs, Cohere’s research arm, is proud to announce Aya Vision, a state-of-the-art multimodal large language model excelling across multiple languages and modalities. Aya Vision outperforms the leading open-weight models in critical benchmarks for language, text, and image capabilities.

Built as a foundation for multilingual and multimodal communication, this groundbreaking AI model supports tasks such as image captioning, visual question answering, text generation, and translation of both text and images into coherent text.

Find more information about Aya Vision here.