Announcing the Cohere Transcribe model
We’re pleased to announce the release of Cohere Transcribe, our first transcription model. Cohere Transcribe specializes in audio-in, text-out, automatic speech recognition (ASR).
Technical details
- Model name: cohere-transcribe-03-2026
- Input: Audio waveform
- Output: Text
- Languages covered: English, German, French, Italian, Spanish, Portuguese, Greek, Dutch, Polish, Vietnamese, Chinese, Arabic, Japanese, Korean.
- License: Apache 2.0
- API endpoint: Audio Transcriptions API
Getting started
The model is available immediately through Cohere’s Audio Transcriptions API endpoint. You can start transcribing audio using the following example query:
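The query below is a minimal sketch, not a verbatim reference: the endpoint path, request fields, and response shape are assumptions modeled on common audio-transcription APIs, so confirm them against the Audio Transcriptions API documentation before use.

```python
# Hypothetical sketch of an Audio Transcriptions request.
# The URL path, form fields, and response key are ASSUMPTIONS,
# not confirmed details from Cohere's API reference.
API_URL = "https://api.cohere.com/v2/audio/transcriptions"  # assumed path

def build_transcription_request(audio_path: str, api_key: str) -> dict:
    """Assemble the pieces of a transcription request for the assumed endpoint."""
    return {
        "url": API_URL,
        "headers": {"Authorization": f"Bearer {api_key}"},
        "data": {"model": "cohere-transcribe-03-2026"},
        "files": {"file": audio_path},  # would be an open file handle in practice
    }

# To actually send it, you might use requests:
#   import requests
#   with open("meeting.wav", "rb") as f:
#       req = build_transcription_request("meeting.wav", "YOUR_API_KEY")
#       resp = requests.post(req["url"], headers=req["headers"],
#                            data=req["data"], files={"file": f})
#       print(resp.json()["text"])  # assumed response field
```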
Availability
You can access Cohere Transcribe via our API for free, low-setup experimentation subject to rate limits. See the Different Types of API Keys and Rate Limits page for usage details and integration guidance.
For production deployment without rate limits, provision a dedicated Model Vault. This enables low-latency, private cloud inference without having to manage infrastructure. Pricing is calculated per hour-instance, with discounted plans for longer-term commitments. Contact our team to discuss your requirements.
Cohere's Rerank v4.0 Model is Here!
We’re pleased to announce the release of Rerank 4.0, our newest and most performant foundation model for ranking.
Technical Details
- Two model variants available:
  - rerank-v4.0-pro: Optimized for state-of-the-art quality and complex use cases
  - rerank-v4.0-fast: Optimized for low-latency, high-throughput use cases
- Multilingual support: Re-rank both English and non-English documents
- Semi-structured data support: Re-rank JSON documents
- Extended context length: 32k token context window
Example Query
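The sketch below shows the shape of a rerank request; the payload fields follow Cohere's existing Rerank API conventions, but treat the exact names as assumptions and verify them against the Rerank API reference.

```python
# Hedged sketch of a Rerank 4.0 query payload; field names are assumed
# to match earlier Rerank API versions.
def build_rerank_request(query: str, documents: list, top_n: int = 3) -> dict:
    return {
        "model": "rerank-v4.0-pro",  # or "rerank-v4.0-fast" for low latency
        "query": query,
        "documents": documents,
        "top_n": top_n,
    }

payload = build_rerank_request(
    "What is the capital of France?",
    [
        "Paris is the capital of France.",
        "Berlin is the capital of Germany.",
        '{"city": "Paris", "country": "France"}',  # JSON documents are supported
    ],
)
# With the Python SDK this would be sent roughly as:
#   import cohere
#   co = cohere.ClientV2(api_key="YOUR_API_KEY")
#   results = co.rerank(**payload)
```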
Announcing Major Command Deprecations
As part of our ongoing commitment to delivering advanced AI solutions, we are deprecating the following models, features, and API endpoints:
Deprecated Models:
- command-r-03-2024 (and the alias command-r)
- command-r-plus-04-2024 (and the alias command-r-plus)
- command-light
- command
- summarize (refer to the migration guide for alternatives)
For command model replacements, we recommend you use command-r-08-2024, command-r-plus-08-2024, or command-a-03-2025 (which is the strongest-performing model across domains) instead.
Retired Fine-Tuning Capabilities:
All fine-tuning options via the dashboard and API for models including command-light, command, command-r, classify, and rerank are being retired. Previously fine-tuned models will no longer be accessible.
Deprecated Features and API Endpoints:
- /v1/connectors (managed connectors for RAG)
- /v1/chat parameters: connectors, search_queries_only
- /v1/generate (legacy generative endpoint)
- /v1/summarize (legacy summarization endpoint)
- /v1/classify
- Slack App integration
- Coral Web UI (chat.cohere.com and coral.cohere.com)
For questions, reach out to support@cohere.com.
Announcing Cohere's Command A Translate Model
We’re excited to announce the release of Command A Translate, Cohere’s first machine translation model. It achieves state-of-the-art performance at producing accurate, fluent translations across 23 languages.
Key Features
- 23 supported languages: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Chinese, Arabic, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian
- 111 billion parameters for superior translation quality
- 16K token context length (8K input + 8K output) for handling longer texts
- Optimized for deployment on 1-2 GPUs (A100s/H100s)
- Secure deployment options for sensitive data translation
Getting Started
The model is available immediately through Cohere’s Chat API endpoint. You can start translating text with simple prompts or integrate it programmatically into your applications.
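As a starting point, the sketch below builds a translation request for the Chat API. The instruction-style prompt is an assumption for illustration; Command A Translate may accept plainer phrasing, so check the model documentation for the recommended format.

```python
# Minimal sketch of a translation request via the Chat API.
# The prompt wording is an ASSUMPTION for illustration.
def build_translation_request(text: str, target_language: str) -> dict:
    return {
        "model": "command-a-translate-08-2025",
        "messages": [
            {
                "role": "user",
                "content": f"Translate the following text into {target_language}:\n\n{text}",
            }
        ],
    }

payload = build_translation_request("Hello, how are you?", "French")
# To send it with the Python SDK:
#   import cohere
#   co = cohere.ClientV2(api_key="YOUR_API_KEY")
#   response = co.chat(**payload)
#   print(response.message.content[0].text)
```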
Availability
Command A Translate (command-a-translate-08-2025) is now available for all Cohere users through our standard API endpoints. For enterprise customers, private deployment options are available to ensure maximum security and control over your translation workflows.
For more detailed information about Command A Translate, including technical specifications and implementation examples, visit our model documentation.
Announcing Cohere's Command A Reasoning Model
We’re excited to announce the release of Command A Reasoning, a hybrid reasoning model designed to excel at complex agentic tasks, in English and 22 other languages. With 111 billion parameters and a 256K context length, this model brings advanced reasoning capabilities to your applications through the familiar Command API interface.
Key Features
- Tool Use: Provides the strongest tool use performance out of the Command family of models.
- Agentic Applications: Demonstrates proactive problem-solving, autonomously using tools and resources to complete highly complex tasks.
- Multilingual: With 23 languages supported, the model solves reasoning and agentic problems in the language your business operates in.
Technical Specifications
- Model Name: command-a-reasoning-08-2025
- Context Length: 256K tokens
- Maximum Output: 32K tokens
- API Endpoint: Chat API
Getting Started
Integrating Command A Reasoning is straightforward using the Chat API. Here’s a non-streaming example:
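A minimal non-streaming sketch is shown below; the request shape follows Cohere's Chat API conventions, and the SDK call in the comment assumes the Python SDK v2 client (cohere.ClientV2).

```python
# Non-streaming Chat API sketch for Command A Reasoning.
def build_reasoning_request(prompt: str) -> dict:
    return {
        "model": "command-a-reasoning-08-2025",
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_reasoning_request(
    "A farmer has 17 sheep. All but 9 run away. How many are left?"
)
# To send it with the Python SDK:
#   import cohere
#   co = cohere.ClientV2(api_key="YOUR_API_KEY")
#   response = co.chat(**payload)
#   print(response.message.content[-1].text)
```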
Customization Options
You can enable and disable thinking capabilities using the thinking parameter, and steer the model’s output with a flexible user-controlled thinking budget; for more details on token budgets, advanced configurations, and best practices, refer to our dedicated Reasoning documentation.
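To illustrate the toggle described above, the helper below builds a thinking configuration. The exact field names ("type", "token_budget") are assumptions for illustration only; the authoritative shapes live in the Reasoning documentation.

```python
# Illustrative only: the field names below are ASSUMPTIONS, not the
# documented schema for the `thinking` parameter.
def thinking_config(enabled, token_budget=None):
    """Build an assumed `thinking` parameter value with an optional token budget."""
    if not enabled:
        return {"type": "disabled"}
    cfg = {"type": "enabled"}
    if token_budget is not None:
        cfg["token_budget"] = token_budget  # user-controlled thinking budget
    return cfg

# Example: cap the model's internal reasoning at 4096 tokens.
request_extra = {"thinking": thinking_config(True, token_budget=4096)}
```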
Announcing Cohere's Command A Vision Model
We’re excited to announce the release of Command A Vision, Cohere’s first commercial model capable of understanding and interpreting visual data alongside text. This addition to our Command family brings enterprise-grade vision capabilities to your applications with the same familiar Command API interface.
Key Features
Multimodal Capabilities
- Text + Image Processing: Combine text prompts with image inputs
- Enterprise-Focused Use Cases: Optimized for business applications like document analysis, chart interpretation, and OCR
- Multiple Languages: Officially supports English, Portuguese, Italian, French, German, and Spanish
Technical Specifications
- Model Name: command-a-vision-07-2025
- Context Length: 128K tokens
- Maximum Output: 8K tokens
- Image Support: Up to 20 images per request (or 20MB total)
- API Endpoint: Chat API
What You Can Do
Command A Vision excels in enterprise use cases including:
- 📊 Chart & Graph Analysis: Extract insights from complex visualizations
- 📋 Table Understanding: Parse and interpret data tables within images
- 📄 Document OCR: Optical character recognition with natural language processing
- 🌐 Image Processing for Multiple Languages: Handle text in images across multiple languages
- 🔍 Scene Analysis: Identify and describe objects within images
💻 Getting Started
The API structure is identical to our existing Command models, making integration straightforward:
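The sketch below combines a text prompt with a base64-encoded image in a single Chat API message. The content-block shape mirrors common multimodal Chat API conventions; verify the exact field names against the Command A Vision and Image Inputs documents.

```python
import base64

# Hedged sketch of a text + image Chat API request; the content-block
# field names are assumptions modeled on common multimodal APIs.
def build_vision_request(prompt: str, image_bytes: bytes, mime: str = "image/png") -> dict:
    data_url = f"data:{mime};base64,{base64.b64encode(image_bytes).decode()}"
    return {
        "model": "command-a-vision-07-2025",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": data_url}},
                ],
            }
        ],
    }

# Usage: read an image file and ask for an analysis.
#   with open("chart.png", "rb") as f:
#       payload = build_vision_request("Summarize this chart.", f.read())
```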
There’s much more to be said about working with images, various limitations, and best practices, which you can find in our dedicated Command A Vision and Image Inputs documents.
Announcing Cutting-Edge Cohere Models on OCI
We are thrilled to announce that the Oracle Cloud Infrastructure (OCI) Generative AI service now supports Cohere Command A, Rerank v3.5, and multimodal Embed v3.0. This marks a major advancement in providing OCI’s customers with enterprise-ready AI solutions.
Command A 03-2025 is the most performant Command model to date, delivering 150% higher throughput than its predecessor while running on only two GPUs.
Embed v3.0 is a cutting-edge AI search model enhanced with multimodal capabilities, allowing it to generate embeddings from both text and images.
Rerank 3.5, Cohere’s newest AI search foundation model, is engineered to improve the precision of enterprise search and retrieval-augmented generation (RAG) systems across a wide range of data formats (such as lengthy documents, emails, tables, JSON, and code) and in over 100 languages.
Check out Oracle’s announcement and documentation for more details.
Announcing Embed Multimodal v4
We’re thrilled to announce the release of Embed 4, the most recent entrant into the Embed family of enterprise-focused large language models (LLMs).
Embed v4 is Cohere’s most performant search model to date, and supports the following new features:
- Matryoshka Embeddings in the following dimensions: 256, 512, 1024, and 1536
- Unified Embeddings produced from mixed-modality input (i.e., a single payload combining images and text)
- Context length of 128k
Embed v4 achieves state-of-the-art performance in the following areas:
- Text-to-text retrieval
- Text-to-image retrieval
- Text-to-mixed modality retrieval (from e.g. PDFs)
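The Matryoshka dimensions listed above can be sketched as a request option: a smaller output dimension trades some quality for lower storage and faster search. The field names below follow Embed API conventions but are assumptions; confirm them against the Embed reference.

```python
# Hedged sketch of an Embed v4 request; `output_dimension` selects one of
# the Matryoshka sizes. Field names are assumed from prior Embed API versions.
MATRYOSHKA_DIMS = (256, 512, 1024, 1536)

def build_embed_request(texts: list, output_dimension: int = 1024) -> dict:
    if output_dimension not in MATRYOSHKA_DIMS:
        raise ValueError(f"output_dimension must be one of {MATRYOSHKA_DIMS}")
    return {
        "model": "embed-v4.0",
        "input_type": "search_document",
        "texts": texts,
        "output_dimension": output_dimension,
    }

# Example: compact 256-dim embeddings for a large, storage-sensitive index.
payload = build_embed_request(["quarterly revenue report"], output_dimension=256)
```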
Embed v4 is available today on the Cohere Platform, AWS SageMaker, and Azure AI Foundry. For more information, check out our dedicated blog post.
Announcing Command A
We’re thrilled to announce the release of Command A, the most recent entrant into the Command family of enterprise-focused large language models (LLMs).
Command A is Cohere’s most performant model to date, excelling at real world enterprise tasks including tool use, retrieval augmented generation (RAG), agents, and multilingual use cases. With 111B parameters and a context length of 256K, Command A boasts a considerable increase in inference-time efficiency — 150% higher throughput compared to its predecessor Command R+ 08-2024 — and only requires two GPUs (A100s / H100s) to run.
Command A is available today on the Cohere Platform and Hugging Face, or through the SDK as command-a-03-2025. For more information, check out our dedicated blog post.
Our Groundbreaking Multimodal Model, Aya Vision, is Here!
Today, Cohere Labs, Cohere’s research arm, is proud to announce Aya Vision, a state-of-the-art multimodal large language model excelling across multiple languages and modalities. Aya Vision outperforms the leading open-weight models in critical benchmarks for language, text, and image capabilities.
Built as a foundation for multilingual and multimodal communication, this groundbreaking AI model supports tasks such as image captioning, visual question answering, text generation, and translation of both text and images into coherent output.
Find more information about Aya Vision here.