Release Notes


Command models get an August refresh

Today we’re announcing updates to our flagship generative AI model series: Command R and Command R+. These models demonstrate improved performance on a variety of tasks.

The latest model versions are designated with timestamps, as follows:

  • The updated Command R is command-r-08-2024 on the API.
  • The updated Command R+ is command-r-plus-08-2024 on the API.

In the rest of these release notes, we’ll provide more details about technical enhancements, new features, and new pricing.

Technical Details

command-r-08-2024 shows improved performance for multilingual retrieval-augmented generation (RAG) and tool use. More broadly, command-r-08-2024 is better at math, code, and reasoning, and is competitive with the previous version of the larger Command R+ model.

command-r-08-2024 delivers around 50% higher throughput and 20% lower latencies as compared to the previous Command R version, while cutting the hardware footprint required to serve the model by half. Similarly, command-r-plus-08-2024 delivers roughly 50% higher throughput and 25% lower latencies as compared to the previous Command R+ version, while keeping the hardware footprint the same.

Both models include the following feature improvements:

  • For tool use, command-r-08-2024 and command-r-plus-08-2024 have demonstrated improved decision-making around which tool to use in which context, and whether or not to use a tool.
  • Improved instruction following in the preamble.
  • Improved multilingual RAG: searches are performed in the language of the user, with higher-quality responses.
  • Better analysis and manipulation of structured data.
  • Better structured data creation from unstructured natural language instructions.
  • Improved robustness to non-semantic prompt changes like white space or new lines.
  • The models will decline unanswerable questions.
  • The models have improved citation quality and users can now turn off citations for RAG workflows.
  • For command-r-08-2024 there are meaningful improvements on length and formatting control.

New Feature: Safety Modes

The primary new feature available in both command-r-08-2024 and command-r-plus-08-2024 is Safety Modes (in beta). For our enterprise customers building with our models, what counts as safe depends on the use case and the context in which the model is deployed. To support diverse enterprise applications, we developed Safety Modes, acknowledging that safety and appropriateness are context-dependent, and that predictability and control are critical to building confidence in Cohere models.

Safety guardrails have traditionally been reactive and binary, and we’ve observed that users often have difficulty defining what safe usage means to them for their use case. Safety Modes introduce a nuanced approach that is context sensitive.

(Note: Command R/R+ have built-in protections against core harms, such as content that endangers child safety. These types of harm are always blocked and cannot be adjusted.)

Safety Modes are activated through a safety_mode parameter, which (currently) accepts one of two modes:

  • "STRICT": Encourages avoidance of all sensitive topics. Strict content guardrails provide an extra safe experience by prohibiting inappropriate responses or recommendations. Ideal for general and enterprise use.
  • "CONTEXTUAL" (enabled by default): For wide-ranging interactions with fewer constraints on output, while maintaining core protections. The model responds as instructed but still rejects harmful or illegal suggestions. Well-suited for entertainment, creative, and educational use.

You can also opt out of the safety modes beta by setting safety_mode="NONE". For more information, check out our dedicated guide to Safety Modes.
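As a sketch, here is how a Chat API request body with the new parameter might be assembled in Python. The body mirrors the cURL examples later in these notes; the helper function is illustrative, not part of the SDK.

```python
import json

# Build a Chat API request body with an explicit safety mode.
# Valid values per these notes: "STRICT", "CONTEXTUAL" (default), "NONE".
def build_chat_request(message, model="command-r-08-2024", safety_mode="CONTEXTUAL"):
    if safety_mode not in ("STRICT", "CONTEXTUAL", "NONE"):
        raise ValueError(f"unknown safety_mode: {safety_mode}")
    return {
        "message": message,
        "model": model,
        "safety_mode": safety_mode,
    }

body = build_chat_request("Summarize this incident report.", safety_mode="STRICT")
print(json.dumps(body, indent=2))
```

A real request would then POST this body to the Chat endpoint with your API key.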

Pricing

Here’s a breakdown of the pricing structure for the new models:

  • For command-r-plus-08-2024, input tokens are priced at $2.50/M and output tokens at $10.00/M.
  • For command-r-08-2024, input tokens are priced at $0.15/M and output tokens at $0.60/M.
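To make the per-million-token prices above concrete, here is a minimal cost estimate in Python (the token counts in the example are illustrative):

```python
# USD per million tokens (input, output), from the pricing list above.
PRICES = {
    "command-r-plus-08-2024": (2.50, 10.00),
    "command-r-08-2024": (0.15, 0.60),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimate the USD cost of a request for a given model."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# e.g. 1M input tokens + 200k output tokens on the updated Command R:
print(f"${estimate_cost('command-r-08-2024', 1_000_000, 200_000):.2f}")  # $0.27
```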

Force JSON object response format

Users can now force command-nightly to generate output as a JSON object by setting the response_format parameter in the Chat API. Users can also specify a JSON schema for the output.

This feature is available across all of Cohere’s SDKs (Python, Typescript, Java, Go).

Example request for forcing JSON response format:

cURL
POST https://api.cohere.ai/v1/chat
{
  "message": "Generate a JSON that represents a person, with name and age",
  "model": "command-nightly",
  "response_format": {
    "type": "json_object"
  }
}

Example request for forcing JSON response format in user defined schema:

cURL
POST https://api.cohere.ai/v1/chat
{
  "message": "Generate a JSON that represents a person, with name and age",
  "model": "command-nightly",
  "response_format": {
    "type": "json_object",
    "schema": {
      "type": "object",
      "required": ["name", "age"],
      "properties": {
        "name": { "type": "string" },
        "age": { "type": "integer" }
      }
    }
  }
}

Currently, this feature is only compatible with the `command-nightly` model.
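Even with a schema in the request, it can be useful to verify the returned text on the client side. As a sketch (not a full JSON Schema validator), a light check for the person schema above might look like:

```python
import json

# Verify that a model response parses as JSON and contains the fields
# required by the schema in the request above.
def check_person(raw_text):
    data = json.loads(raw_text)  # raises an error on malformed JSON
    assert isinstance(data.get("name"), str), "missing/invalid 'name'"
    assert isinstance(data.get("age"), int), "missing/invalid 'age'"
    return data

person = check_person('{"name": "Ada", "age": 36}')
print(person["name"], person["age"])
```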


Release Notes for June 10th 2024: Updates to Tool Use, SDKs, Billing

Multi-step tool use now default in Chat API

Tool use is a technique which allows developers to connect Cohere’s Command family of models to external tools like search engines, APIs, functions, databases, etc. It comes in two variants, single-step and multi-step, both of which are available through Cohere’s Chat API.

As of today, tool use is multi-step by default. The resources below can help you get started.

We’ve published additional docs!

Cohere’s models and functionality are always improving, and we’ve recently dropped the following guides to help you make full use of our offering:

  • Predictable outputs - Information about the seed parameter has been added, giving you more control over the predictability of the text generated by Cohere models.
  • Using Cohere SDKs with private cloud models - To maximize convenience in building on and switching between Cohere-supported environments, our SDKs have been developed to allow seamless support of whichever cloud backend you choose. This guide walks you through when you can use Python, Typescript, Go, and Java on Amazon Bedrock, Amazon SageMaker, Azure, and OCI, what features and parameters are supported, etc.
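As a sketch of the "Predictable outputs" feature above, a Chat request body that pins the seed parameter might be built like this (the helper is illustrative; only the seed field comes from the docs mentioned above):

```python
# Build a Chat request body with a fixed seed so repeated calls are
# more reproducible. Field names other than "seed" follow the Chat API
# examples elsewhere in these notes.
def chat_body_with_seed(message, model, seed):
    return {"message": message, "model": model, "seed": seed}

body = chat_body_with_seed("Write a haiku about tides.", "command-r", seed=42)
print(body["seed"])  # 42
```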

Changes to Billing

Going forward, Cohere is implementing the following two billing policies:

  • When a user accrues $150 of outstanding debts, a warning email will be sent alerting them of upcoming charges.
  • When a self-serve customer (i.e. a non-contracted organization with a credit card on file) accumulates $250 in outstanding debts, a charge will be forced via Stripe.
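The two thresholds above can be summarized in a toy helper (a sketch of the policy as described, not of any Cohere billing code):

```python
# Map an outstanding balance to the billing action described above:
# warn at $150, force a charge at $250 for self-serve customers.
def billing_action(outstanding_usd, self_serve=True):
    if self_serve and outstanding_usd >= 250:
        return "charge"
    if outstanding_usd >= 150:
        return "warn"
    return "none"

print(billing_action(200))  # warn
```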

Advanced Retrieval Launch

We’re pleased to announce the release of Rerank 3, our newest and most performant foundational model for ranking. Rerank 3 boasts a context length of 4,096 tokens and SOTA performance on code retrieval, long documents, and semi-structured data. In addition to quality improvements, we’ve improved inference speed by a factor of 2x for short documents (< 512 tokens) and 3x for long documents (~4,096 tokens).
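As a sketch, a Rerank request body might be assembled like this in Python. The model name and field names here are assumptions based on the Rerank API; check the current API reference for exact values.

```python
# Build a Rerank API request body: rank candidate documents against a
# query and return at most top_n results. Illustrative helper, not SDK code.
def build_rerank_request(query, documents, top_n=3, model="rerank-english-v3.0"):
    return {
        "model": model,
        "query": query,
        "documents": documents,
        "top_n": min(top_n, len(documents)),
    }

req = build_rerank_request("refund policy", ["doc a", "doc b"], top_n=5)
print(req["top_n"])  # 2 (clamped to the number of documents)
```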


Python SDK v5.2.0 release

We’ve released an additional update for our Python SDK! Here are the highlights.

  • The tokenize and detokenize functions in the Python SDK now default to using a local tokenizer.
  • When using the local tokenizer, the response will not include token_strings, but users can revert to using the hosted tokenizer by specifying offline=False.
  • Also, model is now a required field.
  • For more information, see the guide for tokens and tokenizers.
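A sketch of the updated call shape described above, with the arguments built as a plain dict so they can be inspected without a network call (a real call would then be co.tokenize(**kwargs)):

```python
# Build the kwargs for the v5.2.0 tokenize call: model is now required,
# and offline=False opts back into the hosted tokenizer (which restores
# token_strings in the response).
def tokenize_kwargs(text, model, offline=True):
    if not model:
        raise ValueError("model is required as of SDK v5.2.0")
    return {"text": text, "model": model, "offline": offline}

kwargs = tokenize_kwargs("hello world", model="command-r", offline=False)
print(kwargs)
```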


Command R: Retrieval-Augmented Generation at Production Scale

Today, we are introducing Command R, a new LLM aimed at large-scale production workloads. Command R targets the emerging “scalable” category of models that balance high efficiency with strong accuracy, enabling companies to move beyond proof of concept, and into production.

Command R is a generative model optimized for long context tasks such as retrieval-augmented generation (RAG) and using external APIs and tools. It is designed to work in concert with our industry-leading Embed and Rerank models to provide best-in-class integration for RAG applications and excel at enterprise use cases. As a model built for companies to implement at scale, Command R boasts:

  • Strong accuracy on RAG and Tool Use
  • Low latency, and high throughput
  • Longer 128k context and lower pricing
  • Strong capabilities across 10 key languages
  • Model weights available on HuggingFace for research and evaluation

For more information, check out the official blog post or the Command R documentation.



Python SDK v5.0.0

With the release of our latest Python SDK, there are a number of functions that are no longer supported, including create_custom_models.

For more granular instructions on upgrading to the new SDK, and what that will mean for your Cohere integrations, see the comprehensive migration guide.


Release Notes January 22, 2024

Apply Cohere’s AI with Connectors!

One of the most exciting applications of generative AI is known as “retrieval augmented generation” (RAG). This refers to the practice of grounding the outputs of a large language model (LLM) by offering it resources — like your internal technical documentation, chat logs, etc. — from which to draw as it formulates its replies.

Cohere has made it much easier to utilize RAG in bespoke applications via Connectors. As the name implies, Connectors allow you to connect Cohere’s generative AI platform up to whatever resources you’d like it to ground on, facilitating the creation of a wide variety of applications — customer service chatbots, internal tutors, or whatever else you want to build.

Our docs cover how to create and deploy connectors, how to manage your connectors, how to handle authentication, and more!

Expanded Fine-tuning Functionality

Cohere’s ready-to-use LLMs, such as Command, are very good at producing responses to natural language prompts. However, there are many cases in which getting the best model performance requires performing an additional round of training on custom user data. This is a process known as fine-tuning, and we’ve dramatically revamped our fine-tuning documentation.

The new docs are organized according to the major endpoints, and we support fine-tuning for Generate, Classify, Rerank, and Chat.

But wait, there’s more: many developers work with generative AI through popular cloud-compute platforms like Amazon Web Services (AWS), and we support fine-tuning on Amazon Bedrock. We also support fine-tuning with Amazon SageMaker, and the relevant documentation will be published in the coming weeks.

A new Embed Jobs API Endpoint Has Been Released

The Embed Jobs API was designed for users who want to leverage the power of retrieval over large corpora of information. Encoding a large volume of documents with an API can be tedious and difficult, but the Embed Jobs API makes it a breeze to handle encoding workflows involving 100,000 documents, or more!

The API works in conjunction with co.embed(). For more information, consult the docs.
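When preparing a corpus of that size, it is common to split documents into fixed-size batches before submitting them. A minimal batching sketch (the batch size of 96 is illustrative, not a limit stated in these notes):

```python
# Split a document list into fixed-size batches for submission to an
# encoding workflow such as an embed job.
def batched(documents, batch_size=96):
    for i in range(0, len(documents), batch_size):
        yield documents[i:i + batch_size]

docs = [f"doc-{i}" for i in range(250)]
batches = list(batched(docs))
print(len(batches), len(batches[-1]))  # 3 batches; the last holds 58 docs
```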

Our SDK now Supports More Languages

Throughout our documentation you’ll find code-snippets for performing common tasks with Python. Recently, we made the decision to expand these code snippets to include Typescript and Go, and are working to include several other popular languages as well.


Release Notes September 29th 2023

We’re Releasing co.chat() and the Chat + RAG Playground

We’re pleased to announce that we’ve released our co.chat() beta! Of particular importance is the fact that the co.chat() API is able to utilize retrieval augmented generation (RAG), meaning developers can provide sources of context that inform and ground the model’s output.

This represents a leap forward in the accuracy, verifiability, and timeliness of our generative AI offering. For our public beta, developers can connect co.chat() to web search or plain text documents.

Access to the co.chat() public beta is available through an API key included with a Cohere account.

Our Command Model has Been Updated

We’ve updated both the command and command-light models. Expect improved question answering, generation quality, rewriting and conversational capabilities.

New Rate Limits

For all trial keys and all endpoints, there is now a rate limit of 5000 calls per month.