New Maximum Number of Input Documents for Rerank

We have updated how the maximum number of documents is calculated for co.rerank. The endpoint now errors if len(documents) * max_chunks_per_doc > 10,000, where max_chunks_per_doc defaults to 10.
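The limit above can be checked client-side before a request is sent. The helper and constant names below are illustrative sketches, not part of the Cohere SDK; only the formula mirrors the documented limit:

```python
# Illustrative pre-check mirroring the documented co.rerank limit.
# MAX_TOTAL_CHUNKS and check_rerank_size are assumptions, not SDK names.
MAX_TOTAL_CHUNKS = 10_000
DEFAULT_MAX_CHUNKS_PER_DOC = 10

def check_rerank_size(documents, max_chunks_per_doc=DEFAULT_MAX_CHUNKS_PER_DOC):
    """Raise before calling co.rerank if the request would exceed the limit."""
    total = len(documents) * max_chunks_per_doc
    if total > MAX_TOTAL_CHUNKS:
        raise ValueError(
            f"len(documents) * max_chunks_per_doc = {total} "
            f"exceeds the {MAX_TOTAL_CHUNKS} limit"
        )
    return total
```

With the default of 10 chunks per document, this caps a single request at 1,000 documents.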

improved

Model Names Are Changing!

We are updating the names of our models to bring consistency and simplicity to our product offerings. As of today, you will be able to call Cohere’s models via our API and SDK with the new model names, and all of our documentation has been updated to reflect the new naming convention.

improved

Multilingual Support for Co.classify

The co.classify endpoint now supports the use of Cohere's multilingual embedding model. The multilingual-22-12 model is now a valid model input in the co.classify call.
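For illustration, a classify request body using the new model name could be assembled as a plain dict. The helper below is hypothetical; the model name and the inputs/examples fields follow the co.classify call described above:

```python
def build_classify_request(inputs, examples, model="multilingual-22-12"):
    """Assemble a co.classify-style request body.

    `examples` is a list of (text, label) pairs for few-shot classification.
    """
    return {
        "model": model,
        "inputs": list(inputs),
        "examples": [{"text": text, "label": label} for text, label in examples],
    }

request = build_classify_request(
    inputs=["¿Dónde está la biblioteca?"],
    examples=[
        ("Where is the library?", "question"),
        ("Close the door.", "command"),
    ],
)
```

Because the model is multilingual, the labeled examples and the inputs need not share a language.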

improved

Command Model Nightly Available!

Nightly versions of our Command models are now available. This means that every week, you can expect the performance of command-nightly to improve as we continually retrain it.

added

Multilingual Text Understanding Model + Language Detection!

Cohere's multilingual text understanding model is now available! The multilingual-22-12 model can be used to semantically search within a single language, as well as across languages. Compared to keyword search, where you often need separate tokenizers and indices to handle different languages, the deployment of the multilingual model for search is trivial: no language-specific handling is needed — everything can be done by a single model within a single index.
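The "single model, single index" point can be sketched with plain cosine similarity over embedding vectors. The toy two-dimensional vectors below stand in for multilingual-22-12 embeddings; note that no language-specific tokenizer or per-language index appears anywhere:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_vec, index):
    """index: list of (doc_id, embedding). Returns doc ids, best match first."""
    return [doc_id for doc_id, vec in
            sorted(index, key=lambda item: cosine(query_vec, item[1]),
                   reverse=True)]

# One index holds documents in different languages; toy vectors for illustration.
index = [("en_doc", [1.0, 0.0]), ("de_doc", [0.0, 1.0])]
ranked = search([0.1, 0.9], index)  # query embedding closest to the German doc
```

With real embeddings, a query in any supported language is embedded once and ranked against the whole index the same way.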

improved

Model Sizing Update + Improvements

Effective December 2, 2022, we will be consolidating our generative models and only serving our Medium (focused on speed) and X-Large (focused on quality). We will also be discontinuing support for our Medium embedding model.

added

Improvements to Current Models + New Beta Model (Command)!

New & Improved Medium & Extremely Large

improved

New Look For Docs!

We've updated our docs to better suit our new developer journey! You'll have a sleeker, more streamlined documentation experience.

added

New Logit Bias experimental parameter

Our Generative models now support the new logit_bias parameter, which can be used to prevent the model from generating unwanted tokens or to incentivize it to include desired ones. Logit bias is supported in all of our default Generative models.
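A logit bias is passed as a map from token id to a bias value: negative values discourage a token, positive values encourage it. The helper and the specific weights below are illustrative assumptions; the logit_bias field name matches the parameter described above, and the token ids are made up:

```python
def build_logit_bias(ban_ids=(), boost_ids=(), ban_weight=-10.0, boost_weight=5.0):
    """Build a logit_bias map: token id -> bias added to that token's logit.

    ban_weight/boost_weight are illustrative defaults, not API requirements.
    """
    bias = {token_id: ban_weight for token_id in ban_ids}
    bias.update({token_id: boost_weight for token_id in boost_ids})
    return bias

# Hypothetical token ids: discourage 5032, encourage 871.
payload = {
    "prompt": "Write a short product description.",
    "logit_bias": build_logit_bias(ban_ids=[5032], boost_ids=[871]),
}
```

Token ids come from the model's tokenizer, so the same string may map to different ids across models.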

improved

Co.classify powered by our Representational model embeddings

The Co.classify endpoint now serves few-shot classification tasks using embeddings from our Representational model for the small, medium, and large default models.
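Few-shot classification needs only a handful of labeled examples per class. A minimal sketch of preparing such examples, assuming a minimum-per-label check (the helper and the minimum of 2 are illustrative assumptions, not documented requirements):

```python
def few_shot_examples(labeled_texts, min_per_label=2):
    """Group (text, label) pairs by label and verify each class has enough
    examples for few-shot classification. min_per_label is an assumption."""
    by_label = {}
    for text, label in labeled_texts:
        by_label.setdefault(label, []).append(text)
    for label, texts in by_label.items():
        if len(texts) < min_per_label:
            raise ValueError(f"label {label!r} has only {len(texts)} example(s)")
    return by_label

grouped = few_shot_examples([
    ("I loved this movie", "positive"),
    ("What a great film", "positive"),
    ("I want my money back", "negative"),
    ("Terrible acting", "negative"),
])
```

The grouped examples can then be flattened into the examples field of a classify call against any of the small, medium, or large default models.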