An Overview of Cohere’s Models
Cohere has a variety of models that cover many different use cases. If you need more customization, you can fine-tune a model to fit your specific use case.
Cohere models are currently available on several platforms, including Amazon Bedrock and Amazon SageMaker.
At the end of each major section below, you’ll find technical details about how to call a given model on a particular platform.
What Can These Models Be Used For?
In this section, we’ll provide some high-level context on Cohere’s offerings, and what the strengths of each are.
- The Command family of models includes Command, Command R, and Command R+. Together, they are the text-generation LLMs powering conversational agents, summarization, copywriting, and similar use cases. They work through the Chat endpoint, which can be used with or without retrieval augmented generation (RAG); a short sketch of a RAG-grounded call follows this list.
- Rerank is the fastest way to inject the intelligence of a language model into an existing search system. It can be accessed via the Rerank endpoint.
- Embed improves the accuracy of search, classification, clustering, and RAG results. It also powers the Embed and Classify endpoints.
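To show how these pieces fit together, here is a minimal sketch of a RAG-grounded call to the Chat endpoint using the Cohere Python SDK. The model name, documents, and API key are illustrative placeholders, not recommendations.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder API key

# Grounded (RAG) call: the Chat endpoint can take documents to ground its answer in.
response = co.chat(
    model="command-r-plus",  # example Command-family model name
    message="What does the Emperor penguin eat?",
    documents=[
        {"title": "Penguin diets", "snippet": "Emperor penguins feed mainly on fish, krill, and squid."},
        {"title": "Penguin habitats", "snippet": "Emperor penguins breed on Antarctic sea ice."},
    ],
)

print(response.text)       # generated answer
print(response.citations)  # spans linking the answer back to the documents, if any
```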
Command
Command is Cohere’s default generation model that takes a user instruction (or command) and generates text following the instruction. Our Command models also have conversational capabilities, which means they are well suited for chat applications.
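As a rough illustration of those conversational capabilities, here is a hedged sketch of a multi-turn exchange with a Command model through the Chat endpoint; the model name and messages are made up for the example.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder API key

# Multi-turn use: prior turns go in chat_history, the new user instruction in message.
response = co.chat(
    model="command-r",  # example Command-family model
    chat_history=[
        {"role": "USER", "message": "Write a one-line tagline for a coffee shop."},
        {"role": "CHATBOT", "message": "Wake up and smell the possibilities."},
    ],
    message="Now make it rhyme.",
)

print(response.text)
```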
Using Command Models on Different Platforms
In this table, we provide some important context for using Cohere Command models on Amazon Bedrock, Amazon SageMaker, and more.
Embed
These models can be used to generate embeddings from text or classify it based on various parameters. Embeddings can be used for estimating semantic similarity between two sentences, choosing a sentence which is most likely to follow another sentence, or categorizing user feedback, while outputs from the Classify endpoint can be used for any classification or analysis task. The Representation model comes with a variety of helper functions, such as one for detecting the language of an input.
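For instance, estimating semantic similarity between two sentences can be done by embedding both and comparing the vectors. The sketch below assumes the embed-english-v3.0 model name and uses cosine similarity, with NumPy only for the arithmetic.

```python
import cohere
import numpy as np

co = cohere.Client("YOUR_API_KEY")  # placeholder API key

sentences = [
    "The weather is lovely today.",
    "It is a beautiful, sunny day.",
]

# input_type tells v3.0 Embed models how the text will be used (search, classification, etc.).
response = co.embed(
    model="embed-english-v3.0",  # example v3.0 Embed model name
    texts=sentences,
    input_type="search_document",
)

a, b = (np.array(v) for v in response.embeddings)
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {cosine:.3f}")
```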
In this table we’ve listed the older v2.0 models alongside the newer v3.0 models, but we recommend you use the v3.0 versions.
Using Embed Models on Different Platforms
In this table, we provide some important context for using Cohere Embed models on Amazon Bedrock, Amazon SageMaker, and more.
Rerank
The Rerank model improves the output of an existing search or retrieval system by re-ordering its results according to their relevance to a query. This can be used to improve search algorithms without rebuilding them.
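To make that concrete, here is a minimal sketch of re-ordering a handful of candidate documents with the Rerank endpoint via the Python SDK; the model name and documents are illustrative assumptions.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder API key

query = "How do I reset my password?"
documents = [
    "Our office is open Monday to Friday, 9am to 5pm.",
    "To reset your password, click 'Forgot password' on the sign-in page.",
    "Passwords must contain at least twelve characters.",
]

# Rerank scores each document against the query and returns them in order of relevance.
response = co.rerank(
    model="rerank-english-v3.0",  # example Rerank model name
    query=query,
    documents=documents,
    top_n=2,
)

for result in response.results:
    print(f"{result.relevance_score:.3f}  {documents[result.index]}")
```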
Using Rerank Models on Different Platforms
In this table, we provide some important context for using Cohere Rerank models on Amazon Bedrock, Amazon SageMaker, and more.
Rerank accepts full strings rather than tokens, so the token limit works a little differently. Rerank will automatically chunk documents longer than 510 tokens, so there is no explicit limit to how long a document can be when using Rerank. See our best practice guide for more information about formatting documents for the Rerank endpoint.