Cohere has a variety of models that cover many different use cases. If you need more customization, you can train a model to tune it to your specific use case.

Command

Command is Cohere's default generation model that takes a user instruction (or command) and generates text following the instruction. Our Command models also have conversational capabilities, which means they are well-suited for chat applications.

| Latest Model | Description | Max Tokens | Endpoints |
| --- | --- | --- | --- |
| command-light | A smaller, faster version of command. Almost as capable, but a lot faster. | 4096 | Co.generate(), Co.summarize() |
| command | An instruction-following conversational model that performs language tasks with high quality, more reliably, and with a longer context than our base generative models. | 4096 | Co.generate(), Co.summarize() |
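
As an illustration, here is a minimal sketch of calling the command model through the Python SDK. It assumes your API key is stored in a `CO_API_KEY` environment variable; exact parameter names may vary between SDK versions.

```python
import os
import cohere

# Create a client; assumes the API key lives in the CO_API_KEY environment variable.
co = cohere.Client(os.environ["CO_API_KEY"])

# Ask the command model to follow a simple instruction.
response = co.generate(
    model="command",
    prompt="Write a one-sentence product description for a solar-powered lamp.",
    max_tokens=100,
)

print(response.generations[0].text)
```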

Generation

This model generates natural language that can be used for interactive autocomplete, augmenting human writing processes, summarization, text rephrasing, and other text-to-text tasks in non-sensitive domains.

| Latest Model | Description | Max Tokens | Endpoints |
| --- | --- | --- | --- |
| base-light | A smaller, faster version of base. Almost as capable, but a lot faster. | 2048 | Co.generate() |
| base | A model that performs generative language tasks. | 2048 | Co.generate() |
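
Unlike Command, the base models are completion-style rather than instruction-following, so prompts are usually written as text for the model to continue. A minimal sketch, again assuming a `CO_API_KEY` environment variable:

```python
import os
import cohere

co = cohere.Client(os.environ["CO_API_KEY"])

# base models continue the prompt rather than follow an instruction,
# so the prompt is phrased as the start of the text we want extended.
response = co.generate(
    model="base-light",
    prompt="The best part about working remotely is",
    max_tokens=50,
)

print(response.generations[0].text)
```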

Representation

These models can be used to generate embeddings from text or classify it based on various parameters. Embeddings can be used for estimating semantic similarity between two sentences, choosing a sentence which is most likely to follow another sentence, or categorizing user feedback, while outputs from the Classify endpoint can be used for any classification or analysis task. The Representation model comes with a variety of helper functions, such as for detecting the language of an input.

| Latest Model | Description | Dimensions | Max Tokens | Similarity Metric | Endpoints |
| --- | --- | --- | --- | --- | --- |
| embed-english-v2.0 | Our older embeddings model that allows for text to be classified or turned into embeddings. English only. | 4096 | 512 | Cosine Similarity | Co.Classify(), Co.Embed(), Co.Tokenize(), Co.Detokenize() |
| embed-english-light-v2.0 | A smaller, faster version of embed-english-v2.0. Almost as capable, but a lot faster. English only. | 1024 | 512 | Cosine Similarity | Co.Classify(), Co.Embed(), Co.Tokenize(), Co.Detokenize() |
| embed-multilingual-v2.0 | Provides multilingual classification and embedding support. See supported languages here. | 768 | 256 | Dot Product Similarity | Co.Classify(), Co.Embed(), Co.Tokenize(), Co.Detokenize() |
| embed-english-v3.0 | A model that allows for text to be classified or turned into embeddings. English only. | 1024 | 512 | Cosine Similarity | Co.Embed(), Co.Tokenize(), Co.Detokenize() |
| embed-english-light-v3.0 | A smaller, faster version of embed-english-v3.0. Almost as capable, but a lot faster. English only. | 384 | 512 | Cosine Similarity | Co.Embed(), Co.Tokenize(), Co.Detokenize() |
| embed-multilingual-v3.0 | Provides multilingual classification and embedding support. See supported languages here. | 1024 | 512 | Cosine Similarity | Co.Embed(), Co.Tokenize(), Co.Detokenize() |
| embed-multilingual-light-v3.0 | A smaller, faster version of embed-multilingual-v3.0. Almost as capable, but a lot faster. Supports multiple languages. | 384 | 512 | Cosine Similarity | Co.Embed(), Co.Tokenize(), Co.Detokenize() |

In this table we've listed older v2.0 models alongside the newer v3.0 models, but we recommend you use the v3.0 versions.
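
For example, here is a minimal sketch of embedding a few sentences with embed-english-v3.0 and comparing them with cosine similarity (the metric listed in the table). It assumes a `CO_API_KEY` environment variable and an SDK version that accepts the input_type parameter used by the v3.0 models.

```python
import os
import cohere
import numpy as np

co = cohere.Client(os.environ["CO_API_KEY"])

texts = [
    "The order arrived two days late.",
    "Shipping took longer than promised.",
    "The mobile app keeps crashing on login.",
]

# v3.0 embedding models take an input_type describing how the
# embeddings will be used (e.g. "search_document", "clustering").
response = co.embed(
    texts=texts,
    model="embed-english-v3.0",
    input_type="clustering",
)
vectors = np.array(response.embeddings)

# embed-english-v3.0 uses cosine similarity (see the table above).
def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# The two shipping complaints should score closer to each other
# than either does to the unrelated app issue.
print(cosine_similarity(vectors[0], vectors[1]))
print(cosine_similarity(vectors[0], vectors[2]))
```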

Rerank (Beta)

The Rerank model takes a search query and a list of candidate documents and re-orders those documents by their relevance to the query. This can be used to improve the quality of existing search systems.

| Latest Model | Description | Max Tokens | Endpoints |
| --- | --- | --- | --- |
| rerank-english-v2.0 | A model that allows for re-ranking English language documents. | N/A | Co.rerank() |
| rerank-multilingual-v2.0 | A model for documents that are not in English. Supports the same languages as embed-multilingual-v3.0. | N/A | Co.rerank() |

📘

Rerank accepts full strings rather than tokens, so the token limit works a little differently. Rerank will automatically chunk documents longer than 510 tokens, and there is therefore no explicit limit on how long a document can be when using Rerank. See our best practice guide for more info about formatting documents for the Rerank endpoint.
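
For example, here is a minimal sketch of re-ranking a handful of documents against a query with rerank-english-v2.0. It assumes a `CO_API_KEY` environment variable; the exact shape of the response object may differ between SDK versions.

```python
import os
import cohere

co = cohere.Client(os.environ["CO_API_KEY"])

query = "How do I reset my password?"
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "To reset your password, open Settings and choose 'Forgot password'.",
    "You can change your notification preferences from the account page.",
]

# Re-order the candidate documents by relevance to the query,
# keeping only the top two results.
response = co.rerank(
    model="rerank-english-v2.0",
    query=query,
    documents=documents,
    top_n=2,
)

# Each result carries the index of the original document and a relevance score.
for result in response.results:
    print(result.index, result.relevance_score)
```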