Generates a streamed text response to a user message.
To learn how to use the Chat API and RAG, follow our Text Generation guides.
The name of the project that is making the request.
text/event-stream
Pass text/event-stream to receive the streamed response as server-sent events. The default is \n-delimited events.
Text input for the model to respond to.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
Defaults to false.
When true, the response will be a JSON stream of events. The final event will contain the complete response, and will have an event_type of "stream-end".
Streaming is beneficial for user interfaces that render the contents of the response piece by piece, as it gets generated.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
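A minimal sketch of consuming the event stream with Cohere's Python SDK (the client setup and placeholder key are illustrative assumptions; event names follow the description above):

```python
import cohere

co = cohere.Client(api_key="YOUR_API_KEY")  # placeholder key

# chat_stream yields server-sent events as the response is generated.
for event in co.chat_stream(message="Tell me a joke about penguins."):
    if event.event_type == "text-generation":
        print(event.text, end="")    # partial text, rendered piece by piece
    elif event.event_type == "stream-end":
        response = event.response    # the complete, final response object
```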
Defaults to command-r-plus-08-2024.
The name of a compatible Cohere model or the ID of a fine-tuned model.
Compatible Deployments: Cohere Platform, Private Deployments
When specified, the default Cohere preamble will be replaced with the provided one. Preambles are a part of the prompt used to adjust the model’s overall behavior and conversation style, and use the SYSTEM role.
The SYSTEM role is also used for the contents of the optional chat_history parameter. When used with the chat_history parameter it adds content throughout a conversation. Conversely, when used with the preamble parameter it adds content at the start of the conversation only.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
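A hedged sketch of overriding the default preamble via the Python SDK (client setup assumed as before):

```python
import cohere

co = cohere.Client(api_key="YOUR_API_KEY")

# The preamble replaces the default SYSTEM prompt for this request only.
response = co.chat(
    message="Where do emperor penguins live?",
    preamble="You are a marine biologist who answers concisely.",
)
print(response.text)
```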
A list of previous messages between the user and the model, giving the model conversational context for responding to the user’s message.
Each item represents a single message in the chat history, excluding the current user turn. It has two properties: role and message. The role identifies the sender (CHATBOT, SYSTEM, or USER), while the message contains the text content.
The chat_history parameter should not be used for SYSTEM messages in most cases. Instead, to add a SYSTEM role message at the beginning of a conversation, the preamble parameter should be used.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
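For illustration, a sketch of passing prior turns as chat_history (dict-shaped messages, as in Cohere's examples; the SDK surface is assumed):

```python
import cohere

co = cohere.Client(api_key="YOUR_API_KEY")

# Prior turns only; the current user turn goes in `message`.
response = co.chat(
    chat_history=[
        {"role": "USER", "message": "Who discovered gravity?"},
        {"role": "CHATBOT", "message": "Isaac Newton is credited with discovering gravity."},
    ],
    message="When was he born?",
)
```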
An alternative to chat_history.
Providing a conversation_id creates or resumes a persisted conversation with the specified ID. The ID can be any non-empty string.
Compatible Deployments: Cohere Platform
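A minimal sketch of a persisted conversation (the session ID shown is a hypothetical value):

```python
import cohere

co = cohere.Client(api_key="YOUR_API_KEY")

# Reusing the same ID on later calls resumes the persisted conversation,
# so chat_history does not need to be resent.
co.chat(message="My name is Ada.", conversation_id="user-42-session-1")
reply = co.chat(message="What is my name?", conversation_id="user-42-session-1")
```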
Defaults to AUTO when connectors are specified and OFF in all other cases.
Dictates how the prompt will be constructed.
With prompt_truncation set to “AUTO”, some elements from chat_history and documents will be dropped in an attempt to construct a prompt that fits within the model’s context length limit. During this process the order of the documents and chat history will be changed and ranked by relevance.
With prompt_truncation set to “AUTO_PRESERVE_ORDER”, some elements from chat_history and documents will be dropped in an attempt to construct a prompt that fits within the model’s context length limit. During this process the order of the documents and chat history will be preserved as they were provided to the API.
With prompt_truncation set to “OFF”, no elements will be dropped. If the sum of the inputs exceeds the model’s context length limit, a TooManyTokens error will be returned.
Compatible Deployments:
Accepts {"id": "web-search"}, and/or the "id" for a custom connector, if you’ve created one.
When specified, the model’s reply will be enriched with information found by querying each of the connectors (RAG).
Compatible Deployments: Cohere Platform
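A sketch of a RAG request over the managed web-search connector, combined with the prompt_truncation behavior described above (SDK surface assumed):

```python
import cohere

co = cohere.Client(api_key="YOUR_API_KEY")

# prompt_truncation="AUTO" lets the API drop and rerank context
# so the prompt fits within the model's context window.
response = co.chat(
    message="What is the tallest penguin species?",
    connectors=[{"id": "web-search"}],
    prompt_truncation="AUTO",
)
print(response.text)
print(response.citations)  # references into the retrieved results
```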
Defaults to false.
When true, the response will only contain a list of generated search queries; no search will take place, and no reply from the model to the user’s message will be generated.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
A list of relevant documents that the model can cite to generate a more accurate reply. Each document is a string-string dictionary.
Example:
```
[
  { "title": "Tall penguins", "text": "Emperor penguins are the tallest." },
  { "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica." },
]
```
Keys and values from each document will be serialized to a string and passed to the model. The resulting generation will include citations that reference some of these documents.
Some suggested keys are “text”, “author”, and “date”. For better generation quality, it is recommended to keep the total word count of the strings in the dictionary to under 300 words.
An id field (string) can be optionally supplied to identify the document in the citations. This field will not be passed to the model.
An _excludes field (array of strings) can be optionally supplied to omit some key-value pairs from being shown to the model. The omitted fields will still show up in the citation object. The _excludes field will not be passed to the model.
See ‘Document Mode’ in the guide for more information.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
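A hedged sketch of grounding on caller-supplied documents, also setting the citation_quality parameter described next (SDK surface assumed):

```python
import cohere

co = cohere.Client(api_key="YOUR_API_KEY")

response = co.chat(
    message="How tall are emperor penguins, and where do they live?",
    documents=[
        {"id": "doc-1", "title": "Tall penguins",
         "text": "Emperor penguins are the tallest."},
        {"title": "Penguin habitats",
         "text": "Emperor penguins only live in Antarctica.",
         "_excludes": ["title"]},  # hide this key from the model; it stays in citations
    ],
    citation_quality="accurate",
)
print(response.text)
print(response.citations)  # spans referencing the documents above
```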
Defaults to "accurate".
Dictates the approach taken to generating citations as part of the RAG flow by allowing the user to specify whether they want "accurate" results, "fast" results, or no results.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
Defaults to 0.3.
A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations, and higher temperatures mean more random generations.
Randomness can be further maximized by increasing the value of the p parameter.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
The maximum number of tokens the model will generate as part of the response. Note: Setting a low value may result in incomplete generations.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
The maximum number of input tokens to send to the model. If not specified, max_input_tokens is the model’s context length limit minus a small buffer.
Input will be truncated according to the prompt_truncation parameter.
Compatible Deployments: Cohere Platform
Ensures only the top k most likely tokens are considered for generation at each step.
Defaults to 0, min value of 0, max value of 500.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
Ensures that only the most likely tokens, with total probability mass of p, are considered for generation at each step. If both k and p are enabled, p acts after k.
Defaults to 0.75, min value of 0.01, max value of 0.99.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
A list of up to 5 strings that the model will use to stop generation. If the model generates a string that matches any of the strings in the list, it will stop generating tokens and return the generated text up to that point, not including the stop sequence.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
Defaults to 0.0, min value of 0.0, max value of 1.0.
Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
Defaults to 0.0, min value of 0.0, max value of 1.0.
Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
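Putting the sampling, length, and penalty parameters above together, a hedged sketch of a single request (parameter names per this reference; the Python SDK surface is assumed):

```python
import cohere

co = cohere.Client(api_key="YOUR_API_KEY")

response = co.chat(
    message="Write a two-line poem about icebergs.",
    temperature=0.3,        # lower = less random
    k=50,                   # consider only the 50 most likely tokens
    p=0.9,                  # ...then keep the top 0.9 probability mass (p acts after k)
    seed=42,                # best-effort deterministic sampling
    max_tokens=120,         # cap the response length
    stop_sequences=["--"],  # stop early if "--" is generated
    frequency_penalty=0.2,  # discourage frequent repeats
)
```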
A list of available tools (functions) that the model may suggest invoking before producing a text response.
When tools is passed (without tool_results), the text field in the response will be "" and the tool_calls field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made, the tool_calls array will be empty.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
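A sketch of declaring tools and reading back the suggested calls; the tool itself is hypothetical, and the SDK surface is assumed:

```python
import cohere

co = cohere.Client(api_key="YOUR_API_KEY")

tools = [
    {
        "name": "query_daily_sales_report",  # hypothetical tool, for illustration
        "description": "Retrieves the sales report for a given day.",
        "parameter_definitions": {
            "day": {"description": "The day in YYYY-MM-DD format.",
                    "type": "str", "required": True},
        },
    }
]

# Without tool_results, response.text is "" and tool_calls lists the
# invocations the model wants made (empty if none are needed).
response = co.chat(message="How did sales go on 2023-09-29?", tools=tools)
for call in response.tool_calls or []:
    print(call.name, call.parameters)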
A list of results from invoking tools recommended by the model in the previous chat turn. Results are used to produce a text response and will be referenced in citations. When using tool_results, tools must be passed as well.
Each tool_result contains information about how it was invoked, as well as a list of outputs in the form of dictionaries.
Note: outputs must be a list of objects. If your tool returns a single object (e.g. {"status": 200}), make sure to wrap it in a list.
```
tool_results = [
  {
    "call": {
      "name": <tool name>,
      "parameters": { <param name>: <param value> }
    },
    "outputs": [{ <key>: <value> }]
  },
  ...
]
```
Note: Chat calls with tool_results should not be included in the chat history to avoid duplication of the message text.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
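Continuing the tools sketch above (reusing its co client and tools list), one plausible follow-up call that feeds the outputs back; the output values are hypothetical:

```python
# Invoke the suggested tool yourself, then send the outputs back.
# tools must be passed again alongside tool_results.
outputs = [{"date": "2023-09-29", "total_sales": 12000}]  # hypothetical tool output

final = co.chat(
    message="How did sales go on 2023-09-29?",
    tools=tools,
    tool_results=[
        {
            "call": {"name": "query_daily_sales_report",
                     "parameters": {"day": "2023-09-29"}},
            "outputs": outputs,  # must be a list, even for a single object
        }
    ],
)
print(final.text)  # grounded answer with citations into the outputs
```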
Forces the chat to be single-step. Defaults to false.
Configuration for forcing the model output to adhere to the specified format. Supported on Command R 03-2024, Command R+ 04-2024 and newer models.
The model can be forced into outputting JSON objects (with up to 5 levels of nesting) by setting { "type": "json_object" }.
A JSON Schema can optionally be provided, to ensure a specific structure.
Note: When using { "type": "json_object" } your message should always explicitly instruct the model to generate a JSON (e.g. “Generate a JSON …”). Otherwise the model may end up getting stuck generating an infinite stream of characters and eventually run out of context length.
Limitation: The parameter is not supported in RAG mode (when any of connectors, documents, tools, or tool_results are provided).
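A hedged sketch of requesting structured output; the schema key and shape follow the description above, and the SDK surface is assumed:

```python
import cohere

co = cohere.Client(api_key="YOUR_API_KEY")

# The message explicitly asks for JSON, as the note above requires.
response = co.chat(
    message="Generate a JSON describing an emperor penguin, "
            "with 'species' and 'height_cm' fields.",
    response_format={
        "type": "json_object",
        "schema": {  # optional JSON Schema to pin the structure
            "type": "object",
            "properties": {
                "species": {"type": "string"},
                "height_cm": {"type": "number"},
            },
            "required": ["species", "height_cm"],
        },
    },
)
```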
Used to select the safety instruction inserted into the prompt. Defaults to CONTEXTUAL.
When NONE is specified, the safety instruction will be omitted.
Safety modes are not yet configurable in combination with the tools, tool_results, and documents parameters.
Note: This parameter is only compatible with models Command R 08-2024, Command R+ 08-2024 and newer.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
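Finally, a minimal sketch of setting the safety mode on a compatible model (SDK surface assumed):

```python
import cohere

co = cohere.Client(api_key="YOUR_API_KEY")

# Omit the safety instruction entirely (Command R/R+ 08-2024 and newer only).
response = co.chat(
    model="command-r-plus-08-2024",
    message="Summarize this crime novel chapter.",
    safety_mode="NONE",
)
```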