Using the Chat API

The Chat API endpoint is used to generate text with Cohere LLMs. This endpoint facilitates a conversational interface, allowing users to send messages to the model and receive text responses.

Every message has a content field and an associated role, which indicates who the message is from. The role can be user, assistant, system, or tool.
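As an illustration, a messages list touching all four roles might look like the sketch below. The content strings are made up for this example, and real tool messages typically carry additional fields (such as an ID linking them to the tool call that produced them), which are omitted here for brevity.

```python
# Illustrative messages list showing each supported role.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is an API?"},
    {"role": "assistant", "content": "An interface for programs to communicate."},
    # Tool messages return tool results to the model (simplified shape).
    {"role": "tool", "content": "result of the tool call"},
]

roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user', 'assistant', 'tool']
```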

PYTHON

import cohere

co = cohere.ClientV2(api_key="<YOUR API KEY>")

res = co.chat(
    model="command-r-plus-08-2024",
    messages=[
        {
            "role": "user",
            "content": "Write a title for a blog post about API design. Only output the title text.",
        }
    ],
)

print(res.message.content[0].text)  # "The Ultimate Guide to API Design: Best Practices for Building Robust and Scalable APIs"

Response Structure

Below is a sample response from the Chat API. Here, the role of the message is assistant.

JSON

{
  "id": "5a50480a-cf52-46f0-af01-53d18539bd31",
  "message": {
    "role": "assistant",
    "content": [
      {
        "type": "text",
        "text": "The Art of API Design: Crafting Elegant and Powerful Interfaces"
      }
    ]
  },
  "finish_reason": "COMPLETE",
  "meta": {
    "api_version": {"version": "2", "is_experimental": true},
    "warnings": [
      "You are using an experimental version, for more information please refer to https://docs.cohere.com/versioning-reference"
    ],
    "billed_units": {"input_tokens": 17, "output_tokens": 12},
    "tokens": {"input_tokens": 215, "output_tokens": 12}
  }
}

Every response contains the following fields:

  • message: the generated message from the model.
  • id: the ID corresponding to this response.
  • finish_reason: can be one of the following:
    • COMPLETE: the model successfully finished generating the message.
    • MAX_TOKENS: the model’s context limit was reached before the generation could be completed.
  • meta: metadata such as token counts and billing information.
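As a sketch of reading these fields, assume the response has been loaded as a plain dictionary shaped like the sample JSON above (abbreviated here):

```python
# Sample response shaped like the JSON above (abbreviated).
response = {
    "id": "5a50480a-cf52-46f0-af01-53d18539bd31",
    "message": {
        "role": "assistant",
        "content": [{"type": "text", "text": "The Art of API Design"}],
    },
    "finish_reason": "COMPLETE",
    "meta": {"billed_units": {"input_tokens": 17, "output_tokens": 12}},
}

# Concatenate the text parts of the generated message.
text = "".join(
    part["text"]
    for part in response["message"]["content"]
    if part["type"] == "text"
)

# Check whether the generation was cut off.
if response["finish_reason"] == "MAX_TOKENS":
    print("Warning: generation stopped before completion.")

billed = response["meta"]["billed_units"]
print(text)  # "The Art of API Design"
print(billed["input_tokens"] + billed["output_tokens"])  # 29
```

The SDK exposes the same fields as attributes (for example res.finish_reason and res.meta), so the dictionary access above is only for illustration.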

System Message

Developers can adjust an LLM's behavior by including a system message in the messages list, with its role set to system.

The system message contains instructions that the model prioritizes over instructions in messages from other roles. Developers often use it to control the style in which the model communicates and to provide guidelines for how to handle various topics.

It is recommended to send the system message as the first element in the messages list.

PYTHON
import cohere

co = cohere.ClientV2(api_key="<YOUR API KEY>")

system_message = "You respond concisely, in about 5 words or less"

res = co.chat(
    model="command-r-plus-08-2024",
    messages=[
        {"role": "system", "content": system_message},
        {
            "role": "user",
            "content": "Write a title for a blog post about API design. Only output the title text.",
        },
    ],
)

print(res.message.content[0].text)  # "Designing Perfect APIs"

Multi-Turn Conversations

A single Chat request can encapsulate multiple turns of a conversation, where each message in the messages list appears in the order it was sent. Sending multiple messages can give the model context for generating a response.

PYTHON
import cohere

co = cohere.ClientV2(api_key="<YOUR API KEY>")

system_message = "You respond concisely, in about 5 words or less"

res = co.chat(
    model="command-r-plus-08-2024",
    messages=[
        {"role": "system", "content": system_message},
        {
            "role": "user",
            "content": "Write a title for a blog post about API design. Only output the title text.",
        },
        {"role": "assistant", "content": "Designing Perfect APIs"},
        {"role": "user", "content": "Another one about generative AI."},
    ],
)

print(res.message.content[0].text)  # "AI: The Generative Age"
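A common pattern for multi-turn chat is to keep the messages list across turns, appending the assistant's reply to the history before sending the next user message. A minimal sketch of that bookkeeping, with a stubbed model call (fake_chat is a stand-in; in real code you would call co.chat and append its message):

```python
# Stand-in for co.chat(...) so this sketch runs without an API key.
def fake_chat(messages):
    return {"role": "assistant", "content": "Designing Perfect APIs"}

# The conversation history, starting with the system message.
messages = [{"role": "system", "content": "You respond concisely."}]

def send(user_text):
    """Append the user turn, get a reply, and append it to the history."""
    messages.append({"role": "user", "content": user_text})
    reply = fake_chat(messages)
    messages.append(reply)
    return reply["content"]

send("Write a title for a blog post about API design.")
send("Another one about generative AI.")

print(len(messages))  # 5: one system, two user, and two assistant messages
```

Because the full history is resent on every request, long conversations consume more input tokens per turn; the tokens field in meta shows the actual count.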