Tool use & agents - quickstart

Tool use enables developers to build agentic applications that connect to external tools, reason, and take actions.

The Chat endpoint comes with built-in tool use capabilities such as function calling, multi-step reasoning, and citation generation.

This quickstart guide shows you how to use these capabilities with the Chat endpoint.

1. Setup

First, install the Cohere Python SDK with the following command.

pip install -U cohere

Next, import the library and create a client.

PYTHON
import cohere

co = cohere.ClientV2(
    "COHERE_API_KEY"
)  # Get your free API key here: https://dashboard.cohere.com/api-keys
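For anything beyond a quickstart, avoid hardcoding the key. A minimal sketch, assuming the key is stored in an environment variable named COHERE_API_KEY (the variable name is your choice):

PYTHON
import os

import cohere

# Read the API key from the environment instead of hardcoding it
co = cohere.ClientV2(api_key=os.environ["COHERE_API_KEY"])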
2. Tool Definition

First, we need to set up the tools. A tool can be any function or service that can receive and send objects.

We also need to define the tool schemas in a format that can be passed to the Chat endpoint. The schema must contain the following fields: name, description, and parameters.

PYTHON
def get_weather(location):
    # Implement your tool calling logic here
    # Return a list of objects, e.g. [{"url": "abc.com", "text": "..."}, {"url": "xyz.com", "text": "..."}]
    return [{"temperature": "20C"}]


functions_map = {"get_weather": get_weather}

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "gets the weather of a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "the location to get weather, example: San Francisco, CA",
                    }
                },
                "required": ["location"],
            },
        },
    },
]
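Before wiring the function into the Chat endpoint, it can be worth calling it directly to confirm it returns a list of objects, since that is the shape the tool messages in the later steps expect. The sample arguments here are arbitrary:

PYTHON
# Sanity check: call the tool directly with sample arguments
print(get_weather("Toronto, ON"))
# [{'temperature': '20C'}]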
3. Tool Calling

Next, pass the tool schema to the Chat endpoint together with the user message.

The LLM will then generate the tool calls (if any) and return the tool_plan and tool_calls objects.

PYTHON
messages = [
    {"role": "user", "content": "What's the weather in Toronto?"}
]

response = co.chat(
    model="command-r-plus-08-2024", messages=messages, tools=tools
)

if response.message.tool_calls:
    messages.append(
        {
            "role": "assistant",
            "tool_calls": response.message.tool_calls,
            "tool_plan": response.message.tool_plan,
        }
    )
    print(response.message.tool_calls)
[ToolCallV2(id='get_weather_776n8ctsgycn', type='function', function=ToolCallV2Function(name='get_weather', arguments='{"location":"Toronto"}'))]
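Note that function.arguments is a JSON string, not a Python dict, so it must be parsed before the tool can be invoked. For example:

PYTHON
import json

tc = response.message.tool_calls[0]
args = json.loads(tc.function.arguments)  # {'location': 'Toronto'}
print(tc.function.name, args)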
4. Tool Execution

Next, the tool calls are executed using the arguments the model generated in the previous step. The results of each call are formatted as documents and appended to the message list as a tool message.

PYTHON
import json

if response.message.tool_calls:
    for tc in response.message.tool_calls:
        tool_result = functions_map[tc.function.name](
            **json.loads(tc.function.arguments)
        )
        tool_content = []
        for data in tool_result:
            tool_content.append(
                {
                    "type": "document",
                    "document": {"data": json.dumps(data)},
                }
            )
            # Optional: add an "id" field in the "document" object, otherwise IDs are auto-generated
        messages.append(
            {
                "role": "tool",
                "tool_call_id": tc.id,
                "content": tool_content,
            }
        )
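As the comment above notes, each document can also carry an explicit id, which then appears in citations in place of an auto-generated one. A minimal sketch, where the name-and-index format is just an illustrative convention:

PYTHON
# Sketch: attach explicit IDs to the documents; the "name_index" format
# here is an arbitrary convention, not a requirement
tool_content = [
    {
        "type": "document",
        "document": {"id": f"{tc.function.name}_{i}", "data": json.dumps(data)},
    }
    for i, data in enumerate(tool_result)
]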
5. Response Generation

The results are passed back to the LLM, which generates the final response.

PYTHON
response = co.chat(
    model="command-r-plus-08-2024", messages=messages, tools=tools
)
print(response.message.content[0].text)
It is 20C in Toronto.
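The steps above cover a single round of tool calling. For the multi-step case mentioned at the start, the same pattern can be wrapped in a loop that keeps executing tools until the model stops requesting them. A minimal sketch, reusing the functions_map and tools defined earlier:

PYTHON
import json

messages = [{"role": "user", "content": "What's the weather in Toronto?"}]
response = co.chat(
    model="command-r-plus-08-2024", messages=messages, tools=tools
)

# Keep going until the model responds without requesting more tool calls
while response.message.tool_calls:
    messages.append(
        {
            "role": "assistant",
            "tool_calls": response.message.tool_calls,
            "tool_plan": response.message.tool_plan,
        }
    )
    for tc in response.message.tool_calls:
        tool_result = functions_map[tc.function.name](
            **json.loads(tc.function.arguments)
        )
        tool_content = [
            {"type": "document", "document": {"data": json.dumps(data)}}
            for data in tool_result
        ]
        messages.append(
            {"role": "tool", "tool_call_id": tc.id, "content": tool_content}
        )
    response = co.chat(
        model="command-r-plus-08-2024", messages=messages, tools=tools
    )

print(response.message.content[0].text)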
6. Citation Generation

The response object contains a citations field: a list of spans in the generated text, each linked to the tool results that ground it.

PYTHON
if response.message.citations:
    for citation in response.message.citations:
        print(citation, "\n")
start=6 end=9 text='20C' sources=[ToolSource(type='tool', id='get_weather_776n8ctsgycn:0', tool_output={'temperature': '20C'})]
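Since start and end are offsets into the response text, the citations can be rendered inline. A minimal sketch that appends a source marker after each cited span (the bracket format is an arbitrary choice):

PYTHON
# Insert citation markers into the text, working backwards so earlier
# start/end offsets remain valid as the string grows
text = response.message.content[0].text
for citation in sorted(
    response.message.citations, key=lambda c: c.start, reverse=True
):
    marker = "".join(f"[{source.id}]" for source in citation.sources)
    text = text[: citation.end] + marker + text[citation.end :]
print(text)
# It is 20C[get_weather_776n8ctsgycn:0] in Toronto.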
