Tools on LangChain

Cohere supports various integrations with LangChain, a large language model (LLM) framework that lets you quickly build applications based on Cohere’s models. This guide walks you through using Cohere tools with LangChain.

Prerequisites

Running Cohere tools with LangChain requires few prerequisites; consult the top-level LangChain integration document for setup details.

Multi-Step Tool Use

Multi-step tool use is enabled by default. Here’s an example of using it to put together a simple agent:

PYTHON
import os

from langchain.agents import AgentExecutor
from langchain_cohere.react_multi_hop.agent import create_cohere_react_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_cohere.chat_models import ChatCohere

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.pydantic_v1 import BaseModel, Field

os.environ["TAVILY_API_KEY"] = "<your-api-key>"

# Define the Tavily search tool and describe it for the model
internet_search = TavilySearchResults()
internet_search.name = "internet_search"
internet_search.description = "Returns a list of relevant document snippets for a textual query retrieved from the internet."

class TavilySearchInput(BaseModel):
    query: str = Field(description="Query to search the internet with")

internet_search.args_schema = TavilySearchInput

# LLM
llm = ChatCohere(model="command-r-plus", temperature=0)

# Preamble
preamble = """
You are an expert who answers the user's question with the most relevant datasource. You are equipped with an internet search tool and a special vectorstore of information about how to write good essays.
"""

# Prompt template
prompt = ChatPromptTemplate.from_template("{input}")

# Create the ReAct agent
agent = create_cohere_react_agent(
    llm=llm,
    tools=[internet_search],
    prompt=prompt,
)

agent_executor = AgentExecutor(agent=agent, tools=[internet_search], verbose=True)

response = agent_executor.invoke({
    "input": "I want to write an essay. Any tips?",
    "preamble": preamble,
})

print(response["output"])
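Under the hood, the agent alternates between asking the model what to do next and executing the tool it requests, feeding each observation back into the conversation until the model produces a final answer. A minimal sketch of that loop, with a stubbed model and tool standing in for Command R+ and Tavily (all names here are illustrative, not the LangChain internals):

```python
# Illustrative sketch of the multi-step (ReAct-style) loop that
# AgentExecutor automates. The "model" and "tool" are stubs.

def fake_model(messages, tools):
    # Stub model: request a search on the first turn, answer once a
    # tool observation is present in the conversation.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "internet_search",
                              "args": {"query": "essay writing tips"}}}
    return {"answer": "Start with an outline, then revise."}

def internet_search(query):
    # Stub tool: a real implementation would call a search API.
    return f"3 snippets about: {query}"

TOOLS = {"internet_search": internet_search}

def run_agent(model, user_input, max_steps=5):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):
        reply = model(messages, TOOLS)
        if "answer" in reply:            # model is done: return final answer
            return reply["answer"]
        call = reply["tool_call"]        # otherwise execute the requested tool
        observation = TOOLS[call["name"]](**call["args"])
        messages.append({"role": "tool", "content": observation})
    raise RuntimeError("max_steps exceeded")

print(run_agent(fake_model, "I want to write an essay. Any tips?"))
```

The `max_steps` cap mirrors the executor's iteration limit: it stops a model that keeps requesting tools from looping forever.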

Single-Step Tool Use

To use single-step mode instead, set force_single_step=True on the model. Here’s an example of using it to route a few questions:

PYTHON
### Router
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_cohere import ChatCohere

# Data model
class web_search(BaseModel):
    """
    The internet. Use web_search for questions that are related to anything else than agents, prompt engineering, and adversarial attacks.
    """
    query: str = Field(description="The query to use when searching the internet.")

class vectorstore(BaseModel):
    """
    A vectorstore containing documents related to agents, prompt engineering, and adversarial attacks. Use the vectorstore for questions on these topics.
    """
    query: str = Field(description="The query to use when searching the vectorstore.")

# Preamble
preamble = """You are an expert at routing a user question to a vectorstore or web search.
The vectorstore contains documents related to agents, prompt engineering, and adversarial attacks.
Use the vectorstore for questions on these topics. Otherwise, use web-search."""

# LLM with tool use and preamble
llm = ChatCohere()
structured_llm_router = llm.bind_tools(tools=[web_search, vectorstore], preamble=preamble)

# Prompt
route_prompt = ChatPromptTemplate.from_messages(
    [
        ("human", "{question}"),
    ]
)

question_router = route_prompt | structured_llm_router

response = question_router.invoke({"question": "Who will the Bears draft first in the NFL draft?"})
print(response.response_metadata["tool_calls"])

response = question_router.invoke({"question": "What are the types of agent memory?"})
print(response.response_metadata["tool_calls"])

response = question_router.invoke({"question": "Hi how are you?"})
print("tool_calls" in response.response_metadata)
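Once the router has picked a tool (or none, as with the small-talk question), your application still has to act on that choice. A minimal sketch of that dispatch step, assuming a simplified payload where each tool call is a dict with a "name" key (the exact shape of response_metadata["tool_calls"] depends on the SDK version; branch names are illustrative):

```python
# Sketch: map the router's chosen tool to a datasource branch.
# Assumes each tool call is a dict with a "name" key (simplified).

def route(tool_calls):
    """Return the branch to execute for the router's tool calls."""
    if not tool_calls:                 # no tool called (e.g. small talk)
        return "direct_answer"
    branches = {"web_search": "search_web",
                "vectorstore": "search_vectorstore"}
    return branches.get(tool_calls[0]["name"], "direct_answer")

print(route([{"name": "vectorstore", "parameters": {"query": "agent memory"}}]))  # search_vectorstore
print(route([]))                                                                  # direct_answer
```

Falling back to "direct_answer" for unknown or absent tool calls keeps the router robust when the model declines to call a tool.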