Chat on LangChain
Cohere supports various integrations with LangChain, a large language model (LLM) framework that allows you to quickly create applications based on Cohere’s models. This doc walks you through using Cohere Chat with LangChain.
Prerequisites
Running Cohere Chat with LangChain doesn’t require many prerequisites; consult the top-level document for more information.
Cohere Chat with LangChain
To use Cohere chat with LangChain, simply create a ChatCohere object and pass in the message or message history. In the example below, you will need to add your Cohere API key.
Cohere Agents with LangChain
LangChain Agents use a language model to choose a sequence of actions to take.
To use Cohere’s multi-hop agent, create a create_cohere_react_agent and pass in the LangChain tools you would like to use.
For example, using an internet search tool to get essay writing advice from Cohere with citations:
Cohere Chat and RAG with LangChain
To use Cohere’s retrieval augmented generation (RAG) functionality with LangChain, create a CohereRagRetriever object. The next few sections discuss several ways to use it.
Using LangChain’s Retrievers
In this example, we use the Wikipedia retriever, but any retriever supported by LangChain can be used here. To set up the Wikipedia retriever, install the wikipedia Python package using %pip install --upgrade --quiet wikipedia. With that done, you can execute this code to see how a retriever works:
Using Documents
In this example, we take documents (which might be generated in other parts of your application) and pass them into the CohereRagRetriever object:
Using a Connector
In this example, we create a generation with a connector, which allows us to get a generation with citations to results from the connector. We use the “web-search” connector, which is available to everyone. But if you have created your own connector in your org, you can pass in its id, like so: rag = CohereRagRetriever(llm=cohere_chat_model, connectors=[{"id": "example-connector-id"}])
Here’s a code sample illustrating how to use a connector:
Using the create_stuff_documents_chain Chain
This chain takes a list of documents and formats them all into a single prompt, then passes that prompt to an LLM. Because it passes ALL documents at once, you should make sure the combined prompt fits within the context window of the LLM you are using.
Note: this feature is currently in beta.
Structured Output Generation
Cohere supports generating JSON objects to structure and organize the model’s responses in a way that can be used in downstream applications.
You can specify the response_format parameter to indicate that you want the response in a JSON object format.
Text Summarization
You can use the load_summarize_chain chain to perform text summarization.
Using LangChain on Private Deployments
You can use LangChain with privately deployed Cohere models. To use it, specify your model deployment URL in the base_url parameter.