
Embed on LangChain

Cohere supports various integrations with LangChain, a large language model (LLM) framework that lets you quickly build applications based on Cohere’s models. This doc will guide you through using different Cohere embeddings with LangChain.

Prerequisites

Running Cohere embeddings with LangChain doesn’t require many prerequisites; consult the top-level document for more information.

Cohere Embeddings with LangChain

To use Cohere’s embeddings with LangChain, create a CohereEmbeddings object as follows (the available Cohere embedding models are listed here):

PYTHON
from langchain_cohere import CohereEmbeddings

# Define the Cohere embedding model
embeddings = CohereEmbeddings(
    cohere_api_key="COHERE_API_KEY",
    model="embed-english-v3.0",
)

# Embed a query and a document
text = "This is a test document."
query_result = embeddings.embed_query(text)
print(query_result[:5], "...")
doc_result = embeddings.embed_documents([text])
print(doc_result[0][:5], "...")
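A common next step with embeddings is semantic similarity: ranking documents by how close their vectors are to a query vector. The sketch below illustrates this with cosine similarity over small hypothetical vectors standing in for real embed_query/embed_documents output (actual embed-english-v3.0 vectors have 1024 dimensions); the vector values and document names are made up for illustration.

```python
import math


def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Hypothetical stand-ins for embed_query(...) and embed_documents(...) output
query_vec = [0.9, 0.1, 0.0]
doc_vecs = {
    "doc_a": [0.8, 0.2, 0.1],
    "doc_b": [0.1, 0.9, 0.3],
}

# Rank documents by similarity to the query, most similar first
ranked = sorted(
    doc_vecs,
    key=lambda name: cosine_similarity(query_vec, doc_vecs[name]),
    reverse=True,
)
print(ranked)  # doc_a is closer to the query than doc_b
```

In practice a vector store (such as Chroma, used below) performs this ranking for you at scale.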

To use these embeddings with Cohere’s RAG functionality, you will need one of the vector databases from this list. This example uses Chroma, so to run it you will need to install Chroma with pip install chromadb.

PYTHON
from langchain_cohere import (
    ChatCohere,
    CohereEmbeddings,
    CohereRagRetriever,
)
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.vectorstores import Chroma
from langchain_community.document_loaders import WebBaseLoader

user_query = "what is Cohere Toolkit?"

llm = ChatCohere(
    cohere_api_key="COHERE_API_KEY",
    model="command-r-plus-08-2024",
    temperature=0,
)

embeddings = CohereEmbeddings(
    cohere_api_key="COHERE_API_KEY",
    model="embed-english-v3.0",
)

# Load a web page and split it into chunks; you can also use data
# gathered elsewhere in your application
raw_documents = WebBaseLoader(
    "https://docs.cohere.com/docs/cohere-toolkit"
).load()
text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)
documents = text_splitter.split_documents(raw_documents)

# Create a vector store from the documents and retrieve the most
# relevant chunks for the query
db = Chroma.from_documents(documents, embeddings)
input_docs = db.as_retriever().invoke(user_query)

# Create the Cohere RAG retriever using the chat model
rag = CohereRagRetriever(llm=llm)
docs = rag.invoke(
    user_query,
    documents=input_docs,
)

# Print the retrieved documents
print("Documents:")
for doc in docs[:-1]:
    print(doc.metadata)
    print("\n\n" + doc.page_content)
    print("\n\n" + "-" * 30 + "\n\n")

# Print the final generation
answer = docs[-1].page_content
print("Answer:")
print(answer)

# Print the final citations
citations = docs[-1].metadata["citations"]
print("Citations:")
print(citations)
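The CharacterTextSplitter in the example above breaks the loaded page into chunks of roughly chunk_size characters, with chunk_overlap characters shared between neighbouring chunks so context isn’t lost at boundaries. As a simplified sketch of those mechanics (the real splitter prefers to break on separators such as newlines rather than at fixed offsets, so this function is illustrative, not LangChain’s implementation):

```python
def split_with_overlap(text, chunk_size, chunk_overlap):
    # Simplified fixed-width splitter: advance by
    # (chunk_size - chunk_overlap) characters per chunk.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]


# With chunk_size=500 and chunk_overlap=0 (as in the example above),
# a 1200-character text yields chunks of 500, 500, and 200 characters.
chunks = split_with_overlap("a" * 1200, chunk_size=500, chunk_overlap=0)
print([len(c) for c in chunks])  # [500, 500, 200]
```

Smaller chunks give the retriever finer-grained matches; a nonzero overlap helps when relevant sentences straddle a chunk boundary.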

Cohere with LangChain and Bedrock

Prerequisite

In addition to the prerequisites above, integrating Cohere with LangChain on Amazon Bedrock also requires:

  • The LangChain AWS package. To install it, run pip install langchain-aws.
  • AWS Python SDK. To install it, run pip install boto3. You can find more details here.
  • Configured authentication credentials for AWS. For more details, see this document.

Cohere Embeddings with LangChain and Amazon Bedrock

In this example, we create embeddings for a query using Bedrock and LangChain:

PYTHON
from langchain_aws import BedrockEmbeddings

# Replace the profile name with the one created in the setup.
embeddings = BedrockEmbeddings(
    credentials_profile_name="{PROFILE-NAME}",
    region_name="us-east-1",
    model_id="cohere.embed-english-v3",
)

embeddings.embed_query("This is the content of the document")

Using LangChain on Private Deployments

You can use LangChain with privately deployed Cohere models. To do so, pass your model deployment URL in the base_url parameter.

PYTHON
embeddings = CohereEmbeddings(
    base_url="<YOUR_DEPLOYMENT_URL>",
    cohere_api_key="COHERE_API_KEY",
    model="MODEL_NAME",
)