SQL Agent with Cohere and LangChain (i-5O Case Study)


This tutorial demonstrates how to create a SQL agent using Cohere and LangChain. The agent translates users' natural language queries into SQL and executes them against a database. This powerful combination allows for intuitive interaction with databases without requiring direct SQL knowledge.

Key topics covered:

  1. Setting up the necessary libraries and environment
  2. Connecting to a SQLite database
  3. Configuring the LangChain SQL Toolkit
  4. Creating a custom prompt template with few-shot examples
  5. Building and running the SQL agent
  6. Adding memory to the agent to keep track of historical messages

By the end of this tutorial, you’ll have a functional SQL agent that can answer questions about your data using natural language.

This tutorial uses mocked-up data from a manufacturing environment in which a product item's production is tracked across multiple stations, allowing for analysis of production efficiency, station performance, and individual item progress through the manufacturing process. It is modelled on a real customer use case.

The database contains two tables:

  • The product_tracking table records the movement of items through different zones in manufacturing stations, including start and end times, station names, and product IDs.
  • The status table logs the operational status of stations, including timestamps, station names, and whether they are productive or in downtime.


Import the required libraries

First, let’s import the necessary libraries for creating a SQL agent using Cohere and LangChain. These libraries enable natural language interaction with databases and provide tools for building AI-powered agents.

import os

os.environ["COHERE_API_KEY"] = "<cohere-api-key>"

! pip install faiss-cpu -qq

! pip install langchain-core langchain-cohere langchain-community -qq

from langchain_cohere import create_sql_agent
from langchain_cohere.chat_models import ChatCohere
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_community.vectorstores import FAISS
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_cohere import CohereEmbeddings
from datetime import datetime

Load the database

Next, we load our manufacturing data into a database.

We create an in-memory SQLite database using SQL scripts for the product_tracking and status tables. You can get the SQL tables here.

We then create a SQLDatabase instance, which will be used by our LangChain tools and agents to interact with the data.

import sqlite3

from langchain_community.utilities.sql_database import SQLDatabase
from sqlalchemy import create_engine
from sqlalchemy.pool import StaticPool

def get_engine_for_manufacturing_db():
    """Create an in-memory database with the manufacturing data tables."""
    connection = sqlite3.connect(":memory:", check_same_thread=False)

    # Read and execute the SQL scripts
    for sql_file in ['product_tracking.sql', 'status.sql']:
        with open(sql_file, 'r') as file:
            sql_script = file.read()
            connection.executescript(sql_script)

    return create_engine(
        "sqlite://",
        creator=lambda: connection,
        poolclass=StaticPool,
        connect_args={"check_same_thread": False},
    )

# Create the engine
engine = get_engine_for_manufacturing_db()

# Create the SQLDatabase instance
db = SQLDatabase(engine)

# Now you can use this db instance with your LangChain tools and agents

# Test the connection
db.run("SELECT * FROM status LIMIT 5;")

# Test the connection
db.run("SELECT * FROM product_tracking LIMIT 5;")

Set up the LangChain SQL Toolkit

Next, we initialize the LangChain SQL Toolkit and configure the language model to use Cohere's LLM. This prepares the necessary components for querying the SQL database using natural language.

## Define model to use
import os

MODEL = "command-r-plus-08-2024"
llm = ChatCohere(model=MODEL,
                 temperature=0.1,
                 verbose=True,
                 cohere_api_key=os.getenv("COHERE_API_KEY"))

toolkit = SQLDatabaseToolkit(db=db, llm=llm)
context = toolkit.get_context()
tools = toolkit.get_tools()

print('**List of pre-defined Langchain Tools**')
print([tool.name for tool in tools])
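
To see what each tool does, you can also print its description. The tool names noted in the comment below are the usual defaults bundled by SQLDatabaseToolkit; treat them as illustrative rather than guaranteed:

# The toolkit typically bundles four tools:
#   sql_db_list_tables   - list the tables in the database
#   sql_db_schema        - show the schema and sample rows for given tables
#   sql_db_query_checker - have the LLM double-check a query before it runs
#   sql_db_query         - execute a SQL query and return the result
for tool in tools:
    print(f"{tool.name}: {tool.description}")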

Create a prompt template

Next, we create a prompt template. In this section, we introduce a simple system message; in the following sections, we show how to improve the prompt with few-shot examples. The system message communicates instructions and context to the model at the beginning of a conversation.

In this case, we tell the model which SQL dialect to use and how many rows to return, among other instructions.

from langchain_core.prompts import (
    PromptTemplate,
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    MessagesPlaceholder
)

system_message = """You are an agent designed to interact with a SQL database.
You are an expert at answering questions about manufacturing data.
Given an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Always start with checking the schema of the available tables.
Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.
You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for the relevant columns given the question.
You have access to tools for interacting with the database.
Only use the given tools. Only use the information returned by the tools to construct your final answer.
You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.

DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.

The current date is {date}.

For questions regarding productive time, downtime, productive or productivity, use minutes as units.

For questions regarding productive time, downtime, productive or productivity use the status table.

For questions regarding processing time and average processing time, use minutes as units.

For questions regarding bottlenecks, processing time and average processing time use the product_tracking table.

If the question does not seem related to the database, just return "I don't know" as the answer."""

system_prompt = PromptTemplate.from_template(system_message)
full_prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessagePromptTemplate(prompt=system_prompt),
        MessagesPlaceholder(variable_name='chat_history', optional=True),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)
prompt_val = full_prompt.invoke({
    "input": "What was the productive time for all stations today?",
    "top_k": 5,
    "dialect": "SQLite",
    "date": datetime.now(),
    "agent_scratchpad": [],
})
print(prompt_val.to_string())

Create a few-shot prompt template

In the step above, we created a simple system prompt. Now, let's see how to build a better few-shot prompt template. Few-shot examples provide the model with context and improve its performance on specific tasks. In this case, we prepare examples of natural language queries and their corresponding SQL queries to help the model generate accurate SQL statements for our database.

In this example, we use SemanticSimilarityExampleSelector to select the top k examples that are most similar to an input query out of all the examples available.

examples = [
    {
        "input": "What was the average processing time for all stations on April 3rd 2024?",
        "query": "SELECT station_name, AVG(CAST(duration AS INTEGER)) AS avg_processing_time FROM product_tracking WHERE date = '2024-04-03' AND zone = 'wip' GROUP BY station_name ORDER BY station_name;",
    },
    {
        "input": "What was the average processing time for all stations on April 3rd 2024 between 4pm and 6pm?",
        "query": "SELECT station_name, AVG(CAST(duration AS INTEGER)) AS avg_processing_time FROM product_tracking WHERE date = '2024-04-03' AND CAST(hour AS INTEGER) BETWEEN 16 AND 18 AND zone = 'wip' GROUP BY station_name ORDER BY station_name;",
    },
    {
        "input": "What was the average processing time for stn4 on April 3rd 2024?",
        "query": "SELECT AVG(CAST(duration AS INTEGER)) AS avg_processing_time FROM product_tracking WHERE date = '2024-04-03' AND station_name = 'stn4' AND zone = 'wip';",
    },
    {
        "input": "How much downtime did stn2 have on April 3rd 2024?",
        "query": "SELECT COUNT(*) AS downtime_count FROM status WHERE date = '2024-04-03' AND station_name = 'stn2' AND station_status = 'downtime';",
    },
    {
        "input": "What were the productive time and downtime numbers for all stations on April 3rd 2024?",
        "query": "SELECT station_name, station_status, COUNT(*) as total_time FROM status WHERE date = '2024-04-03' GROUP BY station_name, station_status;",
    },
    {
        "input": "What was the bottleneck station on April 3rd 2024?",
        "query": "SELECT station_name, AVG(CAST(duration AS INTEGER)) AS avg_processing_time FROM product_tracking WHERE date = '2024-04-03' AND zone = 'wip' GROUP BY station_name ORDER BY avg_processing_time DESC LIMIT 1;",
    },
    {
        "input": "Which percentage of the time was stn5 down in the last week of May?",
        "query": "SELECT SUM(CASE WHEN station_status = 'downtime' THEN 1 ELSE 0 END) * 100.0 / COUNT(*) AS percentage_downtime FROM status WHERE station_name = 'stn5' AND date >= '2024-05-25' AND date <= '2024-05-31';",
    },
]
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,
    CohereEmbeddings(cohere_api_key=os.getenv("COHERE_API_KEY"),
                     model="embed-english-v3.0"),
    FAISS,
    k=5,
    input_keys=["input"],
)
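
As a quick sanity check (not part of the original notebook), you can ask the selector directly which stored examples it would retrieve for a given question:

# Preview which few-shot examples are retrieved for a sample question
selected = example_selector.select_examples(
    {"input": "What was the bottleneck station on April 3rd 2024?"}
)
for example in selected:
    print(example["input"])
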
from langchain_core.prompts import (
    ChatPromptTemplate,
    FewShotPromptTemplate,
    MessagesPlaceholder,
    PromptTemplate,
    SystemMessagePromptTemplate,
)

system_prefix = """You are an agent designed to interact with a SQL database.
You are an expert at answering questions about manufacturing data.
Given an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Always start with checking the schema of the available tables.
Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.
You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for the relevant columns given the question.
You have access to tools for interacting with the database.
Only use the given tools. Only use the information returned by the tools to construct your final answer.
You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.

DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.

The current date is {date}.

For questions regarding productive time, downtime, productive or productivity, use minutes as units.

For questions regarding productive time, downtime, productive or productivity use the status table.

For questions regarding processing time and average processing time, use minutes as units.

For questions regarding bottlenecks, processing time and average processing time use the product_tracking table.

If the question does not seem related to the database, just return "I don't know" as the answer.

Here are some examples of user inputs and their corresponding SQL queries:
"""

few_shot_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=PromptTemplate.from_template(
        "User input: {input}\nSQL query: {query}"
    ),
    input_variables=["input", "dialect", "top_k", "date"],
    prefix=system_prefix,
    suffix="",
)
full_prompt = ChatPromptTemplate.from_messages(
    [
        # In the previous section, this was system_prompt, without the few-shot examples.
        # We can use either prompting style as required
        SystemMessagePromptTemplate(prompt=few_shot_prompt),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)
# Example formatted prompt
prompt_val = full_prompt.invoke(
    {
        "input": "What was the productive time for all stations today?",
        "top_k": 5,
        "dialect": "SQLite",
        "date": datetime.now(),
        "agent_scratchpad": [],
    }
)
print(prompt_val.to_string())

Create the agent

Next, we create an instance of the SQL agent with LangChain's create_sql_agent function.

This agent will be capable of interpreting natural language queries, converting them into SQL queries, and executing them against our database. The agent uses the LLM we defined earlier, along with the SQL toolkit and the custom prompt we created.

agent = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    prompt=full_prompt,
    verbose=True
)

Run the agent

Now, we can run the agent and test it with a few different queries.

# %%time
output = agent.invoke({
    "input": "Which stations had some downtime in the month of May 2024?",
    "date": datetime.now()
})
print(output['output'])

# Answer: stn2, stn3 and stn5 had some downtime in the month of May 2024.

output = agent.invoke({
    "input": "What is the average processing duration at stn5 in the wip zone?",
    "date": datetime.now()
})
print(output['output'])

# Answer: 39.17 minutes

output = agent.invoke({
    "input": "Which station had the highest total duration in the wait zone?",
    "date": datetime.now()
})
print(output['output'])

# Answer: stn4 - 251 minutes

Memory in the SQL agent

We may want the agent to remember our previous messages so that we can engage with it coherently across multiple turns. In this section, let's look at how to add memory to the agent to achieve this.

from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import BaseMessage
from pydantic import BaseModel, Field
from typing import List

In the code snippets below, we create a class to store the chat history in memory. This can be customised to store messages in a database or any other suitable data store.

class InMemoryHistory(BaseChatMessageHistory, BaseModel):
    """In memory implementation of chat message history."""

    messages: List[BaseMessage] = Field(default_factory=list)

    def add_messages(self, messages: List[BaseMessage]) -> None:
        """Add a list of messages to the store"""
        self.messages.extend(messages)

    def clear(self) -> None:
        self.messages = []
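
As a quick illustration (not part of the original notebook), the class can be exercised on its own before wiring it into the agent:

# Store a couple of messages and read them back
from langchain_core.messages import AIMessage, HumanMessage

history = InMemoryHistory()
history.add_messages([HumanMessage(content="Hello"), AIMessage(content="Hi, how can I help?")])
print(history.messages)  # the two messages, stored in order
history.clear()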

In the code snippet below, we use the RunnableWithMessageHistory abstraction to wrap the agent created above and supply it with message history. We can then chat with the resulting agent_with_chat_history as shown below.

store = {}

def get_by_session_id(session_id: str):
    if session_id not in store:
        store[session_id] = InMemoryHistory()
    return store[session_id]

agent_with_chat_history = RunnableWithMessageHistory(agent, get_by_session_id, history_messages_key="chat_history")

output = agent_with_chat_history.invoke(
    {"input": "What station had the longest duration on 27th May 2024?", "date": datetime.now()},
    config={"configurable": {"session_id": "foo"}},
)
print(output["output"])

# Answer: stn2, with duration of 35 mins.
output = agent_with_chat_history.invoke(
    {"input": "Can you tell me when this station had downtime on 2024-04-03?", "date": datetime.now()},
    config={"configurable": {"session_id": "foo"}},
)
print(output["output"])

# Answer: 21:52:00

We can see from the code snippets above that the agent automatically infers that the follow-up question refers to 'stn2', without us having to specify it explicitly. This allows us to have more coherent conversations with the agent.
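
If you want to confirm what the agent is remembering, you can inspect the stored history for the session directly (an optional check, not part of the original notebook):

# Print the conversation turns accumulated for session "foo"
for message in store["foo"].messages:
    print(type(message).__name__, ":", str(message.content)[:100])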

Conclusion

This tutorial demonstrated how to create a SQL agent using Cohere and LangChain. The agent translates users' natural language queries into SQL and executes them against a database, allowing intuitive interaction with data without requiring direct SQL knowledge.
