Creating a QA Bot From Technical Documentation

This notebook demonstrates how to create a single-turn chatbot that answers user questions based on technical documentation made available to the model.

We use the aws-documentation dataset (available on Hugging Face as sauravjoshi23/aws-documentation-chunked) as a representative example. This dataset contains 26k+ AWS documentation pages, preprocessed into 120k+ chunks, and 100 questions based on real user questions.

We proceed as follows:

  1. Embed the AWS documentation into a vector database using Cohere embeddings and llama_index
  2. Build a retriever using Cohere’s rerank for better accuracy, lower inference costs and lower latency
  3. Create model answers for the eval set of 100 questions
  4. Evaluate the model answers against the golden answers of the eval set

Setup

PYTHON
%%capture
!pip install cohere datasets llama_index llama-index-llms-cohere llama-index-embeddings-cohere
PYTHON
import cohere
import datasets
from llama_index.core import StorageContext, VectorStoreIndex, load_index_from_storage
from llama_index.core.schema import TextNode
from llama_index.embeddings.cohere import CohereEmbedding
import pandas as pd

import json
from pathlib import Path
from tqdm import tqdm
from typing import List
PYTHON
api_key = ""  # <YOUR API KEY>
co = cohere.Client(api_key=api_key)

1. Embed technical documentation and store as vector database

  • Load the dataset from HuggingFace
  • Compute embeddings using Cohere’s implementation in LlamaIndex, CohereEmbedding
  • Store inside a vector database, VectorStoreIndex from LlamaIndex

Because this process is lengthy (~2h for all documents on a MacBook Pro), we store the index to disk for future reuse. We also provide a (commented) code snippet to index only a subset of the data. If you use this snippet, bear in mind that many documents will become unavailable to the model and, as a result, performance will suffer!

PYTHON
data = datasets.load_dataset("sauravjoshi23/aws-documentation-chunked")
print(data)

map_id2index = {sample["id"]: index for index, sample in enumerate(data["train"])}
Output
/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_token.py:88: UserWarning:
The secret `HF_TOKEN` does not exist in your Colab secrets.
To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.
You will be able to reuse this secret in all of your notebooks.
Please note that authentication is recommended but still optional to access public models or datasets.
warnings.warn(
DatasetDict({
    train: Dataset({
        features: ['id', 'text', 'source'],
        num_rows: 187147
    })
})
PYTHON
overwrite = True  # set to False to reuse a previously saved index
path_index = Path(".") / "aws-documentation_index_cohere"

embed_model = CohereEmbedding(
    cohere_api_key=api_key,
    model_name="embed-english-v3.0",
)

if not path_index.exists() or overwrite:
    # Documents are prechunked. Keep them as-is for now
    stub_len = len("https://github.com/siagholami/aws-documentation/tree/main/documents/")
    documents = [
        # -- for indexing full dataset --
        TextNode(
            text=sample["text"],
            title=sample["source"][stub_len:],  # save source minus stub
            id_=sample["id"],
        ) for sample in data["train"]
        # -- for testing on subset --
        # TextNode(
        #     text=data["train"][index]["text"],
        #     title=data["train"][index]["source"][stub_len:],
        #     id_=data["train"][index]["id"],
        # ) for index in range(1_000)
    ]
    index = VectorStoreIndex(documents, embed_model=embed_model)
    index.storage_context.persist(path_index)

else:
    storage_context = StorageContext.from_defaults(persist_dir=path_index)
    index = load_index_from_storage(storage_context, embed_model=embed_model)

2. Build a retriever using Cohere’s rerank

The vector database we built using VectorStoreIndex comes with an in-built retriever. We can call that retriever to fetch the top k documents most relevant to the user question with:

PYTHON
retriever = index.as_retriever(similarity_top_k=top_k)

We recently released Rerank-3 (April ‘24), which we can use to improve the quality of retrieval, as well as reduce latency and the cost of inference. To use the retriever with rerank, we create a thin wrapper around index.as_retriever as follows:

PYTHON
class RetrieverWithRerank:
    def __init__(self, retriever, api_key):
        self.retriever = retriever
        self.co = cohere.Client(api_key=api_key)

    def retrieve(self, query: str, top_n: int):
        # First call to the retriever fetches the closest indices
        nodes = self.retriever.retrieve(query)
        nodes = [
            {
                "text": node.node.text,
                "llamaindex_id": node.node.id_,
            }
            for node in nodes
        ]
        # Call co.rerank to improve the relevance of retrieved documents
        reranked = self.co.rerank(query=query, documents=nodes, model="rerank-english-v3.0", top_n=top_n)
        nodes = [nodes[node.index] for node in reranked.results]
        return nodes


top_k = 60  # how many documents to fetch on first pass
top_n = 20  # how many documents to sub-select with rerank

retriever = RetrieverWithRerank(
    index.as_retriever(similarity_top_k=top_k),
    api_key=api_key,
)
PYTHON
query = "What happens to my Amazon EC2 instances if I delete my Auto Scaling group?"

documents = retriever.retrieve(query, top_n=top_n)

resp = co.chat(message=query, model="command-r", temperature=0., documents=documents)
print(resp.text)

This works! With co.chat, you get the additional benefit that citations are returned for every span of text. Here’s a simple function to display the citations inside square brackets.

PYTHON
def build_answer_with_citations(response):
    """Return the response text with citation markers, in square brackets, appended after each cited span."""
    text = response.text
    citations = response.citations

    # Construct text_with_citations adding citation spans as we iterate through citations
    end = 0
    text_with_citations = ""

    for citation in citations:
        # Add snippet between last citation and current citation
        start = citation.start
        text_with_citations += text[end:start]
        end = citation.end  # overwrite
        citation_blocks = " [" + ", ".join([stub[4:] for stub in citation.document_ids]) + "] "
        text_with_citations += text[start:end] + citation_blocks
    # Add any leftover text after the last citation
    text_with_citations += text[end:]

    return text_with_citations


grounded_answer = build_answer_with_citations(resp)
print(grounded_answer)

3. Create model answers for 100 QA pairs

Now that we have a running pipeline, we need to assess its performance.

The author of the repository provides 100 QA pairs that we can test the model on. Let’s download these questions, then run inference on all 100 questions. Later, we will use Command-R+ — Cohere’s largest and most powerful model — to measure performance.

PYTHON
url = "https://github.com/siagholami/aws-documentation/blob/main/QA_true.csv?raw=true"
qa_pairs = pd.read_csv(url)
qa_pairs.sample(2)

We’ll use the fields as follows:

  • Question: the user question, passed to co.chat to generate the answer
  • Answer_True: treat as the ground truth; compare to the model-generated answer to determine its correctness
  • Document_True: treat as the (single) golden document; check the rank of this document inside the model’s retrieved documents

We’ll loop over each question and generate our model answer. We’ll also complete two steps that will be useful for evaluating our model next:

  1. We compute the rank of the golden document among the retrieved documents; this indicates how well our retrieval system performs
  2. We prepare the grading prompts, which will be sent to an LLM judge to score the quality of the responses
PYTHON
LLM_EVAL_TEMPLATE = """## References
{references}

QUESTION: based on the above reference documents, answer the following question: {question}
ANSWER: {answer}
STUDENT RESPONSE: {completion}

Based on the question and answer above, grade the student's response. A correct response will contain exactly \
the same information as in the answer, even if it is worded differently. If the student's response is correct, \
give it a score of 1. Otherwise, give it a score of 0. Let's think step by step. Return your answer \
as a valid JSON with the following structure:
{{
    "reasoning": <reasoning>,
    "score": <score of 0 or 1>
}}"""


def get_rank_of_golden_within_retrieved(golden: str, retrieved: List[dict]) -> int:
    """
    Returns the rank that the golden document (single) has within the retrieved documents
    * `golden` contains the source of the document, e.g. 'amazon-ec2-user-guide/EBSEncryption.md'
    * `retrieved` is a list of responses with key 'llamaindex_id', which links back to document sources
    """
    # Create {document: rank} map using llamaindex_id (count first occurrence of any document; they can
    # appear multiple times because they're chunked)
    doc_to_rank = {}
    for rank, doc in enumerate(retrieved):
        # retrieve source of document
        _id = doc["llamaindex_id"]
        source = data["train"][map_id2index[_id]]["source"]
        # format as in dataset
        source = source[stub_len:]  # remove stub
        source = source.replace("/doc_source", "")  # remove /doc_source/
        if source not in doc_to_rank:
            doc_to_rank[source] = rank + 1

    # Return rank of `golden`, defaulting to len(retrieved) + 1 if it's absent
    return doc_to_rank.get(golden, len(retrieved) + 1)
PYTHON
from tqdm import tqdm

answers = []
golden_answers = []
ranks = []
grading_prompts = []  # best computed in batch

for _, row in tqdm(qa_pairs.iterrows(), total=len(qa_pairs)):
    query, golden_answer, golden_doc = row["Question"], row["Answer_True"], row["Document_True"]
    golden_answers.append(golden_answer)

    # --- Produce answer using retriever ---
    documents = retriever.retrieve(query, top_n=top_n)
    resp = co.chat(message=query, model="command-r", temperature=0., documents=documents)
    answer = resp.text
    answers.append(answer)

    # --- Do some prework for evaluation later ---
    # Rank
    rank = get_rank_of_golden_within_retrieved(golden_doc, documents)
    ranks.append(rank)
    # Score: construct the grading prompts for LLM evals, then evaluate in batch
    # Need to reformat documents slightly
    documents = [{"index": str(i), "text": doc["text"]} for i, doc in enumerate(documents)]
    references_text = "\n\n".join("\n".join([f"{k}: {v}" for k, v in doc.items()]) for doc in documents)
    # ^ snippet looks complicated, but all it does is unpack all kwargs from `documents`
    # into text separated by \n\n
    grading_prompt = LLM_EVAL_TEMPLATE.format(
        references=references_text, question=query, answer=golden_answer, completion=answer,
    )
    grading_prompts.append(grading_prompt)

4. Evaluate model performance

We want to test our model performance on two dimensions:

  1. How good is the final answer? We’ll compare our model answer to the golden answer using Command-R+ as a judge.
  2. How good is the retrieval? We’ll use the rank of the golden document within the retrieved documents to this end.

Note that this pipeline is for illustration only. To measure performance in practice, we would want to run more in-depth tests on a broader, representative dataset.

PYTHON
results = pd.DataFrame()
results["answer"] = answers
results["golden_answer"] = qa_pairs["Answer_True"]
results["rank"] = ranks

4.1 Compare answer to golden answer

We’ll use Command-R+ as a judge of whether the answers produced by our model convey the same information as the golden answers. Since we defined the grading prompts earlier, we can simply send each prompt to our LLM judge. After a little bit of postprocessing, we can then extract our model scores.

PYTHON
scores = []
reasonings = []

def remove_backticks(text: str) -> str:
    """
    Some models are trained to output JSON in Markdown formatting:
    ```json {json object}```
    Remove the backticks from those model responses so that they become
    parsable by json.loads.
    """
    if text.startswith("```json"):
        text = text[7:]
    if text.endswith("```"):
        text = text[:-3]
    return text


for prompt in tqdm(grading_prompts, total=len(grading_prompts)):
    resp = co.chat(message=prompt, model="command-r-plus", temperature=0.)
    # Convert response to JSON to extract the `score` and `reasoning` fields
    # We remove backticks for compatibility with different LLMs
    parsed = json.loads(remove_backticks(resp.text))
    scores.append(parsed["score"])
    reasonings.append(parsed["reasoning"])
PYTHON
results["score"] = scores
results["reasoning"] = reasonings
PYTHON
print(f"Average score: {results['score'].mean():.3f}")

4.2 Compute rank

We’ve already computed the rank of the golden documents using get_rank_of_golden_within_retrieved. Here, we’ll plot the histogram of ranks, using blue when the answer scored a 1, and red when the answer scored a 0.

PYTHON
import matplotlib.pyplot as plt
import seaborn as sns

sns.set_theme(style="darkgrid", rc={"grid.color": ".8"})

results["rank_shifted_left"] = results["rank"] - 0.1
results["rank_shifted_right"] = results["rank"] + 0.1

f, ax = plt.subplots(figsize=(5, 3))
sns.histplot(data=results.loc[results["score"] == 1], x="rank_shifted_left", color="skyblue", label="Correct answer", binwidth=1)
sns.histplot(data=results.loc[results["score"] == 0], x="rank_shifted_right", color="red", label="False answer", binwidth=1)

ax.set_xticks([1, 5, 10, 15, 20])
ax.set_title("Rank of golden document (max means golden doc. wasn't retrieved)")
ax.set_xlabel("Rank")
ax.legend();
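
Alongside the histogram, a quick numerical check helps quantify retrieval. The sketch below is an addition for illustration; it only uses the `results` dataframe and the `top_n` variable defined above.

PYTHON
# Sketch: share of questions with the golden document in the top 5,
# and share where it was not among the top_n reranked documents at all.
top5_share = (results["rank"] <= 5).mean()
missed_share = (results["rank"] > top_n).mean()
print(f"Golden document in top 5: {top5_share:.0%}")
print(f"Golden document not retrieved: {missed_share:.0%}")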

We see that retrieval works well overall: for 80% of questions, the golden document is within the top 5 documents. However, we also notice that approx. half the false answers come from instances where the golden document wasn’t retrieved at all (the rightmost bar: the golden document was not among the top_n = 20 reranked documents). This should be improved, e.g. by adding metadata to the documents, such as their section headings (see the sketch below), or by altering the chunking strategy.
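
As a sketch of the metadata idea, the snippet below shows one way to attach each chunk’s source path as metadata on its TextNode, so that document and section names contribute to the embedded text (LlamaIndex includes node metadata in the content it embeds by default). This is an illustrative variation on the indexing code from Section 1, not something run above; it assumes the same `data`, `stub_len` and `embed_model` variables.

PYTHON
# Sketch only: re-index with the source path stored as metadata, so the document /
# section name informs retrieval. Assumes `data`, `stub_len`, `embed_model` from Section 1.
documents_with_meta = [
    TextNode(
        text=sample["text"],
        id_=sample["id"],
        # e.g. 'amazon-ec2-user-guide/EBSEncryption.md'
        metadata={"source": sample["source"][stub_len:]},
    )
    for sample in data["train"]
]
index_with_meta = VectorStoreIndex(documents_with_meta, embed_model=embed_model)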

There is also a non-negligible number of false answers where the top document was retrieved. On closer inspection, many of these are due to the model phrasing its answers more verbosely than the (very laconic) golden answers. This highlights the importance of checking eval results before jumping to conclusions about model performance.
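
One way to surface these cases is to filter `results` for answers that scored 0 despite the golden document being ranked first, and read the judge’s reasoning next to both answers. A possible sketch, using only the columns already present in `results`:

PYTHON
# Sketch: inspect false answers for which the golden document was ranked first.
# These are candidates for "correct but worded differently / more verbosely".
suspects = results.loc[
    (results["score"] == 0) & (results["rank"] == 1),
    ["answer", "golden_answer", "reasoning"],
]
print(f"{len(suspects)} false answers despite rank-1 retrieval")
print(suspects.head())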

Conclusions

In this notebook, we’ve built a QA bot that answers user questions based on technical documentation. We’ve learnt:

  1. How to embed the technical documentation into a vector database using Cohere embeddings and llama_index
  2. How to build a custom retriever that leverages Cohere’s rerank
  3. How to evaluate model performance against a predetermined set of golden QA pairs