Creating a QA Bot From Technical Documentation

This notebook demonstrates how to create a single-turn chatbot that answers user questions based on technical documentation made available to the model.

We use the aws-documentation dataset (link) as a representative source of technical documentation. It contains 26k+ AWS documentation pages, preprocessed into 120k+ chunks, along with 100 questions derived from real user queries.

We proceed as follows:

  1. Embed the AWS documentation into a vector database using Cohere embeddings and llama_index
  2. Build a retriever using Cohere's rerank for better accuracy, lower inference costs and lower latency
  3. Create model answers for the eval set of 100 questions
  4. Evaluate the model answers against the golden answers of the eval set

Setup

%%capture
!pip install cohere datasets llama_index llama-index-llms-cohere llama-index-embeddings-cohere
import cohere
import datasets
from llama_index.core import StorageContext, VectorStoreIndex, load_index_from_storage
from llama_index.core.schema import TextNode
from llama_index.embeddings.cohere import CohereEmbedding
import pandas as pd

import json
from pathlib import Path
from tqdm import tqdm
from typing import List

api_key = "" # <your API key>
co = cohere.Client(api_key=api_key)

1. Embed technical documentation and store as vector database

  • Load the dataset from HuggingFace
  • Compute embeddings using Cohere's implementation in LlamaIndex, CohereEmbedding
  • Store inside a vector database, VectorStoreIndex from LlamaIndex

Because this process is lengthy (~2h for all documents on a MacBook Pro), we store the index to disk for future reuse. We also provide a (commented) code snippet to index only a subset of the data. If you use this snippet, bear in mind that many documents will become unavailable to the model and, as a result, performance will suffer!

data = datasets.load_dataset("sauravjoshi23/aws-documentation-chunked")
print(data)

map_id2index = {sample["id"]: index for index, sample in enumerate(data["train"])}

DatasetDict({
    train: Dataset({
        features: ['id', 'text', 'source'],
        num_rows: 187147
    })
})

overwrite = True # when True, recompute the index even if it already exists on disk
path_index = Path(".") / "aws-documentation_index_cohere"

embed_model = CohereEmbedding(
    cohere_api_key=api_key,
    model_name="embed-english-v3.0",
)

if not path_index.exists() or overwrite:
    # Documents are prechunked. Keep them as-is for now
    stub_len = len("https://github.com/siagholami/aws-documentation/tree/main/documents/")
    documents = [
        # -- for indexing full dataset --
        TextNode(
            text=sample["text"],
            title=sample["source"][stub_len:], # save source minus stub
            id_=sample["id"],
        ) for sample in data["train"]
        # -- for testing on subset --
        # TextNode(
        #     text=data["train"][index]["text"],
        #     title=data["train"][index]["source"][stub_len:],
        #     id_=data["train"][index]["id"],
        # ) for index in range(1_000)
    ]
    index = VectorStoreIndex(documents, embed_model=embed_model)
    index.storage_context.persist(path_index)

else:
    storage_context = StorageContext.from_defaults(persist_dir=path_index)
    index = load_index_from_storage(storage_context, embed_model=embed_model)

2. Build a retriever using Cohere's rerank

The vector database we built using VectorStoreIndex comes with an in-built retriever. We can call that retriever to fetch the top $k$ documents most relevant to the user question with:

retriever = index.as_retriever(similarity_top_k=top_k)
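
For example, a quick sanity check of this built-in retriever might look like the sketch below (the query is our own illustrative example, not part of the eval set):

nodes = index.as_retriever(similarity_top_k=3).retrieve("How do I encrypt an Amazon EBS volume?")
for node in nodes:
    # each result is a NodeWithScore: `score` is the similarity, `node` wraps the original TextNode
    print(node.score, node.node.text[:100])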

We recently released Rerank-3 (April '24), which we can use to improve the quality of retrieval, as well as reduce latency and the cost of inference. To use the retriever with rerank, we create a thin wrapper around index.as_retriever as follows:

class RetrieverWithRerank:
    def __init__(self, retriever, api_key):
        self.retriever = retriever
        self.co = cohere.Client(api_key=api_key)

    def retrieve(self, query: str, top_n: int):
        # First call to the retriever fetches the closest indices
        nodes = self.retriever.retrieve(query)
        nodes = [
            {
                "text": node.node.text,
                "llamaindex_id": node.node.id_,
            }
            for node
            in nodes
        ]
        # Call co.rerank to improve the relevance of retrieved documents
        reranked = self.co.rerank(query=query, documents=nodes, model="rerank-english-v3.0", top_n=top_n)
        nodes = [nodes[node.index] for node in reranked.results]
        return nodes


top_k = 60 # how many documents to fetch on first pass
top_n = 20 # how many documents to sub-select with rerank

retriever = RetrieverWithRerank(
    index.as_retriever(similarity_top_k=top_k),
    api_key=api_key,
)

query = "What happens to my Amazon EC2 instances if I delete my Auto Scaling group?"

documents = retriever.retrieve(query, top_n=top_n)

resp = co.chat(message=query, model="command-r", temperature=0., documents=documents)
print(resp.text)

This works! With co.chat, you get the additional benefit that citations are returned for every span of text. Here's a simple function to display the citations inside square brackets.

def build_answer_with_citations(response):
    """ """
    text = response.text
    citations = response.citations

    # Construct text_with_citations adding citation spans as we iterate through citations
    end = 0
    text_with_citations = ""

    for citation in citations:
        # Add snippet between last citation and current citation
        start = citation.start
        text_with_citations += text[end : start]
        end = citation.end  # overwrite
        citation_blocks = " [" + ", ".join([stub[4:] for stub in citation.document_ids]) + "] "
        text_with_citations += text[start : end] + citation_blocks
    # Add any left-over
    text_with_citations += text[end:]

    return text_with_citations

grounded_answer = build_answer_with_citations(resp)
print(grounded_answer)

3. Create model answers for 100 QA pairs

Now that we have a running pipeline, we need to assess its performance.

The author of the repository provides 100 QA pairs that we can test the model on. Let's download these questions and run inference on all of them. Later, we will use Command-R+ -- Cohere's largest and most powerful model -- to measure performance.

url = "https://github.com/siagholami/aws-documentation/blob/main/QA_true.csv?raw=true"
qa_pairs = pd.read_csv(url)
qa_pairs.sample(2)

We'll use the fields as follows:

  • Question: the user question, passed to co.chat to generate the answer
  • Answer_True: treat as the ground truth; compare to the model-generated answer to determine its correctness
  • Document_True: treat as the (single) golden document; check the rank of this document inside the model's retrieved documents

We'll loop over each question and generate our model answer. We'll also complete two steps that will be useful for evaluating our model next:

  1. We compute the rank of the golden document among the retrieved documents -- this tells us how well our retrieval system performs
  2. We prepare the grading prompts -- these will be sent to an LLM scorer to assess the quality of the responses

LLM_EVAL_TEMPLATE = """## References
{references}

QUESTION: based on the above reference documents, answer the following question: {question}
ANSWER: {answer}
STUDENT RESPONSE: {completion}

Based on the question and answer above, grade the student's response. A correct response will contain exactly \
the same information as in the answer, even if it is worded differently. If the student's response is correct, \
give it a score of 1. Otherwise, give it a score of 0. Let's think step by step. Return your answer \
as parsable JSON with the following structure:
{{
    "reasoning": <reasoning>,
    "score": <score of 0 or 1>,
}}"""


def get_rank_of_golden_within_retrieved(golden: str, retrieved: List[dict]) -> int:
    """
    Returns the rank that the golden document (single) has within the retrieved documents
    * `golden` contains the source of the document, e.g. 'amazon-ec2-user-guide/EBSEncryption.md'
    * `retrieved` has a list of responses with key 'llamaindex_id', which links back to document sources
    """
    # Create {document: rank} map using llamaindex_id (count first occurrence of any document; they can
    # appear multiple times because they're chunked)
    doc_to_rank = {}
    for rank, doc in enumerate(retrieved):
        # retrieve source of document
        _id = doc["llamaindex_id"]
        source = data["train"][map_id2index[_id]]["source"]
        # format as in dataset
        source = source[stub_len:]  # remove stub
        source = source.replace("/doc_source", "")  # remove /doc_source/
        if source not in doc_to_rank:
            doc_to_rank[source] = rank + 1

    # Return rank of `golden`, defaulting to len(retrieved) + 1 if it's absent
    return doc_to_rank.get(golden, len(retrieved) + 1)

from tqdm import tqdm

answers = []
golden_answers = []
ranks = []
grading_prompts = []  # best computed in batch

for _, row in tqdm(qa_pairs.iterrows(), total=len(qa_pairs)):
    query, golden_answer, golden_doc = row["Question"], row["Answer_True"], row["Document_True"]
    golden_answers.append(golden_answer)

    # --- Produce answer using retriever ---
    documents = retriever.retrieve(query, top_n=top_n)
    resp = co.chat(message=query, model="command-r", temperature=0., documents=documents)
    answer = resp.text
    answers.append(answer)

    # --- Do some prework for evaluation later ---
    # Rank
    rank = get_rank_of_golden_within_retrieved(golden_doc, documents)
    ranks.append(rank)
    # Score: construct the grading prompts for LLM evals, then evaluate in batch
    # Need to reformat documents slightly
    documents = [{"index": str(i), "text": doc["text"]} for i, doc in enumerate(documents)]
    references_text = "\n\n".join("\n".join([f"{k}: {v}" for k, v in doc.items()]) for doc in documents)
    # ^ this looks complicated, but all it does is unpack each document's key-value pairs
    # into text, with documents separated by \n\n
    grading_prompt = LLM_EVAL_TEMPLATE.format(
        references=references_text, question=query, answer=golden_answer, completion=answer,
    )
    grading_prompts.append(grading_prompt)

4. Evaluate model performance

We want to test our model performance on two dimensions:

  1. How good is the final answer? We'll compare our model answer to the golden answer using Command-R+ as a judge.
  2. How good is the retrieval? We'll use the rank of the golden document within the retrieved documents to this end.

Note that this pipeline is for illustration only. To measure performance in practice, we would want to run more in-depth tests on a broader, representative dataset.

results = pd.DataFrame()
results["answer"] = answers
results["golden_answer"] = qa_pairs["Answer_True"]
results["rank"] = ranks

4.1 Compare answer to golden answer

We'll use Command-R+ as a judge of whether the answers produced by our model convey the same information as the golden answers. Since we defined the grading prompts earlier, we can simply ask our LLM judge to evaluate each grading prompt. After a little bit of postprocessing, we can then extract our model scores.

scores = []
reasonings = []

def remove_backticks(text: str) -> str:
  """
  Some models are trained to output JSON in Markdown formatting:
  ```json {json object}```
  Remove the backticks from those model responses so that they become
  parsable by json.loads.
  """
  if text.startswith("```json"):
      text = text[7:]
  if text.endswith("```"):
      text = text[:-3]
  return text


for prompt in tqdm(grading_prompts, total=len(grading_prompts)):
  resp = co.chat(message=prompt, model="command-r-plus", temperature=0.)
  # Convert response to JSON to extract the `score` and `reasoning` fields
  # We remove backticks for compatibility with different LLMs
  parsed = json.loads(remove_backticks(resp.text))
  scores.append(parsed["score"])
  reasonings.append(parsed["reasoning"])

results["score"] = scores
results["reasoning"] = reasonings
print(f"Average score: {results['score'].mean():.3f}")

4.2 Compute rank

We've already computed the rank of the golden documents using get_rank_of_golden_within_retrieved. Here, we'll plot the histogram of ranks, using blue when the answer scored a 1, and red when the answer scored a 0.

import matplotlib.pyplot as plt
import seaborn as sns

sns.set_theme(style="darkgrid", rc={"grid.color": ".8"})

results["rank_shifted_left"] = results["rank"] - 0.1
results["rank_shifted_right"] = results["rank"] + 0.1

f, ax = plt.subplots(figsize=(5, 3))
sns.histplot(data=results.loc[results["score"] == 1], x="rank_shifted_left", color="skyblue", label="Correct answer", binwidth=1)
sns.histplot(data=results.loc[results["score"] == 0], x="rank_shifted_right", color="red", label="False answer", binwidth=1)

ax.set_xticks([1, 5, 10, 15, 20])
ax.set_title("Rank of golden document (max means golden doc. wasn't retrieved)")
ax.set_xlabel("Rank")
ax.legend();

We see that retrieval works well overall: for 80% of questions, the golden document is within the top 5 documents. However, we also notice that approximately half of the false answers come from instances where the golden document wasn't retrieved at all (the maximum rank, i.e. beyond the top_n = 20 documents returned by the reranker). This could be improved, e.g. by adding metadata to the documents, such as their section headings, or by altering the chunking strategy.
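
As a rough sketch of the first idea (illustrative only, and relying on LlamaIndex's default behaviour of prepending node metadata to the text that gets embedded), we could attach each chunk's source path as node metadata before indexing, so the embedding model sees a hint of where the chunk comes from:

# Illustrative sketch: rebuild the nodes with the source path attached as metadata
documents_with_source = [
    TextNode(
        text=sample["text"],
        id_=sample["id"],
        metadata={"source": sample["source"][stub_len:]},
    )
    for sample in data["train"]
]
index_with_metadata = VectorStoreIndex(documents_with_source, embed_model=embed_model)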

There is also a non-negligible number of false answers for which the golden document was retrieved at rank 1. On closer inspection, many of these are due to the model phrasing its answers more verbosely than the (very laconic) golden answers. This highlights the importance of inspecting eval results before jumping to conclusions about model performance.
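
One quick way to eyeball these cases is a simple filter on the results dataframe built above, for example:

# Inspect answers the judge scored 0 even though the golden document was ranked first
failures = results.loc[(results["score"] == 0) & (results["rank"] == 1)]
print(failures[["answer", "golden_answer", "reasoning"]].to_string())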

Conclusions

In this notebook, we've built a QA bot that answers user questions based on technical documentation. We've learnt:

  1. How to embed the technical documentation into a vector database using Cohere embeddings and llama_index
  2. How to build a custom retriever that leverages Cohere's rerank
  3. How to evaluate model performance against a predetermined set of golden QA pairs