A Deeper Dive Into Semantic Search

In this chapter, you’ll dive deeper into building a semantic search model using the Embed endpoint. You’ll use this model to search for answers in a large text dataset.

Colab Notebook

This chapter comes with a corresponding Colab notebook, and we encourage you to follow along with it as you read the chapter.

For the setup, please refer to the Setting Up chapter at the beginning of this module.

Introduction

In Module 2, you learned about semantic search, and in a previous chapter of this module, you built a simple semantic search model using text embeddings. In this chapter, you’ll build a similar semantic search model on a much larger dataset, one made up of questions. Since the dataset is larger, we’ll use a tool that speeds up the nearest-neighbor search.

As you’ve seen before, semantic search goes well beyond keyword search, and its applications extend beyond building a web search engine. It can power a private search engine for internal documents or records, or features like StackOverflow’s “similar questions”.

Contents

  • Get the archive of questions
  • Embed the archive
  • Search using an index and nearest-neighbor search
  • Visualize the archive based on the embeddings

1. Import the Required Libraries

PYTHON
#@title Import libraries (Run this cell to execute required code) {display-mode: "form"}

import cohere
import numpy as np
import re
import pandas as pd
from tqdm import tqdm
from datasets import load_dataset
import umap
import altair as alt
from sklearn.metrics.pairwise import cosine_similarity
from annoy import AnnoyIndex
import warnings
warnings.filterwarnings('ignore')
pd.set_option('display.max_colwidth', None)

2. Get the Archive of Questions

We’ll use the trec dataset, which is made up of questions and their categories.

PYTHON
# Get the dataset
dataset = load_dataset("trec", split="train")

# Import into a pandas dataframe, taking only the first 1000 rows
df = pd.DataFrame(dataset)[:1000]

# Preview the data to ensure it has loaded correctly
df.head(10)
    text
0   How did serfdom develop in and then leave Russia ?
1   What films featured the character Popeye Doyle ?
2   How can I find a list of celebrities ’ real names ?
3   What fowl grabs the spotlight after the Chinese Year of the Monkey ?
4   What is the full form of .com ?
5   What contemptible scoundrel stole the cork from my lunch ?
6   What team did baseball ‘s St. Louis Browns become ?
7   What is the oldest profession ?
8   What are liver enzymes ?
9   Name the scar-faced bounty hunter of The Old West .
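
The dataset also carries each question’s category in the label-coarse and label-fine columns, stored as integers. If you’d like the human-readable category names, Hugging Face exposes them as ClassLabel features. A quick sketch, assuming the column names above (they vary across versions of the trec dataset):

PYTHON
# Inspect the category labels, which are stored as integer ClassLabel features.
# Note: depending on your version of the trec dataset, the columns may be
# named 'label-coarse'/'label-fine' or 'coarse_label'/'fine_label'.
coarse = dataset.features['label-coarse']
print(coarse.names)                                # all coarse category names
print(coarse.int2str(int(df['label-coarse'][0])))  # category of the first question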

3. Embed the Archive

Let’s now embed the text of the questions.

Embedding these thousand questions should take only a few seconds.

PYTHON
# Paste your API key here. Remember not to share it publicly.
# You can create a Cohere API key at dashboard.cohere.ai/welcome/register
api_key = ''

# Create a Cohere client
co = cohere.Client(api_key)

# Get the embeddings
embeds = co.embed(texts=list(df['text']),
                  model='embed-english-v2.0').embeddings

Let’s now build an index using Annoy, a library created by Spotify for nearest-neighbor search. Nearest-neighbor search is the optimization problem of finding the points in a given set that are closest (most similar) to a given query point.
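
To make the idea concrete, here is what an exact, brute-force nearest-neighbor search looks like using the cosine_similarity function we imported earlier. An index like Annoy exists to approximate this result much faster as the archive grows. A minimal sketch:

PYTHON
# Brute-force nearest neighbors: compare one embedding against all the others
query_vec = np.array(embeds[0]).reshape(1, -1)
similarities = cosine_similarity(query_vec, np.array(embeds))[0]

# The highest-scoring items (excluding the query itself, which scores 1.0)
# are the nearest neighbors
top_ids = np.argsort(-similarities)[1:6]
print(df['text'].iloc[top_ids])

With that baseline in mind, let’s build the Annoy index: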

PYTHON
# Create the search index, passing the size of the embeddings
search_index = AnnoyIndex(np.array(embeds).shape[1], 'angular')

# Add all the vectors to the search index
for i in range(len(embeds)):
    search_index.add_item(i, embeds[i])

# Build the index with 10 trees and save it to disk
search_index.build(10)
search_index.save('test.ann')

After building the index, we can use it to retrieve the nearest neighbors either of existing questions (section 4a) or of new questions that we embed (section 4b).
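
A side benefit of saving the index to disk ('test.ann' above) is that a later session can reload it without re-embedding or rebuilding anything. A quick sketch using Annoy’s load method:

PYTHON
# Reload the saved index in a new session; the vector size and metric
# must match those used when the index was built
embed_dim = np.array(embeds).shape[1]
loaded_index = AnnoyIndex(embed_dim, 'angular')
loaded_index.load('test.ann')  # fast: the file is memory-mapped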

4a. Find the Neighbors of an Example from the Dataset

If we’re only interested in measuring the similarities between the questions in the dataset (no outside queries), one simple way is to calculate the similarities between every pair of embeddings we have; another is to ask the index directly for a given question’s nearest neighbors.
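
If you’d like to try the pairwise route, here is a sketch of computing the full similarity matrix with cosine_similarity. It works fine at this scale (1,000 questions) but grows quadratically with archive size, which is exactly why we built the index:

PYTHON
# Compute the full 1000 x 1000 cosine similarity matrix
sim_matrix = cosine_similarity(np.array(embeds))

# Nearest neighbors of question 92 (excluding itself, which scores 1.0)
neighbor_ids = np.argsort(-sim_matrix[92])[1:10]
print(df['text'].iloc[neighbor_ids])

The code below takes the second route, using the index we built: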

PYTHON
# Choose an example (we'll retrieve others similar to it)
example_id = 92

# Retrieve its nearest neighbors
similar_item_ids = search_index.get_nns_by_item(example_id, 10,
                                                include_distances=True)

# Format and print the texts and distances (dropping the example itself)
results = pd.DataFrame(data={'texts': df.iloc[similar_item_ids[0]]['text'],
                             'distance': similar_item_ids[1]}).drop(example_id)

print(f"Question:'{df.iloc[example_id]['text']}'\nNearest neighbors:")
results
Question:'What are bear and bull markets ?'
Nearest neighbors:

      texts                                                distance
614   What animals do you find in the stock market ?       0.896121
137   What are equity securities ?                          0.970260
601   What is “ the bear of beers ” ?                       0.978348
307   What does NASDAQ stand for ?                          0.997819
683   What is the rarest coin ?                             1.027727
112   What are the world ‘s four oceans ?                   1.049661
864   When did the Dow first reach ?                        1.050362
547   Where can stocks be traded on-line ?                  1.053685
871   What are the Benelux countries ?                      1.054899
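
One note on reading the distance column: with the 'angular' metric, Annoy reports angular distance, which equals sqrt(2 × (1 − cosine similarity)), so smaller means more similar. If you prefer to read the results as cosine similarities, you can convert them. A quick sketch:

PYTHON
# Convert Annoy's angular distances back to cosine similarities:
# angular_distance = sqrt(2 * (1 - cosine_similarity))
angular_distances = np.array(similar_item_ids[1])
cosine_sims = 1 - (angular_distances ** 2) / 2
print(cosine_sims)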

4b. Find the Neighbors of a User Query

We’re not limited to searching with existing items. If we get a new query, we can embed it and find its nearest neighbors in the dataset.

PYTHON
query = "What is the tallest mountain in the world?"

# Get the query's embedding
query_embed = co.embed(texts=[query],
                       model="embed-english-v2.0").embeddings

# Retrieve the nearest neighbors
similar_item_ids = search_index.get_nns_by_vector(query_embed[0], 10,
                                                  include_distances=True)

# Format the results
results = pd.DataFrame(data={'texts': df.iloc[similar_item_ids[0]]['text'],
                             'distance': similar_item_ids[1]})

print(f"Query:'{query}'\nNearest neighbors:")
results
      texts                                                                          distance
236   What is the name of the tallest mountain in the world ?                        0.431913
670   What is the highest mountain in the world ?                                    0.436290
907   What mountain range is traversed by the highest railroad in the world ?        0.715265
435   What is the highest peak in Africa ?                                           0.717943
354   What ocean is the largest in the world ?                                       0.762917
412   What was the highest mountain on earth before Mount Everest was discovered ?   0.767649
109   Where is the highest point in Japan ?                                          0.784319
114   What is the largest snake in the world ?                                       0.789743
656   What ‘s the tallest building in New York City ?                                0.793982
901   What ‘s the longest river in the world ?                                       0.794352
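
Since you’ll likely run many such queries, it helps to wrap the embed-and-retrieve steps from this section into a single helper. A minimal sketch (search_archive is an illustrative name, not part of any library):

PYTHON
def search_archive(query, n_results=10):
    """Embed a query and retrieve its nearest neighbors from the index."""
    query_embed = co.embed(texts=[query],
                           model="embed-english-v2.0").embeddings
    ids, distances = search_index.get_nns_by_vector(query_embed[0], n_results,
                                                    include_distances=True)
    return pd.DataFrame(data={'texts': df.iloc[ids]['text'],
                              'distance': distances})

# Example usage
search_archive("What is the tallest mountain in the world?")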

5. Visualize the Archive

Finally, let’s plot the entire archive so we can browse it rather than only search it. UMAP reduces the embeddings to two dimensions, and Altair renders them as an interactive scatter plot.

PYTHON
#@title Plot the archive {display-mode: "form"}

# UMAP reduces the embeddings to 2 dimensions that we can plot
reducer = umap.UMAP(n_neighbors=20)
umap_embeds = reducer.fit_transform(embeds)

# Prepare the data for an interactive visualization using Altair
df_explore = pd.DataFrame(data={'text': df['text']})
df_explore['x'] = umap_embeds[:, 0]
df_explore['y'] = umap_embeds[:, 1]

# Plot
chart = alt.Chart(df_explore).mark_circle(size=60).encode(
    x=alt.X('x', scale=alt.Scale(zero=False)),
    y=alt.Y('y', scale=alt.Scale(zero=False)),
    tooltip=['text']
).properties(
    width=700,
    height=400
)
chart.interactive()
Hover over the points to read the questions. Can you spot clusters of similar topics?
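
If you want to share the plot outside the notebook, Altair can also save the chart as a standalone interactive HTML file. A quick sketch (the filename is arbitrary):

PYTHON
# Save the interactive chart as a self-contained HTML file
chart.interactive().save('question_archive.html')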

Conclusion

This concludes this introductory guide to semantic search using sentence embeddings. As you continue down the path of building a search product, additional considerations arise, like dealing with long texts or training the embeddings to better suit a specific use case.

Original Source

This material comes from the post Semantic Search.