Basic Semantic Search
Language models give computers the ability to search by meaning, going beyond matching keywords. This capability is called semantic search.
In this notebook, we'll build a simple semantic search engine. The applications of semantic search go beyond building a web search engine: it can power a private search engine for internal documents or records, or a feature like StackOverflow's "similar questions". Here are the steps we'll follow:
- Get the archive of questions
- Embed the archive
- Search using an index and nearest neighbor search
- Visualize the archive based on the embeddings
1. Getting Set Up
You'll need your Cohere API key for the next cell. If you haven't yet, sign up to Cohere and get one, then paste it in the cell below. And if you're running an older version of the SDK, you might need to upgrade it like so:
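A minimal setup sketch; the client class and the `"YOUR_API_KEY"` placeholder are assumptions, and the exact call may differ across SDK versions:

```python
# Upgrade the Cohere SDK (run in a notebook cell)
!pip install -U cohere

import cohere

# Paste your API key here (placeholder value)
co = cohere.Client("YOUR_API_KEY")
```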
2. Get the Archive of Questions
We'll use the trec dataset, which is made up of questions and their categories.
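Here's a sketch of loading the dataset, assuming the Hugging Face `datasets` library; the column names may vary by dataset version:

```python
import pandas as pd
from datasets import load_dataset

# Load the trec dataset and keep the first 1,000 questions
dataset = load_dataset("trec", split="train")
df = pd.DataFrame(dataset)[:1000]
df.head(10)
```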
| | label-coarse | label-fine | text |
|---|---|---|---|
| 0 | 0 | 0 | How did serfdom develop in and then leave Russia ? |
| 1 | 1 | 1 | What films featured the character Popeye Doyle ? |
| 2 | 0 | 0 | How can I find a list of celebrities ' real names ? |
| 3 | 1 | 2 | What fowl grabs the spotlight after the Chinese Year of the Monkey ? |
| 4 | 2 | 3 | What is the full form of .com ? |
| 5 | 3 | 4 | What contemptible scoundrel stole the cork from my lunch ? |
| 6 | 3 | 5 | What team did baseball 's St. Louis Browns become ? |
| 7 | 3 | 6 | What is the oldest profession ? |
| 8 | 0 | 7 | What are liver enzymes ? |
| 9 | 3 | 4 | Name the scar-faced bounty hunter of The Old West . |
3. Embed the Archive
The next step is to embed the text of the questions. Getting a thousand embeddings should take about fifteen seconds.
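A minimal sketch of the embedding call, assuming the `co` client from the setup step; the model name (`embed-english-v3.0`) is an assumption, so substitute whichever embedding model you're using:

```python
import numpy as np

# Embed the question texts; input_type marks these as documents to be searched
response = co.embed(
    texts=df["text"].tolist(),
    model="embed-english-v3.0",   # assumed model name
    input_type="search_document",
)
embeds = np.array(response.embeddings)
print(embeds.shape)  # (1000, embedding_dimension)
```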
4. Search Using an Index and Nearest Neighbor Search
Let’s now use Annoy to build an index that stores the embeddings in a way that is optimized for fast search. This approach scales well to a large number of texts (other options include Faiss, ScaNN, and PyNNDescent).
After building the index, we can use it to retrieve the nearest neighbors either of existing questions (section 4.1), or of new questions that we embed (section 4.2).
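A sketch of building the index with Annoy; the number of trees and the distance metric are tunable choices, not fixed requirements:

```python
from annoy import AnnoyIndex

# Create an index whose dimensionality matches the embeddings;
# 'angular' is Annoy's cosine-like distance
search_index = AnnoyIndex(embeds.shape[1], "angular")

# Add each embedding to the index, keyed by its row number
for i, embedding in enumerate(embeds):
    search_index.add_item(i, embedding)

# Build the index; more trees gives higher accuracy at the cost of build time
search_index.build(10)
search_index.save("trec.ann")
```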
4.1. Find the Neighbors of an Example from the Dataset
If we're only interested in measuring the distances between the questions in the dataset (no outside queries), a simple approach is to retrieve each question's nearest neighbors directly from the index we just built.
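A sketch of looking up the neighbors of one question by its position in the dataset; the item id 92 below is a hypothetical example:

```python
# Retrieve the 10 nearest neighbors of a question already in the index
# (item id 92 is a hypothetical example)
ids, distances = search_index.get_nns_by_item(92, 10, include_distances=True)

results = pd.DataFrame({
    "texts": df.iloc[ids]["text"].values,
    "distance": distances,
})
print(results)
```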
| | texts | distance |
|---|---|---|
| 614 | What animals do you find in the stock market ? | 0.904278 |
| 137 | What are equity securities ? | 0.992819 |
| 513 | What do economists do ? | 1.066583 |
| 307 | What does NASDAQ stand for ? | 1.080738 |
| 363 | What does it mean " Rupee Depreciates " ? | 1.086724 |
| 932 | Why did the world enter a global depression in 1929 ? | 1.099370 |
| 547 | Where can stocks be traded on-line ? | 1.105368 |
| 922 | What is the difference between a median and a mean ? | 1.141870 |
| 601 | What is " the bear of beers " ? | 1.154140 |
4.2. Find the Neighbors of a User Query
We're not limited to searching with existing items: given a new query, we can embed it and find its nearest neighbors in the dataset.
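A sketch of that flow, reusing the assumed model name from the embedding step; the query string is just an example:

```python
query = "What is the tallest mountain in the world?"

# Embed the query; input_type marks it as a search query rather than a document
query_embed = co.embed(
    texts=[query],
    model="embed-english-v3.0",   # assumed model name; match the archive's model
    input_type="search_query",
).embeddings[0]

# Retrieve the 10 archived questions nearest to the query embedding
ids, distances = search_index.get_nns_by_vector(query_embed, 10, include_distances=True)

results = pd.DataFrame({
    "texts": df.iloc[ids]["text"].values,
    "distance": distances,
})
print(results)
```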
| | texts | distance |
|---|---|---|
| 236 | What is the name of the tallest mountain in the world ? | 0.447309 |
| 670 | What is the highest mountain in the world ? | 0.552254 |
| 412 | What was the highest mountain on earth before Mount Everest was discovered ? | 0.801252 |
| 907 | What mountain range is traversed by the highest railroad in the world ? | 0.929516 |
| 435 | What is the highest peak in Africa ? | 0.930806 |
| 109 | Where is the highest point in Japan ? | 0.977315 |
| 901 | What 's the longest river in the world ? | 1.064209 |
| 114 | What is the largest snake in the world ? | 1.076390 |
| 962 | What 's the second-largest island in the world ? | 1.088034 |
| 27 | What is the highest waterfall in the United States ? | 1.091145 |
5. Visualizing the Archive
Finally, let's plot all the questions on a 2D chart so you can visualize the semantic similarities in this dataset!
Hover over the points to read the text. Do you see patterns in the clustered points: similar questions, or questions asking about similar topics, sitting near each other?
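One way to produce this chart, as a sketch: reduce the embeddings to two dimensions with UMAP and plot them with an interactive library such as Altair. Both libraries are assumptions about tooling, and `n_neighbors` is a tunable choice:

```python
import umap
import altair as alt

# Reduce the embeddings from their original dimensionality down to 2D
reducer = umap.UMAP(n_neighbors=20)
umap_embeds = reducer.fit_transform(embeds)

df_plot = df.copy()
df_plot["x"] = umap_embeds[:, 0]
df_plot["y"] = umap_embeds[:, 1]

# Interactive scatter plot; hovering over a point shows the question text
chart = alt.Chart(df_plot).mark_circle(size=60).encode(
    x="x",
    y="y",
    tooltip=["text"],
).interactive()
chart
```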
This concludes this introductory guide to semantic search using sentence embeddings. As you continue on the path of building a search product, additional considerations arise (like dealing with long texts, or fine-tuning the embeddings for a specific use case).
We can’t wait to see what you start building! Share your projects or find support on Discord.