Preparing the Rerank Fine-tuning Data

In this section, we will walk through how to prepare your data for fine-tuning Rerank.

Data Format

First, ensure your data is in jsonl format. Each line is a JSON object with the following fields:

  • query: This contains the question or target.
  • relevant_passages: This contains a list of documents or passages that contain information that answers the query.
  • hard_negatives: This contains examples that appear to be relevant to the query but ultimately are not, because they don’t contain the answer. They differ from easy negatives, which are totally unrelated to the query. Hard negatives are optional, but providing them leads to improvements in overall performance. We believe roughly five hard negatives per query leads to meaningful improvement, so include that many if you’re able to.

Here are a few example lines from a dataset that could be used to train a model that finds the paraphrased question most relevant to a target question.

JSON
1{"query": "What are your views on the supreme court's decision to make playing national anthem mandatory in cinema halls?", "relevant_passages": ["What are your views on Supreme Court decision of must National Anthem before movies?"], "hard_negatives": ["Is the decision of SC justified by not allowing national anthem inside courts but making it compulsory at cinema halls?", "Why has the supreme court of India ordered that cinemas play the national anthem before the screening of all movies? Is it justified?", "Is it a good decision by SC to play National Anthem in the theater before screening movie?", "Why is the national anthem being played in theaters?", "What does Balaji Vishwanathan think about the compulsory national anthem rule?"]}
2{"query": "Will Google's virtual monopoly in web search ever end? When?", "relevant_passages": ["Is Google's search monopoly capable of being disrupted?"], "hard_negatives": ["Who is capable of ending Google's monopoly in search?", "What is the future of Google?", "When will the Facebook era end?", "When will Facebook stop being the most popular?", "What happened to Google Search?"]}

Data Requirements

To pass the validation tests Cohere performs on uploaded data, ensure that:

  • There is at least one relevant_passage for every query.
  • Your dataset contains at least 256 unique queries in total.
  • Your data is encoded in UTF-8.
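
You can sanity-check these requirements locally before uploading. The following sketch is not part of the Cohere SDK; precheck is a hypothetical helper, and the file path is a placeholder.

PYTHON
import json

def precheck(path, min_queries=256):
    # hypothetical local validator that mirrors the requirements above
    queries = set()
    with open(path, encoding="utf-8") as f:  # raises UnicodeDecodeError if the file is not UTF-8
        for line_number, line in enumerate(f, start=1):
            row = json.loads(line)
            if not row.get("relevant_passages"):
                raise ValueError(f"line {line_number}: at least one relevant_passage is required")
            queries.add(row["query"])
    if len(queries) < min_queries:
        raise ValueError(f"found {len(queries)} unique queries; at least {min_queries} are required")

precheck("path/to/train.jsonl")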

Evaluation Datasets

Evaluation data is used to calculate metrics that measure the performance of your fine-tuned model. You can provide a validation dataset yourself, or you can let us split your training file into separate train and evaluation datasets.
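
If you choose to create the evaluation dataset yourself, one simple approach is a random split of your full data file. Below is a minimal sketch; the input path path/to/all.jsonl and the 80/20 ratio are illustrative choices, not requirements.

PYTHON
import random

# read all examples, shuffle them reproducibly, and split 80/20 into train and eval files
with open("path/to/all.jsonl", encoding="utf-8") as f:
    lines = f.readlines()

random.seed(42)
random.shuffle(lines)
cut = int(0.8 * len(lines))

with open("path/to/train.jsonl", "w", encoding="utf-8") as f:
    f.writelines(lines[:cut])
with open("path/to/eval.jsonl", "w", encoding="utf-8") as f:
    f.writelines(lines[cut:])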

Create a Dataset with the Python SDK

If you intend to fine-tune through our UI, you can skip to the next chapter. Otherwise, continue reading to learn how to create datasets for fine-tuning via our Python SDK. Before you start, we recommend that you read about the dataset API. Below you will find some code samples showing how to create datasets via the SDK:

PYTHON
import cohere

# instantiate the Cohere client
co = cohere.Client("YOUR_API_KEY")

# create a dataset from a training file; Cohere splits off the evaluation data for you
rerank_dataset = co.create_dataset(name="rerank-dataset",
                                   data=open("path/to/train.jsonl", "rb"),
                                   type="reranker-finetune-input")
print(rerank_dataset.await_validation())

# create a dataset with an explicit evaluation file
rerank_dataset_with_eval = co.create_dataset(name="rerank-dataset-with-eval",
                                             data=open("path/to/train.jsonl", "rb"),
                                             eval_data=open("path/to/eval.jsonl", "rb"),
                                             type="reranker-finetune-input")
print(rerank_dataset_with_eval.await_validation())
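
In both cases, await_validation() waits until Cohere has finished validating the uploaded data, so you can confirm the dataset passed the checks described above before launching a fine-tuning job.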