Preparing the Classify Fine-tuning data

In this section, we will walk through how you can prepare your data for fine-tuning models for Classification.

For classification fine-tuning jobs, you can choose between two types of datasets:

  1. Single-label data
  2. Multi-label data

To start a fine-tuning job, you need at least 40 examples. Each label must have at least 5 examples, and there must be at least 2 unique labels.
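
If you want to sanity-check these requirements before uploading, counting the examples per label is enough. Below is a minimal sketch, assuming your examples are already loaded into memory as a list of (text, label) pairs; the helper name and sample data are illustrative, not part of the API:

PYTHON
from collections import Counter

def meets_minimums(examples):
    """Check the minimum requirements for a Classify fine-tuning dataset."""
    label_counts = Counter(label for _, label in examples)
    return (
        len(examples) >= 40
        and len(label_counts) >= 2
        and min(label_counts.values()) >= 5
    )

# Hypothetical in-memory dataset: a list of (text, label) pairs.
examples = [
    ("This movie offers that rare combination of entertainment and education", "positive"),
    ("Boring movie that is not as good as the book", "negative"),
    # ... the rest of your examples
]

print(meets_minimums(examples))  # False until you have enough examples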

Single-label Data

Single-label data consists of a text and a label. Here’s an example:

  • text: This movie offers that rare combination of entertainment and education
  • label: positive

Note that both text and label are required fields. Single-label data can be saved in either .jsonl or .csv format.

JSONL
1{"text":"This movie offers that rare combination of entertainment and education", "label":"positive"}
2{"text":"Boring movie that is not as good as the book", "label":"negative"}
3{"text":"We had a great time watching it!", "label":"positive"}
CSV
text,label
This movie offers that rare combination of entertainment and education,positive
Boring movie that is not as good as the book,negative
We had a great time watching it!,positive
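
If you generate the .csv file programmatically, Python's csv module takes care of quoting texts that contain commas or quotes. Here is a minimal sketch, assuming the header row shown above and a placeholder file path:

PYTHON
import csv

examples = [
    ("This movie offers that rare combination of entertainment and education", "positive"),
    ("Boring movie that is not as good as the book", "negative"),
    ("We had a great time watching it!", "positive"),
]

# csv.writer automatically quotes fields that contain commas or quotes.
with open("path/to/train.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text", "label"])  # header row, as in the example above
    writer.writerows(examples)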

Multi-label Data

Multi-label data differs from single-label data in the following ways:

  • We only accept the .jsonl format
  • An example might have more than one label
  • An example might also have zero labels

JSONL
1{"text":"About 99% of the mass of the human body is made up of six elements: oxygen, carbon, hydrogen, nitrogen, calcium, and phosphorus.", "label":["biology", "physics"]}
2{"text":"The square root of a number is defined as the value, which gives the number when it is multiplied by itself", "label":["mathematics"]}
3{"text":"Hello world!", "label":[]}

Clean your Dataset

To achieve optimal results, we suggest cleaning your dataset before beginning the fine-tuning process. Here are some things you might want to fix:

  • Make sure that your dataset does not contain duplicate examples.
  • Make sure that your examples are UTF-8 encoded.

If some of your examples don’t pass our validation checks, we’ll filter them out so that your fine-tuning job can start without interruption. As long as you have a sufficient number of valid training examples, you’re good to go.
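
A minimal cleaning pass over a .jsonl file could look like the sketch below (file paths are placeholders): it drops exact duplicates and skips lines that are not valid UTF-8 or valid JSON.

PYTHON
import json

seen = set()
clean = []

# Read as raw bytes so we can skip lines that are not valid UTF-8.
with open("path/to/train.jsonl", "rb") as f:
    for raw_line in f:
        try:
            example = json.loads(raw_line.decode("utf-8"))
        except (UnicodeDecodeError, json.JSONDecodeError):
            continue  # drop examples that are not valid UTF-8 / JSON
        key = json.dumps(example, sort_keys=True)
        if key in seen:
            continue  # drop duplicate examples
        seen.add(key)
        clean.append(example)

with open("path/to/train_clean.jsonl", "w", encoding="utf-8") as f:
    for example in clean:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")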

Evaluation Datasets

Evaluation data is used to calculate metrics that show the performance of your fine-tuned model. You can create a validation dataset yourself, or you can let us split your training file into separate train and evaluation datasets on our end.
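
If you prefer to create the split yourself, a simple shuffled split of a .jsonl file could look like the sketch below; the 80/20 ratio and file paths are arbitrary placeholders:

PYTHON
import random

with open("path/to/train.jsonl", encoding="utf-8") as f:
    lines = [line for line in f if line.strip()]

random.seed(42)  # make the split reproducible
random.shuffle(lines)

split = int(0.8 * len(lines))  # 80% train, 20% eval (arbitrary choice)
with open("path/to/train_split.jsonl", "w", encoding="utf-8") as f:
    f.writelines(lines[:split])
with open("path/to/eval_split.jsonl", "w", encoding="utf-8") as f:
    f.writelines(lines[split:])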

Create a Dataset with the Python SDK

If you intend to fine-tune through our UI, you can skip to the next chapter. Otherwise, continue reading to learn how to create datasets for fine-tuning via our Python SDK. Before you start, we recommend that you read about the dataset API. Below you will find some code samples showing how to create datasets via the SDK:

PYTHON
import cohere

# instantiate the Cohere client
co = cohere.Client("YOUR_API_KEY")

## single-label dataset
single_label_dataset = co.datasets.create(
    name="single-label-dataset",
    data=open("path/to/train.csv", "rb"),
    type="single-label-classification-finetune-input",
)

print(co.wait(single_label_dataset))

## multi-label dataset
multi_label_dataset = co.datasets.create(
    name="multi-label-dataset",
    data=open("path/to/train.jsonl", "rb"),
    type="multi-label-classification-finetune-input",
)

print(co.wait(multi_label_dataset))

## add an evaluation dataset
multi_label_dataset_with_eval = co.datasets.create(
    name="multi-label-dataset-with-eval",
    data=open("path/to/train.jsonl", "rb"),
    eval_data=open("path/to/eval.jsonl", "rb"),
    type="multi-label-classification-finetune-input",
)

print(co.wait(multi_label_dataset_with_eval))