Finetuning on AWS SageMaker

Mike Mao

Finetune and deploy a custom Command-R model

This sample notebook shows you how to finetune and deploy a custom Command-R model using Amazon SageMaker.

Note: This is a reference notebook, and it cannot run unless you make the changes suggested in the notebook.

Pre-requisites:

  1. Note: This notebook contains elements which render correctly in the Jupyter interface. Open this notebook from an Amazon SageMaker Notebook Instance or Amazon SageMaker Studio.
  2. Ensure that the IAM role used has AmazonSageMakerFullAccess (see the sketch after this list for a quick way to check).
  3. To deploy this ML model successfully, ensure that:
    1. either your IAM role has the following three permissions, and you have authority to make AWS Marketplace subscriptions in the AWS account used:
      1. aws-marketplace:ViewSubscriptions
      2. aws-marketplace:Unsubscribe
      3. aws-marketplace:Subscribe
    2. or your AWS account has a subscription to the packages for Cohere Command R Finetuning. If so, skip the step Subscribe to the finetune algorithm.
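If you are unsure which role the notebook is running under, here is a quick check as a minimal sketch (assuming you are running inside SageMaker, where get_execution_role is available):

```python
import sagemaker

# Print the IAM role this notebook runs under, so you can verify it has
# AmazonSageMakerFullAccess and the Marketplace permissions listed above.
print(sagemaker.get_execution_role())
```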

Contents:

  1. Subscribe to the finetune algorithm
  2. Upload data and finetune Command-R Model
  3. Create an endpoint for inference with the custom model
    1. Create an endpoint
    2. Perform real-time inference
  4. Clean-up
    1. Delete the endpoint
    2. Unsubscribe from the listing (optional)

Usage instructions

You can run this notebook one cell at a time (press Shift+Enter to run a cell).

1. Subscribe to the finetune algorithm

To subscribe to the model algorithm:

  1. Open the algorithm listing page Cohere Command R Finetuning
  2. On the AWS Marketplace listing, click the Continue to Subscribe button.
  3. On the Subscribe to this software page, review and click “Accept Offer” if you and your organization agree with the EULA, pricing, and support terms. On the “Configure and launch” page, make sure the ARN displayed for your region matches the ARN in the following cell.
```python
!pip install "cohere>=5.11.0"

import cohere
import boto3
import sagemaker as sage
from sagemaker.s3 import S3Uploader
```

The algorithm is available in the list of AWS regions specified below.

```python
region = boto3.Session().region_name

cohere_package = ""
# cohere_package = "cohere-command-r-ft-v-0-1-2-bae2282f0f4a30bca8bc6fea9efeb7ca"

# Mapping for algorithms
algorithm_map = {
    "us-east-1": f"arn:aws:sagemaker:us-east-1:865070037744:algorithm/{cohere_package}",
    "us-east-2": f"arn:aws:sagemaker:us-east-2:057799348421:algorithm/{cohere_package}",
    "us-west-2": f"arn:aws:sagemaker:us-west-2:594846645681:algorithm/{cohere_package}",
    "eu-central-1": f"arn:aws:sagemaker:eu-central-1:446921602837:algorithm/{cohere_package}",
    "ap-southeast-1": f"arn:aws:sagemaker:ap-southeast-1:192199979996:algorithm/{cohere_package}",
    "ap-southeast-2": f"arn:aws:sagemaker:ap-southeast-2:666831318237:algorithm/{cohere_package}",
    "ap-northeast-1": f"arn:aws:sagemaker:ap-northeast-1:977537786026:algorithm/{cohere_package}",
    "ap-south-1": f"arn:aws:sagemaker:ap-south-1:077584701553:algorithm/{cohere_package}",
}
if region not in algorithm_map:
    raise Exception(f"Current boto3 session region {region} is not supported.")

arn = algorithm_map[region]
```

2. Upload data and finetune Command-R

Select a path on S3 to store the training and evaluation datasets and update the s3_data_dir below:

```python
s3_data_dir = "s3://..."  # Do not add a trailing slash, otherwise the upload will not work
```

Upload sample training data to S3:

Note:

You’ll need your data in a .jsonl file that contains chat-formatted data (see the Cohere fine-tuning documentation for details on the format).

Example:

JSONL:

```json
{
  "messages": [
    {
      "role": "System",
      "content": "You are a chatbot trained to answer to my every question."
    },
    {
      "role": "User",
      "content": "Hello"
    },
    {
      "role": "Chatbot",
      "content": "Greetings! How can I help you?"
    },
    {
      "role": "User",
      "content": "What makes a good running route?"
    },
    {
      "role": "Chatbot",
      "content": "A sidewalk-lined road is ideal so that you\u2019re up and off the road away from vehicular traffic."
    }
  ]
}
```
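If you are assembling your own dataset, the following minimal sketch produces a file in this format (the file name my_train.jsonl and the conversation content are illustrative):

```python
import json

# Hypothetical conversations; replace with your own data. Each record follows
# the chat format shown above: a System preamble, then alternating
# User/Chatbot turns.
records = [
    {
        "messages": [
            {"role": "System", "content": "You are a chatbot trained to answer to my every question."},
            {"role": "User", "content": "Hello"},
            {"role": "Chatbot", "content": "Greetings! How can I help you?"},
        ]
    },
]

# A .jsonl file holds one JSON-encoded record per line.
with open("my_train.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```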
```python
sess = sage.Session()
# TODO[Optional]: change these paths to your own data
# You can download the example datasets from https://github.com/cohere-ai/notebooks/tree/main/notebooks/data
# and upload them to the root of this Jupyter notebook
train_dataset = S3Uploader.upload("./scienceQA_train.jsonl", s3_data_dir, sagemaker_session=sess)
# optional eval dataset
eval_dataset = S3Uploader.upload("./scienceQA_eval.jsonl", s3_data_dir, sagemaker_session=sess)
print("train_dataset", train_dataset)
print("eval_dataset", eval_dataset)
```

Note: If the evaluation dataset is absent, the training dataset will be auto-split into training and evaluation datasets with a ratio of 80:20.

Each dataset must contain at least one example. If an evaluation dataset is absent, the training dataset must contain at least two examples.

We recommend a dataset that contains at least 100 examples, but a larger dataset is likely to yield a higher-quality finetune. Be aware that a larger dataset also means a longer finetuning time.
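Before uploading, a quick sanity check on example counts can save a failed job. A minimal sketch, assuming the local file name used above:

```python
def count_examples(path: str) -> int:
    # Each non-empty line of a chat-formatted .jsonl file is one example.
    with open(path) as f:
        return sum(1 for line in f if line.strip())

n_train = count_examples("./scienceQA_train.jsonl")
print("training examples:", n_train)
# With no eval set, the 80:20 auto-split needs at least 2 training examples.
assert n_train >= 2
```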

Specify a directory on S3 where finetuned models should be stored. Make sure you do not reuse the same directory across multiple runs.

```python
# TODO update this with a custom S3 path
# DO NOT add a trailing slash at the end
s3_models_dir = "s3://..."
```

Create Cohere client:

```python
co = cohere.SagemakerClient(region_name=region)
```

Optional: Define hyperparameters

  • train_epochs: Integer. The maximum number of training epochs to run. Default: 1, Min: 1, Max: 10.

  • learning_rate: Float. The initial learning rate to be used during training. Default: 0.0001, Min: 0.000005, Max: 0.1.

  • train_batch_size: Integer. The batch size used during training. Default: 16 for Command, Min: 8, Max: 32.

  • early_stopping_enabled: Boolean. Enables early stopping. When set to true, the final model is the best model found based on the validation set. When set to false, the final model is the last model of training. Defaults to true.

  • early_stopping_patience: Integer. Stop training if the loss metric does not improve beyond early_stopping_threshold for this many evaluations. Default: 10, Min: 1, Max: 15.

  • early_stopping_threshold: Float. How much the loss must improve to prevent early stopping. Default: 0.001, Min: 0.001, Max: 0.1.

If the algorithm is command-r-0824-ft, you also have the option to define:

  • lora_rank: Integer. LoRA adapter rank. Default: 32, Min: 8, Max: 32.
```python
# Example of how to pass hyperparameters to the fine-tuning job
train_parameters = {
    "train_epochs": 1,
    "early_stopping_patience": 2,
    "early_stopping_threshold": 0.001,
    "learning_rate": 0.01,
    "train_batch_size": 16,
}
```

Create fine-tuning jobs for the uploaded datasets. Add a field for eval_data if you have pre-split your dataset and uploaded both training and evaluation datasets to S3. Remember to use p4de for Command-R Finetuning.

```python
finetune_name = "test-finetune"
co.sagemaker_finetuning.create_finetune(
    arn=arn,
    name=finetune_name,
    train_data=train_dataset,
    eval_data=eval_dataset,
    s3_models_dir=s3_models_dir,
    instance_type="ml.p4de.24xlarge",
    training_parameters=train_parameters,
    role="ServiceRoleSagemaker",
)
```

The finetuned weights will be stored in a tar file at {s3_models_dir}/test-finetune.tar.gz, where the file name matches the name used when creating the finetune.
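To confirm the artifact is where you expect it, you can list the models directory. A minimal sketch using the SageMaker SDK's S3 helper:

```python
from sagemaker.s3 import S3Downloader

# Lists every object under s3_models_dir; once the job completes you should
# see test-finetune.tar.gz among the results.
for uri in S3Downloader.list(s3_models_dir, sagemaker_session=sess):
    print(uri)
```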

3. Create an endpoint for inference with the custom model

A. Create an endpoint

The Cohere AWS SDK provides a built-in method for creating an endpoint for inference. This will automatically deploy the model you finetuned earlier.

Note: This is equivalent to creating and deploying a ModelPackage in SageMaker’s SDK.

```python
endpoint_name = "test-finetune"
co.sagemaker_finetuning.create_endpoint(
    arn=arn,
    endpoint_name=endpoint_name,
    s3_models_dir=s3_models_dir,
    recreate=True,
    instance_type="ml.p4de.24xlarge",
    role="ServiceRoleSagemaker",
)

# If the endpoint is already created, you just need to connect to it
co.connect_to_endpoint(endpoint_name=endpoint_name)
```

B. Perform real-time inference

Now, you can access all models deployed on the endpoint for inference:

```python
message = "Classify the following text as either very negative, negative, neutral, positive or very positive: mr. deeds is , as comedy goes , very silly -- and in the best way."

result = co.sagemaker_finetuning.chat(message=message)
print(result)
```
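The response object exposes the generated completion directly, which the evaluation loop below relies on:

```python
# .text holds just the model's generated reply
print(result.text)
```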

[Optional] Now let’s evaluate our finetuned model using the evaluation dataset.

```python
import json
from tqdm import tqdm

total = 0
correct = 0
for line in tqdm(open("./sample_finetune_scienceQA_eval.jsonl").readlines()):
    total += 1
    question_answer_json = json.loads(line)
    question = question_answer_json["messages"][0]["content"]
    answer = question_answer_json["messages"][1]["content"]
    model_ans = co.sagemaker_finetuning.chat(message=question, temperature=0).text
    if model_ans == answer:
        correct += 1

print("Accuracy of finetuned model is %.3f" % (correct / total))
```

4. Clean-up

A. Delete the endpoint

After you’ve successfully performed inference, you can delete the deployed endpoint to avoid being charged continuously. This can also be done via the Cohere AWS SDK:

```python
co.delete_endpoint()
co.close()
```
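If you prefer to clean up with boto3 directly (for example, from outside this notebook), the equivalent calls look like this sketch; it assumes endpoint_name is the endpoint created above:

```python
sm = boto3.client("sagemaker")

# Look up the endpoint's config name first so both resources can be removed.
config_name = sm.describe_endpoint(EndpointName=endpoint_name)["EndpointConfigName"]
sm.delete_endpoint(EndpointName=endpoint_name)
sm.delete_endpoint_config(EndpointConfigName=config_name)
```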

B. Unsubscribe from the listing (optional)

If you would like to unsubscribe from the model package, follow these steps. Before you cancel the subscription, ensure that you do not have any deployable models created from the model package or using the algorithm. Note: you can find this information by looking at the container name associated with the model.

Steps to unsubscribe from a product on AWS Marketplace:

  1. Navigate to the Machine Learning tab on the Your Software subscriptions page.
  2. Locate the listing that you want to cancel, and then choose Cancel Subscription.