Fine-Tuning

Introduction

Our ready-to-use large language models, such as Command R and Command R+, are very good at producing responses to natural language prompts. However, there are many cases where getting the best model performance requires an additional round of training on custom user data. Creating a custom model through this process is called fine-tuning.

Why Fine-tune?

Fine-tuning is recommended when you want to teach the model a new task or leverage your company’s unique knowledge base. Fine-tuning is also helpful for producing a specific writing style or format.

If you are aiming to use a language model to draft responses to customer-support inquiries, for example, using a model fine-tuned on old conversations with customers will likely improve the quality of the output.

Note that there might be pricing differences when using fine-tuned models. You can use our Pricing Calculator to estimate the costs.

How to Create Fine-tuned Models

Cohere offers three ways to create fine-tuned models: via the Cohere Dashboard, via the Fine-tuning API, and via the Python SDK. The fine-tuning process generally unfolds in four main stages, illustrated with a Python SDK sketch after the list:

  • Preparing and uploading training data.
  • Training the new fine-tuned model.
  • Evaluating the fine-tuned model (and possibly repeating the training).
  • Deploying the fine-tuned model.
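The sketch below walks through the first two stages with the Python SDK: uploading a chat fine-tuning dataset and starting a training job. It is a minimal sketch, not a definitive recipe; the dataset type string, class names, and parameters are assumptions to verify against the Fine-tuning API reference.

```python
import cohere
from cohere.finetuning import BaseModel, FinetunedModel, Settings

co = cohere.Client("YOUR_API_KEY")

# Stage 1: upload a JSONL file of conversations as a chat fine-tuning dataset.
# The "chat-finetune-input" type string is an assumption; see the
# Fine-tuning for Chat guide for the exact dataset type.
dataset = co.datasets.create(
    name="customer-support-conversations",
    data=open("conversations.jsonl", "rb"),
    type="chat-finetune-input",
)

# Stage 2: start a fine-tuning job on top of a chat base model.
finetune = co.finetuning.create_finetuned_model(
    request=FinetunedModel(
        name="customer-support-model",
        settings=Settings(
            base_model=BaseModel(base_type="BASE_TYPE_CHAT"),
            dataset_id=dataset.id,
        ),
    ),
)
print(finetune.finetuned_model.id)
```

Evaluation and deployment (stages three and four) can then be done from the Dashboard or through follow-up API calls once training completes.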

Once you fine-tune a model, it will start appearing in the model selection dropdown on the Playground, and it can be used in API calls, as in the sketch below.
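As an illustration, a call to the Chat endpoint with a fine-tuned model might look like the following. The model identifier shown is a placeholder; copy the real ID from the Dashboard or from the Fine-tuning API response.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")

# Pass the fine-tuned model's identifier instead of a base model name.
# "your-finetuned-model-id" is a hypothetical placeholder.
response = co.chat(
    model="your-finetuned-model-id",
    message="Hi, my order arrived damaged. What are my options?",
)
print(response.text)
```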

Types of Fine-tuning

Models are fine-tuned for use with specific Cohere APIs. To be compatible with the Chat API, for example, a model needs to be fine-tuned on a dataset of conversations (an illustrative record appears after this list). The APIs that support fine-tuned models are:

  • Chat
  • Classify
  • Rerank
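To make the Chat case concrete, the snippet below writes one illustrative training conversation as a JSONL line. This is only a sketch of what such a record might look like; the field names and role labels ("System", "User", "Chatbot") are assumptions to verify against the Fine-tuning for Chat guide.

```python
import json

# One illustrative training example for chat fine-tuning: a short
# conversation showing the desired assistant behavior.
# Field names and role labels are assumptions; confirm them in the
# Fine-tuning for Chat data-format documentation.
example = {
    "messages": [
        {"role": "System", "content": "You are a concise customer-support agent."},
        {"role": "User", "content": "How do I reset my password?"},
        {"role": "Chatbot", "content": "Open Settings > Security and choose 'Reset password'."},
    ]
}

# Chat fine-tuning data is typically uploaded as JSONL: one JSON object per line.
with open("conversations.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")
```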

Fine-Tuning Directory

For your convenience, we’ve collected all the URLs relevant to fine-tuning Cohere models below. Think of it as a fine-tuning table of contents.

Fine-tuning for Chat

Fine-tuning for Classify

Fine-tuning for Rerank