Improving the Chat Fine-tuning Results

There are several factors to take into account to achieve the best fine-tuned model for Chat:

Refining Data Quality

If your fine-tuned model is not learning well, try these steps to improve your training data:

  • Add more specific examples: If the model struggles with certain tasks, include examples that clearly demonstrate how to do those tasks.
  • Check your data for errors: If the model makes grammar or logic mistakes, your data might contain similar errors. If, for example, it incorrectly says 'I will schedules this meeting,' check whether your training data mistakenly taught it to say such things.
  • Balance your data: Make sure your data reflects how you'll use the model. If your data has many examples of a response you rarely need, the model might use that response too often.
  • Ensure your data contains complete information: Include all necessary information in your examples. If the model needs to respond based on certain information, ensure that this information is in your training data.
  • Ensure data consistency: If different people helped prepare your data, make sure they all followed the same guidelines. Inconsistent data can limit how well your model learns.
  • Keep a standard format: All your training examples should be in the format you plan to use when you actually use the model (see the validation sketch after this list).
  • Include real data: If you have actual user data or human-created examples, consider using them instead of synthetic LLM-generated ones. Real data captures the nuances of human interaction and can improve the model beyond what it is already capable of generating.
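
To make these checks repeatable, it can help to script them. The sketch below is a minimal example that assumes your training data is a JSONL file where each example has a messages list with role and content fields, and that the roles of interest are named User and Chatbot; these field and role names are assumptions for illustration, so adjust them to whatever format your fine-tuning endpoint actually expects.

```python
import json
from collections import Counter

# Assumed role names -- adjust to match your actual training data format.
REQUIRED_ROLES = {"User", "Chatbot"}

def validate_training_file(path: str) -> None:
    """Run basic quality checks on a JSONL chat training file."""
    role_counts = Counter()
    errors = []

    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f, start=1):
            if not line.strip():
                continue  # skip blank lines
            try:
                example = json.loads(line)
            except json.JSONDecodeError:
                errors.append(f"line {i}: not valid JSON")
                continue

            messages = example.get("messages", [])
            if not messages:
                errors.append(f"line {i}: no messages")
                continue

            for msg in messages:
                role = msg.get("role", "")
                role_counts[role] += 1
                if not str(msg.get("content", "")).strip():
                    errors.append(f"line {i}: empty content for role '{role}'")

            # Flag examples missing either side of the conversation.
            roles_present = {m.get("role") for m in messages}
            if not REQUIRED_ROLES.issubset(roles_present):
                missing = REQUIRED_ROLES - roles_present
                errors.append(f"line {i}: missing roles {missing}")

    # A skewed role balance can hint at the imbalance issues described above.
    print("Role balance:", dict(role_counts))
    for e in errors:
        print("ERROR:", e)

validate_training_file("chat_finetune_train.jsonl")  # hypothetical file name
```

Running a script like this before every training job makes it easier to catch formatting drift when multiple people contribute examples.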

Iterating on Hyperparameters

We allow you to specify the following hyperparameters:

  • epochs
  • learning rate
  • batch size

We suggest starting your training without setting any specific hyperparameters. If you run into issues, you can often resolve them with the following adjustments (a configuration sketch follows this list):

  • If the model outputs are too similar or lack diversity, reduce the number of epochs by 1 or 2.
  • If the model does not appear to be converging, increase the learning rate.
  • If you want to change your batch size, you can use 8, 24 or 32.
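
As a concrete illustration, the sketch below collects these adjustments into a single configuration object. The dictionary keys and starting values are assumptions for illustration, not a specific SDK's parameter names or recommended defaults; map them onto the fields your fine-tuning API actually accepts.

```python
# Hypothetical hyperparameter settings, shown as a plain dictionary rather
# than a specific SDK call. Starting values are illustrative only.
hyperparameters = {
    # Reduce by 1 or 2 if outputs are too similar or lack diversity.
    "epochs": 3,
    # Increase if the model does not appear to be converging.
    "learning_rate": 1e-5,
    # Per the guidance above, choose from 8, 24, or 32.
    "batch_size": 24,
}

print(hyperparameters)
```

Keeping the adjustments in one place like this makes it easier to track which combination of values produced each training run.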

Troubleshooting

We have a dedicated guide for troubleshooting fine-tuned models that applies across all model types and endpoints. Check it out here.