Overview
The Responsible Use documentation aims to guide developers in using language models constructively and ethically. To that end, we've published guidelines for using our API safely, along with a description of our processes for harm prevention. We provide model cards to communicate the strengths and weaknesses of our models and to encourage responsible use (motivated by Mitchell et al., 2019). We also provide a data statement describing our pre-training datasets (motivated by Bender and Friedman, 2018).
If you have feedback or questions, please feel free to let us know — we are here to help.
Harm Prevention
We aim to mitigate adverse use of our models with the following:
- Responsible AI Research: We’ve established a dedicated safety team that conducts research and development to build safer language models, and we’re investing in both technical (e.g., usage monitoring) and non-technical (e.g., a dedicated team reviewing use cases) measures to mitigate potential harms.
- Cohere Responsibility Council: We’ve established an external advisory council made up of experts who work with us to ensure that the technology we’re building is deployed safely for everyone.
- No online learning: The models that power these endpoints do not learn from user inputs. This prevents adversarial actors from poisoning the underlying models with harmful content.