Cohere Releases Arabic-Optimized Command Model!
Cohere is thrilled to announce the release of Command R7B Arabic (c4ai-command-r7b-arabic-02-2025). This is an open-weights release of an advanced, 8-billion-parameter custom model optimized for the Arabic language (MSA dialect), in addition to English. As with Cohere’s other Command models, this one comes with a context length of 128,000 tokens; it excels at a number of critical enterprise tasks — instruction following, length control, retrieval-augmented generation (RAG), minimizing code-switching — and it demonstrates excellent general-purpose knowledge and understanding of the Arabic language and culture.
Try Command R7B Arabic
If you want to try Command R7B Arabic, it’s very easy: you can use it through the Cohere playground or in our dedicated Hugging Face Space.
Alternatively, you can use the model in your own code. To do that, first install the transformers library from its source repository:
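A typical source install (assuming pip and git are available on your machine) looks like this:

```shell
# Install the latest transformers directly from the main repository on GitHub.
pip install "git+https://github.com/huggingface/transformers.git"
```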
Then, use this Python snippet to run a simple text-generation task with the model:
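A minimal sketch of such a snippet, assuming the Hugging Face model ID CohereForAI/c4ai-command-r7b-arabic-02-2025 and that you have accepted the license for the gated weights, might look like:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model ID; the gated repo requires accepting the license on Hugging Face.
model_id = "CohereForAI/c4ai-command-r7b-arabic-02-2025"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Format a single-turn Arabic prompt ("Hello, how are you?") with the model's chat template.
messages = [{"role": "user", "content": "مرحبا، كيف حالك؟"}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a response and decode only the newly produced tokens.
gen_tokens = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.3)
print(tokenizer.decode(gen_tokens[0][input_ids.shape[1]:], skip_special_tokens=True))
```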
Chat Capabilities
Command R7B Arabic can be operated in two modes, “conversational” and “instruct”:
- Conversational mode conditions the model on interactive behaviour, meaning it is expected to reply in a conversational fashion, provide introductory statements and follow-up questions, and use Markdown as well as LaTeX where appropriate. This mode is optimized for interactive experiences, such as chatbots, where the model engages in dialogue.
- Instruct mode conditions the model to provide concise yet comprehensive responses, and to not use Markdown or LaTeX by default. This mode is designed for non-interactive, task-focused use cases such as extracting information, summarizing text, translation, and categorization.
Multilingual RAG Capabilities
Command R7B Arabic has been trained specifically for Arabic and English tasks, such as the generation step of Retrieval Augmented Generation (RAG).
Command R7B Arabic’s RAG functionality is supported through chat templates in Transformers. Using our RAG chat template, the model takes a conversation (with an optional user-supplied system preamble) and a list of document snippets as input. The resulting output contains a response with in-line citations. Here’s what that looks like:
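As a sketch (the model ID is assumed, and the heading/body field names are illustrative, not required), rendering the RAG template might look like:

```python
from transformers import AutoTokenizer

# Assumed model ID for the gated Hugging Face repo.
model_id = "CohereForAI/c4ai-command-r7b-arabic-02-2025"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The conversation: a single user turn asking (in Arabic) for the capital of the UAE.
conversation = [{"role": "user", "content": "ما هي عاصمة الإمارات؟"}]

# Illustrative key-value document snippets; keys are short descriptive strings.
documents = [
    {"heading": "UAE overview", "body": "Abu Dhabi is the capital of the United Arab Emirates."},
    {"heading": "Largest city", "body": "Dubai is the most populous city in the UAE."},
]

# Render the RAG chat template: conversation plus documents in, token IDs out.
input_ids = tokenizer.apply_chat_template(
    conversation,
    documents=documents,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)
```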
You can then generate text from this input as normal.
Notes on Usage
We recommend keeping document snippets to short chunks (around 100-400 words per chunk) instead of long documents. They should also be formatted as key-value pairs, where the keys are short descriptive strings and the values are either text or semi-structured data.
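For example, a well-formed snippet could be a plain Python dict with short descriptive keys (the field names and contents below are illustrative only):

```python
# An illustrative document snippet: short descriptive keys, chunked text value.
snippet = {
    "title": "Q3 earnings summary",
    "text": "Revenue grew 12% quarter over quarter, driven by enterprise subscriptions...",
}

# Keep each chunk short: roughly 100-400 words.
word_count = len(snippet["text"].split())
assert word_count < 400
```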
You may find that simply including relevant documents directly in a user message works as well as or better than using the documents parameter to render the special RAG template (though the template is a strong default for those wanting citations). We encourage users to experiment with both approaches, and to evaluate which one works best for their specific use case.
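A minimal sketch of the first approach, inlining the snippets into the user turn instead of passing them separately (the formatting shown is one arbitrary choice, not a required layout):

```python
# The same illustrative snippets used with the RAG template above.
documents = [
    {"heading": "UAE overview", "body": "Abu Dhabi is the capital of the United Arab Emirates."},
]
question = "ما هي عاصمة الإمارات؟"  # "What is the capital of the UAE?"

# Concatenate the snippets into plain text and prepend them to the question.
context = "\n\n".join(f"{d['heading']}: {d['body']}" for d in documents)
messages = [{"role": "user", "content": f"{context}\n\n{question}"}]
```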