Conclusion - Text Generation

Congratulations! In this module, you've learned how to build a chatbot with Cohere’s Chat endpoint, and you’ve developed a conceptual understanding of how it works. You also learned how to design prompts, adjust parameters, and fine-tune on custom data to get the output that you want.
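As a quick refresher of the workflow above, here is a minimal sketch of a Chat endpoint call with the Cohere Python SDK. The environment-variable name, model behavior, and example messages are illustrative assumptions, not part of this module's text; the actual API call is guarded so the sketch runs without credentials.

```python
import os

# Shape of a Chat endpoint request as covered in this module:
# a user message, optional chat history, and sampling parameters.
chat_history = [
    {"role": "USER", "message": "Hello, what can you do?"},
    {"role": "CHATBOT", "message": "I can answer questions and hold a conversation."},
]

request = {
    "message": "Summarize our conversation so far.",
    "chat_history": chat_history,
    "temperature": 0.3,  # lower values give more deterministic replies
}

# The actual call (only runs if a key is available; CO_API_KEY is an assumed name):
if os.environ.get("CO_API_KEY"):
    import cohere  # assumes the Cohere Python SDK is installed
    co = cohere.Client(os.environ["CO_API_KEY"])
    response = co.chat(**request)
    print(response.text)
```

Prompt design, parameter tuning, and fine-tuning all shape what comes back from this single call, which is why the follow-on modules below are worth the time.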

What’s Next?

LLM University (LLMU) has many modules that you can use to take your chatbots to the next level.

Prompt Engineering Module

As you learned in the Prompt Engineering Basics chapter of this module, when working with LLM chatbots, the prompt is the key to getting the desired response. A well-designed prompt elicits useful and accurate responses from a model and considerably improves your experience of interacting with it. In the Prompt Engineering module, you’ll go deeper into prompt engineering techniques and apply them to Cohere’s Command model.

Retrieval-Augmented Generation (RAG) Module

In the Introduction to RAG chapter of this module, you learned the basic concepts of RAG, a method that lets a chatbot access external data sources – like the internet or a company’s internal technical documentation – leading to better, more factual generations. In the Retrieval-Augmented Generation module, you will learn how to use the Chat endpoint’s RAG feature to build your own RAG-powered chatbots.
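To preview what that module covers, here is a minimal sketch of grounding a Chat endpoint call in documents, again using the Cohere Python SDK. The document contents and environment-variable name are illustrative assumptions, and the API call is guarded so the sketch runs without credentials.

```python
import os

# Grounding documents in the shape the Chat endpoint's RAG feature expects:
# a list of dicts with free-form text fields the model can draw on and cite.
documents = [
    {"title": "Internal wiki: VPN setup",
     "snippet": "Install the client, then sign in with SSO."},
    {"title": "Internal wiki: VPN troubleshooting",
     "snippet": "If the tunnel drops, restart the client."},
]

request = {
    "message": "How do I set up the VPN?",
    "documents": documents,
}

# The actual call (only runs if a key is available; CO_API_KEY is an assumed name):
if os.environ.get("CO_API_KEY"):
    import cohere  # assumes the Cohere Python SDK is installed
    co = cohere.Client(os.environ["CO_API_KEY"])
    response = co.chat(**request)
    print(response.text)       # answer grounded in the documents
    print(response.citations)  # spans linking the answer back to the documents
```

Because the response cites the supplied documents, you can trace each claim back to its source, which is what makes RAG-powered chatbots more factual than generation alone.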