Hello, and welcome to Module 3, from your instructors Meor and Jay!
This module is all about using generative AI to generate text. First, you'll get started with a codelab that teaches you how to use the Generate endpoint. Then you'll learn all about prompt engineering: the art of writing effective prompts in order to get the most out of your generative model. In the second part, you'll go through codelabs that show you how to build several applications of generative models, such as text summarization and entity extraction.
Here is what we’ll cover throughout the next few chapters:
- What is Generative AI?: Generation has quickly become one of the most exciting fields in ML, as it deals with creating new content: text, images, videos, and more. In the first chapter you'll get an idea of what generative AI is all about.
- Getting Started with Generative AI: The next 5 chapters are dedicated to getting started with generative AI.
- Prompt Engineering: In this chapter you'll learn how to prompt a generative model to get the best outputs. This is becoming one of the most important skills for any LLM practitioner, and it consists of crafting text prompts that harness the power of the generative model for your task at hand. This is done in the playground, so no code is involved.
- Use Case Ideation: Here you'll learn several of the most common use cases of LLMs, and how they apply to real-life problems.
- The Generate Endpoint: This codelab teaches you how to use Cohere's Generate endpoint to create text.
- Creating Custom Models: This chapter teaches you how to create your own models to excel at specific tasks.
- Chaining Prompts: Chaining prompts is a powerful way to sequence tasks, feeding the output of one prompt into the next in order to solve bigger problems. In this chapter you'll learn how to chain prompts to get the generative model to write a story.
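To give a flavor of the chaining idea above, here is a minimal sketch in Python. The `generate()` function is a hypothetical stand-in for a real generative model call (in practice, you would call an endpoint such as Cohere's Generate endpoint); its canned output is illustrative only. The point is the structure: the output of the first prompt becomes part of the second prompt.

```python
# Sketch of prompt chaining. NOTE: generate() is a placeholder standing in
# for a real generative model API call; its output here is hardcoded.
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a text generation endpoint."""
    canned = {
        "Suggest a one-line premise for a short story.":
            "A lighthouse keeper receives letters from the future.",
    }
    # Fall back to a labeled echo so any prompt produces some output.
    return canned.get(prompt, f"[model output for: {prompt}]")

# Step 1: ask the model for a story premise.
premise = generate("Suggest a one-line premise for a short story.")

# Step 2: chain — insert the first output into a second prompt.
opening = generate(
    f"Write the opening paragraph of a story with this premise: {premise}"
)

print(premise)
print(opening)
```

With a real model behind `generate()`, each step can build on the previous one, which is how the story-writing chain in the chapter works.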
Let's get started with text generation!
First, what is Generative AI?