Prompt Engineering Basics

In this chapter, you’ll learn the basics of prompt engineering and how to craft effective prompts to obtain desirable outputs for various tasks.

We’ll use Cohere’s Python SDK for the code examples. Follow along in this notebook.

Note: This chapter covers the basics of prompt engineering. If you want to explore this topic further, we have a dedicated LLMU module on prompt engineering, as well as further documentation.

Prompting is at the heart of working with LLMs. The prompt provides context for the text that we want the model to generate. The prompts we create can be anything from simple instructions to more complex pieces of text, and they are used to encourage the model to produce a specific type of output.

Coming up with a good prompt is a bit of both science and art. On the one hand, we know the broad patterns that enable us to construct a prompt that will generate the output that we want. But on the other hand, there is so much room for creativity and imagination, as you’ll see in the examples in this section.

Setup

To set up, we first import the Cohere module and create a client.

import cohere
co = cohere.Client("COHERE_API_KEY") # Your Cohere API key

Let's also define a function generate_text() to take a user message, call the Chat endpoint, and stream the response.

def generate_text(message):
    # Call the Chat endpoint and stream the response
    stream = co.chat_stream(message=message)
    for event in stream:
        # Print each generated text chunk as it arrives
        if event.event_type == "text-generation":
            print(event.text, end='', flush=True)

Writing a Basic Prompt

The best way to design prompts for a model like Command is to give a command or an instruction. One way to do this is by using imperative verbs, for example: generate, write, list, provide, and other variations.

For instance, let’s say that we are creating the product description copy for a wireless earbuds product. We can write the prompt as follows.

generate_text("Generate a concise product description for the product: wireless earbuds.")
# RESPONSE

Experience the freedom of wireless audio with our state-of-the-art earbuds! Designed to deliver exceptional sound quality and all-day comfort, these earbuds are the perfect companion for your on-the-go lifestyle. Enjoy seamless connectivity, easy controls, and a secure fit, making them your go-to for both work and play.

That’s not bad. With a simple, one-line prompt, we already have a piece of product description that can make a digital marketer proud!

Layering Additional Instructions

But perhaps we want to be more specific about what the output should look like. For this, we can layer additional instructions into the prompt.

Let’s say we want the model to write the product description in a particular format with specific information. In this case, we can append this specific instruction in the prompt.

generate_text("""
    Generate a concise product description for the product: wireless earbuds. 
    Use the following format: Hook, Solution, Features and Benefits, Call to Action.
    """)
# RESPONSE

Hook: Upgrade your listening experience with the ultimate in wireless freedom! 

Solution: Introducing our state-of-the-art wireless earbuds, the ultimate companion for audio enthusiasts on the go. Designed to deliver exceptional sound quality and all-day comfort, these earbuds are the perfect solution for hands-free, wire-free listening. 

Features and Benefits: 
- Superior Sound: Immerse yourself in crystal-clear, high-definition audio. Our earbuds use advanced drivers for deep bass and crystal-clear trebles, elevating your music, podcast, or call experience. 
- Secure Fit: Ergonomically designed with multiple tip sizes to ensure a secure and comfortable fit, no matter your ear shape or size. 
- Long-Lasting Battery: Enjoy up to 24 hours of non-stop listening with a portable charging case, keeping your earbuds powered throughout your day. Quick charge functionality also ensures a fast turnaround for those unexpected moments of low battery. 
- Seamless Connectivity: Instantly connect to your favorite devices with the latest Bluetooth 5.2 technology, ensuring a stable and reliable connection for uninterrupted listening. 

Call to Action: Experience the future of wireless audio. Buy now and unlock exceptional sound, anytime, anywhere!

The model returns an output following the format that we wanted.
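Layered instructions don't have to be written out by hand each time; they can also be assembled programmatically before being sent to the model. Here's a minimal sketch of that idea (the word-count constraint is an illustrative addition of our own, not part of the example above):

```python
# Build a layered prompt by appending extra instructions to a base command
base = "Generate a concise product description for the product: wireless earbuds."
constraints = [
    "Use the following format: Hook, Solution, Features and Benefits, Call to Action.",
    "Keep the description under 100 words.",  # hypothetical extra constraint
]
prompt = "\n".join([base] + constraints)

# The assembled prompt can then be passed to the Chat endpoint:
# generate_text(prompt)
```

Each new line in the list adds one more layer of specificity, which makes it easy to iterate on a prompt by adding, removing, or reordering constraints.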

Adding Context to a Prompt

The prompt can also be constructed as a combination of an instruction and some context. Let’s see this in action with another example: emails. We can create a prompt to summarize an email, which is included in the prompt for context.

generate_text("""
    Summarize this email in one sentence.
    Dear [Team Members],
    I am writing to thank you for your hard work and dedication in organizing our recent community meetup. The event was a great success and it would not have been possible without your efforts.
    I am especially grateful for the time and energy you have invested in making this event a reality. Your commitment to ensuring that everything ran smoothly and that our guests had a great time is greatly appreciated.
    I am also thankful for the support and guidance you have provided to me throughout the planning process. Your insights and ideas have been invaluable in ensuring that the event was a success.
    I am confident that our community will benefit greatly from this event and I am excited to see the positive impact it will have.
    Thank you again for your hard work and dedication. I am looking forward to working with you on future events.
    Sincerely,
    [Your Name]
    """)
# RESPONSE

The email expresses sincere gratitude and appreciation to the team members for their hard work and dedication in organizing a successful community meetup.

This instruction–context prompt format is extremely useful as it means that we can supply additional information as context to help ground the model's output. One such example is a question-answering system for, let's say, a company's knowledge base. Given a question (the instruction), the model will only be able to provide accurate answers if provided with the knowledge base (the context).
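To make this concrete, here's a minimal sketch of how an instruction–context prompt for such a question-answering system might be assembled (the helper function and knowledge-base excerpt are hypothetical, for illustration only):

```python
def build_qa_prompt(question, context):
    # Combine the instruction (the question) with grounding context
    return (
        "Answer the question using only the information in the context below.\n"
        f"Question: {question}\n"
        f"Context: {context}"
    )

# Hypothetical knowledge-base excerpt
kb_excerpt = "Support is available 9am-5pm ET, Monday through Friday."
prompt = build_qa_prompt("When is support available?", kb_excerpt)

# The grounded prompt can then be passed to the Chat endpoint:
# generate_text(prompt)
```

Because the knowledge-base excerpt travels inside the prompt, the model's answer is grounded in the supplied context rather than in whatever it memorized during training.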

Extracting Information

Let's move to another example — an extraction task, which a generative model can do very well. Given context, which in this case is a description of a movie, we want the model to extract the movie title.

generate_text("""
    Extract the movie title from the text below.
    Deadpool 2 | Official HD Deadpool's "Wet on Wet" Teaser | 2018
    """)
# RESPONSE

The movie title is "Deadpool 2".

Rewriting Text

The model is also effective at tasks that involve taking a piece of text and rewriting it into another format that we need.

Here's an example. We have a one-line instruction followed by the context, which in this case is a blog excerpt. The instruction is to generate a list of frequently asked questions (FAQ) based on the passage, which involves a mixture of several tasks, such as extraction and rewriting.

generate_text("""
    Given the following text, write down a list of potential frequently asked questions (FAQ), together with the answers.
    The Cohere Platform provides an API for developers and organizations to access cutting-edge LLMs without needing machine learning know-how. 
    The platform handles all the complexities of curating massive amounts of text data, model development, distributed training, model serving, and more. 
    This means that developers can focus on creating value on the applied side rather than spending time and effort on the capability-building side.
    
    There are two key types of language processing capabilities that the Cohere Platform provides — text generation and text embedding — and each is served by a different type of model.
    
    With text generation, we enter a piece of text, or prompt, and get back a stream of text as a completion to the prompt. 
    One example is asking the model to write a haiku (the prompt) and getting an originally written haiku in return (the completion).
    
    With text embedding, we enter a piece of text and get back a list of numbers that represents its semantic meaning (we’ll see what “semantic” means in a section below). 
    This is useful for use cases that involve “measuring” what a passage of text represents, for example, in analyzing its sentiment.
    """)
# RESPONSE

Here are some potential FAQs with answers:

1. What does the Cohere Platform offer?
   - The Cohere Platform provides a simple API that gives developers access to advanced LLMs without requiring expertise in machine learning. The platform also handles the complexities of text data curation, model development, training, and serving, so developers can focus on creating practical applications.

2. What are the key language processing capabilities the Cohere Platform provides?
   - The Cohere Platform offers two primary language processing capabilities: text generation and text embedding.

3. How does text generation work?
   - Text generation takes a prompt, such as a few words or a sentence, and generates a stream of text as a response. An example could be receiving an original haiku poem as a completion to the prompt "Write a haiku."

4. What is text embedding, and what are its use cases?
   - Text embedding takes a piece of text and returns a list of numbers that represent its semantic meaning. This can be applied to various situations, including sentiment analysis, to "measure" the text's meaning.

5. Do I need machine learning expertise to use the Cohere Platform?
   - No, the Cohere Platform is designed to remove the need for machine learning know-how. It simplifies the process, allowing developers to access advanced LLMs easily.

By now, we can see how versatile the model is across a wide variety of tasks: not just freeform text generation, but also following instructions, working with contextual information, summarizing long passages, extracting information, rewriting text into different formats, and more.

This is just a taste of the kinds of prompts you can design. You can keep layering your instructions to be as specific as you want and observe the output the model generates. There is no single right way to design a prompt; it's about starting with an idea and iterating until you get the outcome you are looking for.

After completing this module, we encourage you to take a look at LLMU’s Prompt Engineering module to go deeper into prompt engineering techniques and apply them to Cohere’s Command model.

Conclusion

In this chapter, you learned how to prompt a model — probably the most important and definitely the most fun part of working with large language models.


What’s Next

Continue to the next chapter to learn how to fine-tune the Chat endpoint model on custom datasets, enhancing its performance on specific tasks.