Use Case Patterns

In this chapter, you'll learn about the common use case patterns in text generation, including prompt examples.

In the previous chapter, we took a vertical approach and learned different ways to construct a prompt. In this chapter, we'll take a horizontal approach, looking at the different use case patterns these prompts can be applied to.

We'll go through several use case patterns for the Command model. While they don't cover every possible way you can use the model, they are good starting points for understanding the kinds of tasks the model handles well.

Setting Up

The examples in this chapter are shown in Python. For each use case, we'll look at how a prompt can be constructed and which model settings to use. This chapter comes with a Google Colaboratory notebook that lets you get hands-on with the code.

First, install the Cohere package, get the Cohere API key, and set up the client.

!pip install cohere

import cohere
co = cohere.Client("COHERE_API_KEY") # Your Cohere API key

Let's also define a function that takes a prompt and a temperature value and calls the Generate endpoint, which is how we access the Command model. Here, we select command as the model. We set a default temperature of 0, which nudges the response to be more predictable and less random. The function returns the text response generated by the model.

def generate_text(prompt, temp=0):
  response = co.generate(
    model='command',
    prompt=prompt,
    max_tokens=300,
    temperature=temp)
  return response.generations[0].text

Our examples will revolve around a company’s activities for launching a new wireless headphone product, such as getting the word out, managing customer interactions, and so on. For this, let’s define a text snippet containing the product description. We’ll be utilizing this snippet in several examples throughout this chapter.

product="""The CO-1T is a wireless headphone product that uses Bluetooth technology to connect to your devices. \
It has a long battery life and can be quickly charged using the included USB cable. The headphone is \
lightweight and comfortable, ideal for long periods of use. It has a built-in microphone for making calls, \
and a button to control the volume. The CO-1T is a great choice for anyone looking for a wireless headphone \
product with great battery life."""

Writing

We’ll start with the most general type of use case: writing.

Here we can ask the model to write freeform text, for example, with this prompt: “Create an email about the launch of the wireless headphone product.” But that alone might not be very useful in practical applications because the generated text can go in very different directions. We may want to add more context and specificity to the prompt.
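As a quick illustration, the bare freeform version might look like this, reusing the generate_text helper defined earlier (the exact output will vary from run to run):

basic_prompt = "Create an email about the launch of the wireless headphone product."
print(generate_text(basic_prompt, temp=0.5))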

Here's how we can add that context and specificity. Let's say we're building an application for users to enter some bullet points and get a complete email written. We can set up the prompt in the following way: create a variable for the user to input some text and merge that, together with the product description, into the main prompt.

user_input ="""
- announce product launch
- create a call to action
- mention live chat for support
"""

prompt = f"""{product}
Create an email about the product above mentioning the following:
{user_input}
"""

response = generate_text(prompt, temp=0.5)
print(response)

Adding this context helps to guide the model in the direction we want it to go. Here is a sample response:

Hello,

We are excited to announce the launch of our new wireless headphone product, the CO-1T. This product uses Bluetooth technology to connect to your devices and has a long battery life. It is lightweight and comfortable, making it ideal for long periods of use. The CO-1T also has a built-in microphone for making calls, and a button to control the volume.

We are offering a special discount for the first 100 customers to purchase the CO-1T. So, if you are looking for a great wireless headphone product with great battery life, then the CO-1T is the perfect choice for you.

Don't miss out on this opportunity to try out our new product. Order now and experience the convenience of wireless headphones.

If you have any questions about the CO-1T, please feel free to contact us through our live chat support. We are always happy to help.

Thank you,
[Your Name]

In this example, we changed the temperature value to 0.5, and throughout this chapter, you'll see different temperature values being used in different situations. Increasing the temperature tells the model to generate less predictable responses and be more "creative." Since this task does require some creativity (expanding brief bullet points into an entire passage), we increased the temperature. There is no single right value for a given use case, so this is a setting you should experiment with and iterate on to get the best outcome for your task.
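One simple way to build intuition is to run the same prompt at a few temperature values and compare the outputs. Here's a small sketch using the helper and prompt defined above:

for temp in [0, 0.5, 0.9]:
  print(f"--- Temperature: {temp} ---")
  print(generate_text(prompt, temp=temp))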

We have looked at one example, but writing as a use case can mean many different things. So instead of beginning your prompt with "Create," you can try other instruction verbs to get the different forms of content and formats your application needs, such as "Explain," "Generate," "Write," "Provide," "Design," "Describe," and so on, as shown in the sketch below.
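For example, the same setup can produce a very different kind of output simply by swapping the instruction. The wording below is just one illustration, reusing the product variable from earlier:

prompt = f"""{product}
Describe the product above in one short paragraph aimed at busy commuters.
"""
print(generate_text(prompt, temp=0.7))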

Question Answering

This use case is about answering a user's question. This could be a single-turn question-answering scenario or a multi-turn chatbot setting.

Question answering can take place in either a closed or open setting. In a closed-book question-answering setting, we rely on the model to answer questions based on the general knowledge it acquired during training. Here's one example:

user_input ="What features should I consider when choosing a wireless headphone"
prompt = user_input

response = generate_text(prompt, temp=0.5)
print(response)

Sample response:

There are a few features to consider when choosing a wireless headphone. These include:

1. Sound quality: It is important to choose a headphone that has good sound quality. This will ensure that you enjoy your music or podcasts.
2. Comfort: It is important to choose a headphone that is comfortable to wear. This will ensure that you can wear it for long periods of time without experiencing any discomfort.
3. Battery life: It is important to choose a headphone that has a long battery life. This will ensure that you can use it for long periods of time without having to recharge it.
4. Price: It is important to choose a headphone that is within your budget. There are a variety of options available at different price points.

This works well for a question that is open-ended and doesn’t have one correct answer. But let’s say we have a customer asking for specific information about the wireless headphone, such as its feature set. For this, we need to turn to an extractive question-answering setting.

In this setting, we can get the model to refer to specific knowledge bases to help it do its job well. This way, we can design a system that can handle questions that require factual responses.

Here, a customer asks a product question. We can append the customer’s question and the product description to the prompt, as follows.

user_input ="How do I control the sound levels"

prompt = f"""{product}
Given the product description above, answer this question: {user_input}"""

response = generate_text(prompt, temp=0)
print(response)

And the model picks up the relevant information from the prompt to provide the correct answer.

Sample response:

The sound levels can be controlled by using the button on the headphones.
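Both examples above are single-turn. For the multi-turn, chatbot-style setting mentioned earlier, one simple approach with the Generate endpoint is to accumulate the conversation history in the prompt. The ask helper and history format below are illustrative assumptions rather than part of the Cohere API:

chat_history = []

def ask(question):
  # Include the product context and the running conversation in the prompt
  history = "\n".join(chat_history)
  prompt = f"""{product}
{history}
User: {question}
Chatbot:"""
  answer = generate_text(prompt, temp=0.3).strip()
  # Store this turn so follow-up questions have context
  chat_history.append(f"User: {question}")
  chat_history.append(f"Chatbot: {answer}")
  return answer

print(ask("Does this headphone have a microphone?"))
print(ask("And how do I charge it?"))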

Brainstorming

Another form of writing is brainstorming, where we want the model to generate a list of options based on a given prompt. This can be for writing outlines, generating ideas, providing critical analysis, and so on. This use case forces the model to go broad and cover different perspectives of a situation.

In this example, we want the model to act as an assistant to a customer support agent in identifying possible ways to troubleshoot a technical problem that a customer is facing.

user_input = "I can't get the Bluetooth connection working"
prompt = f"""{product}
A customer provided the following complaint about this product: {user_input}.
Provide a bulleted list of possible ways to troubleshoot so we can advise the customer accordingly.
"""

response = generate_text(prompt, temp=0.9)
print(response)

Sample response:

- Check to see if the headphones are fully charged
- Try resetting the headphones by holding the power button for 5 seconds
- Make sure that your device is within range of the headphones
- Try restarting your device
- Check to see if the headphones are properly paired with your device
- If you are still having trouble connecting, please contact customer service for further assistance.

Transforming

The first thing that comes to mind when thinking about generative models is their ability to write a fresh piece of text, but one rather understated aspect is their ability to synthesize an existing piece of text.

One example is transforming a passage of text into a different form, making it reusable for different purposes.

For example, creating a list of Frequently Asked Questions (FAQs) about the wireless headphones is valuable but takes some effort. We can shortcut this process by getting the model to generate a list of FAQs from the product description, as follows:

prompt =f"""Turn the following product description into a list of frequently asked questions (FAQ).

Product description: {product}
"""
response = generate_text(prompt, temp=0)
print(response)

Sample response:

Frequently Asked Questions (FAQ)

What is the CO-1T?
The CO-1T is a wireless headphone product that uses Bluetooth technology to connect to your devices. It has a long battery life and can be quickly charged using the included USB cable. The headphone is lightweight and comfortable, ideal for long periods of use. It has a built-in microphone for making calls, and a button to control the volume. The CO-1T is a great choice for anyone looking for a wireless headphone product with great battery life.

How long does the battery last?
The battery life of the CO-1T is long, and it can be quickly charged using the included USB cable.

Is the CO-1T comfortable to wear?
The CO-1T is designed to be lightweight and comfortable, ideal for long periods of use.

Does the CO-1T have a built-in microphone?
Yes, the CO-1T has a built-in microphone for making calls.

Does the CO-1T have a button to control the volume?
Yes, the CO-1T has a button to control the volume.

Summarizing

One popular use case for synthesizing text is summarization. Here, we take a long passage of text and summarize it to its essence. These can be articles, conversation transcripts, reports, meeting notes, and so on.

In this example, we create a prompt to summarize a list of customer reviews about the wireless headphone.

user_input ="""Customer reviews of the CO-1T wireless headphones:

"The CO-1T is a great pair of headphones! The design is sleek and modern, and the headphones are \
very comfortable to wear. The sound quality is excellent, and I can hear every detail of my music. \
The built-in microphone means I can make calls without having to take my phone out of my pocket. I \
highly recommend the CO-1T to anyone looking for a great pair of wireless headphones!"

"I'm very disappointed with the CO-1T. The design is nice, but the battery life is terrible. I can \
only use them for a few hours before they need to be recharged. This is very inconvenient, and I'm \
not sure if I can recommend them to anyone."

"The CO-1T is a mixed bag. The speaker quality is great, but the built-in microphone's quality is \
poor. I can hear every detail of my music, but my voice sounds distorted when I make calls. The \
design is nice, and the headphones are comfortable to wear, but the battery life is only average. \
I'm not sure if I can recommend them to anyone."
"""

prompt = f"""Summarize the following.

{user_input}
"""

response = generate_text(prompt, temp=0)
print(response)

Example response:

The CO-1T wireless headphones have mixed reviews. Some people love the design and sound quality, while others are disappointed with the battery life and microphone quality.

Alternatively, there is a more streamlined way to generate quality summaries: the Summarize endpoint. It builds on the Command model and is designed specifically for summarization. It supports a much longer maximum context length than the Generate endpoint (100,000 characters), and it provides an easier way to specify the type of summary you want, such as long vs. short or paragraphs vs. bullets, without having to craft the prompts yourself.

Here’s how to call the Summarize endpoint to summarize the customer reviews:

response = co.summarize(
  text=user_input,
  length='short',
  format='paragraph',
  model='summarize-xlarge',
  temperature=0.3,
  additional_command="analyzing these customer reviews"
)
print(response.summary)

Sample response:

The CO-1T wireless headphones have received mixed reviews from customers. Some people love the design and sound quality, while others are disappointed with the battery life and microphone quality.
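If bullet points fit your application better than a paragraph, the same call can be adjusted through the format and length parameters. This is a small variation on the call above:

response = co.summarize(
  text=user_input,
  length='medium',
  format='bullets',
  model='summarize-xlarge',
  temperature=0.3,
  additional_command="analyzing these customer reviews"
)
print(response.summary)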

Rewriting

Rewriting is another common use case, where you need to modify some aspects of a text while maintaining its overall meaning. One example is changing the tone of a piece of text to tailor it to a specific audience. Here we want to rewrite the product description so it's more relatable to students.

user_input = "college students"

prompt = f"""Create a version of this product description that's tailored towards {user_input}.

{product}"""

response = generate_text(prompt, temp=0.5)
print(response)

Sample response:

The CO-1T is a great wireless headphone option for college students who are always on the go. It uses Bluetooth technology to connect to your devices, so you don't have to worry about getting tangled up in cords. The battery life is long, and it can be quickly charged using the included USB cable. The headphone is lightweight and comfortable, making it ideal for long periods of use. It has a built-in microphone for making calls, and a button to control the volume. Plus, the price is just right for a college student's budget.

Extracting

Another extremely useful form of text synthesis is information extraction. Here, we leverage the model's ability to capture the context of a piece of text and extract the information specified by the prompt.

Here is an example of an email in which a customer is, unfortunately, asking for a refund for the wireless headphones. We can have the model process this email by extracting information such as the product name, refund reason, and pick-up address.

user_input ="""I am writing to request a refund for a recent CO-1T purchase I made on your platform. \
Unfortunately, the produce has not met my expectations due to its poor battery life. \
Please arrange for the pick-up at this address: to 171 John Street, Toronto ON, M5T 1X2."""

prompt =f"""Extract the product, refund reason and pick-up address from this email:

{user_input}
"""

response = generate_text(prompt, temp=0)
print(response)

Sample response:

Product: CO-1T
Refund reason: Poor battery life
Pick-up address:  171 John Street, Toronto ON, M5T 1X2
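If a downstream system needs structured data, one option is to ask for the fields in a fixed format such as JSON. The key names below are just an illustration, and in practice the output should be parsed defensively since the model may add extra text around it:

prompt = f"""Extract the product, refund reason, and pick-up address from this email. \
Return the result as a JSON object with the keys "product", "refund_reason", and "pickup_address":

{user_input}
"""
print(generate_text(prompt, temp=0))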

Classifying

One of the most widely deployed use cases in NLP is text classification. Here, the task is to classify a piece of text into one of a few predefined classes. In this example, we want to classify incoming customer messages into one of three categories: Order, Support, or Refunds.

We can create the prompt as follows.

user_input ="""The battery drains fast"""

prompt = f"""The following is a user message to a customer support agent.
Classify the message into one of the following categories: Order, Support, or Refunds.

{user_input}
"""

response = generate_text(prompt, temp=0)
print(response)

Here's a sample response, in which the model correctly classifies the message:

Support

Alternatively, the Classify endpoint provides a simple API for running text classification. The endpoint leverages Cohere's embedding models and makes it easy to add training examples and even create custom models tailored specifically to your task.

Here's how we can use the Classify endpoint. It requires a minimum of two examples per class, which are passed as an argument to the API call. We have six examples altogether: two for each class.

from cohere.responses.classify import Example

response = co.classify(
  model='embed-english-v3.0',
  inputs=[user_input],
  examples=[
    Example("I can't connect to the bluetooth", "Support"),
    Example("Why is max volume so low", "Support"),
    Example("When will my order arrive", "Order"),
    Example("How much is the shipping cost", "Order"),
    Example("What is your refund policy", "Refunds"),
    Example("How do I return my product", "Refunds")
  ])

print(response.classifications[0].prediction)

Sample response:

Support
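The response also includes a confidence score for the prediction. The attribute name below comes from the v4 Python SDK and may differ in other versions, so treat this snippet as an assumption to verify against your installed SDK:

# Print the predicted label along with its confidence score (attribute names assumed from the v4 SDK)
result = response.classifications[0]
print(result.prediction, result.confidence)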

Conclusion

This chapter provides a starting point for understanding the range of use cases you can build with the Cohere Command model. Take these use cases as an initial set of examples and experiment further with what's possible.

Original Source

This material comes from the post: Command Model Use Case Patterns.


What’s Next

Next, learn about prompt chaining to get even more out of an LLM.