6.1
Classifying Emails
In this exercise, we'll be instructing the LLM to sort emails into the following categories:
For the first part of the exercise, change the prompt in the YELLOW highlighted prompt template box to make the LLM output the correct classification. Use Chain of Thought. To be marked as correct, the LLM's answer needs to include the letter (A - D) of the correct choice, with the parentheses, as well as the name of the category.
Refer to the "Correct Classification" in column K to see which category each email belongs to. The LLM's response will turn GREEN if your prompt yields the correct answer.
Tip: Use precognition and other techniques you've learned leading up to this chapter! Remember, thinking only counts when it's out loud!
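To make "thinking out loud" concrete, a classification prompt following this tip might look like the sketch below. This is only an illustration: the category names are placeholders, not the tutorial's actual list.

```text
Please classify this email into one of the following categories:
(A) <category A>
(B) <category B>
(C) <category C>
(D) <category D>

<email>
{{EMAIL}}
</email>

Think through your reasoning step by step inside <thinking> tags first.
Then give your final answer as the letter with a closing parenthesis
followed by the category name, e.g. "B) <category B>".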
BONUS QUESTION: Time to think like a data scientist! Why is the second email the trickiest one to classify correctly? If the classification is debatable for humans, it's likely also tough for the LLM!
1.
2.
3.
4.
The conditional formatting in this exercise is looking for the correct categorization letter, the closing parenthesis, and the first letter of the category name, such as "C) B" or "B) B" etc.
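If it helps to see that check spelled out, here is a hypothetical re-creation of the conditional-formatting rule in Python. The function name and the example category name are made up for illustration; the actual sheet formula may differ (e.g. in case sensitivity).

```python
import re


def matches_check(response: str, letter: str, category: str) -> bool:
    # Hypothetical re-creation of the sheet's conditional-formatting rule:
    # the response must contain the correct letter, a closing parenthesis,
    # and the first letter of the category name (e.g. "B) B...").
    pattern = rf"{re.escape(letter)}\)\s*{re.escape(category[0])}"
    return re.search(pattern, response) is not None


print(matches_check("The answer is B) Broken item.", "B", "Broken item"))
```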
Answer: Below is one way you could go about doing this.
{{EMAIL}}
6.2
Email Classification Formatting
In this exercise, we're going to refine the output of the above prompt to yield an answer formatted exactly how we want it.
Use your favorite output formatting technique to make the LLM wrap just the letter of the correct classification in "<answer></answer>" tags. Refer to the above exercise if you forget which letter is correct for each email.
The LLM's response will turn GREEN if your prompt yields the correct answer. For instance, the answer to the first email should contain the exact string "<answer>B</answer>".
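Because the check here looks for the exact tagged string, it is stricter than the one in Exercise 6.1. A minimal sketch of the check (the function name is a hypothetical stand-in for the sheet's formula):

```python
def turns_green(response: str, correct_letter: str) -> bool:
    # The sheet checks for the exact tagged string, so extra whitespace
    # or text inside the tags would fail the check.
    return f"<answer>{correct_letter}</answer>" in response


print(turns_green("Reasoning... <answer>B</answer>", "B"))  # True
print(turns_green("<answer> B </answer>", "B"))             # False
```

This is why output-formatting instructions in the prompt need to be precise: a model that answers "<answer> B </answer>" is semantically correct but still fails the exact-match check.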
Tip: As a first step, copy the final correct version of your prompt from Exercise 1 down into the highlighted prompt template box below. Then edit and refine your initial prompt from there.
Note: In this exercise, you can see that the LLM in Sheets is a powerful prompt evaluation tool. Using substitutions, you can easily check how well a prompt does in multiple contexts by only modifying one prompt and yielding several responses from the LLM as a result. Here, we evaluate the prompt across four instances, but you can easily expand this evaluation to as many rows as needed.
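The evaluation loop the sheet performs can be sketched as follows. The emails, the template, and `call_llm` are hypothetical placeholders; in the sheet, the model call and the row data come from the add-on and the spreadsheet itself.

```python
def evaluate(prompt_template, rows, call_llm):
    """Run one prompt template across several (email, expected_letter)
    rows and return the fraction answered correctly. `call_llm` is a
    hypothetical stand-in for the model call the Sheets add-on makes."""
    correct = 0
    for email, expected in rows:
        # Substitute the row's email into the shared template, mirroring
        # how the sheet fills in the {{EMAIL}} placeholder per row.
        prompt = prompt_template.replace("{{EMAIL}}", email)
        response = call_llm(prompt)
        if f"<answer>{expected}</answer>" in response:
            correct += 1
    return correct / len(rows)


# Usage with placeholder emails and a fake model that always answers "B":
rows = [("first example email", "B"), ("second example email", "A")]
score = evaluate("Classify this email: {{EMAIL}}", rows,
                 lambda prompt: "<answer>B</answer>")
print(score)  # 0.5
```

Editing the single template and re-running the loop is exactly the one-prompt, many-responses workflow the note above describes, just expressed in code instead of spreadsheet cells.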
{{EMAIL}}