Complex Prompts for Coding
Exercise 9.2 - Codebot
In this exercise, we will write a prompt for a coding assistant and teaching bot that reads code and offers guiding corrections when appropriate. Fill each yellow box below with prompt elements that match the description and follow the patterns you've seen in the preceding complex prompts. Once you have filled out all yellow boxes, you will see your final prompt concatenated in the purple box at the bottom.
We suggest you scroll to the bottom first to see the expected inputs you'll need to account for (including what the {{VARIABLE_WORD}} is). Be sure to reference this {{VARIABLE_WORD}} directly somewhere in your prompt so that the actual variable content can be substituted in.
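To see why the placeholder must appear verbatim in your prompt, it helps to know that the substitution step is essentially a string replacement. Here is a minimal sketch in Python; the template text, the `substitute` helper, and the sample code are all hypothetical, not part of the exercise itself:

```python
# Minimal sketch of {{VARIABLE}} substitution: the template references
# {{CODE}}, and the actual content is swapped in before the prompt is
# sent. All names and template text here are illustrative placeholders.

PROMPT_TEMPLATE = """User: Here is some code to review:
<code>
{{CODE}}
</code>

Please check the code above for errors."""

def substitute(template: str, code: str) -> str:
    """Replace the {{CODE}} placeholder with the actual code."""
    return template.replace("{{CODE}}", code)

user_code = "def add(a, b):\n    return a - b  # bug: should be +"
prompt = substitute(PROMPT_TEMPLATE, user_code)
print(prompt)
```

If your prompt never contains the literal `{{CODE}}` text, the replacement has nothing to match against and the user's code never reaches Claude.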
Complex Prompt Elements
Build complex prompts by combining these 10 elements in order:
Open your CLAUDEMESSAGES() prompt with "User:".
This is mandatory! Prompts to Claude using CLAUDEMESSAGES() always need to begin with "User:".
Give Claude context about the role it should take on or what goals and overarching tasks you want it to undertake with the prompt.
It's best to put context early in the body of the prompt.
If important to the interaction, tell Claude what tone it should use.
This element may not be necessary depending on the task.
Expand on the specific tasks you want Claude to do, as well as any rules that Claude must follow. This is also where you can give Claude an "out" if it doesn't have an answer or doesn't know.
It's ideal to show this description and rules to a friend to make sure it is laid out logically and that any ambiguous words are clearly defined.
Provide Claude with at least one example of an ideal response that it can emulate. Encase this in <example></example> XML tags. Feel free to provide multiple examples. If you do provide multiple examples, give Claude context about what it is an example of, and enclose each example in its own set of XML tags.
Examples are probably the single most effective tool in knowledge work for getting Claude to behave as desired. Make sure to give Claude examples of common edge cases. If your prompt uses a scratchpad, it's effective to give examples of how the scratchpad should look. Generally more examples = more reliable responses at the cost of latency and tokens. We have only one example here to make the prompt easier to read.
Use "{{CODE}}" to represent the code you want Claude to examine. Remember to surround it in XML tags.
This element may not be necessary depending on task. Ordering is also flexible. When input data is long, it's best to put it before the instructions.
"Remind" Claude or tell Claude exactly what it's expected to immediately do to fulfill the prompt's task. This is also where you would put in additional variables like the user's question.
It generally doesn't hurt to reiterate to Claude its immediate task. It's best to do this toward the end of a long prompt. This will yield better results than putting this at the beginning. It is also generally good practice to put the user's query close to the bottom of the prompt.
For tasks with multiple steps, it's good to tell Claude to think step by step before giving an answer. Sometimes, you might even have to say "Before you give your answer..." just to make sure Claude does this first.
Not necessary with all prompts. Increases intelligence of responses but also increases latency by adding to the length of the output.
If there is a specific way you want Claude's response formatted, clearly tell Claude what that format is.
This element may not be necessary depending on the task.
A space to start off Claude's answer with some prefilled words to steer Claude's behavior or response.
"Assistant:" is only necessary if you want to prefill Claude's response. Otherwise, it can be left off.
Examples
Full concatenated prompt