3.3: Update Prompt Template
OPTIONAL: If you get stuck, you can skip this step, copy over a pre-edited file instead, and continue from the next step.
1. Copy Prompty to Iterate
To mimic the iterative process of ideation, we start each step by copying the Prompty from the previous step (`chat-0.prompty`) to a new file (`chat-1.prompty`) and making our edits there:

```
cp chat-0.prompty chat-1.prompty
```
2. Set the Temperature Parameter
Temperature is one of the parameters you can use to modify the behavior of Generative AI models. It controls the degree of randomness in the response, from 0.0 (deterministic) to 1.0 (maximum variability).
- Open the file `chat-1.prompty` in the editor.
- Add a temperature parameter to the model configuration as shown:

```
parameters:
  max_tokens: 3000
  temperature: 0.2
```
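To build intuition for what temperature does, here is a standalone sketch (not part of the workshop files) that calls an OpenAI-compatible chat endpoint at two different temperatures. The model name is a placeholder, and the `openai` package plus an `OPENAI_API_KEY` environment variable are assumptions:

```python
# Standalone demo: compare outputs at low vs. high temperature.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

for temperature in (0.2, 0.9):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "Name a cold-weather sleeping bag."}],
        max_tokens=50,
        temperature=temperature,
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")
```

Run it a few times: at 0.2 the responses stay nearly identical across runs, while at 0.9 they vary noticeably.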
3. Provide Sample Input File
The `sample` property of a Prompty asset provides the data to be used in test execution. It can be defined inline (with an object) or as an external file (with a string providing the file pathname).
In this example we use a JSON file to contain the sample input values, allowing us to shape the data we need for the RAG pattern as we go. Once we have our final prompt template, we can determine how to fetch that type of data from real-world sources (databases, search indexes, user query) using function tools or API calls.
- Copy a JSON file with sample data to provide as context in our Prompty:

```
cp ../docs/workshop/src/1-build/chat-1.json .
```
- Open the JSON file and review the contents. (A hypothetical example of what such a file can look like is sketched at the end of this section.)
    - It has the customer's name, age, membership level, and purchase history.
    - It has a sample question: "What cold-weather sleeping bag would go well with what I have already purchased?"
- Update the sample section of `chat-1.prompty` with the following (preserve indentation):

```
inputs:
  customer:
    type: object
  question:
    type: string
sample: ${file:chat-1.json}
```
- This declares the inputs to the prompty: `customer` (a JSON object) and `question` (a string).
- It also declares that sample data for these inputs is to be found in the file `chat-1.json`.
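For reference, a hypothetical `chat-1.json` shaped like the description above could look like the sketch below; the actual field names and values in the workshop file may differ:

```json
{
  "customer": {
    "firstName": "Jane",
    "age": 28,
    "membership": "Gold",
    "orders": [
      {
        "name": "Alpine Explorer Tent",
        "description": "A sturdy 4-person tent for all-season camping."
      }
    ]
  },
  "question": "What cold-weather sleeping bag would go well with what I have already purchased?"
}
```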
4. Update the System Prompt
The system section of a Prompty file specifies the "meta-prompt". This additional text is combined with the user's actual question to provide the context necessary to answer it accurately. With some Generative AI models, like the GPT family, this text is passed as a special "system message" that guides the model's response to the question but does not generate a response directly.
- You can use the system section to provide guidance on how the model should behave, and to provide information the model can use as context.
- Prompty constructs the meta-prompt from the inputs before passing it to the model. Placeholders like `{{firstName}}` are replaced by the corresponding input. You can also use syntax like `{{customer.firstName}}` to extract named elements from objects, as illustrated in the sketch below.
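Prompty's `{{...}}` substitution follows Jinja-style template semantics. As a rough standalone illustration of that behavior (using the `jinja2` package directly, which approximates what the Prompty runtime does for you):

```python
# Illustration of {{...}} placeholder substitution using plain Jinja2.
from jinja2 import Template

meta_prompt = Template(
    "# Customer Context\nThe customer's name is {{customer.firstName}}."
)

# Dotted access like customer.firstName resolves against the dict's keys.
print(meta_prompt.render(customer={"firstName": "Jane"}))
# Output:
# # Customer Context
# The customer's name is Jane.
```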
4.1 Update the template
Update the template section of `chat-1.prompty` so that the meta-prompt adds a system message that greets the customer by name, lists their previous orders under a Markdown heading, and then passes along the customer's question. A sketch of such a template follows.
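This is a minimal sketch rather than the exact workshop template: the wording and the iterated field names (`customer.orders`, `item.name`, `item.description`) are assumptions consistent with the sample data described earlier. Prompty template bodies use `system:` and `user:` role markers with Jinja-style expressions:

```
system:
You are a helpful assistant for an outdoor products retailer.
Use only the context below to answer the customer's question.

# Customer Context
The customer's name is {{customer.firstName}} and they are a
{{customer.membership}} member.

# Previous Orders
{% for item in customer.orders %}
name: {{item.name}}
description: {{item.description}}
{% endfor %}

user:
{{question}}
```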
4.2 Run chat-1.prompty
Run the updated prompty as before (a scripted alternative is sketched at the end of this section):
- In the OUTPUT pane, you see a valid response to the question: "What cold-weather sleeping bag would go well with what I have already purchased?"
- Observe: the Generative AI model knows the customer's name from `{{customer.firstName}}` in the `chat-1.json` file, provided via the `# Customer Context` heading in the meta-prompt.
- The model knows the customer's previous orders, which have been inserted into the meta-prompt under the heading `# Previous Orders`.
In the meta-prompt, organize information under text headings like `# Customer Info`. This helps many generative AI models find information more reliably, because they have been trained on Markdown-formatted data with this structure.
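If you want to run the prompty from a script instead of the editor, the `prompty` Python package can execute a Prompty file directly. A minimal sketch, assuming `pip install "prompty[azure]"` and model credentials configured via environment variables (this tooling is an assumption, not a workshop requirement):

```python
# Programmatic run of the Prompty file (hypothetical alternative to the editor).
import json

import prompty
import prompty.azure  # registers the Azure OpenAI invoker

# Load the same sample data that ${file:chat-1.json} points at.
with open("chat-1.json") as f:
    sample = json.load(f)

# Execute the prompty with explicit inputs; the endpoint, deployment, and
# API key are expected to come from the environment.
result = prompty.execute(
    "chat-1.prompty",
    inputs={"customer": sample["customer"], "question": sample["question"]},
)
print(result)
```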
4.3 Ideate on your own
Try these ideas at home, in your own time, to get more intuition for system prompt usage:
- Add `Provide responses in a bullet list of items` to the end of the `system:` section. What happens to the output?

You can also change the `parameters:` section to configure generative AI model parameters:

- Change `max_tokens` to 150. What happens to response length or quality?
- Change `temperature` to 0.7 (or to other values between 0.0 and 0.9). What happens now?

A sketch for scripting these parameter experiments follows.
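One hypothetical way to sweep parameter values from a script, building on the same `prompty` package as above (the `parameters` override is an assumption about its API, and the run falls back to the declared sample inputs):

```python
# Hypothetical sweep over temperature values for the same sample question.
import prompty
import prompty.azure

for temperature in (0.0, 0.2, 0.7, 0.9):
    result = prompty.execute(
        "chat-1.prompty",
        parameters={"temperature": temperature},  # overrides the file's value
    )
    print(f"--- temperature={temperature} ---\n{result}\n")
```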
CONGRATULATIONS. You updated the Prompty template & added sample test data!