
3.3: Update Prompt Template

OPTIONAL:
If you get stuck, you can skip this step and copy over a pre-edited file:

cp ../docs/workshop/src/1-build/chat-1.prompty .

1. Copy Prompty to Iterate

To mimic the iterative process of ideation, we start each step by copying the Prompty from the previous step (chat-0.prompty) to a new file (chat-1.prompty) to make edits.

cp chat-0.prompty chat-1.prompty

2. Set the Temperature Parameter

Temperature is one of the parameters you can use to modify the behavior of Generative AI models. It controls the degree of randomness in the response, from 0.0 (deterministic) to 1.0 (maximum variability).
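To see why a low temperature makes output more deterministic, here is a minimal Python sketch (not part of the workshop, and a simplification of what any given model actually does) of temperature-scaled softmax, the mechanism most chat models use to turn next-token scores into sampling probabilities. The logit values are hypothetical:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits into sampling probabilities, scaled by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens

cold = softmax_with_temperature(logits, 0.2)  # low temperature
warm = softmax_with_temperature(logits, 1.0)  # high temperature

print([round(p, 3) for p in cold])
print([round(p, 3) for p in warm])
```

At temperature 0.2 nearly all probability mass lands on the top-scoring token, so sampling is close to deterministic; at 1.0 the distribution is much flatter, so responses vary more between runs.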

  1. Open the file chat-1.prompty in the editor.

  2. Look for the parameters section under model (in the frontmatter).

  3. Add a new temperature parameter with a value of 0.2 as shown below.

  4. Your model section should now look as follows (note the indented lines):

    model:
        api: chat
        configuration:
            type: azure_openai
            azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
            azure_deployment: ${env:AZURE_OPENAI_CHAT_DEPLOYMENT}
            api_version: 2024-07-01-preview
        parameters:
            max_tokens: 3000
            temperature: 0.2
    


3. Provide Sample Input File

The sample property of a Prompty asset provides the data to be used in test execution. It can be defined inline (as an object) or as an external file (as a string giving the file pathname).
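For example, the two forms might look like this in the frontmatter (the inline values here are hypothetical; this workshop uses the external-file form):

```yaml
# Inline: sample data defined directly as an object
sample:
  customer:
    firstName: Jane
  question: What tents do you sell?
```

```yaml
# External file: a string giving the file pathname
sample: ${file:chat-1.json}
```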

In this example, we'll use a JSON file to provide the sample test inputs for the Prompty asset. This allows us to test the Prompty execution by rendering the prompt template using the data in this file to fill in the placeholder variables. Later, when we convert the Prompty asset to code, we'll use functions to populate this data from real sources (databases, search indexes, user query).

  1. Copy a JSON file with sample data to provide as context in our Prompty.
    cp ../docs/workshop/src/1-build/chat-1.json .
    
  2. Open the JSON file and review its contents.

    • It has the customer's name, age, membership level, and purchase history.
    • It has the default customer question for our chatbot: "What cold-weather sleeping bag would go well with what I have already purchased?"
  3. Replace the sample: section of chat-1.prompty (lines 16-18) with the following:

    inputs:
      customer:
        type: object
      question:
        type: string
    sample: ${file:chat-1.json}
    

    This declares the inputs to the Prompty asset: customer (a JSON object) and question (a string). It also declares that the sample data for these inputs is found in the file chat-1.json.
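The relationship between the inputs declaration and the sample file can be sketched in Python. The field names and values below are illustrative stand-ins for the workshop's chat-1.json, not its actual contents:

```python
import json

# Illustrative stand-in for chat-1.json (the real file's values may differ)
sample_json = """
{
  "customer": {
    "firstName": "Jane",
    "lastName": "Doe",
    "age": 28,
    "membership": "Gold",
    "orders": [
      {"name": "CozyNights Sleeping Bag", "description": "A hypothetical 3-season bag."}
    ]
  },
  "question": "What cold-weather sleeping bag would go well with what I have already purchased?"
}
"""

sample = json.loads(sample_json)

# The frontmatter declares two inputs; the sample file must supply both,
# with the declared types (customer: object, question: string).
declared_inputs = {"customer": dict, "question": str}
for name, expected_type in declared_inputs.items():
    assert isinstance(sample[name], expected_type), f"missing or wrong type: {name}"

print(sample["customer"]["firstName"])
```

Later, when the Prompty asset is converted to code, this same structure is populated by functions drawing on real data sources instead of a static file.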


4. Update the System Prompt

The system section of a Prompty file specifies the "meta-prompt". This additional text is combined with the user's actual question to provide the context necessary to answer accurately. With some Generative AI models, like the GPT family, this is passed as a special "system prompt", which guides the AI model in its response to the question but does not generate a response directly.

You can use the system section to provide guidance on how the model should behave, and to provide information the model can use as context.

Prompty constructs the meta-prompt from the inputs before passing it to the model. Parameters like {{firstName}} are replaced by the corresponding input. You can also use syntax like {{customer.firstName}} to extract named elements from objects.
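Prompty's template syntax is Jinja-style, so a minimal Python sketch using the jinja2 library can show how placeholders and {% for %} loops are filled from the input data. (This is an illustration of the templating idea, not Prompty's actual rendering code; the customer values are hypothetical.)

```python
from jinja2 import Template

# A fragment modeled on the system section below
template = Template(
    "The customer's name is {{customer.firstName}} {{customer.lastName}}.\n"
    "{% for item in customer.orders %}"
    "name: {{item.name}}\n"
    "{% endfor %}"
)

# Hypothetical data mirroring the shape of chat-1.json
customer = {
    "firstName": "Jane",
    "lastName": "Doe",
    "orders": [
        {"name": "TrailMaster X4 Tent"},
        {"name": "CozyNights Sleeping Bag"},
    ],
}

rendered = template.render(customer=customer)
print(rendered)
```

Each {{...}} placeholder is replaced by the corresponding value, and the loop emits one name: line per order, producing the flattened text that is actually sent to the model.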

  1. Update the system section of chat-1.prompty with the text below. Note that the commented lines (like "# Customer") are not part of the Prompty file specification -- that text is passed directly to the Generative AI model. (Experience suggests AI models perform more reliably if you organize the meta-prompt with Markdown-style headers.)

    system:
    You are an AI agent for the Contoso Outdoors products retailer. 
    As the agent, you answer questions briefly, succinctly,
    and in a personable manner using markdown, the customers name 
    and even add some personal flair with appropriate emojis. 
    
    # Documentation
    Make sure to reference any documentation used in the response.
    
    # Previous Orders
    Use their orders as context to the question they are asking.
    {% for item in customer.orders %}
    name: {{item.name}}
    description: {{item.description}}
    {% endfor %} 
    
    # Customer Context
    The customer's name is {{customer.firstName}} {{customer.lastName}} and is {{customer.age}} years old.
    {{customer.firstName}} {{customer.lastName}} has a "{{customer.membership}}" membership status.
    
    # user
    {{question}}
    
  2. Run chat-1.prompty

    In the OUTPUT pane, you should see a valid response to the question: "What cold-weather sleeping bag would go well with what I have already purchased?"

    Note the following:

    • The Generative AI model knows the customer's name, drawn from {{customer.firstName}} in the chat-1.json file and provided in the section headed # Customer Context in the meta-prompt.
    • The model knows the customer's previous orders, which have been inserted into the meta-prompt under the heading # Previous Orders.

    In the meta-prompt, organize information under text headings like # Customer Info. This helps many generative AI models find information more reliably, because they have been trained on Markdown-formatted data with this structure.

  3. Ideate on your own!

    You can change the system prompt to modify the style and tone of the responses from the chatbot.

    • Try adding "Provide responses in a bullet list of items" to the end of the system: section. What happens to the output?

    You can also change the parameters passed to the generative AI model in the parameters: section.

    • Try changing max_tokens to 150 and observe the response. How does this impact the length and quality of the response (e.g., is it truncated)?
    • Try changing temperature to 0.7, then try some other values between 0.0 and 1.0. What happens to the output?

CONGRATULATIONS. You updated the Prompty template and added sample test data!