Create Prompty Recipes
Recipe Template
RECIPE SUMMARY
- About: What am I making?
- Pre-Requisites: What are my ingredients?
- Process: What are the step-by-step instructions?
- Proof-of-Concept: Screenshots, videos, etc. to show the end result.
- Practices: Tips, tricks, and troubleshooting guidance.
PROMPTY ASSET
Example With OpenAI
This is an example of the most basic Prompty asset for invoking an OpenAI model for chat completions. The starting Prompty asset (iteration 0) is in `01-basic-openai.prompty`. Let's iterate on this in steps to understand how to configure various parameters for the chat completion step.
Setup Environment
For convenience:

- Create a `.env` file in the repository with an `OPENAI_API_KEY=` line.
- Set the value to the `sk-proj...` API key value from OpenAI.
- Install the Prompty VS Code Extension.
- Open the Prompty asset in the editor and click the Play icon (F5).

You should see the VS Code Terminal switch to the Output tab to show results.

- Keep the tab open and iterate on the asset in the editor.
- Click Play to execute the revised asset.
- This is the basic flow for prompt engineering with Prompty.
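Assembled, the `.env` file needs just one line. The value shown below is a placeholder of the `sk-proj...` form, not a real key:

```shell
# .env -- keep this file out of source control
OPENAI_API_KEY=sk-proj...
```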
S0: Basic Prompt
Execute the basic asset. This just validates our setup (with the API key) and helps set up the basic scaffold for a working asset. This is what we see in the frontmatter (metadata):
- The `sample` data provides the default `question` input for the run.
- The `model` configuration specifies the provider type and deployment name.
- The `api_key` for the configuration is read from the `.env` environment variable.
This is what we see in the content (template):
- The `system` section lets you define system messages for the execution.
- The `user` section identifies the initial user question.
- The space in between can be updated with more context sections as needed.
Iteration 0
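The original iteration 0 listing is not reproduced here. As a rough sketch following the shape described above (not the author's exact file — the model name and sample question are illustrative), a minimal OpenAI chat-completion Prompty asset might look like:

```yaml
---
name: Basic Prompt
description: Iteration 0 - minimal chat completion asset
model:
  api: chat
  configuration:
    type: openai
    name: gpt-4o-mini                  # illustrative model name
    api_key: ${env:OPENAI_API_KEY}     # resolved from the .env file
sample:
  question: What is the capital of France?   # default input for the run
---

system:
You are a helpful AI assistant.

user:
{{question}}
```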
Response 0
S1: Configure Parameters
Try configuring model parameters and observe the impact on generated prompt responses. For example, you can explore the impact of `max_tokens`:
- Try switching `max_tokens` between 100 and 1000.
- Observe how the truncated output (100) is now completed (1000).
Try adding new context sections that can be bound to input data (currently inline). For example, you can now provide placeholders for the user name and use those as variables when guiding the model response. Variable values are filled in from `sample` by default, unless overridden when invoking from the CLI or code.
Iteration 1
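The iteration 1 listing is also omitted above. A sketch of what it could look like, assuming the changes just described (the `firstName` placeholder and sample values are illustrative, not from the original asset):

```yaml
---
name: Basic Prompt
description: Iteration 1 - tuned parameters plus a context section
model:
  api: chat
  configuration:
    type: openai
    name: gpt-4o-mini                  # illustrative model name
    api_key: ${env:OPENAI_API_KEY}
  parameters:
    max_tokens: 100                    # try switching between 100 and 1000
sample:
  firstName: Jane                      # illustrative sample values
  question: What is the capital of France?
---

system:
You are a helpful AI assistant.

# Customer
You are talking to {{firstName}}. Address them by name in your response.

user:
{{question}}
```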
S2: Specify Tools
For this exercise, you will need to have a WeatherAPI account - the free version will do!
- Create an account and look up the API key.
- Save that to the `.env` file as the value for `WEATHER_API_KEY=`.
- Modify the Prompty asset, and run it.
Iteration 2
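The iteration 2 listing is omitted above. As an assumption-laden sketch: `model.parameters` values are passed through to the provider, so a weather tool could be declared using the OpenAI function-calling schema. The `get_current_weather` name and its parameters are hypothetical, and the actual WeatherAPI lookup (using `WEATHER_API_KEY`) would still be performed by the surrounding code:

```yaml
---
name: Weather Prompt
description: Iteration 2 - chat completion with a declared tool
model:
  api: chat
  configuration:
    type: openai
    name: gpt-4o-mini                  # illustrative model name
    api_key: ${env:OPENAI_API_KEY}
  parameters:
    tools:                             # OpenAI function-calling schema
      - type: function
        function:
          name: get_current_weather    # hypothetical tool name
          description: Look up current weather via WeatherAPI
          parameters:
            type: object
            properties:
              location:
                type: string
                description: City name, e.g. Seattle
            required: [location]
sample:
  question: What should I wear in Seattle today?
---

system:
You are a helpful assistant. Use the weather tool when asked about weather.

user:
{{question}}
```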
Response 2
S3: Import Files
Steps to Use:
- Prepare the Environment: Ensure the `.env` file contains `OPENAI_API_KEY=`.
- Place the Image: Save the image you want to analyze in the `./assets/` directory with the name `sample_image.jpg`.
- Run the Asset: Use the Prompty CLI or VS Code extension to execute the asset.
Iteration 3
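This final listing is omitted above as well. Prompty's published samples embed local images in the user message using markdown image syntax; assuming that convention, an image-analysis asset might look like the sketch below (the model name is illustrative — use a vision-capable deployment):

```yaml
---
name: Image Analysis
description: Analyze a local image file
model:
  api: chat
  configuration:
    type: openai
    name: gpt-4o                       # illustrative vision-capable model
    api_key: ${env:OPENAI_API_KEY}
---

system:
You are an assistant that describes the contents of images.

user:
Describe what you see in this picture.
![image](./assets/sample_image.jpg)
```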