# 3.6 Let's Connect The Dots! 💡
CONGRATULATIONS. You just learned prompt engineering with Prompty!
Let's recap the iterative steps of our ideate process:
- First, create a base prompt → configure the model and parameters
- Next, modify the meta-prompt → personalize usage, define inputs & test sample
- Then, modify the body → reflect system context, instructions, and template structure
- Finally, create executable code → run Prompty from Python, from the command line, or in automated workflows
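That last step is only a few lines of code. Here is a minimal sketch, assuming your prompt asset is named `sandbox/chat.prompty` and your Azure OpenAI settings are available in the environment:

```python
# Minimal sketch: run a Prompty asset from Python.
# The asset path and input value below are placeholders from the exercise.
import prompty
import prompty.azure  # registers the Azure OpenAI invoker

if __name__ == "__main__":
    result = prompty.execute(
        "sandbox/chat.prompty",
        inputs={"question": "What tent do you recommend for 4 people?"},
    )
    print(result)
```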
We saw how these simple tools can help us implement safety guidance for our prompts and iterate on our prompt template design quickly and flexibly to get to a first prototype. The sample data file provides a test input for rapid iteration, and it allows us to understand the "shape" of the data we will need to implement this application in production.
## Let's Connect The Dots
This section is OPTIONAL. Please skip it if time is limited. You can revisit this section at home, in your personal copy of the repo, to get insights into how the sample data is replaced with live data bindings in Contoso Chat.
In the ideation step, we will end up with three files:

- `xxx.prompty` - the prompt asset that defines our template and model configuration
- `xxx.json` - the sample data file that effectively defines the "shape" of data we need for RAG
- `xxx.py` - the Python script that loads and executes the prompt asset in a code-first manner
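To make that "shape" concrete, a sample data file for a chat prompt might look something like this (a minimal sketch; the field names mirror the chat example discussed below, and your asset's inputs may differ):

```json
{
  "customer": {
    "id": "1",
    "firstName": "John",
    "lastName": "Smith",
    "orders": []
  },
  "documentation": "",
  "question": "What tent do you recommend for 4 people?",
  "chat_history": []
}
```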
Let's compare this to the contents of the `src/api/contoso_chat` folder, which implements our actual copilot, and see if we can connect the dots. The listing below shows the relevant subset of files from the folder for our discussion.
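The layout sketched below is inferred from the files discussed in this section; check the repo for the full contents:

```
src/api/contoso_chat/
├── chat.prompty          # chat prompt asset (template + model configuration)
├── chat.json             # sample data for independent template testing
├── chat_request.py       # Python script orchestrating the RAG workflow
└── products/
    ├── product.py        # retrieves matching products for a question
    └── product.prompty   # prompt that converts a question into query terms
```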
### Explore: Chat Prompt
The `chat.prompty` and `chat.json` files will be familiar based on the exercise you completed. If you click the play button in the prompty file, it will run using the JSON sample file (just as before) for independent template testing. But how do we then replace the sample data with real data from our RAG workflow?
This is where we take the Python script generated from the prompty file and enhance it to orchestrate the steps required to fetch data, populate the template, and execute it. Expand the sections below to get a better understanding of the details.
Let's investigate the `chat_request.py` file - click to expand
For clarity, I've removed some of the lines of code and left just the key elements here for discussion:
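(The sketch below is reconstructed to match the walkthrough that follows; the actual file adds tracing, error handling, and environment loading, so treat names and details as approximate.)

```python
import os
from sys import argv

import prompty
import prompty.azure  # registers the Azure OpenAI invoker
from azure.cosmos import CosmosClient
from azure.identity import DefaultAzureCredential

from products import product


def get_customer(customerId: str) -> dict:
    # Fetch the customer record (with order history) from Azure Cosmos DB
    client = CosmosClient(
        url=os.environ["COSMOS_ENDPOINT"],
        credential=DefaultAzureCredential(),
    )
    container = client.get_database_client("contoso-outdoor").get_container_client("customers")
    return container.read_item(item=str(customerId), partition_key=str(customerId))


def get_response(customerId: str, question: str, chat_history: list) -> dict:
    # 1. Fetch customer data from Cosmos DB - bound to `customer` in the prompty
    customer = get_customer(customerId)

    # 2. Retrieve matching products via vector search - bound to `context` in the prompty
    context = product.find_products(question)

    # 3. Explicitly set the chat model configuration (overriding the prompty default)
    model_config = {
        "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
        "api_version": os.environ["AZURE_OPENAI_API_VERSION"],
    }

    # 4. Execute the prompty, sending the enhanced prompt to the chat model
    result = prompty.execute(
        "chat.prompty",
        inputs={
            "question": question,
            "customer": customer,
            "documentation": context,
            "chat_history": chat_history,
        },
        configuration=model_config,
    )

    # 5. Return the result to the caller for use (or display)
    return {"question": question, "answer": result, "context": context}


if __name__ == "__main__":
    get_response(argv[1], argv[2], [])
```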
Now let's unpack the details in the code:

- The copilot is defined by the `get_response` function
  - It gets its inputs (`question`, `customerId`, `chat_history`) from some caller (here: `main`)
- It first calls the `get_customer` function with the `customerId`
  - This function fetches the customer record from Cosmos DB
  - The returned results are bound to the `customer` data in the prompty
- It then calls the `product.find_products` function with the `question`
  - This function is defined in `products/product.py` - explore the code yourself
  - It uses the question to extract query terms - and expands on them
  - It uses embeddings to convert query terms into vectorized queries
  - It uses vectorized queries to search the product index for matching items
  - It returns matching items, using semantic ranking for ordering
  - The returned results are bound to the `context` data in the prompty
- Next, it explicitly sets the chat model configuration (overriding the prompty default)
- It then executes the prompty, sending the enhanced prompt to that chat model
- Finally, it returns the result to the caller for use (or display)
### Explore: Product Prompt
We'll leave this as an exercise for you to explore on your own.
Here is some guidance for unpacking this code - a sketch of the `find_products` flow follows the list:
- Open the `products/product.py` file and look for these definitions:
  - `find_products` function - takes the question as input, returns product items
    - first, executes a prompty - converts the question into query terms
    - next, generates embeddings - converts query terms into a vector query
    - next, retrieves products - looks up the specified index for query matches
    - last, returns the retrieved products to the caller
- Open the `products/product.prompty` file and look for these elements:
  - what does the system context say? (hint: create specialized queries)
  - what does the response format say? (hint: return as JSON array)
  - what does the output format say? (hint: return 5 terms)
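For reference, the flow inside `find_products` looks something like this. This is a minimal sketch: the embedding model name, vector field (`contentVector`), index name (`contoso-products`), and semantic configuration are illustrative assumptions, so check the actual file for specifics:

```python
import json
import os

import prompty
import prompty.azure  # registers the Azure OpenAI invoker
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery
from openai import AzureOpenAI


def generate_embeddings(queries: list[str]) -> list[dict]:
    # Convert each query term into a vector with an embedding model
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    )
    response = client.embeddings.create(input=queries, model="text-embedding-ada-002")
    return [{"item": q, "embedding": e.embedding} for q, e in zip(queries, response.data)]


def retrieve_products(items: list[dict], index_name: str) -> list[dict]:
    # Run vector queries against the product index, with semantic ranking
    search_client = SearchClient(
        endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
        index_name=index_name,
        credential=AzureKeyCredential(os.environ["AZURE_SEARCH_KEY"]),
    )
    vector_queries = [
        VectorizedQuery(vector=item["embedding"], k_nearest_neighbors=2, fields="contentVector")
        for item in items
    ]
    results = search_client.search(
        search_text="",
        vector_queries=vector_queries,
        query_type="semantic",
        semantic_configuration_name="default",
        top=5,
    )
    return [dict(r) for r in results]


def find_products(question: str) -> list[dict]:
    # 1. Execute a prompty to convert the question into specialized query terms
    queries = prompty.execute("product.prompty", inputs={"context": question})
    query_terms = json.loads(queries)

    # 2. Generate embeddings to convert query terms into vector queries
    items = generate_embeddings(query_terms)

    # 3. Retrieve matching products from the search index
    return retrieve_products(items, index_name="contoso-products")
```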
### Explore: FastAPI App
The Python scripts above help you test the orchestrated flow locally, invoking it from the command line. But how do you then get this copilot function invoked from a hosted endpoint? This is where the FastAPI framework helps. Let's take a look at a simplified version of the code.
Let's investigate the `src/api/main.py` file - click to expand
For clarity, I've removed some of the lines of code and left just the key elements here for discussion:
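(As before, this sketch is reconstructed from the walkthrough below; the actual file includes additional setup such as middleware, so treat the details as approximate.)

```python
from fastapi import FastAPI

from chat_request import get_response

# Instantiate a new FastAPI "app"
app = FastAPI()


# Route 1: "/" returns default content
@app.get("/")
async def root():
    return {"message": "Hello, World!"}


# Route 2: "/api/create_response" converts inputs sent to this endpoint
# into parameters for an invocation of our copilot
@app.post("/api/create_response")
def create_response(question: str, customer_id: str, chat_history: str) -> dict:
    return get_response(customer_id, question, chat_history)
```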
Let's unpack what happens:

- First, we instantiate a new FastAPI "app".
- We define one route, `/`, that returns default content.
- We define another route, `/api/create_response`, that takes the inputs sent to this endpoint and converts them into parameters for an invocation of our copilot.
And that's it. Later on, we'll see how we can test the FastAPI endpoint locally (using `fastapi dev src/api/main.py`) or by visiting the hosted version on Azure Container Apps. This takes advantage of the default Swagger UI on the `/docs` endpoint, which provides an interactive interface for trying out the various routes on the app.
Clean up your sandbox!
In this section, you saw how Prompty tooling supports rapid prototyping, starting with a basic prompty. Continue iterating on your own to get closer to the `contoso_chat/chat.prompty` target. You can now delete the `sandbox/` folder to keep the original app source in focus.