# Prompty Specification
This page was AI-generated (using Claude 3.5 Sonnet) and then manually reviewed for clarity and correctness.
Steps to regenerate:

- Click the Copilot icon in your IDE to open the chat panel.
- Open the `prompty.yaml` file; this ensures Copilot uses the file as a reference.
- Switch the model to Claude 3.5 Sonnet (Preview).
- Add the prompt below to regenerate the documentation.
- Once done, evaluate the output and check for any inconsistencies.
- If everything is fine, update the `.mdx` file and create a PR with the changes.
Prompt to regenerate:

- Write comprehensive reference documentation for the provided YAML file. The documentation should be structured in Markdown format and include the following elements:
    - Clearly describe each attribute, including its purpose and expected values.
    - For sections containing multiple attributes, present them in a structured table format for readability.
    - Provide relevant usage examples showcasing different configurations of the YAML file.
    - Ensure proper mdx styling, including headers, code blocks, and bullet points where appropriate.
The Prompty YAML file spec is defined in the `prompty.yaml` file in the Prompty repository. Below is a brief description of each section and the attributes within it.
## Prompty description attributes

| Property | Type | Description | Required |
| --- | --- | --- | --- |
| `name` | string | Name of the prompty | Yes |
| `description` | string | Description of the prompty | Yes |
| `version` | string | Version number | No |
| `authors` | array | List of prompty authors | No |
| `tags` | array | Categorization tags | No |
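For example, a description block using these attributes might look like the following (all values are illustrative):

```yaml
name: Basic Prompt
description: A prompt that answers customer questions about products
version: 1.0.0
authors:
  - Jane Doe
tags:
  - customer-support
  - basic
```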
## Input/Output Specifications

### Inputs

The `inputs` object defines the expected input format for the prompty.
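A minimal sketch, assuming each input is declared as a named property with a type (the property names are illustrative):

```yaml
inputs:
  question:
    type: string
  context:
    type: string
```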
### Outputs

The `outputs` object defines the expected output format.
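A sketch along the same lines (the `answer` property is illustrative):

```yaml
outputs:
  answer:
    type: string
```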
## Template Engine

Currently supports:

- `jinja2` (default): Jinja2 template engine for text processing; see the sketch below
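For example, the engine is selected in the frontmatter, and inputs are referenced in the prompt body using Jinja2 expressions (illustrative):

```yaml
---
template: jinja2
---
user:
{{question}}
```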
## Model Configuration

The `model` section defines how the AI model should be configured and executed.
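A sketch of the overall shape of the section, assuming an Azure OpenAI deployment (the endpoint and deployment values are placeholders):

```yaml
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: gpt-35-turbo
  parameters:
    max_tokens: 3000
    temperature: 0.7
  response: first
```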
Model API Types¶
chat
(default) - For chat-based interactionscompletion
- For text completion tasks
### Response Types

The response type determines whether the full (raw) response or just the first response in the choices array is returned.

- `first` (default): returns only the first response
- `all`: returns all response choices
### Model Providers

#### Azure OpenAI Configuration
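A sketch of an `azure_openai` configuration; the API version, endpoint, and deployment values are placeholders:

```yaml
model:
  configuration:
    type: azure_openai
    api_version: 2024-02-15-preview
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: gpt-35-turbo
```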
#### OpenAI Configuration
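A sketch of an `openai` configuration; the model name and organization values are placeholders:

```yaml
model:
  configuration:
    type: openai
    name: gpt-3.5-turbo
    organization: ${env:OPENAI_ORG}
```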
#### MaaS Configuration
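A sketch of a serverless (MaaS) configuration, assuming the provider type is `serverless` with an endpoint and model name; the values are placeholders, so check the spec for the exact keys:

```yaml
model:
  configuration:
    type: serverless
    endpoint: ${env:SERVERLESS_ENDPOINT}
    model: mistral-large
```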
### Model Parameters

Common parameters that can be configured for model execution:

| Parameter | Type | Description |
| --- | --- | --- |
| `response_format` | object | An object specifying the format that the model must output. |
| `seed` | integer | For deterministic sampling. This feature is in Beta. If specified, the system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism is not guaranteed; refer to the `system_fingerprint` response parameter to monitor changes in the backend. |
| `max_tokens` | integer | The maximum number of tokens that can be generated in the chat completion. |
| `temperature` | number | Sampling temperature (0-1) |
| `frequency_penalty` | number | Penalty for frequent tokens |
| `presence_penalty` | number | Penalty for new tokens |
| `top_p` | number | Nucleus sampling probability |
| `stop` | array | Sequences to stop generation |
Setting `response_format` to `{ "type": "json_object" }` enables JSON mode, which guarantees that the message the model generates is valid JSON.
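For example, a `parameters` block combining several of these settings, including JSON mode (all values are illustrative):

```yaml
model:
  parameters:
    max_tokens: 1024
    temperature: 0.7
    top_p: 0.95
    seed: 42
    stop:
      - "###"
    response_format:
      type: json_object
```

Note that when using JSON mode, the model must also be instructed to produce JSON via a system or user message.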
## Sample Prompty
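An illustrative prompty that brings these sections together, patterned after the public Prompty samples (names, endpoint values, and prompt text are placeholders). The `sample` block provides example values for the inputs referenced in the prompt body:

```yaml
---
name: ExamplePrompt
description: A prompt that uses context to ground an incoming question
authors:
  - Jane Doe
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: gpt-35-turbo
    api_version: 2024-02-15-preview
  parameters:
    max_tokens: 3000
sample:
  firstName: Jane
  context: >
    The Alpine Explorer Tent boasts a detachable divider for privacy,
    numerous mesh windows, and adjustable vents for ventilation.
  question: What can you tell me about your tents?
---

system:
You are an AI assistant who helps people find information.
You answer questions briefly and succinctly.

# Customer
You are helping {{firstName}} to find answers to their questions.
Use their name to address them in your reply.

# Context
Use the following context to provide a more personalized response to {{firstName}}:
{{context}}

user:
{{question}}
```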