Prompty Invokers

In this section, we cover the different built-in Prompty invokers and walk through how to build your own custom invoker.


1. Prompty Invokers

The Prompty runtime comes with a set of built-in invokers that can be used to execute external models and APIs. An invoker triggers the call to a model and returns its output, so every model is handled through the same standardized interface. The currently supported invokers are (a short fragment after this list shows how one is selected):

  1. azure: Invokes the Azure OpenAI API
  2. openai: Invokes the OpenAI API
  3. serverless: Invokes serverless models (e.g., GitHub Models) using the Azure AI Inference client library (currently only key-based authentication is supported; managed identity support is coming soon)
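
The invoker used for a given prompt is chosen by the model.api field in the .prompty frontmatter; the full examples appear later on this page. A minimal fragment:

YAML
# model.api names the invoker to run: azure, openai, or serverless
model:
  api: azure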

2. How Invokers Work

Invokers in Prompty are responsible for executing prompts against specified models or APIs. They ensure that the necessary configurations and inputs are correctly handled, making it possible to integrate prompt execution seamlessly into applications. Each invoker follows a standard interface, which includes methods for synchronous and asynchronous invocation.

Invoker Interface

An invoker must implement the following methods (a sketch of the full contract follows this list):

  • invoke(data: any): Promise<any>: asynchronously executes the prompt and resolves with the model's response.
  • invokeSync(data: any): any: synchronously executes the prompt and returns the model's response.
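
As a rough sketch, and assuming the base class exported by the prompty package matches the methods above and the constructor used in the custom-invoker examples later on this page, an invoker has roughly this shape:

TypeScript
import { Prompty } from "prompty";

// Sketch of the Invoker contract; the actual base class ships with the prompty package.
abstract class Invoker {
  constructor(protected prompty: Prompty) {}

  // Asynchronously execute the prompt and resolve with the model's response.
  abstract invoke(data: any): Promise<any>;

  // Synchronously execute the prompt and return the model's response.
  abstract invokeSync(data: any): any;
}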

Built-in Invokers

Prompty provides several built-in invokers:

  • AzureInvoker: Executes prompts using the Azure OpenAI API.
  • OpenAIInvoker: Executes prompts using the OpenAI API.
  • ServerlessInvoker: Executes prompts using serverless models.

3. How Invokers Are Used

Azure Invoker Example

YAML
# filepath: /workspaces/prompty/examples/azure.prompty
template: |
  What is the weather like in {{city}}?
model:
  api: azure
  configuration:
    endpoint: "https://api.openai.azure.com/v1/engines/davinci/completions"
    apiKey: "YOUR_AZURE_API_KEY"
sample:
  city: Seattle
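
A hedged sketch of executing this file from application code, mirroring the InvokerFactory pattern used in the custom-invoker walkthrough later on this page (how the .prompty file is loaded into the Prompty object may differ depending on your runtime version):

TypeScript
import { Prompty, InvokerFactory } from "prompty";

const prompty = new Prompty();
const factory = InvokerFactory.getInstance();

// "azure" matches the model.api value declared in azure.prompty
const result = await factory.call("azure", prompty, { city: "Seattle" });
console.log(result);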

OpenAI Invoker Example

YAML
# filepath: /workspaces/prompty/examples/openai.prompty
template: |
  Write a poem about {{subject}}.
model:
  api: openai
  configuration:
    apiKey: "YOUR_OPENAI_API_KEY"
sample:
  subject: nature

Serverless Invoker Example

YAML
# filepath: /workspaces/prompty/examples/serverless.prompty
template: |
  Summarize the following text: {{text}}
model:
  api: serverless
  configuration:
    endpoint: "https://api.github.com/models/summarize"
    apiKey: "YOUR_SERVERLESS_API_KEY"
sample:
  text: "Serverless computing is a cloud-computing execution model in which the cloud provider dynamically manages the allocation of machine resources."

4. Creating a Custom Invoker

Creating a custom invoker involves extending the Invoker class and implementing the required methods. Below is a step-by-step guide to creating a custom invoker.

Step 1: Define the Invoker Class

Create a new class that extends the Invoker class and implement the invoke and invokeSync methods.

TypeScript
import { Invoker, Prompty } from "prompty";

class CustomInvoker extends Invoker {
  constructor(prompty: Prompty) {
    super(prompty);
  }

  async invoke(data: any): Promise<any> {
    // Custom logic for asynchronous invocation
    return Promise.resolve(data);
  }

  invokeSync(data: any): any {
    // Custom logic for synchronous invocation
    return data;
  }
}

Step 2: Register the Invoker

Register the custom invoker with the InvokerFactory.

TypeScript
import { InvokerFactory } from "prompty";

const factory = InvokerFactory.getInstance();
factory.register("custom", CustomInvoker);

Step 3: Use the Custom Invoker

Use the custom invoker in your application.

TypeScript
import { Prompty, InvokerFactory } from "prompty";

const prompty = new Prompty();
const factory = InvokerFactory.getInstance();

const data = { /* ... */ };
const result = await factory.call("custom", prompty, data);
console.log(result);

Example .prompty File

YAML
# filepath: /workspaces/prompty/examples/custom.prompty
template: |
  Hello, {{name}}!
model:
  api: custom
  configuration:
    type: custom
sample:
  name: World

5. Hugging Face Invoker Tutorial

In this section, we will create a custom invoker for Hugging Face models.

Step 1: Install Dependencies

Install the necessary dependencies.

Bash
npm install @huggingface/inference

Step 2: Define the Invoker Class

Create a new class that extends the Invoker class and implements the required methods.

TypeScript
import { Invoker, Prompty } from "prompty";
import { HfInference } from "@huggingface/inference";

class HuggingFaceInvoker extends Invoker {
  private hf: HfInference;

  constructor(prompty: Prompty) {
    super(prompty);
    // For a real application, read the token from an environment variable
    // rather than hard-coding it here.
    this.hf = new HfInference("YOUR_HUGGING_FACE_API_KEY");
  }

  async invoke(data: any): Promise<any> {
    // Call the Hugging Face text-generation endpoint. The model is hard-coded
    // here for simplicity; it could also be read from the .prompty configuration.
    const result = await this.hf.textGeneration({
      model: "gpt2",
      inputs: data.prompt,
    });
    return result;
  }

  invokeSync(data: any): any {
    // The Hugging Face inference client is async-only.
    throw new Error("Synchronous invocation is not supported for Hugging Face models.");
  }
}

Step 3: Register the Invoker

Register the Hugging Face invoker with the InvokerFactory.

TypeScript
import { InvokerFactory } from "prompty";

const factory = InvokerFactory.getInstance();
factory.register("huggingface", HuggingFaceInvoker);

Step 4: Use the Invoker

Use the Hugging Face invoker in your application.

TypeScript
import { Prompty, InvokerFactory } from "prompty";

const prompty = new Prompty();
const factory = InvokerFactory.getInstance();

const data = { prompt: "Once upon a time" };
const result = await factory.call("huggingface", prompty, data);
console.log(result);

Example: basic_hf.prompty

YAML
# filepath: /workspaces/prompty/examples/basic_hf.prompty
template: |
  Generate a story based on the following prompt: {{prompt}}
model:
  api: huggingface
  configuration:
    model: gpt2
sample:
  prompt: Once upon a time