Lab Recap

[!hint] What We Learned Today

We started with a simple goal: Build an LLM-based chat app that uses Retrieval Augmented Generation (RAG) to answer questions about a product catalog.

We learned about LLM Ops: Specifically, we identified the steps that need to be chained together in a workflow to build, deploy, and use performant LLM apps.

We learned about Azure AI Studio: Specifically, we learned how to provision an Azure AI project using an Azure AI resource with selected model deployments. We learned to build a RAG solution with Azure AI Search and Azure Cosmos DB. And we learned to upload, deploy, run, and test prompt flows in Azure.
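
To make the RAG pattern concrete, here is a minimal Python sketch of the retrieval step: product context comes from an Azure AI Search index and customer history from an Azure Cosmos DB container, and both are used to ground the chat prompt. The index, database, container, and field names (and the environment variable names) are hypothetical placeholders, not the lab's exact resources.

```python
# Sketch of the lab's RAG retrieval pattern (resource names are placeholders).
import os

from azure.core.credentials import AzureKeyCredential
from azure.cosmos import CosmosClient
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
    index_name="contoso-products",  # hypothetical index name
    credential=AzureKeyCredential(os.environ["AZURE_SEARCH_KEY"]),
)

cosmos = CosmosClient(
    url=os.environ["COSMOS_ENDPOINT"],
    credential=os.environ["COSMOS_KEY"],
)
customers = cosmos.get_database_client("contoso-outdoor").get_container_client(
    "customers"
)  # hypothetical database/container names


def build_grounded_prompt(question: str, customer_id: str) -> str:
    # Retrieve the top product documents that match the question.
    hits = search_client.search(search_text=question, top=3)
    context = "\n".join(doc["content"] for doc in hits)  # "content" is assumed

    # Fetch the customer's record to personalize the answer.
    customer = customers.read_item(item=customer_id, partition_key=customer_id)

    # Ground the LLM prompt with both retrieved sources.
    return (
        f"Customer: {customer['firstName']}\n"  # field name is assumed
        f"Product context:\n{context}\n"
        f"Question: {question}"
    )
```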

We learned about Prompt Flow: Specifically, we learned how to create, evaluate, test, and deploy a prompt flow using the VS Code extension, which streamlines end-to-end development for an LLM-based app. And we learned how to upload the flow to Azure AI Studio and replicate the steps completely in the cloud.
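
For reference, here is a minimal sketch of that local develop-and-test loop using the `promptflow` Python SDK. The flow directory, input names, and data file are hypothetical; match them to your own flow's definition.

```python
# Sketch: test a local flow before uploading it to Azure AI Studio.
from promptflow import PFClient

pf = PFClient()

# Single interactive test against the local flow definition.
result = pf.test(
    flow="./contoso-chat",  # hypothetical local flow directory
    inputs={"question": "What tents do you sell?", "customer_id": "1"},
)
print(result)

# Batch run over a JSONL dataset, e.g. as an evaluation pass before deploying.
run = pf.run(flow="./contoso-chat", data="./data/questions.jsonl")
print(pf.get_details(run))
```

The same single test can also be run from the terminal with `pf flow test --flow ./contoso-chat`.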

Along the way, we learned what LLM Ops is and why tools that simplify and orchestrate these end-to-end development workflows are critical for building the next generation of generative AI applications at cloud scale.

[!hint] What We Can Try Next

  • Explore Next Steps for LLMOps.
    • Add GitHub Actions, Explore Intents
    • See README: https://github.com/Azure-Samples/contoso-chat
  • Explore Usage in a Real Application.
    • Integrate and use the deployed endpoint in a web app
    • See README: https://github.com/Azure-Samples/contoso-web

[!hint] Where Can You Learn More?


🏆 | THANK YOU FOR JOINING US!