Advent of 2024, Day 22 – Microsoft Azure AI – Prompt flow using VS Code and Python
This article is originally published at https://tomaztsql.wordpress.com
In this Microsoft Azure AI series:
- Dec 01: Microsoft Azure AI – What is Foundry?
- Dec 02: Microsoft Azure AI – Working with Azure AI Foundry
- Dec 03: Microsoft Azure AI – Creating project in Azure AI Foundry
- Dec 04: Microsoft Azure AI – Deployment in Azure AI Foundry
- Dec 05: Microsoft Azure AI – Deployment parameters in Azure AI Foundry
- Dec 06: Microsoft Azure AI – AI Services in Azure AI Foundry
- Dec 07: Microsoft Azure AI – Speech service in AI Services
- Dec 08: Microsoft Azure AI – Speech Studio in Azure with AI Services
- Dec 09: Microsoft Azure AI – Speech SDK with Python
- Dec 10: Microsoft Azure AI – Language and Translation in Azure AI Foundry
- Dec 11: Microsoft Azure AI – Language and Translation Python SDK
- Dec 12: Microsoft Azure AI – Vision and Document AI Service
- Dec 13: Microsoft Azure AI – Vision and Document Python SDK
- Dec 14: Microsoft Azure AI – Content safety AI service
- Dec 15: Microsoft Azure AI – Content safety Python SDK
- Dec 16: Microsoft Azure AI – Fine-tuning a model
- Dec 17: Microsoft Azure AI – Azure OpenAI service
- Dec 18: Microsoft Azure AI – Azure AI Hub and Azure AI Project
- Dec 19: Microsoft Azure AI – Azure AI Foundry management center
- Dec 20: Microsoft Azure AI – Models and endpoints in Azure AI Foundry
- Dec 21: Microsoft Azure AI – Prompt flow in Azure AI Foundry
Prompt Flow is particularly beneficial for organisations leveraging AI to streamline operations, enhance customer experiences, and innovate in digital transformation projects.
Why Use Prompt Flow?
- Ease of Use: Simplifies the complex process of prompt engineering and integration.
- Scalability: Makes it easier to deploy and scale AI workflows across applications.
- Cost-Effectiveness: Helps optimize prompts, reducing unnecessary API calls and improving efficiency.
With Python you can start using Prompt flow by installing the package:

```bash
pip install promptflow
```

and creating a prompty file (save it as my_prompty_file.yaml):
```yaml
---
name: Minimal Chat
model:
  api: chat
  configuration:
    type: azure_openai
    azure_deployment: gpt-35-turbo
    api_key: ${env:AZURE_OPENAI_API_KEY}
    api_version: ${env:AZURE_OPENAI_API_VERSION}
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
  parameters:
    temperature: 0.2
    max_tokens: 1024
inputs:
  question:
    type: string
sample:
  question: "Where can I get the most famous pasta in the world?"
---

system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly, succinctly,
and in a personable manner using markdown and even add some personal flair with appropriate emojis.

# Safety
- You **should always** reference factual statements to search results based on [relevant documents]
- Search results based on [relevant documents] may be incomplete or irrelevant. You do not make assumptions

# Customer
You are helping to find answers to their questions.
Use their name to address them in your responses.

user:
{{question}}
```
The environment variables and model parameters can also be overridden when loading the prompty:

```python
from promptflow.core import Prompty

# Load the prompty with a dictionary override of the model configuration
override_model = {
    "configuration": {
        "api_key": "${env:AZURE_OPENAI_API_KEY}",
        "api_version": "${env:AZURE_OPENAI_API_VERSION}",
        "azure_endpoint": "${env:AZURE_OPENAI_ENDPOINT}"
    },
    "parameters": {"max_tokens": 512}
}

prompty = Prompty.load(source="path/to/my_prompty_file.yaml", model=override_model)
```
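If the environment variables are not already set in your shell, a minimal sketch for setting them from Python before loading the prompty could look like this (the values below are placeholders, not real credentials):

```python
import os

# Placeholder values -- replace them with your own Azure OpenAI resource details
os.environ["AZURE_OPENAI_API_KEY"] = "<your-api-key>"
os.environ["AZURE_OPENAI_API_VERSION"] = "2024-02-15-preview"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-resource>.openai.azure.com/"
```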
The path to the prompty (YAML) file is the one passed to the Prompty.load() function. To orchestrate and call the flow:
```python
from promptflow.core import Prompty

# Load the prompty and call it as a function with the inputs defined in the file
prompty_obj = Prompty.load(source="path/to/my_prompty_file.yaml")
result = prompty_obj(question="What is the capital of France?")
```
Prompty gives you the ability to create end-to-end solutions, such as RAG, where you chat with an LLM over an article or document, or where you ask it to classify input data (a list of URLs, …).

A prompty is a markdown file with a YAML front matter that encapsulates a series of metadata fields defining the model’s configuration and the inputs. After this front matter comes the prompt template, written in Jinja format.
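As an illustration of the classification scenario, a minimal sketch could simply loop over a list of URLs and reuse the question input defined above (the URLs and the classification wording are made up for this example):

```python
from promptflow.core import Prompty

# Hypothetical example: classify a small list of URLs with the prompty defined above
prompty_obj = Prompty.load(source="path/to/my_prompty_file.yaml")

urls = [
    "https://example.com/blog/azure-ai-foundry",
    "https://example.com/shop/pasta-maker",
]

for url in urls:
    answer = prompty_obj(question=f"Classify this URL as 'news', 'shop' or 'other': {url}")
    print(url, "->", answer)
```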
| Field | Description |
|---|---|
| name | The name of the prompt. |
| description | A description of the prompt. |
| model | Details the prompty’s model configuration, including connection info and parameters for the LLM request. |
| inputs | The input definition that is passed to the prompt template. |
| outputs | Specifies the fields of the prompty result (only applies when response_format is json_object). |
| sample | Offers a dictionary or JSON file containing sample data for the inputs. |
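Tying the outputs and response_format fields together, a minimal sketch (assuming the deployed model supports the Azure OpenAI JSON mode) could override the request parameters at load time so the model returns a JSON object:

```python
from promptflow.core import Prompty

# Assumption: the deployment behind the prompty supports response_format = json_object
override_model = {
    "parameters": {
        "response_format": {"type": "json_object"},
        "max_tokens": 256,
    }
}

prompty = Prompty.load(source="path/to/my_prompty_file.yaml", model=override_model)
result = prompty(question="Return the capital of France as JSON with a 'capital' key.")
print(result)
```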
You can run this from VS Code using the SDK or the CLI, and you can also use the Trace UI to analyse the flow and the individual runs.
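A minimal sketch of turning on local tracing from Python (assuming the tracing support that ships with the promptflow package) could look like this; the console prints a link to the local Trace UI where the run can be inspected:

```python
from promptflow.tracing import start_trace
from promptflow.core import Prompty

# Start the local trace collector; the console prints a link to the Trace UI
start_trace()

prompty_obj = Prompty.load(source="path/to/my_prompty_file.yaml")
result = prompty_obj(question="Where can I get the most famous pasta in the world?")
print(result)
```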
Tomorrow we will look into the Tracing feature in Azure AI Foundry.
All of the code samples will be available on my GitHub.