Build a custom copilot with prompt flow in the Azure AI Foundry portal
In this exercise, you’ll use prompt flow in the Azure AI Foundry portal to create a custom copilot that takes a user prompt and chat history as inputs and uses a GPT model from Azure OpenAI to generate a response.
This exercise will take approximately 30 minutes.
Create an AI hub and project in the Azure AI Foundry portal
You start by creating an Azure AI Foundry project within an Azure AI hub:
- In a web browser, open https://ai.azure.com and sign in using your Azure credentials.
- On the home page, select + Create project.
- In the Create a project wizard, you can see all the Azure resources that will be automatically created with your project, or you can customize the following settings by selecting Customize before selecting Create:
- Hub name: A unique name
- Subscription: Your Azure subscription
- Resource group: A new resource group
- Location: Select Help me choose and then select gpt-35-turbo in the Location helper window and use the recommended region*
- Connect Azure AI Services or Azure OpenAI: (New) Autofills with your selected hub name
- Connect Azure AI Search: Skip connecting
* Azure OpenAI resources are constrained at the tenant level by regional quotas. The listed regions in the location helper include default quota for the model type(s) used in this exercise. Randomly choosing a region reduces the risk of a single region reaching its quota limit. In the event of a quota limit being reached later in the exercise, there’s a possibility you may need to create another resource in a different region. Learn more about model availability per region
- If you selected Customize, select Next and review your configuration.
- Select Create and wait for the process to complete.
Deploy a GPT model
To use a language model in prompt flow, you need to deploy a model first. The Azure AI Foundry portal allows you to deploy OpenAI models that you can use in your flows.
- In the navigation pane on the left, under My assets, select the Models + endpoints page.
- Create a new deployment of the gpt-35-turbo model with the following settings:
- Deployment name: A unique name for your model deployment
- Deployment type: Standard
- Model version: Select the default version
- AI resource: Select the resource created previously
- Tokens per Minute Rate Limit (thousands): 5K
- Content filter: DefaultV2
- Enable dynamic quota: Disabled
- Wait for the model to be deployed. When the deployment is ready, select Open in playground.
- In the chat window, enter the query:
  What can you do?
  Note that the answer is generic because there are no specific instructions for the assistant. To make it focused on a task, you can change the system prompt.
- Change the Give the model instructions and context message to the following:
  **Objective**: Assist users with travel-related inquiries, offering tips, advice, and recommendations as a knowledgeable travel agent.
  **Capabilities**:
  - Provide up-to-date travel information, including destinations, accommodations, transportation, and local attractions.
  - Offer personalized travel suggestions based on user preferences, budget, and travel dates.
  - Share tips on packing, safety, and navigating travel disruptions.
  - Help with itinerary planning, including optimal routes and must-see landmarks.
  - Answer common travel questions and provide solutions to potential travel issues.
  **Instructions**:
  1. Engage with the user in a friendly and professional manner, as a travel agent would.
  2. Use available resources to provide accurate and relevant travel information.
  3. Tailor responses to the user's specific travel needs and interests.
  4. Ensure recommendations are practical and consider the user's safety and comfort.
  5. Encourage the user to ask follow-up questions for further assistance.
- Select Apply changes.
- In the chat window, enter the same query as before:
What can you do?
Note the change in response.
Now that you have played around with the system message for the deployed GPT model, you can further customize the application by working with prompt flow.
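Outside the playground, the same behavior can also be reproduced programmatically. The following is a minimal sketch using the Azure OpenAI Python SDK; the endpoint, API key, API version, and deployment name are placeholders for your own values, and the system prompt is abbreviated for brevity.

```python
# Minimal sketch: calling the deployed gpt-35-turbo model with the travel-agent
# system prompt, using the openai Python package (pip install openai).
# The endpoint, API key, and deployment name below are placeholders - replace
# them with the values from your own Azure OpenAI resource and deployment.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource-name>.openai.azure.com/",  # placeholder
    api_key="<your-api-key>",                                         # placeholder
    api_version="2024-02-01",
)

# Abbreviated version of the system prompt used in the playground.
system_prompt = (
    "**Objective**: Assist users with travel-related inquiries, offering tips, "
    "advice, and recommendations as a knowledgeable travel agent."
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the deployment name you chose earlier
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What can you do?"},
    ],
)

print(response.choices[0].message.content)
```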
Create and run a chat flow in the Azure AI Foundry portal
You can create a new flow from a template, or create a flow based on your configurations in the playground. Since you were already experimenting in the playground, you’ll use this option to create a new flow.
Troubleshooting tip: Permissions error
If you receive a permissions error when you create a new prompt flow, try the following to troubleshoot:
- In the Azure portal, select the AI Services resource.
- Under Resource Management, in the Identity tab, confirm that the system-assigned managed identity is enabled.
- Navigate to the associated Storage Account. On the IAM page, add the Storage Blob Data Reader role assignment.
- Under Assign access to, choose Managed identity, select + Select members, and then select All system-assigned managed identities.
- Review and assign to save the new settings and retry the previous step.
- In the Chat playground, select Prompt flow from the top bar.
- Enter Travel-Chat as the folder name.
  A simple chat flow is created for you. Note that there are two inputs (chat history and the user’s question), an LLM node that will connect with your deployed language model, and an output to reflect the response in the chat.
To be able to test your flow, you need a running compute session.
- Select Start compute session from the top bar.
- The compute session will take 1-3 minutes to start.
- Find the LLM node named chat. Note that the prompt already includes the system prompt you specified in the chat playground.
  You still need to connect the LLM node to your deployed model.
- In the LLM node section, for Connection, select the connection that was created for you when you created the AI hub.
- For Api, select chat.
- For deployment_name, select the gpt-35-turbo model you deployed.
- For response_format, select {"type":"text"}.
- Review the prompt field and ensure it looks like the following:
  system:
  **Objective**: Assist users with travel-related inquiries, offering tips, advice, and recommendations as a knowledgeable travel agent.
  **Capabilities**:
  - Provide up-to-date travel information, including destinations, accommodations, transportation, and local attractions.
  - Offer personalized travel suggestions based on user preferences, budget, and travel dates.
  - Share tips on packing, safety, and navigating travel disruptions.
  - Help with itinerary planning, including optimal routes and must-see landmarks.
  - Answer common travel questions and provide solutions to potential travel issues.
  **Instructions**:
  1. Engage with the user in a friendly and professional manner, as a travel agent would.
  2. Use available resources to provide accurate and relevant travel information.
  3. Tailor responses to the user's specific travel needs and interests.
  4. Ensure recommendations are practical and consider the user's safety and comfort.
  5. Encourage the user to ask follow-up questions for further assistance.
  user:
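If you’d rather iterate on the flow outside the portal, the prompt flow SDK can run a downloaded copy of it locally. The sketch below assumes the flow files have been downloaded into a local folder named travel-chat, the promptflow package is installed, and a local connection to your Azure OpenAI resource has already been configured; the import path may differ slightly between promptflow versions.

```python
# Minimal sketch: testing a downloaded chat flow locally with the prompt flow SDK
# (pip install promptflow). Assumes the flow files from the portal are saved in a
# local folder named "travel-chat" and that a connection to your Azure OpenAI
# resource has been configured locally - adjust names to match your setup.
from promptflow.client import PFClient

pf = PFClient()

result = pf.test(
    flow="./travel-chat",          # folder containing the flow definition (assumed path)
    inputs={
        "chat_history": [],        # the flow's chat history input
        "question": "I have one day in London, what should I do?",
    },
)

print(result)
```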
Test and deploy the flow
Now that you’ve developed the flow, you can use the chat window to test the flow.
- Ensure the compute session is running.
- Select Save.
- Select Chat to test the flow.
- Enter the query:
  I have one day in London, what should I do?
  and review the output.
When you’re satisfied with the behavior of the flow you created, you can deploy the flow.
- Select Deploy to deploy the flow with the following settings:
- Basic settings:
- Endpoint: New
- Endpoint name: Enter a unique name
- Deployment name: Enter a unique name
- Virtual machine: Standard_DS3_v2
- Instance count: 3
- Inferencing data collection: Enabled
- Advanced settings:
- Use the default settings
- In Azure AI Foundry portal, in your project, in the navigation pane on the left, under My assets, select the Models + endpoints page.
- Note that by default the Model deployments are listed, including your deployed language model and deployed flow. It may take some time before the deployment is listed and successfully created.
- When the deployment has succeeded, select it. Then, on its Test page, enter the prompt:
  What is there to do in San Francisco?
  and review the response.
- Enter the prompt:
  Where else could I go?
  and review the response.
- View the Consume page for the endpoint, and note that it contains connection information and sample code that you can use to build a client application for your endpoint, enabling you to integrate the prompt flow solution into an application as a custom copilot.
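As a rough sketch of what such a client might look like, the Python snippet below calls a deployed prompt flow endpoint over REST. The endpoint URL, key, deployment name, and the input and output field names (chat_history, question, answer) are assumptions based on this exercise’s flow; copy the real values and payload shape from your endpoint’s Consume page.

```python
# Minimal sketch: calling the deployed prompt flow endpoint over REST
# (pip install requests). The endpoint URL, API key, deployment name, and the
# input/output field names below are placeholders based on this exercise's flow -
# take the real values and payload shape from the endpoint's Consume page.
import requests

endpoint_url = "https://<your-endpoint-name>.<region>.inference.ml.azure.com/score"  # placeholder
api_key = "<your-endpoint-key>"                                                      # placeholder

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
    "azureml-model-deployment": "<your-deployment-name>",  # placeholder deployment name
}

payload = {
    "chat_history": [],
    "question": "What is there to do in San Francisco?",
}

response = requests.post(endpoint_url, headers=headers, json=payload)
response.raise_for_status()

# The output field name matches the flow's output (commonly "answer").
print(response.json())
```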
Delete Azure resources
When you finish exploring the Azure AI Foundry portal, you should delete the resources you’ve created to avoid unnecessary Azure costs.
- Navigate to the Azure portal at https://portal.azure.com.
- In the Azure portal, on the Home page, select Resource groups.
- Select the resource group that you created for this exercise.
- At the top of the Overview page for your resource group, select Delete resource group.
- Enter the resource group name to confirm you want to delete it, and select Delete.
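If you prefer to clean up from code rather than the portal, a sketch like the following deletes the resource group with the Azure SDK for Python. It assumes the azure-identity and azure-mgmt-resource packages are installed and that you replace the subscription ID and resource group name with your own.

```python
# Minimal sketch: deleting the exercise's resource group with the Azure SDK for
# Python (pip install azure-identity azure-mgmt-resource). The subscription ID
# and resource group name are placeholders - deleting a resource group removes
# everything inside it, so double-check the name before running this.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"      # placeholder
resource_group_name = "<your-resource-group>"   # placeholder

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# begin_delete returns a poller; result() blocks until the deletion completes.
poller = client.resource_groups.begin_delete(resource_group_name)
poller.result()
print(f"Deleted resource group {resource_group_name}")
```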