Develop prompt and agent versions
This exercise takes approximately 30 minutes.
Note: This lab assumes a pre-configured lab environment with Visual Studio Code, Azure CLI, and Python already installed.
Introduction
In this exercise, you’ll deploy multiple versions of a Trail Guide Agent to Microsoft Foundry, each with progressively enhanced capabilities. You’ll use Python scripts to create agents with different system prompts, test their behavior in the portal, and run automated tests to compare their performance.
You’ll modify a single Python script to deploy three agent versions (V1, V2, and V3), review each deployment in the Microsoft Foundry portal, and analyze how prompt evolution affects agent behavior. This will help you understand version management strategies and the relationship between programmatic deployment and portal-based agent management.
Set up the environment
To complete the tasks in this exercise, you need:
- Visual Studio Code
- Azure subscription with Microsoft Foundry access
- Git and GitHub account
- Python 3.9 or later
- Azure CLI and Azure Developer CLI (azd) installed
All steps in this lab will be performed using Visual Studio Code and its integrated terminal.
Create repository from template
You’ll start by creating your own repository from the template to practice realistic workflows.
- In a web browser, navigate to https://github.com/MicrosoftLearning/mslearn-genaiops.
- Select Use this template > Create a new repository.
- Enter a name for your repository (e.g., mslearn-genaiops).
- Set the repository to Public or Private based on your preference.
- Select Create repository.
Clone the repository in Visual Studio Code
After creating your repository, clone it to your local machine.
- In Visual Studio Code, open the Command Palette by pressing Ctrl+Shift+P.
- Type Git: Clone and select it.
- Enter your repository URL: https://github.com/[your-username]/mslearn-genaiops.git
- Select a location on your local machine to clone the repository.
- When prompted, select Open to open the cloned repository in VS Code.
Deploy Microsoft Foundry resources
Now you’ll use the Azure Developer CLI to deploy all required Azure resources.
- In Visual Studio Code, open a terminal by selecting Terminal > New Terminal from the menu.
- Authenticate with Azure Developer CLI:

  azd auth login

  This opens a browser window for Azure authentication. Sign in with your Azure credentials.
- Authenticate with Azure CLI:

  az login

  Sign in with your Azure credentials when prompted.
- Provision resources:

  azd up

  When prompted, provide:
  - Environment name (e.g., dev, test) - Used to name all resources
  - Azure subscription - Where resources will be created
  - Location - Azure region (recommended: Sweden Central)

  The command deploys the infrastructure from the infra/ folder, creating:
  - Resource Group - Container for all resources
  - Foundry (AI Services) - The hub with access to models like GPT-4.1
  - Foundry Project - Your workspace for creating and managing agents
  - Log Analytics Workspace - Collects logs and telemetry data
  - Application Insights - Monitors agent performance and usage
- Create a .env file with the environment variables:

  azd env get-values > .env

  This creates a .env file in your project root with all the provisioned resource information.
- Add the agent configuration to your .env file:

  AGENT_NAME=trail-guide
  MODEL_NAME=gpt-4.1
Install Python dependencies
With your Azure resources deployed, install the required Python packages to work with Microsoft Foundry.
- In the VS Code terminal, create and activate a virtual environment:

  python -m venv .venv
  .venv\Scripts\Activate.ps1
- Install the required dependencies:

  python -m pip install -r requirements.txt

  This installs all necessary dependencies including:
  - azure-ai-projects - SDK for working with AI Foundry agents
  - azure-identity - Azure authentication
  - python-dotenv - Load environment variables
  - Other evaluation, testing, and development tools
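To see how these packages fit together, here is a minimal sketch (not part of the lab repo) that loads the .env values and creates a project client. The PROJECT_ENDPOINT key name and the exact client constructor are assumptions based on recent azure-ai-projects releases; verify them against the keys azd actually wrote to your .env and the SDK version you installed.

```python
# Minimal sketch, not the lab's script: connect to the Foundry project using
# the values in .env. PROJECT_ENDPOINT is an assumed key name; check your .env.
import os
from dotenv import load_dotenv
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

load_dotenv()  # read the .env file created by `azd env get-values`

project_client = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],  # assumed variable name
    credential=DefaultAzureCredential(),      # reuses your az / azd sign-in
)

print("Agent name:", os.getenv("AGENT_NAME"))
print("Model:", os.getenv("MODEL_NAME"))
```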
Deploy and test agent versions
You’ll deploy three versions of the Trail Guide Agent, each with different system prompts that progressively enhance capabilities.
Deploy trail guide agent V1
Start by deploying the first version of the trail guide agent.
- In the VS Code terminal, navigate to the trail guide agent directory:

  cd src\agents\trail_guide_agent
- Open the agent creation script (trail_guide_agent.py) and locate the line that reads the prompt file:

  with open('prompts/v1_instructions.txt', 'r') as f:
      instructions = f.read().strip()

  Verify it’s configured to read from v1_instructions.txt. (A sketch of how a script like this typically creates the agent follows these steps.)
- Run the agent creation script:

  python trail_guide_agent.py

  You should see output confirming the agent was created:

  Agent created (id: agent_xxx, name: trail-guide, version: 1)

  Note the Agent ID for later use.
- Commit your changes and tag the version:

  git add trail_guide_agent.py
  git commit -m "Deploy trail guide agent V1"
  git tag v1
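For context, the following is a hedged sketch of what a creation script like trail_guide_agent.py generally does once it has read the instructions: connect to your Foundry project and register an agent definition. The create_agent call and the PROJECT_ENDPOINT variable name are assumptions based on recent azure-ai-projects releases; the lab’s actual script may differ, so treat this as orientation rather than a replacement for the provided file.

```python
# Hedged sketch of a creation script like trail_guide_agent.py (the lab's real
# script may differ): read the prompt file and register the agent in Foundry.
import os
from dotenv import load_dotenv
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

load_dotenv()

with open('prompts/v1_instructions.txt', 'r') as f:
    instructions = f.read().strip()

project_client = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],  # assumed key name; check your .env
    credential=DefaultAzureCredential(),
)

# create_agent registers a new agent definition with the given model and prompt;
# the exact method name and signature depend on the installed SDK version.
agent = project_client.agents.create_agent(
    model=os.environ["MODEL_NAME"],
    name=os.environ["AGENT_NAME"],
    instructions=instructions,
)
print(f"Agent created (id: {agent.id}, name: {agent.name})")
```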
Test agent V1
Verify your agent is working by testing it in the Microsoft Foundry portal.
- In a web browser, open the Microsoft Foundry portal at https://ai.azure.com and sign in using your Azure credentials.
- Navigate to Agents in the left navigation.
- Select your trail-guide agent from the list.
- Test the agent by asking questions like:
- “What gear do I need for a day hike?”
- “Recommend a trail near Seattle for beginners”
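Portal testing is all this exercise requires, but if you also want to exercise the agent from code (a useful building block for later automated comparisons), a rough sketch follows. The thread, message, and run calls mirror recent versions of the Foundry agents SDK and are assumptions; check the method names against the package version you installed, and substitute your real Agent ID.

```python
# Rough sketch (optional): ask the deployed agent one question from code.
# Method names follow recent Foundry agents SDK releases and may differ in yours.
import os
from dotenv import load_dotenv
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

load_dotenv()
project_client = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],  # assumed key name
    credential=DefaultAzureCredential(),
)
agents = project_client.agents

thread = agents.threads.create()  # conversation container for this test
agents.messages.create(
    thread_id=thread.id,
    role="user",
    content="What gear do I need for a day hike?",
)
run = agents.runs.create_and_process(
    thread_id=thread.id,
    agent_id="agent_xxx",  # replace with the Agent ID you noted earlier
)

for message in agents.messages.list(thread_id=thread.id):
    # message.content is a list of content parts; inspect it to extract the text
    print(message.role, ":", message.content)
```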
Deploy trail guide agent V2
Next, deploy a second version with enhanced capabilities.
- Open trail_guide_agent.py and update the prompt file path.

  Change:

  with open('prompts/v1_instructions.txt', 'r') as f:

  To:

  with open('prompts/v2_instructions.txt', 'r') as f:
- Run the agent creation script:

  python trail_guide_agent.py

  You should see output confirming the agent was created:

  Agent created (id: agent_yyy, name: trail-guide, version: 2)

  Note the Agent ID for later use.
- Commit your changes and tag the version:

  git add trail_guide_agent.py
  git commit -m "Deploy trail guide agent V2 with enhanced capabilities"
  git tag v2
Deploy trail guide agent V3
Finally, deploy the third version with production-ready features.
- Open trail_guide_agent.py and update the prompt file path.

  Change:

  with open('prompts/v2_instructions.txt', 'r') as f:

  To:

  with open('prompts/v3_instructions.txt', 'r') as f:
- Run the agent creation script:

  python trail_guide_agent.py

  You should see output confirming the agent was created:

  Agent created (id: agent_zzz, name: trail-guide, version: 3)

  Note the Agent ID for later use.
- Commit your changes and tag the version:

  git add trail_guide_agent.py
  git commit -m "Deploy trail guide agent V3 with production features"
  git tag v3
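Editing the path by hand is fine for this short exercise. If you would rather not touch the script for each deployment, one optional variation (not part of the lab repo) reads the prompt version from a command-line argument and builds the path from it:

```python
# Optional variation, not the lab script: pick the prompt version at run time
# instead of editing the open() path before each deployment.
import sys
from pathlib import Path

version = sys.argv[1] if len(sys.argv) > 1 else "v1"  # e.g. "v2" or "v3"
prompt_path = Path("prompts") / f"{version}_instructions.txt"
instructions = prompt_path.read_text().strip()

print(f"Loaded {prompt_path} ({len(instructions)} characters)")
# ...then pass `instructions` to the same create-agent call the lab script uses.
```

You would still commit and tag each deployment as shown above so the Git history records which prompt version went out.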
Compare agent versions
Now that you have three agent versions deployed, compare their behavior and prompt evolution.
Review version history
Examine your Git tags to see the version history.
- View all version tags:

  git tag

  You should see:

  v1
  v2
  v3
- View the commit history with tags:

  git log --oneline --decorate

  This shows each deployment milestone marked with its corresponding tag.
Review prompt differences
Examine the prompt files to understand how each version evolved.
- In VS Code, open the three prompt files in the prompts/ directory:
  - v1_instructions.txt - Basic trail guide functionality
  - v2_instructions.txt - Enhanced with personalization
  - v3_instructions.txt - Production-ready with advanced capabilities
- Notice the evolution:
  - V1 → V2: Added personalization and enhanced guidance
  - V2 → V3: Added structured framework and enterprise features

  (A short script for reviewing these differences from the terminal follows these steps.)
- In the Microsoft Foundry portal, test each agent version with the same question to observe behavior differences.

  Try this question: “I’m planning a weekend hiking trip near Seattle. What should I know?”

  Observe how each version responds:
  - V1: Provides basic trail recommendations and general advice
  - V2: Adds personalized suggestions based on experience level and preferences
  - V3: Includes comprehensive guidance with safety considerations, weather factors, and detailed planning steps
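If you prefer to review the prompt changes outside of VS Code, the following optional snippet (not part of the lab) prints a unified diff of two prompt versions using only the Python standard library. Run it from the trail guide agent directory and swap the file names to compare other pairs.

```python
# Optional helper, not part of the lab: print a unified diff of two prompt
# versions so the evolution from one version to the next is easy to scan.
import difflib
from pathlib import Path

old = Path("prompts/v1_instructions.txt").read_text().splitlines()
new = Path("prompts/v2_instructions.txt").read_text().splitlines()

diff = difflib.unified_diff(
    old, new,
    fromfile="v1_instructions.txt",
    tofile="v2_instructions.txt",
    lineterm="",
)
for line in diff:
    print(line)
```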
Clean up
To avoid incurring unnecessary Azure costs, delete the resources you created in this exercise.
- In the VS Code terminal, run the following command:

  azd down
- When prompted, confirm that you want to delete the resources.
Next steps
Continue your learning journey by exploring agent evaluation techniques.
In the next lab, you’ll learn to evaluate these agent versions using manual testing processes to determine which performs better for different scenarios and customer segments.