In this lab, you develop the code for an AI-powered assistant designed to automate development operations and streamline tasks. You use the Semantic Kernel SDK to build the AI assistant and connect it to the large language model (LLM) service. The Semantic Kernel SDK allows you to create a smart application that can interact with the LLM service, respond to natural language queries, and provide personalized insights to the user. For this exercise, mock functions are provided to represent typical DevOps tasks. Let's get started!
This lab combines all of the skills learned in the Develop generative AI apps with Azure OpenAI and Semantic Kernel learning path.
This exercise takes approximately 30 minutes to complete.
In a web browser, open the Azure AI Foundry portal at https://ai.azure.com and sign in using your Azure credentials. Close any tips or quick start panes that are opened the first time you sign in, and if necessary use the Azure AI Foundry logo at the top left to navigate to the home page, which looks similar to the following image (close the Help pane if it's open):
gpt-4o model, which we'll use in our project. Some Azure AI resources are constrained by regional model quotas. If a quota limit is exceeded later in the exercise, you may need to create another resource in a different region.
In the navigation pane on the left, select Overview to see the main page for your project, which looks like this:
Note: If an Insufficient permissions error is displayed, use the Fix me button to resolve it.
Under the Libraries section of the overview page, select Azure OpenAI.
You’ll use the data here in the next task to build your kernel. Remember to keep your keys private and secure!
Now that you deployed a model, you can use the Semantic Kernel SDK to create a client application that chats with it. Let’s get started!
Open a new browser tab (keeping the Azure AI Foundry portal open in the existing tab). Then in the new tab, browse to the Azure portal at https://portal.azure.com, signing in with your Azure credentials if prompted.
Close any welcome notifications to see the Azure portal home page.
Use the [>_] button to the right of the search bar at the top of the page to create a new Cloud Shell in the Azure portal, selecting a PowerShell environment with no storage in your subscription.
The cloud shell provides a command-line interface in a pane at the bottom of the Azure portal. You can resize or maximize this pane to make it easier to work in.
Note: If you have previously created a cloud shell that uses a Bash environment, switch it to PowerShell.
In the cloud shell toolbar, in the Settings menu, select Go to Classic version (this is required to use the code editor).
Ensure you've switched to the classic version of the cloud shell before continuing.
In the cloud shell pane, enter the following commands to clone the GitHub repo containing the code files for this exercise (type the command, or copy it to the clipboard and then right-click in the command line and paste as plain text):
rm -r mslearn-ai-semantic-kernel -f
git clone https://github.com/MicrosoftLearning/mslearn-ai-semantic-kernel mslearn-ai-semantic-kernel
Tip: As you paste commands into the cloud shell, the output may take up a large amount of the screen buffer. You can clear the screen by entering the cls command to make it easier to focus on each task.
After the repo has been cloned, navigate to the folder containing the chat application code files:
Note: Follow the steps for your chosen programming language.
Python
cd mslearn-ai-semantic-kernel/Labfiles/05-ai-assistant/Python
C#
cd mslearn-ai-semantic-kernel/Labfiles/05-ai-assistant/C-sharp
In the cloud shell command-line pane, enter the following command to install the libraries you’ll use:
Python
python -m venv labenv
./labenv/bin/Activate.ps1
pip install python-dotenv azure-identity semantic-kernel[azure]
C#
dotnet add package Microsoft.Extensions.Configuration
dotnet add package Microsoft.Extensions.Configuration.Json
dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.SemanticKernel.PromptTemplates.Handlebars
Enter the following command to edit the configuration file that has been provided:
Python
code .env
C#
code appsettings.json
The file should open in a code editor.
In the code file, replace the your_project_endpoint and your_project_api_key placeholders with the Azure OpenAI endpoint and API key for your project (copied from the project Overview page in the Azure AI Foundry portal), and replace the your_deployment_name placeholder with the name you assigned to your gpt-4o model.
After you replace the placeholders, in the code editor, use the CTRL+S command or Right-click > Save to save your changes and then use the CTRL+Q command or Right-click > Quit to close the code editor while keeping the cloud shell command line open.
Enter the following command to edit the code file that has been provided:
Python
code devops.py
C#
code Program.cs
Add the following code under the comment Create a kernel builder with Azure OpenAI chat completion:
Python
# Create a kernel builder with Azure OpenAI chat completion
kernel = Kernel()
chat_completion = AzureChatCompletion(
    api_key=api_key,
    endpoint=endpoint,
    deployment_name=deployment_name
)
kernel.add_service(chat_completion)
C#
// Create a kernel builder with Azure OpenAI chat completion
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);
var kernel = builder.Build();
Near the bottom of the file, find the comment Create a kernel function to build the stage environment, and add the following code to create a mock plugin function that builds the staging environment:
Python
# Create a kernel function to build the stage environment
@kernel_function(name="BuildStageEnvironment")
def build_stage_environment(self):
    return "Stage build completed."
C#
// Create a kernel function to build the stage environment
[KernelFunction("BuildStageEnvironment")]
public string BuildStageEnvironment()
{
    return "Stage build completed.";
}
The KernelFunction decorator declares your native function. You use a descriptive name for the function so that the AI can call it correctly.
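Functions can also include a description that helps the model decide when to call them. As a minimal sketch (this RestartService function is hypothetical and isn't part of the lab files):
# Hypothetical example: the description tells the model when this function applies
@kernel_function(
    name="RestartService",
    description="Restarts the named service in the staging environment"
)
def restart_service(self, service_name: str = "") -> str:
    return f"Service {service_name} restarted."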
Navigate to the comment Import plugins to the kernel and add the following code:
Python
# Import plugins to the kernel
kernel.add_plugin(DevopsPlugin(), plugin_name="DevopsPlugin")
C#
// Import plugins to the kernel
kernel.ImportPluginFromType<DevopsPlugin>();
Under the comment Create prompt execution settings, add the following code to enable automatic function invocation:
Python
# Create prompt execution settings
execution_settings = AzureChatPromptExecutionSettings()
execution_settings.function_choice_behavior = FunctionChoiceBehavior.Auto()
C#
// Create prompt execution settings
OpenAIPromptExecutionSettings openAIPromptExecutionSettings = new()
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};
Using this setting will allow the kernel to automatically invoke functions without the need to specify them in the prompt.
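By default, Auto() advertises every registered function to the model. If you need to limit what the model can call, Auto() also accepts an optional filter; the following sketch is hypothetical for this lab, and the fully qualified function name format shown is an assumption:
# Hypothetical: restrict automatic invocation to a single function
execution_settings.function_choice_behavior = FunctionChoiceBehavior.Auto(
    filters={"included_functions": ["DevopsPlugin-BuildStageEnvironment"]}
)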
Add the following code under the comment Create chat history:
Python
# Create chat history
chat_history = ChatHistory()
C#
// Create chat history
var chatCompletionService = kernel.GetRequiredService<IChatCompletionService>();
ChatHistory chatHistory = [];
Uncomment the code block located after the comment User interaction logic.
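The uncommented block implements a standard chat loop. As a rough sketch of the pattern (the scaffold's actual code may differ in its details):
# Rough sketch of the interaction loop; the provided scaffold may differ
user_input = input("User: ")
while user_input.strip() != "":
    chat_history.add_user_message(user_input)
    response = await chat_completion.get_chat_message_content(
        chat_history=chat_history,
        settings=execution_settings,
        kernel=kernel,
    )
    print(f"Assistant: {response}")
    chat_history.add_assistant_message(str(response))
    user_input = input("User: ")
Pressing Enter on an empty prompt exits the loop, which is why the program ends when you press Enter later in this exercise.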
Use the CTRL+S command to save your changes to the code file.
In the cloud shell command-line pane, enter the following command to sign into Azure.
az login
You must sign into Azure, even though the cloud shell session is already authenticated.
Note: In most scenarios, just using az login will be sufficient. However, if you have subscriptions in multiple tenants, you may need to specify the tenant by using the --tenant parameter. See Sign into Azure interactively using the Azure CLI for details.
When prompted, follow the instructions to open the sign-in page in a new tab and enter the authentication code provided and your Azure credentials. Then complete the sign in process in the command line, selecting the subscription containing your Azure AI Foundry hub if prompted.
After you have signed in, enter the following command to run the application:
Python
python devops.py
C#
dotnet run
When prompted, enter the following prompt: Please build the stage environment
You should see a response similar to the following output:
Assistant: The stage environment has been successfully built.
Next, enter the following prompt: Please deploy the stage environment
You should see a response similar to the following output:
Assistant: The staging site has been deployed successfully.
Press Enter to end the program.
Add the following code under the comment Create a kernel function to deploy the staging environment:
Python
# Create a kernel function to deploy the staging environment
deploy_stage_function = KernelFunctionFromPrompt(
    prompt="""This is the most recent build log:
    {{DevopsPlugin.ReadLogFile}}
    If there are errors, do not deploy the stage environment. Otherwise, invoke the stage deployment function""",
    function_name="DeployStageEnvironment",
    description="Deploy the staging environment"
)
kernel.add_function(plugin_name="DeployStageEnvironment", function=deploy_stage_function)
C#
// Create a kernel function to deploy the staging environment
var deployStageFunction = kernel.CreateFunctionFromPrompt(
    promptTemplate: @"This is the most recent build log:
    {{DevopsPlugin.ReadLogFile}}
    If there are errors, do not deploy the stage environment. Otherwise, invoke the stage deployment function",
    functionName: "DeployStageEnvironment",
    description: "Deploy the staging environment"
);
kernel.Plugins.AddFromFunctions("DeployStageEnvironment", [deployStageFunction]);
The {{DevopsPlugin.ReadLogFile}} template call injects the build log from the plugin into the prompt, so the model can check it for errors before invoking the deployment function.
Use the CTRL+S command to save your changes to the code file.
In the cloud shell command-line pane, enter the following command to run the application:
Python
python devops.py
C#
dotnet run
When prompted, enter the following prompt: Please deploy the stage environment
You should see a response similar to the following output:
Assistant: The stage environment cannot be deployed because the earlier stage build failed due to unit test errors. Deploying a faulty build to stage may cause eventual issues and compromise the environment.
Your response from the LLM may vary, but it should still prevent you from deploying the staging site.
Add the following code under the comment Create a handlebars prompt:
Python
# Create a handlebars prompt
hb_prompt = """<message role="system">Instructions: Before creating a new branch for a user, request the new branch name and base branch name/message>
<message role="user">Can you create a new branch?</message>
<message role="assistant">Sure, what would you like to name your branch? And which base branch would you like to use?</message>
<message role="user"></message>
<message role="assistant">"""
C#
// Create a handlebars prompt
string hbprompt = """
<message role="system">Instructions: Before creating a new branch for a user, request the new branch name and base branch name/message>
<message role="user">Can you create a new branch?</message>
<message role="assistant">Sure, what would you like to name your branch? And which base branch would you like to use?</message>
<message role="user"></message>
<message role="assistant">
""";
In this code, you create a few-shot prompt using the Handlebars template format. The prompt will guide the model to retrieve more information from the user before creating a new branch. The {{input}} placeholder in the final user message is where the user's utterance is injected when the template is rendered.
Add the following code under the comment Create the prompt template config using handlebars format:
Python
# Create the prompt template config using handlebars format
hb_template = HandlebarsPromptTemplate(
    prompt_template_config=PromptTemplateConfig(
        template=hb_prompt,
        template_format="handlebars",
        name="CreateBranch",
        description="Creates a new branch for the user",
        input_variables=[
            InputVariable(name="input", description="The user input", is_required=True)
        ]
    ),
    allow_dangerously_set_content=True,
)
C#
// Create the prompt template config using handlebars format
var templateFactory = new HandlebarsPromptTemplateFactory();
var promptTemplateConfig = new PromptTemplateConfig()
{
    Template = hbprompt,
    TemplateFormat = "handlebars",
    Name = "CreateBranch",
};
This code creates a Handlebars template configuration from the prompt. You can use it to create a plugin function.
Add the following code under the comment Create a plugin function from the prompt:
Python
# Create a plugin function from the prompt
prompt_function = KernelFunctionFromPrompt(
    function_name="CreateBranch",
    description="Creates a branch for the user",
    template_format="handlebars",
    prompt_template=hb_template,
)
kernel.add_function(plugin_name="BranchPlugin", function=prompt_function)
C#
// Create a plugin function from the prompt
var promptFunction = kernel.CreateFunctionFromPrompt(promptTemplateConfig, templateFactory);
var branchPlugin = kernel.CreatePluginFromFunctions("BranchPlugin", [promptFunction]);
kernel.Plugins.Add(branchPlugin);
This code creates a plugin function for the prompt and adds it to the kernel. Now you’re ready to invoke your function.
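If you want to test the function outside the chat loop, you can also invoke it directly through the kernel. A minimal sketch, assuming the kernel and plugin names used above:
# Sketch: invoke the prompt function directly by plugin and function name
result = await kernel.invoke(
    plugin_name="BranchPlugin",
    function_name="CreateBranch",
    input="Please create a new branch",
)
print(result)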
Use the CTRL+S command to save your changes to the code file.
In the cloud shell command-line pane, enter the following command to run the application:
Python
python devops.py
C#
dotnet run
When prompted, enter the following text: Please create a new branch
You should see a response similar to the following output:
Assistant: Could you please provide the following details?
1. The name of the new branch.
2. The base branch from which the new branch should be created.
Enter the following text: feature-login main
You should see a response similar to the following output:
Assistant: The new branch `feature-login` has been successfully created from `main`.
Near the bottom of the file, find the comment Create a function filter, and add the following code:
Python
# Create a function filter
async def permission_filter(context: FunctionInvocationContext, next: Callable[[FunctionInvocationContext], Awaitable[None]]) -> None:
    await next(context)
    result = context.result

    # Check the plugin and function names
C#
// Create a function filter
class PermissionFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(FunctionInvocationContext context, Func<FunctionInvocationContext, Task> next)
    {
        // Check the plugin and function names
        await next(context);
    }
}
Add the following code under the comment Check the plugin and function names to detect when the DeployToProd function is invoked:
Python
# Check the plugin and function names
if context.function.plugin_name == "DevopsPlugin" and context.function.name == "DeployToProd":
    # Request user approval

    # Proceed if approved
C#
// Check the plugin and function names
if (context.Function.PluginName == "DevopsPlugin" && context.Function.Name == "DeployToProd")
{
    // Request user approval

    // Proceed if approved
}
This code uses the FunctionInvocationContext object to determine which plugin and function were invoked.
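Filters aren't limited to gating a single function. For example, a hypothetical logging filter (not part of the lab files) could observe every invocation:
# Hypothetical logging filter: reports each function call as it happens
async def logging_filter(context: FunctionInvocationContext, next: Callable[[FunctionInvocationContext], Awaitable[None]]) -> None:
    print(f"Invoking {context.function.plugin_name}.{context.function.name}")
    await next(context)
    print(f"Result: {context.result}")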
Add the following logic to request the user's permission to complete the deployment:
Python
# Request user approval
print("System Message: The assistant requires approval to complete this operation. Do you approve (Y/N)")
should_proceed = input("User: ").strip()
# Proceed if approved
if should_proceed.upper() != "Y":
    context.result = FunctionResult(
        function=result.function,
        value="The operation was not approved by the user",
    )
C#
// Request user approval
Console.WriteLine("System Message: The assistant requires an approval to complete this operation. Do you approve (Y/N)");
Console.Write("User: ");
string shouldProceed = Console.ReadLine()!;
// Proceed if approved
if (shouldProceed != "Y")
{
    context.Result = new FunctionResult(context.Result, "The operation was not approved by the user");
    return;
}
Navigate to the comment Add filters to the kernel and add the following code:
Python
# Add filters to the kernel
kernel.add_filter('function_invocation', permission_filter)
C#
// Add filters to the kernel
kernel.FunctionInvocationFilters.Add(new PermissionFilter());
Use the CTRL+S command to save your changes to the code file.
In the cloud shell command-line pane, enter the following command to run the application:
Python
python devops.py
C#
dotnet run
Enter a prompt to deploy the build to production. You should see a response similar to the following:
User: Please deploy the build to prod
System Message: The assistant requires an approval to complete this operation. Do you approve (Y/N)
User: N
Assistant: I'm sorry, but I am unable to proceed with the deployment.
In this lab, you created an endpoint for the large language model (LLM) service, built a Semantic Kernel object, and ran prompts using the Semantic Kernel SDK. You also created plugins and used system messages to guide the model. Congratulations on completing this lab!