Develop a multimodal generative AI app
In this exercise, you use the Phi-4-multimodal-instruct generative AI model to generate responses to prompts that include text, images, and audio. You’ll develop an app that provides AI assistance with fresh produce in a grocery store by using Azure AI Foundry and the Azure AI Model Inference service.
This exercise takes approximately 30 minutes.
Note: This exercise is based on pre-release SDKs, which may be subject to change. Where necessary, we’ve used specific versions of packages, which may not reflect the latest available versions. You may experience some unexpected behavior, warnings, or errors.
Create an Azure AI Foundry project
Let’s start by creating an Azure AI Foundry project.
- In a web browser, open the Azure AI Foundry portal at https://ai.azure.com and sign in using your Azure credentials. Close any tips or quick start panes that open the first time you sign in, and if necessary use the Azure AI Foundry logo at the top left to navigate to the home page.
- On the home page, select + Create project.
- In the Create a project wizard, enter a valid name for your project and if an existing hub is suggested, choose the option to create a new one. Then review the Azure resources that will be automatically created to support your hub and project.
- Select Customize and specify the following settings for your hub:
- Hub name: A valid name for your hub
- Subscription: Your Azure subscription
- Resource group: Create or select a resource group
- Location: Select any of the following regions*:
- East US
- East US 2
- North Central US
- South Central US
- Sweden Central
- West US
- West US 3
- Connect Azure AI Services or Azure OpenAI: Create a new AI Services resource
- Connect Azure AI Search: Skip connecting
* At the time of writing, the Microsoft Phi-4-multimodal-instruct model we’re going to use in this exercise is available in these regions. You can check the latest regional availability for specific models in the Azure AI Foundry documentation. In the event of a regional quota limit being reached later in the exercise, there’s a possibility you may need to create another resource in a different region.
- Select Next and review your configuration. Then select Create and wait for the process to complete.
- When your project is created, close any tips that are displayed and review the project page in the Azure AI Foundry portal.
Deploy a model
Now you’re ready to deploy a Phi-4-multimodal-instruct model to support multimodal prompts.
- In the toolbar at the top right of your Azure AI Foundry project page, use the Preview features (⏿) icon to ensure that the Deploy models to Azure AI model inference service feature is enabled. This feature ensures your model deployment is available to the Azure AI Inference service, which you’ll use in your application code.
- In the pane on the left for your project, in the My assets section, select the Models + endpoints page.
- In the Models + endpoints page, in the Model deployments tab, in the + Deploy model menu, select Deploy base model.
- Search for the Phi-4-multimodal-instruct model in the list, and then select and confirm it.
- Agree to the license agreement if prompted, and then deploy the model with the following settings by selecting Customize in the deployment details:
- Deployment name: A valid name for your model deployment
- Deployment type: Global Standard
- Deployment details: Use the default settings
- Wait for the deployment provisioning state to be Completed.
Create a client application
Now that you’ve deployed the model, you can use the deployment in a client application.
Tip: You can choose to develop your solution using Python or Microsoft C#. Follow the instructions in the appropriate section for your chosen language.
Prepare the application configuration
- In the Azure AI Foundry portal, view the Overview page for your project.
- In the Project details area, note the Project connection string. You’ll use this connection string to connect to your project in a client application.
- Open a new browser tab (keeping the Azure AI Foundry portal open in the existing tab). Then in the new tab, browse to the Azure portal at https://portal.azure.com, signing in with your Azure credentials if prompted. Close any welcome notifications to see the Azure portal home page.
- Use the [>_] button to the right of the search bar at the top of the page to create a new Cloud Shell in the Azure portal, selecting a PowerShell environment with no storage in your subscription.

  The cloud shell provides a command-line interface in a pane at the bottom of the Azure portal. You can resize or maximize this pane to make it easier to work in.

  Note: If you have previously created a cloud shell that uses a Bash environment, switch it to PowerShell.
- In the cloud shell toolbar, in the Settings menu, select Go to Classic version (this is required to use the code editor).

  Ensure you've switched to the classic version of the cloud shell before continuing.
- In the cloud shell pane, enter the following commands to clone the GitHub repo containing the code files for this exercise (type the commands, or copy them to the clipboard and then right-click in the command line and paste as plain text):

```
rm -r mslearn-ai-foundry -f
git clone https://github.com/microsoftlearning/mslearn-ai-studio mslearn-ai-foundry
```
Tip: As you paste commands into the cloud shell, the output may take up a large amount of the screen buffer. You can clear the screen by entering the cls command to make it easier to focus on each task.

- After the repo has been cloned, navigate to the folder containing the application code files:
Python

```
cd mslearn-ai-foundry/labfiles/multimodal/python
```

C#

```
cd mslearn-ai-foundry/labfiles/multimodal/c-sharp
```
- In the cloud shell command-line pane, enter the following commands to install the libraries you’ll use:

Python

```
python -m venv labenv
./labenv/bin/Activate.ps1
pip install python-dotenv azure-identity azure-ai-projects azure-ai-inference
```

C#

```
dotnet add package Azure.Identity
dotnet add package Azure.AI.Projects --version 1.0.0-beta.3
dotnet add package Azure.AI.Inference --version 1.0.0-beta.3
```
- Enter the following command to edit the configuration file that has been provided:

Python

```
code .env
```

C#

```
code appsettings.json
```

The file is opened in a code editor.
- In the code file, replace the your_project_connection_string placeholder with the connection string for your project (copied from the project Overview page in the Azure AI Foundry portal), and the your_model_deployment placeholder with the name you assigned to your Phi-4-multimodal-instruct model deployment.
- After you’ve replaced the placeholders, in the code editor, use the CTRL+S command or Right-click > Save to save your changes and then use the CTRL+Q command or Right-click > Quit to close the code editor while keeping the cloud shell command line open.
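For reference, a completed Python configuration file follows the pattern sketched below. The key names shown here are illustrative assumptions, not the actual keys; keep the keys that are already present in the provided file and replace only the placeholder values:

```
PROJECT_CONNECTION_STRING=<your project connection string from the Overview page>
MODEL_DEPLOYMENT=<your Phi-4-multimodal-instruct deployment name>
```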
Write code to connect to your project and get a chat client for your model
Tip: As you add code, be sure to maintain the correct indentation.
- Enter the following command to edit the code file that has been provided:

Python

```
code chat-app.py
```

C#

```
code Program.cs
```
- In the code file, note the existing statements that have been added at the top of the file to import the necessary SDK namespaces. Then, under the comment Add references, add the following code to reference the namespaces in the libraries you installed previously:

Python

```python
# Add references
from dotenv import load_dotenv
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.inference.models import (
    SystemMessage,
    UserMessage,
    TextContentItem,
    ImageContentItem,
    ImageUrl,
    AudioContentItem,
    InputAudio,
    AudioContentFormat,
)
```

C#

```csharp
// Add references
using Azure.Identity;
using Azure.AI.Projects;
using Azure.AI.Inference;
```
- In the main function, under the comment Get configuration settings, note that the code loads the project connection string and model deployment name values you defined in the configuration file.
- Under the comment Initialize the project client, add the following code to connect to your Azure AI Foundry project using the Azure credentials you’re currently signed in with:

Python

```python
# Initialize the project client
project_client = AIProjectClient.from_connection_string(
    conn_str=project_connection,
    credential=DefaultAzureCredential())
```

C#

```csharp
// Initialize the project client
var projectClient = new AIProjectClient(project_connection,
    new DefaultAzureCredential());
```
- Under the comment Get a chat client, add the following code to create a client object for chatting with your model:

Python

```python
# Get a chat client
chat_client = project_client.inference.get_chat_completions_client(model=model_deployment)
```

C#

```csharp
// Get a chat client
ChatCompletionsClient chat = projectClient.GetChatCompletionsClient();
```
Write code to use a text-based prompt
- Note that the code includes a loop to allow a user to input a prompt until they enter “quit”. Then, in the loop section, under the comment Get a response to text input, add the following code to submit a text-based prompt and retrieve the response from your model:

Python

```python
# Get a response to text input
response = chat_client.complete(
    messages=[
        SystemMessage(system_message),
        UserMessage(content=[TextContentItem(text=prompt)])
    ])
print(response.choices[0].message.content)
```

C#

```csharp
// Get a response to text input
var requestOptions = new ChatCompletionsOptions()
{
    Model = model_deployment,
    Messages =
    {
        new ChatRequestSystemMessage(system_message),
        new ChatRequestUserMessage(prompt),
    }
};
Response<ChatCompletions> response = chat.Complete(requestOptions);
Console.WriteLine(response.Value.Content);
```
- Use the CTRL+S command to save your changes to the code file - don’t close it yet though.
- In the cloud shell command-line pane beneath the code editor, enter the following command to run the app:

Python

```
python chat-app.py
```

C#

```
dotnet run
```
- When prompted, enter 1 to use a text-based prompt, and then enter this prompt:

  I want to make an apple pie. What kind of apple should I use?

- Review the response. Then enter quit to exit the program.
Write code to use an image-based prompt
- In the code editor for your code file, in the loop section, under the comment Get a response to image input, add the following code to submit a prompt that includes an image:

Python

```python
# Get a response to image input
# (Uses base64 and urllib.request.urlopen/Request - add these imports if the starter file doesn't already include them)
image_url = "https://github.com/microsoftlearning/mslearn-ai-studio/raw/refs/heads/main/labfiles/multimodal/orange.jpg"
image_format = "jpeg"
request = Request(image_url, headers={"User-Agent": "Mozilla/5.0"})
image_data = base64.b64encode(urlopen(request).read()).decode("utf-8")
data_url = f"data:image/{image_format};base64,{image_data}"
response = chat_client.complete(
    messages=[
        SystemMessage(system_message),
        UserMessage(content=[
            TextContentItem(text=prompt),
            ImageContentItem(image_url=ImageUrl(url=data_url))
        ]),
    ]
)
print(response.choices[0].message.content)
```

C#

```csharp
// Get a response to image input
string imageUrl = "https://github.com/microsoftlearning/mslearn-ai-studio/raw/refs/heads/main/labfiles/multimodal/orange.jpg";
ChatCompletionsOptions requestOptions = new ChatCompletionsOptions()
{
    Messages =
    {
        new ChatRequestSystemMessage(system_message),
        new ChatRequestUserMessage(
        [
            new ChatMessageTextContentItem(prompt),
            new ChatMessageImageContentItem(new Uri(imageUrl))
        ]),
    },
    Model = model_deployment
};
var response = chat.Complete(requestOptions);
Console.WriteLine(response.Value.Content);
```
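The Python version above inlines the image as a base64 data URL rather than passing the remote URL directly to the model. The encoding step can be illustrated in isolation with a few lines of standard-library Python; the short byte string here is a stand-in for real downloaded JPEG bytes, not actual image data:

```python
import base64

# A placeholder standing in for the bytes returned by urlopen(request).read()
image_bytes = b"\xff\xd8\xff\xe0fake-jpeg-bytes"
image_format = "jpeg"

# Base64-encode the bytes and wrap them in a data URL (RFC 2397)
image_data = base64.b64encode(image_bytes).decode("utf-8")
data_url = f"data:image/{image_format};base64,{image_data}"

print(data_url[:23])  # the URL starts with the media type header
```

The same pattern works for any binary payload you want to embed in a message instead of referencing by URL.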
- Use the CTRL+S command to save your changes to the code file - don’t close it yet though.
- In the cloud shell command-line pane beneath the code editor, enter the following command to run the app:

Python

```
python chat-app.py
```

C#

```
dotnet run
```
- When prompted, enter 2 to use an image-based prompt, and then enter this prompt:

  I don't know what kind of fruit this is. Can you identify it, and tell me what kinds of food I could make with it?

- Review the response. Then enter quit to exit the program.
Write code to use an audio-based prompt
- In the code editor for your code file, in the loop section, under the comment Get a response to audio input, add the following code to submit a prompt that includes an audio file:

Python

```python
# Get a response to audio input
file_path = "https://github.com/microsoftlearning/mslearn-ai-studio/raw/refs/heads/main/labfiles/multimodal/manzanas.mp3"
response = chat_client.complete(
    messages=[
        SystemMessage(system_message),
        UserMessage(
            [
                TextContentItem(text=prompt),
                {
                    "type": "audio_url",
                    "audio_url": {"url": file_path}
                }
            ]
        )
    ]
)
print(response.choices[0].message.content)
```

C#

```csharp
// Get a response to audio input
string audioUrl = "https://github.com/microsoftlearning/mslearn-ai-studio/raw/refs/heads/main/labfiles/multimodal/manzanas.mp3";
var requestOptions = new ChatCompletionsOptions()
{
    Messages =
    {
        new ChatRequestSystemMessage(system_message),
        new ChatRequestUserMessage(
            new ChatMessageTextContentItem(prompt),
            new ChatMessageAudioContentItem(new Uri(audioUrl))),
    },
    Model = model_deployment
};
var response = chat.Complete(requestOptions);
Console.WriteLine(response.Value.Content);
```
- Use the CTRL+S command to save your changes to the code file. You can also close the code editor (CTRL+Q) if you like.
- In the cloud shell command-line pane, enter the following command to run the app:

Python

```
python chat-app.py
```

C#

```
dotnet run
```
- When prompted, enter 3 to use an audio-based prompt, and then enter this prompt:

  What is this customer saying in English?
- Review the response.
- You can continue to run the app, choosing different prompt types and trying different prompts. When you’re finished, enter quit to exit the program.

  If you have time, you can modify the code to use a different system prompt and your own internet-accessible image and audio files.
Note: In this simple app, we haven’t implemented logic to retain conversation history, so the model will treat each prompt as a new request with no context of the previous prompt.
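If you want to experiment with retaining history, one common approach is to keep a running list of messages and append each user prompt and model reply before the next call. The sketch below shows only that bookkeeping: the messages are plain dicts and fake_complete is a stand-in for a real chat_client.complete call, not part of the Azure AI Inference API:

```python
# Minimal sketch of conversation-history bookkeeping, independent of any SDK.
def fake_complete(messages):
    # Stand-in for a model call: reports how much context it received.
    return f"(reply based on {len(messages)} messages)"

# Start the history with the system message, as in the lab app.
history = [{"role": "system", "content": "You are a produce expert."}]

for prompt in ["What fruit is orange?", "How do I peel it?"]:
    history.append({"role": "user", "content": prompt})
    reply = fake_complete(history)
    # Append the model's reply so the next prompt carries the full context.
    history.append({"role": "assistant", "content": reply})

print(len(history))  # system + 2 user + 2 assistant = 5 messages
```

In the real app you would build the same list with SystemMessage and UserMessage objects and pass it as the messages argument on each call, trimming old entries if the conversation grows beyond the model's context window.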
Summary
In this exercise, you used Azure AI Foundry and the Azure AI Inference SDK to create a client application that uses a multimodal model to generate responses to text, images, and audio.
Clean up
If you’ve finished exploring Azure AI Foundry, you should delete the resources you have created in this exercise to avoid incurring unnecessary Azure costs.
- Return to the browser tab containing the Azure portal (or re-open the Azure portal at https://portal.azure.com in a new browser tab) and view the contents of the resource group where you deployed the resources used in this exercise.
- On the toolbar, select Delete resource group.
- Enter the resource group name and confirm that you want to delete it.