Implement Azure AI Content Safety
In this exercise, you will provision a Content Safety resource, test the resource in Azure AI Studio, and test the resource in code.
Provision a Content Safety resource
If you don’t already have one, you’ll need to provision a Content Safety resource in your Azure subscription.
- Open the Azure portal at
https://portal.azure.com
, and sign in using the Microsoft account associated with your Azure subscription. - Select Create a resource.
- In the search field, search for Content Safety. Then, in the results, select Create under Azure AI Content Safety.
- Provision the resource using the following settings:
- Subscription: Your Azure subscription.
- Resource group: Choose or create a resource group.
- Region: Select East US
- Name: Enter a unique name.
- Pricing tier: Select F0 (free), or S (standard) if F0 is not available.
- Select Review + create, then select Create to provision the resource.
- Wait for deployment to complete, and then go to the resource.
- Select Access Control in the left navigation bar, then select + Add and Add role assignment.
- Scroll down to choose the Cognitive Services User role and select Next.
- Add your account to this role, and then select Review + assign.
- Select Resource Management in the left hand navigation bar and select Keys and Endpoint. Leave this page open so you can copy the keys later.
Use Azure AI Content Safety Prompt Shields
In this exercise you will use Azure AI Studio to test Content Safety Prompt Sheilds with two sample inputs. One simulates a user prompt, and the other simulates a document with potentially unsafe text embedded into it.
- In another browser tab, open the Content Safety page of Azure AI Studio and sign in.
- Under Moderate text content select Try it out.
- On the Moderate text content page, under Azure AI Services select the Content Safety resource you created earlier.
- Select Multiple risk categories in one sentence. Review the document text for potential issues.
- Select Run test and review the results.
- Optionally, alter the threshold levels and select Run test again.
- On the left navigation bar, select Protected material detection for text.
- Select Protected lyrics and note that these are the lyrics of a published song.
- Select Run test and review the results.
- On the left navigation bar, select Moderate image content.
- Select Self-harm content.
- Notice that all images are blurred by default in AI Studio. You should also be aware that the sexual content in the samples is very mild.
- Select Run test and review the results.
- On the left navigation bar, select Prompt shields.
- On the Prompt shields page, under Azure AI Services select the Content Safety resource you created earlier.
- Select Prompt & document attack content. Review the user prompt and document text for potential issues.
- Select Run test.
-
In View results, verify that Jailbreak attacks were detected in both the user prompt and the document.
[!TIP] Code is available for all of the samples in AI Studio.
- Under Next steps, under View the code select View code. The Sample code window is displayed.
- Use the down arrow to select either Python or C# and then select Copy to copy the sample code to the clipboard.
- Close the Sample code screen.
Configure your application
You will now create an application in either C# or Python.
C#
Prerequisites
- Visual Studio Code on one of the supported platforms.
- .NET 8 is the target framework for this exercise.
- The C# extension for Visual Studio Code.
Setting up
Perform the following steps to prepare Visual Studio Code for the exercise.
- Start Visual Studio Code and in the Explorer view, click Create .NET Project selecting Console App.
- Select a folder on your computer, and give the project a name. Select Create project and acknowledge the warning message.
- In the Explorer pane, expand Solution Explorer and select Program.cs.
- Build and run the project by selecting Run -> Run without Debugging.
- Under Solution Explorer, right-click the C# project and select Add NuGet Package.
- Search for Azure.AI.TextAnalytics and select the latest version.
- Search for a second NuGet Package: Microsoft.Extensions.Configuration.Json 8.0.0. The project file should now list two NuGet packages.
Add code
- Paste the sample code you copied earlier under the ItemGroup section.
- Scroll down to find Replace with your own subscription _key and endpoint.
- In the Azure portal, on the Keys and Endpoint page, copy one of the Keys (1 or 2). Replace **
** with this value. - In the Azure portal, on the Keys and Endpoint page, copy the Endpoint. Paste this value into your code to replace **
**. - In Azure AI Studio, copy the User prompt value. Paste this into your code to replace **
**. - Scroll down to **
** and delete this line of code. - In Azure AI Studio, copy the Document value.
- Scroll down to **
** and paste your document value. - Select Run -> Run without Debugging and verify that an attack was detected.
Python
Prerequisites
-
Visual Studio Code on one of the supported platforms.
-
The Python extension is installed for Visual Studio Code.
-
The requests module is installed.
- Create a new Python file with a .py extension and give it a suitable name.
- Paste the sample code you copied earlier.
- Scroll down to find the section titled Replace with your own subscription _key and endpoint.
- In the Azure portal, on the Keys and Endpoint page, copy one of the Keys (1 or 2). Replace **
** with this value. - In the Azure portal, on the Keys and Endpoint page, copy the Endpoint. Paste this value into your code to replace **
**. - In Azure AI Studio, copy the User prompt value. Paste this into your code to replace **
**. - Scroll down to **
** and delete this line of code. - In Azure AI Studio, copy the Document value.
- Scroll down to **
** and paste your document value. -
From the integrated terminal for your file, run the program, eg:
.\prompt-shield.py
- Validate that an attack is detected.
- Optionally, you can experiment with different test content and document values.