WWL Tenants - Terms of use
If you are being provided with a tenant as a part of an instructor-led training delivery, please note that the tenant is made available for the purpose of supporting the hands-on labs in the instructor-led training.
Tenants should not be shared or used for purposes outside of hands-on labs. The tenant used in this course is a trial tenant; it cannot be used or accessed after the class is over and is not eligible for extension.
Tenants must not be converted to a paid subscription. Tenants obtained as a part of this course remain the property of Microsoft Corporation, which reserves the right to access and repossess them at any time.
Lab 4 - Exercise 1 - Protect data in AI environments
You are Joni Sherman, the Information Security Administrator for Contoso Ltd. As AI tools like Microsoft Copilot become more integrated into daily workflows, your team has been asked to assess and improve protections around sensitive data. In this lab, you’ll explore how Microsoft Purview Data Security Posture Management (DSPM) for AI can help secure data interactions with AI tools through policy enforcement, risk detection, and exposure assessments.
Tasks:
- Use DSPM for AI to create a DLP policy for generative AI sites
- Create an insider risk policy to detect risky AI interactions
- Block Copilot from accessing labeled content
- Run a data risk assessment to detect unlabeled content
Task 1 – Use DSPM for AI to create a DLP policy for generative AI sites
To reduce the risk of data loss through AI assistants, you’ll start by creating a DLP policy using the Fortify your data security recommendation. This policy uses Adaptive Protection to restrict pasting or uploading sensitive data into AI tools like ChatGPT and Copilot in Edge, Chrome, and Firefox.
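Conceptually, Adaptive Protection swaps a single static rule for enforcement that scales with a user’s insider risk level. The Python sketch below models that decision logic under stated assumptions: the names (`RiskLevel`, `enforce`) and the exact tier-to-action mapping are illustrative, not Purview’s internal implementation; the one detail taken from this lab is that elevated-risk users get Block with override for generative AI sites.

```python
from enum import Enum

class RiskLevel(Enum):
    MINOR = 1
    MODERATE = 2
    ELEVATED = 3

class Action(Enum):
    AUDIT = "audit"                         # allow, but log the event
    BLOCK_WITH_OVERRIDE = "block_override"  # block; user may justify and proceed
    BLOCK = "block"                         # block outright

# Hypothetical mapping: enforcement tightens as insider risk rises.
ENFORCEMENT = {
    RiskLevel.MINOR: Action.AUDIT,
    RiskLevel.MODERATE: Action.BLOCK_WITH_OVERRIDE,
    RiskLevel.ELEVATED: Action.BLOCK_WITH_OVERRIDE,
}

def enforce(risk: RiskLevel, activity: str, destination_group: str) -> Action:
    """Decide how to handle a paste/upload to a sensitive service domain group."""
    if destination_group != "Generative AI Websites":
        return Action.AUDIT  # out of scope for this rule
    if activity not in ("paste", "upload"):
        return Action.AUDIT
    return ENFORCEMENT[risk]

print(enforce(RiskLevel.ELEVATED, "paste", "Generative AI Websites"))
# Action.BLOCK_WITH_OVERRIDE
```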
- Sign into the Client 1 VM (SC-401-CL1) as the SC-401-cl1\admin account.
- In Microsoft Edge, navigate to https://purview.microsoft.com and sign in as Joni Sherman, JoniS@WWLxZZZZZZ.onmicrosoft.com (where ZZZZZZ is your unique tenant ID provided by your lab hosting provider).
- In Microsoft Purview, navigate to DSPM for AI by selecting Solutions > DSPM for AI > Recommendations.
- Select the Fortify your data security recommendation.
- In the Data security for AI flyout page, review the summary, then select Create policies. This creates a preconfigured DLP policy targeting generative AI sites.
- Once the policy has been created, select View policy.
- In the Policy details section, select Edit policy in solution to open the Data Loss Prevention solution in Microsoft Purview.
- On the Policies page, locate and select the DSPM for AI - Block sensitive info from AI sites policy.
- In the flyout, select View simulation.
- On the simulation dashboard, select Edit the policy.
- Select Next until you reach the Choose where to apply the policy page. Confirm the policy is scoped to Devices.
- Select Next.
- On the Customize advanced DLP rules page, select the pencil icon next to Block with override for elevated risk users to view the rule.
- Review the configuration of the rule created by DSPM for AI:
  - Under Conditions, note the sensitive info types included and that the rule uses Adaptive Protection based on elevated risk.
  - Under Actions, for both the Upload and Paste activities, select Edit next to Sensitive service domain group restriction(s).
  - In the service domain group configuration, confirm that Generative AI Websites is set to Block with override.
- Select Cancel to exit the rule editor without changes.
- Back on the Customize advanced DLP rules page, select Next.
- On the Policy mode page, select Turn the policy on if it’s not edited within fifteen days of simulation, then select Next.
- On the Review and finish page, select Submit, then select Done.
You’ve created a policy, currently running in simulation mode, that blocks high-risk users from pasting or uploading sensitive data to generative AI sites, and you’ve confirmed the configuration set by DSPM for AI.
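For context, Block with override doesn’t silently permit the action: the user sees a block dialog and can proceed only by supplying a business justification, which is recorded for later review. A rough sketch of that flow, with hypothetical names (`AuditEvent`, `handle_block_with_override`):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    user: str
    activity: str
    overridden: bool
    justification: str | None
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def handle_block_with_override(user: str, activity: str,
                               justification: str | None) -> AuditEvent:
    """Block the activity unless the user supplies an override justification."""
    overridden = bool(justification and justification.strip())
    # Either outcome is audited so admins can review overrides later.
    return AuditEvent(user, activity, overridden, justification)

event = handle_block_with_override("JoniS", "paste:generative-ai-site",
                                   "Sharing approved marketing copy")
print(event.overridden)  # True: the activity proceeds, with the justification logged
```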
Task 2 – Create an insider risk policy to detect risky AI interactions
Next, you’ll create a policy that helps detect risky prompt behavior in Copilot.
- In Microsoft Purview, navigate to DSPM for AI by selecting Solutions > DSPM for AI > Recommendations.
- Select the Detect risky interactions in AI apps (preview) recommendation.
- In the Detect risky interactions in AI apps (preview) flyout page, review the summary, then select Create policy.
- Once the policy is created, select View policy.
- In the Policy details section, select Edit policy in solution to open the Insider Risk Management area of Microsoft Purview.
- On the Policies page, locate and select the DSPM for AI - Detect risky AI usage policy.
- In the flyout, select Edit policy to review the full policy configuration.
- On the Choose a policy template page, observe that the policy uses the Risky AI usage (preview) template.
- Select Next until you reach the Choose triggering event for this policy page. Confirm that the triggering event is User account deleted from Microsoft Entra ID, which signals potential offboarding-related risks that might precede or follow risky AI activity.
- Select Next.
- On the Indicators page, expand the indicator categories to review which signals are selected:
  - Browsed to generative AI websites
  - Received sensitive response from Copilot
  - Entered risky prompt in Copilot
- Select Next until you reach the Review and finish page, then select Cancel to exit the editor without making changes.
You’ve created a policy that detects risky AI interactions, including prompts and responses, to help identify early signs of risky user behavior.
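One way to think about the policy you just reviewed: the triggering event (account deletion) brings a user into scope, and the selected indicators then contribute to a risk score that can raise alerts. The sketch below models that aggregation; the weights, threshold, and function names are invented for illustration and are not how Insider Risk Management actually scores activity.

```python
# Simplified model of indicator-based scoring; weights and threshold are invented.
INDICATOR_WEIGHTS = {
    "browsed_generative_ai_site": 1,
    "received_sensitive_copilot_response": 3,
    "entered_risky_copilot_prompt": 5,
}
ALERT_THRESHOLD = 6

def risk_score(observed: list[str]) -> int:
    """Sum the weights of the indicators observed for a user."""
    return sum(INDICATOR_WEIGHTS.get(i, 0) for i in observed)

def should_alert(observed: list[str], policy_triggered: bool) -> bool:
    # Indicators only count after the triggering event (here, account
    # deletion) brings the user into scope of the policy.
    return policy_triggered and risk_score(observed) >= ALERT_THRESHOLD

print(should_alert(["entered_risky_copilot_prompt",
                    "received_sensitive_copilot_response"], True))  # True
```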
Task 3 – Block Copilot from accessing labeled content
You can further reduce risk by preventing Copilot from processing or responding with content protected by sensitivity labels.
- In Microsoft Purview, navigate to DSPM for AI by selecting Solutions > DSPM for AI > Recommendations.
- Select the Protect sensitive data referenced in Microsoft 365 Copilot and agents (preview) recommendation.
- Review the guidance provided in this recommendation.
- Navigate to Solutions > Data Loss Prevention > Policies.
- Select + Create policy, then choose Custom policy.
- On the Name your DLP policy page, enter:
  - Name: DLP - Block Copilot access to labeled content
  - Description: Prevents Microsoft 365 Copilot from processing or responding with content labeled using sensitivity labels.
- Select Next until you reach the Choose where to apply the policy page.
- Select Microsoft 365 Copilot (preview) as the policy scope, then select Next until you reach the Customize advanced DLP rules page.
- Select Create rule, and configure:
  - Name: Prevent Copilot from accessing labeled data
  - Under Conditions, select Add condition > Content contains > Sensitivity labels. Add these sensitivity labels:
    - Trusted People
    - Project - Falcon
    - Financial Data
  - Select Add.
  - Under Actions, select Add an action > Prevent Copilot from processing content (preview).
  - Select Save at the bottom of the Create rule flyout.
- Back on the Customize advanced DLP rules page, select Next.
- On the Policy mode page, select Turn the policy on immediately, then select Next.
- On the Review and finish page, select Submit, then select Done on the New policy created page.
- Return to DSPM for AI recommendations by selecting Solutions > DSPM for AI > Recommendations.
- Select the Protect sensitive data referenced in Microsoft 365 Copilot and agents (preview) recommendation and select Mark as complete.
You’ve created a DLP policy that prevents labeled content from being used in Copilot prompts and responses.
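In effect, the rule acts as a gate between Copilot and labeled content: if an item carries any of the three configured labels, Copilot may not process it. Here is a minimal sketch of that check, assuming a simple document model (the `Document` class and `copilot_may_process` are hypothetical names, not a real API):

```python
from dataclasses import dataclass

# The labels this lab's rule blocks Copilot from processing.
BLOCKED_LABELS = {"Trusted People", "Project - Falcon", "Financial Data"}

@dataclass
class Document:
    name: str
    sensitivity_label: str | None  # None means the file is unlabeled

def copilot_may_process(doc: Document) -> bool:
    """Return False when the document's label is covered by the DLP rule."""
    return doc.sensitivity_label not in BLOCKED_LABELS

docs = [
    Document("q3-forecast.xlsx", "Financial Data"),
    Document("lunch-menu.docx", None),
]
for doc in docs:
    print(doc.name, "->", "allowed" if copilot_may_process(doc) else "blocked")
```

Note that unlabeled files pass this gate, which is exactly the coverage gap the data risk assessment in Task 4 is designed to surface.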
Task 4 – Run a data risk assessment to detect unlabeled content
To understand potential gaps in labeling coverage, you’ll run a data risk assessment to identify files without sensitivity labels that may be accessed by Copilot.
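At its core, the assessment inventories files in the selected locations and flags those without a sensitivity label. The toy scan below illustrates the idea over an in-memory file list; the sample data and the `assess` function are invented stand-ins for the real SharePoint scan.

```python
from collections import defaultdict

# (site, file, label) triples standing in for a SharePoint inventory.
inventory = [
    ("sites/Finance", "q3-forecast.xlsx", "Financial Data"),
    ("sites/Finance", "vendor-list.xlsx", None),
    ("sites/Marketing", "campaign-brief.docx", None),
    ("sites/Marketing", "press-release.docx", "General"),
]

def assess(files):
    """Count unlabeled files per site, the way an assessment reports labeling gaps."""
    report = defaultdict(lambda: {"total": 0, "unlabeled": 0})
    for site, name, label in files:
        report[site]["total"] += 1
        if label is None:
            report[site]["unlabeled"] += 1
    return dict(report)

for site, stats in assess(inventory).items():
    print(f"{site}: {stats['unlabeled']}/{stats['total']} files unlabeled")
```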
- In DSPM for AI, select the recommendation titled Protect sensitive data referenced in Copilot and agent responses.
- In the Protect sensitive data referenced in Copilot and agent responses pane, review the summary, then select Go to assessments.
- On the Data risk assessments page, select Create custom assessment.
- On the Basic details page, enter:
  - Name: Unlabeled File Exposure Assessment
  - Description: Identifies files without sensitivity labels that may be exposed in Microsoft 365 Copilot responses and provides recommendations to reduce oversharing risks.
- Select Next.
- On the Add users page, select All, then select Next.
- On the Add data sources to assess page, leave the default location of SharePoint selected, then select Next.
- On the Review and run the data assessment scan page, select Save and run.
- On the Data assessment successfully created page, select Done.
You’ve now used Microsoft Purview DSPM for AI to detect AI-related risks, enforce policies, and assess sensitive data exposure, helping your organization use AI securely.