Assess query performance using Query Store

In this exercise, you learn how to examine query performance metrics using the Query Store in Azure Database for PostgreSQL.

Before you start

You need your own Azure subscription to complete the exercises in this module. If you don’t have an Azure subscription, you can set up a free trial account at Build in the cloud with an Azure free account.

Create the exercise environment

Deploy resources into your Azure subscription

This step guides you through using Azure CLI commands from the Azure Cloud Shell to create a resource group and run a Bicep script to deploy the Azure services necessary for completing this exercise into your Azure subscription.

  1. Open a web browser and navigate to the Azure portal.

  2. Select the Cloud Shell icon in the Azure portal toolbar to open a new Cloud Shell pane at the bottom of your browser window.

    Screenshot of the Azure toolbar with the Cloud Shell icon highlighted by a red box.

    If prompted, select the required options to open a Bash shell. If you have previously used a PowerShell console, switch it to a Bash shell.

  3. At the Cloud Shell prompt, enter the following to clone the GitHub repo containing exercise resources:

     git clone https://github.com/MicrosoftLearning/mslearn-postgresql.git
    
  4. Next, you run three commands to define variables to reduce redundant typing when using Azure CLI commands to create Azure resources. The variables represent the name to assign to your resource group (RG_NAME), the Azure region (REGION) into which resources will be deployed, and a randomly generated password for the PostgreSQL administrator login (ADMIN_PASSWORD).

    In the first command, the region assigned to the corresponding variable is eastus, but you can also replace it with a location of your preference.

     REGION=eastus
    

    The following command assigns the name to be used for the resource group that will house all the resources used in this exercise. The resource group name assigned to the corresponding variable is rg-learn-work-with-postgresql-$REGION, where $REGION is the location you specified above. However, you can change it to any other resource group name that suits your preference.

     RG_NAME=rg-learn-work-with-postgresql-$REGION
    

    The final command randomly generates a password for the PostgreSQL admin login. Make sure you copy it to a safe place so that you can use it later to connect to your PostgreSQL flexible server.

     # Scatter letters and digits across random array indexes, then join the
     # first 18 elements (in index order) to form an 18-character password.
     a=()
     for i in {a..z} {A..Z} {0..9}; do
        a[$RANDOM]=$i
     done
     ADMIN_PASSWORD=$(IFS=; echo "${a[*]::18}")
     echo "Your randomly generated PostgreSQL admin user's password is:"
     echo $ADMIN_PASSWORD
    
  5. If you have access to more than one Azure subscription, and your default subscription is not the one in which you want to create the resource group and other resources for this exercise, run this command to set the appropriate subscription, replacing the <subscriptionName|subscriptionId> token with either the name or ID of the subscription you want to use:

     az account set --subscription <subscriptionName|subscriptionId>
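
    If you're not sure which subscription names or IDs are available to you, you can list them first:

     az account list --query "[].{Name:name, Id:id}" --output table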
    
  6. Run the following Azure CLI command to create your resource group:

     az group create --name $RG_NAME --location $REGION
    
  7. Finally, use the Azure CLI to execute a Bicep deployment script to provision Azure resources in your resource group:

     az deployment group create --resource-group $RG_NAME --template-file "mslearn-postgresql/Allfiles/Labs/Shared/deploy-postgresql-server.bicep" --parameters adminLogin=pgAdmin adminLoginPassword=$ADMIN_PASSWORD databaseName=adventureworks
    

    The Bicep deployment script provisions the Azure services required to complete this exercise into your resource group. It deploys an Azure Database for PostgreSQL flexible server and creates a database on that server, whose name you can set on the command line as a parameter (databaseName above).

    The deployment typically takes several minutes to complete. You can monitor it from the Cloud Shell or navigate to the Deployments page for the resource group you created above and observe the deployment progress there.
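
    If you'd rather poll from the command line, a check like the following should work; it assumes the deployment kept the default name the Azure CLI derives from the template file (deploy-postgresql-server):

     az deployment group show --resource-group $RG_NAME --name deploy-postgresql-server --query properties.provisioningState --output tsv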

  8. Close the Cloud Shell pane once your resource deployment is complete.

Troubleshooting deployment errors

You may encounter a few errors when running the Bicep deployment script. The most common messages and the steps to resolve them are:

  • If you previously ran the Bicep deployment script for this learning path and subsequently deleted the resources, you may receive an error message like the following if you are attempting to rerun the script within 48 hours of deleting the resources:

      {"code": "InvalidTemplateDeployment", "message": "The template deployment 'deploy' is not valid according to the validation procedure. The tracking id is '4e87a33d-a0ac-4aec-88d8-177b04c1d752'. See inner errors for details."}
        
      Inner Errors:
      {"code": "FlagMustBeSetForRestore", "message": "An existing resource with ID '/subscriptions/{subscriptionId}/resourceGroups/rg-learn-postgresql-ai-eastus/providers/Microsoft.CognitiveServices/accounts/{accountName}' has been soft-deleted. To restore the resource, you must specify 'restore' to be 'true' in the property. If you don't want to restore existing resource, please purge it first."}
    

    If you receive this message, modify the az deployment group create command above to set the restore parameter equal to true and rerun it.
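
    For example, assuming the Bicep template exposes a restore parameter, the adjusted command looks like this:

     az deployment group create --resource-group $RG_NAME --template-file "mslearn-postgresql/Allfiles/Labs/Shared/deploy-postgresql-server.bicep" --parameters adminLogin=pgAdmin adminLoginPassword=$ADMIN_PASSWORD databaseName=adventureworks restore=true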

  • If the selected region is restricted from provisioning specific resources, you must set the REGION variable to a different location and rerun the commands to create the resource group and run the Bicep deployment script, as sketched after the following error message:

      {"status":"Failed","error":{"code":"DeploymentFailed","target":"/subscriptions/{subscriptionId}/resourceGroups/{resourceGrouName}/providers/Microsoft.Resources/deployments/{deploymentName}","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-deployment-operations for usage details.","details":[{"code":"ResourceDeploymentFailure","target":"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/flexibleServers/{serverName}","message":"The resource write operation failed to complete successfully, because it reached terminal provisioning state 'Failed'.","details":[{"code":"RegionIsOfferRestricted","message":"Subscriptions are restricted from provisioning in this region. Please choose a different region. For exceptions to this rule please open a support request with Issue type of 'Service and subscription limits'. See https://review.learn.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-request-quota-increase for more details."}]}]}}
    
  • If the script is unable to create an AI resource because the responsible AI agreement hasn't been accepted for your subscription, you may see the following error. In that case, use the Azure portal user interface to create an Azure AI Services resource, and then rerun the deployment script.

      {"code": "InvalidTemplateDeployment", "message": "The template deployment 'deploy' is not valid according to the validation procedure. The tracking id is 'f8412edb-6386-4192-a22f-43557a51ea5f'. See inner errors for details."}
         
      Inner Errors:
      {"code": "ResourceKindRequireAcceptTerms", "message": "This subscription cannot create TextAnalytics until you agree to Responsible AI terms for this resource. You can agree to Responsible AI terms by creating a resource through the Azure Portal then trying again. For more detail go to https://go.microsoft.com/fwlink/?linkid=2164190"}
    

Connect to your database using psql in the Azure Cloud Shell

In this task, you connect to the adventureworks database on your Azure Database for PostgreSQL server using the psql command-line utility from the Azure Cloud Shell.

  1. In the Azure portal, navigate to your newly created Azure Database for PostgreSQL flexible server.

  2. In the resource menu, under Settings, select Databases, and then select Connect for the adventureworks database.

    Screenshot of the Azure Database for PostgreSQL Databases page. Databases and Connect for the adventureworks database are highlighted by red boxes.

  3. At the “Password for user pgAdmin” prompt in the Cloud Shell, enter the randomly generated password for the pgAdmin login.

    Once logged in, the psql prompt for the adventureworks database is displayed.
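
    If the portal's Connect experience is unavailable, you can also start psql yourself from the Cloud Shell. The following is a sketch in which <server-name> is a placeholder for the server name your deployment generated:

     psql "host=<server-name>.postgres.database.azure.com port=5432 dbname=adventureworks user=pgAdmin sslmode=require"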

  4. Throughout the remainder of this exercise, you continue working in the Cloud Shell, so it may be helpful to expand the pane within your browser window by selecting the Maximize button at the top right of the pane.

    Screenshot of the Azure Cloud Shell pane with the Maximize button highlighted by a red box.

Populate the database with data

  1. You need to create a table within the database and populate it with sample data so you have information to work with as you assess query performance in this exercise.
  2. Run the following commands to create the production.workorder table that you will load with data:

     /*********************************************************************************
     Create Schema: production
     *********************************************************************************/
     DROP SCHEMA IF EXISTS production CASCADE;
     CREATE SCHEMA production;
        
     /*********************************************************************************
     Create Table: production.workorder
     *********************************************************************************/
        
     DROP TABLE IF EXISTS production.workorder;
     CREATE TABLE production.workorder
     (
         workorderid integer NOT NULL,
         productid integer NOT NULL,
         orderqty integer NOT NULL,
         scrappedqty smallint NOT NULL,
         startdate timestamp without time zone NOT NULL,
         enddate timestamp without time zone,
         duedate timestamp without time zone NOT NULL,
         scrapreasonid smallint,
         modifieddate timestamp without time zone NOT NULL DEFAULT now()
     )
     WITH (
         OIDS = FALSE
     )
     TABLESPACE pg_default;
    
  3. Next, use the COPY command to load data from a CSV file into the table you created above. Run the following command to populate the production.workorder table:

     \COPY production.workorder FROM 'mslearn-postgresql/Allfiles/Labs/08/Lab8_workorder.csv' CSV HEADER
    

    The command output should be COPY 72591, indicating that 72591 rows were written into the table from the CSV file.
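
    To double-check the load, you can count the rows yourself; the result should match the number reported by COPY:

     SELECT COUNT(*) FROM production.workorder;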

  4. Close the Cloud Shell pane once the data has loaded.

Connect to the database with Azure Data Studio

  1. If you haven’t installed Azure Data Studio yet, download and install Azure Data Studio.
  2. Start Azure Data Studio.
  3. If you haven’t installed the PostgreSQL extension in Azure Data Studio, install it now.
  4. Select Connections.
  5. Select Servers and select New connection.
  6. In Connection type, select PostgreSQL.
  7. In Server name, type the value that you specified when you deployed the server.
  8. In User name, type pgAdmin.
  9. In Password, enter the randomly generated password for the pgAdmin login.
  10. Select Remember password.
  11. Select Connect.

Create tables within the database

  1. Expand Databases, right-click adventureworks and select New Query.

    Screenshot of adventureworks database highlighting New Query context menu item

  2. Select the SQLQuery_1 tab, type the following query and select Run.

     SELECT * FROM production.workorder;
    

Task 1: Turn on query capture mode

  1. Navigate to the Azure portal and sign in.
  2. Select your Azure Database for PostgreSQL server for this exercise.
  3. In Settings, select Server parameters.
  4. Navigate to the pg_qs.query_capture_mode setting.
  5. Select TOP.

    Screenshot of settings to turn Query Store on

  6. Navigate to pgms_wait_sampling.query_capture_mode, select ALL, and select Save.

    Screenshot of settings to turn p g m s_wait_sampling.query_capture_mode on

  7. Wait for the server parameters to update. If you prefer the Azure CLI, an equivalent is sketched below.
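
    The same parameters can be set from the Cloud Shell; this is a sketch in which <server-name> is a placeholder for the name of your server:

     az postgres flexible-server parameter set --resource-group $RG_NAME --server-name <server-name> --name pg_qs.query_capture_mode --value TOP
     az postgres flexible-server parameter set --resource-group $RG_NAME --server-name <server-name> --name pgms_wait_sampling.query_capture_mode --value ALL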

View pg_stat data

  1. Start Azure Data Studio.
  2. Select Connect.

    Screenshot showing Connect icon

  3. Select your PostgreSQL server and select Connect.
  4. Type the following query and select Run.

     SELECT 
         pid,                    -- Process ID of the server process
         datid,                  -- OID of the database
         datname,                -- Name of the database
         usename,                -- Name of the user
         application_name,       -- Name of the application connected to the database
         client_addr,            -- IP address of the client
         client_hostname,        -- Hostname of the client (if available)
         client_port,            -- TCP port number that the client is using for the connection
         backend_start,          -- Timestamp when the backend process started
         xact_start,             -- Timestamp of the current transaction start, if any
         query_start,            -- Timestamp when the current query started, if any
         state_change,           -- Timestamp when the state was last changed
         wait_event_type,        -- Type of event the backend is waiting for, if any
         wait_event,             -- Event that the backend is waiting for, if any
         state,                  -- Current state of the session (e.g., active, idle, etc.)
         backend_xid,            -- Transaction ID, if active
         backend_xmin,           -- Transaction ID that the process is working with
         query,                  -- Text of the query being executed
         encode(backend_type::bytea, 'escape') AS backend_type,           -- Type of backend (e.g., client backend, autovacuum worker). We use encode(…, 'escape') to safely display raw data with invalid characters by converting it into a readable format; this prevents a UTF-8 conversion error in Azure Data Studio.
         leader_pid,             -- PID of the leader process, if this is a parallel worker
         query_id               -- Query ID (added in more recent PostgreSQL versions)
     FROM pg_stat_activity;
    
  5. Review the metrics that are available. A narrower follow-up query is sketched after this list.
  6. Leave Azure Data Studio open for the next task.
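
    For example, to focus on sessions that are actively running a query, you can filter the same view:

     -- Show only backends currently executing a query.
     SELECT pid, usename, state, wait_event_type, wait_event, query
     FROM pg_stat_activity
     WHERE state = 'active';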

Task 2: Examine query statistics

[!NOTE] For a newly created database, there might be limited statistics, if any. If you wait about 30 minutes, statistics from background processes will be available.

  1. Select the azure_sys database.

    Screenshot of the database selector

  2. Type each of the following queries and select Run.

     SELECT * FROM query_store.query_texts_view;
    
     SELECT * FROM query_store.qs_view;
    
     SELECT * FROM query_store.runtime_stats_view;
    
     SELECT * FROM query_store.pgms_wait_sampling_view;
    
  3. Review the metrics that are available.
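
    Column names in these views can vary across server versions. If a query returns an error or unexpected columns, you can list exactly what the query_store schema exposes on your server:

     SELECT table_name, column_name, data_type
     FROM information_schema.columns
     WHERE table_schema = 'query_store'
     ORDER BY table_name, ordinal_position;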

Exercise Clean-up

The Azure Database for PostgreSQL server deployed in this exercise will continue to incur charges, so you can delete the server after completing this exercise. Alternatively, you can delete the rg-learn-work-with-postgresql-eastus resource group to remove all resources that we deployed as part of this exercise.
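
To remove everything in one step from the Cloud Shell, you can delete the resource group with the Azure CLI. This deletes every resource in the group, so double-check the name first:

     az group delete --name rg-learn-work-with-postgresql-eastus --yes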