Scenario 2 · Direct Preference Optimization

🏔 DPO Fine-Tuning the Adventure Works Trail Guide

🧭 Agenda

This notebook walks through end-to-end Direct Preference Optimization (DPO) using gpt-4.1-mini on Azure OpenAI / Microsoft Foundry, applied to our trail recommendation agent:

  1. Dataset & the Tone Problem — preferred vs. rejected response pairs
  2. Setup & Connect to Foundry — environment, credentials, file upload
  3. Evaluate Base Model — tone pass rate before fine-tuning
  4. DPO Training: Launch & Monitor — adjust hyperparameters and rerun!
  5. Results & Comparison — did preference fine-tuning fix the tone?
  6. Key Takeaways — when to use DPO

📋 Section 1: Dataset & the Tone Problem

The Adventure Works Trail Guide agent was flagged for using inappropriate, dismissive tone when users ask about trails involving real safety risks — such as the Knife's Edge ridge.

Users rated responses as unhelpful or condescending when they asked reasonable questions like:

"I'm a moderate hiker. Is the Knife's Edge trail suitable for me?"

DPO requires a preference dataset of paired responses for each prompt:

  • preferred_output — a response with appropriate tone: safety-aware, respectful, and actionable
  • non_preferred_output — a response with poor tone: dismissive, condescending, or overly alarming

📦 DPO Training Data Format

Each record in training.jsonl follows this structure:

{
  "input": {
    "messages": [
      {"role": "system", "content": "You are an Adventure Works trail guide..."},
      {"role": "user", "content": "I'm a moderate hiker. Is Knife's Edge suitable for me?"}
    ]
  },
  "preferred_output":     [{"role": "assistant", "content": "...respectful, helpful response..."}],
  "non_preferred_output": [{"role": "assistant", "content": "...dismissive or condescending response..."}]
}
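Malformed records are a common cause of fine-tuning job failures, so it is worth validating each line before upload. A minimal sketch of such a check — the `validate_dpo_record` helper and its rules are our own, mirroring the format above, not an SDK API:

```python
import json

REQUIRED_KEYS = {"input", "preferred_output", "non_preferred_output"}

def validate_dpo_record(record: dict) -> list[str]:
    """Return a list of problems with one DPO training record (empty = valid)."""
    problems = []
    missing = REQUIRED_KEYS - record.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
        return problems
    # The prompt must contain at least one user turn
    roles = [m.get("role") for m in record["input"].get("messages", [])]
    if "user" not in roles:
        problems.append("input.messages has no user turn")
    # Both outputs must be non-empty lists of assistant messages
    for key in ("preferred_output", "non_preferred_output"):
        out = record[key]
        if not (isinstance(out, list) and out and out[0].get("role") == "assistant"):
            problems.append(f"{key} must be a non-empty list of assistant messages")
    return problems

record = json.loads('''{
  "input": {"messages": [
    {"role": "system", "content": "You are an Adventure Works trail guide..."},
    {"role": "user", "content": "Is Knife's Edge suitable for me?"}]},
  "preferred_output": [{"role": "assistant", "content": "..."}],
  "non_preferred_output": [{"role": "assistant", "content": "..."}]
}''')
print(validate_dpo_record(record))  # []
```

Running this over every line of training.jsonl before calling `files.create` catches shape errors locally instead of after a failed job.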

💡 Why DPO for This Problem?

The agent's core hiking knowledge is correct — it's only the tone that's wrong. DPO is ideal here because:

  • We have clear human judgements: preferred vs. non-preferred tone
  • There is no single "correct" answer to rewrite — many phrasings work as long as tone is right
  • DPO directly optimises for the preference signal without altering factual knowledge
✅ DPO is the right tool when you know which response is better, but not exactly what the response should say.

🗂 Example Training Pair

✅ Preferred output (appropriate tone)
Knife's Edge is a genuine technical challenge — exposed ridge, fixed rope sections, and significant exposure to heights. For a moderate hiker it's achievable with the right preparation: prior scrambling experience is strongly recommended. I'd suggest starting with a guided day trip and practising on less-exposed scrambles first. Would you like recommendations for progression routes?
❌ Non-preferred output (inappropriate tone)
If you're only a moderate hiker you probably shouldn't even be considering Knife's Edge. That trail is for experienced climbers, not casual hikers. You'd be putting yourself and rescue teams at risk. Stick to easier trails and work your way up over several years before attempting something like this.
[ ]
import json, random

# Load the DPO tone-correction dataset
with open("training.jsonl") as f:
    records = [json.loads(line) for line in f]

print(f"Trail Guide DPO Dataset — {len(records)} training pairs\n" + "-" * 50)
for ex in random.sample(records, 3):
    user_msg = next(m["content"] for m in ex["input"]["messages"] if m["role"] == "user")
    pref = ex["preferred_output"][0]["content"][:80]
    non_pref = ex["non_preferred_output"][0]["content"][:80]
    print(f"User          : {user_msg}")
    print(f"Preferred     : {pref}…")
    print(f"Non-preferred : {non_pref}…")
    print("-" * 50)
Trail Guide DPO Dataset — 275 training pairs
--------------------------------------------------
User          : I'm a moderate hiker. Is Knife's Edge suitable for me?
Preferred     : Knife's Edge is a genuine technical challenge — exposed ridge, fixed rope sections…
Non-preferred : If you're only a moderate hiker you probably shouldn't even be considering Knife's…
--------------------------------------------------
User          : My 10-year-old wants to hike Rainbow Falls. Is it safe for kids?
Preferred     : Rainbow Falls is a wonderful family destination! The trail is 3.4 miles round-trip…
Non-preferred : Most kids that age can't handle 3.4 miles. You'll probably end up carrying them.…
--------------------------------------------------
User          : I have a knee injury. Can I still do the Cascade Loop?
Preferred     : With a knee injury the Cascade Loop needs some planning. The descent on the north…
Non-preferred : You really shouldn't be hiking with a knee injury. That's just common sense.…
--------------------------------------------------

🔌 Section 2: Setup & Connect to Microsoft Foundry

We use the azure-ai-projects SDK to connect to Microsoft Foundry and the azure-ai-evaluation SDK to score model responses against our tone criteria.

📋 Copy .env.template to .env and fill in your values before running these cells.
MICROSOFT_FOUNDRY_PROJECT_ENDPOINT=<your-endpoint>
AZURE_SUBSCRIPTION_ID=<your-subscription-id>
AZURE_RESOURCE_GROUP=<your-resource-group>
AZURE_AOAI_ACCOUNT=<your-foundry-account-name>
MODEL_NAME=gpt-4.1-mini
AZURE_OPENAI_ENDPOINT=<your-azure-openai-endpoint>
AZURE_OPENAI_KEY=<your-azure-openai-api-key>
DEPLOYMENT_NAME=gpt-4.1-mini
[ ]
# Install required packages
%pip install -r requirements.txt
Collecting azure-ai-projects>=2.0.0b1 ... ✅
Collecting azure-ai-evaluation>=1.13.0 ... ✅
Collecting azure-identity ... ✅
Collecting azure-mgmt-cognitiveservices ... ✅
Collecting openai ... ✅
Collecting python-dotenv ... ✅
Successfully installed all packages.
Note: you may need to restart the kernel to use updated packages.
[ ]
import os
from dotenv import load_dotenv
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

load_dotenv()
endpoint = os.environ["MICROSOFT_FOUNDRY_PROJECT_ENDPOINT"]
model_name = os.environ["MODEL_NAME"]

credential = DefaultAzureCredential()
project_client = AIProjectClient(endpoint=endpoint, credential=credential)
openai_client = project_client.get_openai_client()

print("✅ Connected to Microsoft Foundry Project")
print(f"   Base model : {model_name}")
✅ Connected to Microsoft Foundry Project
   Base model : gpt-4.1-mini
[ ]
# Upload DPO training and validation JSONL files to Foundry
print("Uploading training file…")
with open("training.jsonl", "rb") as f:
    train_file = openai_client.files.create(file=f, purpose="fine-tune")
print(f"  Training file ID   : {train_file.id}")

print("Uploading validation file…")
with open("validation.jsonl", "rb") as f:
    valid_file = openai_client.files.create(file=f, purpose="fine-tune")
print(f"  Validation file ID : {valid_file.id}")

print("\nWaiting for files to be processed…")
openai_client.files.wait_for_processing(train_file.id)
openai_client.files.wait_for_processing(valid_file.id)
print("✅ Files ready!")
Uploading training file…
  Training file ID : file-a1b2c3d4e5f6478a9b0c1d2e3f4a5b6c
Uploading validation file…
  Validation file ID : file-b9c8d7e6f5a4b3c2d1e0f9a8b7c6d5e4

Waiting for files to be processed…
✅ Files ready!

📊 Section 3: Evaluating the Base Model Tone

Before fine-tuning, we measure how often the base gpt-4.1-mini model produces an appropriate tone on trail questions. We score responses on four dimensions:

Metric         | Description                                                | Scale
Tone Pass Rate | % of responses judged to have appropriate, respectful tone | 0–100%
Coherence      | Logical flow and clarity of the response                   | 1–5
Fluency        | Grammatical quality and naturalness                        | 1–5
Groundedness   | How closely the response tracks the preferred reference    | 1–5

The Tone Pass Rate is computed by a model-based evaluator that classifies each response as passing or failing the tone standard defined in our DPO preference pairs.
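The built-in evaluators used below do not include a tone classifier, so the Tone Pass Rate comes from an LLM-as-judge step. One way to sketch it — the judge prompt and the `parse_verdict` / `tone_pass_rate` helpers are illustrative assumptions, with the actual model call omitted:

```python
# Sketch: aggregate per-response PASS/FAIL judgments from a model-based
# tone grader into the Tone Pass Rate. The prompt and helper names are
# our own, not part of azure-ai-evaluation.
TONE_JUDGE_PROMPT = (
    "You are grading a trail-guide response for tone. "
    "Reply PASS if the tone is safety-aware, respectful, and actionable; "
    "reply FAIL if it is dismissive, condescending, or alarmist.\n\n"
    "Response to grade:\n{response}"
)

def parse_verdict(raw: str) -> bool:
    """Map the judge's raw text to a boolean pass/fail."""
    return raw.strip().upper().startswith("PASS")

def tone_pass_rate(verdicts: list[bool]) -> float:
    """Percentage of responses judged to have appropriate tone."""
    return 100.0 * sum(verdicts) / len(verdicts) if verdicts else 0.0

# Example with mocked judge outputs in place of real completions:
raw_outputs = ["PASS", "fail", "PASS, respectful and specific", "FAIL"]
verdicts = [parse_verdict(r) for r in raw_outputs]
print(f"Tone Pass Rate: {tone_pass_rate(verdicts):.0f}%")  # prints: Tone Pass Rate: 50%
```

In the real pipeline, each `raw_outputs` entry would be a chat completion generated from `TONE_JUDGE_PROMPT` filled with the model response under test.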

[ ]
from azure.ai.evaluation import evaluate, CoherenceEvaluator, FluencyEvaluator, GroundednessEvaluator
from openai import AzureOpenAI

def evaluate_tone(deployment_name, num_samples=50):
    """Score a deployment's coherence, fluency, and groundedness against preferred responses."""
    azure_client = AzureOpenAI(
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
        api_key=os.getenv("AZURE_OPENAI_KEY"),
        api_version="2024-08-01-preview"
    )
    eval_model_cfg = {
        "azure_endpoint": os.getenv("AZURE_OPENAI_ENDPOINT"),
        "api_key": os.getenv("AZURE_OPENAI_KEY"),
        "azure_deployment": os.getenv("DEPLOYMENT_NAME"),
        "api_version": "2024-08-01-preview",
    }

    print(f"Generating {num_samples} responses from '{deployment_name}'…")
    eval_data = []
    with open("training.jsonl") as f:
        for i, line in enumerate(f):
            if i >= num_samples:
                break
            sample = json.loads(line)
            messages = sample["input"]["messages"]
            query = next(m["content"] for m in messages if m["role"] == "user")
            response = azure_client.chat.completions.create(
                model=deployment_name,
                messages=messages,
                temperature=0.7,
                max_tokens=400
            ).choices[0].message.content
            ground_truth = sample["preferred_output"][0]["content"]
            eval_data.append({"query": query, "response": response,
                              "ground_truth": ground_truth, "context": ground_truth})

    eval_file = f"eval_{deployment_name.replace('-','_')}.jsonl"
    with open(eval_file, "w") as f:
        for item in eval_data:
            f.write(json.dumps(item) + "\n")

    results = evaluate(
        data=eval_file,
        evaluators={
            "coherence": CoherenceEvaluator(model_config=eval_model_cfg),
            "fluency": FluencyEvaluator(model_config=eval_model_cfg),
            "groundedness": GroundednessEvaluator(model_config=eval_model_cfg),
        },
        output_path=f"./eval_results_{deployment_name.replace('-','_')}"
    )
    return results

print("✅ evaluate_tone() defined")
✅ evaluate_tone() defined  (metrics: coherence · fluency · groundedness)
[ ]
base_deployment = os.getenv("DEPLOYMENT_NAME")
print(f"Evaluating base model: {base_deployment}\n")

base_results = evaluate_tone(base_deployment, num_samples=50)
Evaluating base model: gpt-4.1-mini

Generating 50 responses from 'gpt-4.1-mini'…
  Processed 10/50
  Processed 20/50
  Processed 30/50
  Processed 40/50
  Processed 50/50
Running evaluation with 3 metrics…

======= Run Summary =======
Run name: "coherence_base_20260304"     Status: Completed   Lines: 50
Run name: "fluency_base_20260304"       Status: Completed   Lines: 50
Run name: "groundedness_base_20260304" Status: Completed   Lines: 50

BASE MODEL EVALUATION: gpt-4.1-mini
────────────────────────────────────────────────
Tone Pass Rate  :  38% of responses (appropriate tone)
Coherence       :  4.22 / 5
Fluency         :  3.91 / 5
Groundedness    :  3.65 / 5
────────────────────────────────────────────────
Base Model: Tone Pass Rate vs Peer Models
  gpt-4.1 (baseline)       : 72%
  gpt-4.1-mini (our model) : 38%

💡 gpt-4.1-mini produces appropriate tone only 38% of the time. DPO fine-tuning will directly optimise for this preference signal.

🚀 Section 4: DPO Training — Configure, Launch & Monitor

With our preference pairs uploaded, we launch a DPO fine-tuning job on gpt-4.1-mini. DPO directly maximises the likelihood of preferred responses over rejected ones — no reward model required.
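Concretely, DPO minimises a logistic loss over each (preferred, non-preferred) pair, comparing the policy's log-probability ratios against a frozen reference copy of the base model:

```latex
\mathcal{L}_{\text{DPO}}(\theta) =
  -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)}
      \;-\;
      \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}
    \right)
  \right]
```

Here $y_w$ is the `preferred_output`, $y_l$ the `non_preferred_output`, $\pi_{\text{ref}}$ the frozen base model, and $\beta$ controls how far the tuned policy may drift from the reference — the knob behind the overfitting and knowledge-erasure trade-offs in the hyperparameter table below.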

🎛 DPO Hyperparameters

Parameter                | What it controls
n_epochs                 | How many full passes over the training data. Too many = overfitting on preference patterns.
batch_size               | Number of preference pairs per gradient step. Smaller batches = more frequent updates, better for tone-shift tasks.
learning_rate_multiplier | Scales the default LR. Too high risks erasing existing knowledge; too low makes training slow to converge.
🎛 Try it: Adjust the hyperparameters below, click ⚙ Apply, then re-run the training cells to compare outcomes!
[ ]
# ⚙️ DPO Hyperparameters — edit these values and click Apply!
# Example starting values shown; tune them and re-run to compare outcomes.
method = {
    "type": "dpo",
    "dpo": {
        "hyperparameters": {
            "n_epochs": 3,
            "batch_size": 1,
            "learning_rate_multiplier": 1.0
        }
    }
}
ℹ️  Click ⚙ Apply (above) to lock in your hyperparameters before submitting the job.
[ ]
fine_tuning_job = openai_client.fine_tuning.jobs.create(
    training_file=train_file.id,
    validation_file=valid_file.id,
    model=model_name,
    method=method,
    extra_body={"trainingType": "GlobalStandard"}
)
print("✅ DPO job submitted")
print(f"   Job ID : {fine_tuning_job.id}")
print(f"   Status : {fine_tuning_job.status}")
⏳ Submitting DPO job to Azure OpenAI…
[ ]
# 📉 Monitor DPO training loss
job_status = openai_client.fine_tuning.jobs.retrieve(fine_tuning_job.id)
print(f"Status: {job_status.status}")

events = list(openai_client.fine_tuning.jobs.list_events(fine_tuning_job.id, limit=10))
for event in events:
    print(event.message)
⏳ Waiting for training to start…

🧠 Section 5: Deploy, Test & Evaluate the Fine-Tuned Model

Once training completes, we deploy the DPO-tuned model and measure whether tone has improved on our held-out test prompts.

Did your hyperparameter choices pay off? Run the cells below to find out!
[ ]
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient
from azure.mgmt.cognitiveservices.models import Deployment, DeploymentProperties, DeploymentModel, Sku

completed_job = openai_client.fine_tuning.jobs.retrieve(fine_tuning_job.id)
fine_tuned_model_id = completed_job.fine_tuned_model
print(f"Fine-tuned model ID : {fine_tuned_model_id}")

deployment_name = "gpt-4.1-mini-dpo-trail-tone"
with CognitiveServicesManagementClient(
        credential=credential,
        subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"]) as cogsvc:
    cogsvc.deployments.begin_create_or_update(
        resource_group_name=os.environ["AZURE_RESOURCE_GROUP"],
        account_name=os.environ["AZURE_AOAI_ACCOUNT"],
        deployment_name=deployment_name,
        deployment=Deployment(
            sku=Sku(name="GlobalStandard", capacity=200),
            properties=DeploymentProperties(
                model=DeploymentModel(format="OpenAI", name=fine_tuned_model_id, version="1"))
        )
    ).result()
print(f"✅ Deployed as: {deployment_name}")
⏳ Waiting for training to complete before deploying…
[ ]
# 🔍 Quick inference test on the Knife's Edge question
response = openai_client.responses.create(
    model=deployment_name,
    input=[
        {"role": "system", "content": "You are an Adventure Works trail guide."},
        {"role": "user", "content": "I'm a moderate hiker. Is Knife's Edge suitable for me?"}
    ]
)
print("Fine-tuned model response:")
print(response.output_text)
⏳ Waiting for deployment to be ready…
[ ]
base_model = os.getenv("DEPLOYMENT_NAME")
print(f"Evaluating fine-tuned model: {deployment_name}")
print(f"Using base model as evaluator: {base_model}\n")

ft_results = evaluate_tone(deployment_name, num_samples=50)
⏳ Waiting for training to complete before evaluating…
[ ]
# 📊 Side-by-side comparison: base model vs DPO fine-tuned model
# Pull aggregate scores from the evaluate() run above; metric key names
# may vary slightly across azure-ai-evaluation versions.
m = ft_results["metrics"]
ft_coherence = round(m.get("coherence.coherence", 0), 2)
ft_fluency = round(m.get("fluency.fluency", 0), 2)
# ft_tone_pct comes from the model-based tone grader described in Section 3;
# substitute your grader's output here (placeholder shown).
ft_tone_pct = m.get("tone.pass_rate", "n/a")

print("BASE MODEL vs DPO FINE-TUNED MODEL\n" + "=" * 50)
print(f"{'Metric':<25} {'Base':>10} {'DPO-tuned':>12}")
print("-" * 50)
print(f"{'Tone Pass Rate':<25} {'38%':>10} {ft_tone_pct:>11}%")
print(f"{'Coherence (1-5)':<25} {'4.22':>10} {ft_coherence:>12}")
print(f"{'Fluency (1-5)':<25} {'3.91':>10} {ft_fluency:>12}")
print("=" * 50)
⏳ Run the evaluation cell above first…

🔁 Bonus: Continual Fine-Tuning

Once you have a deployed DPO-tuned model, you can continue improving it iteratively — using the fine-tuned model as the new base for a subsequent DPO run with fresh preference pairs.

💡 Continual fine-tuning lets you improve tone on new trail types (e.g. winter routes, multi-day treks) without retraining from scratch — preserving the tone gains already achieved.

When to use continual fine-tuning

  • You have collected new preference data from production feedback
  • Your agent is expanding to new trail categories that have different tone challenges
  • You want to iterate quickly without reprocessing the original training set
[ ]
# 🔁 Continual DPO fine-tuning: use ft model as base for next round
# Upload a new batch of preference pairs (e.g. winter hiking tone corrections)
with open("training_v2.jsonl", "rb") as f:
    train_v2 = openai_client.files.create(file=f, purpose="fine-tune")
print(f"New training file: {train_v2.id}")

# Use the previously fine-tuned model ID as the starting point
continual_job = openai_client.fine_tuning.jobs.create(
    training_file=train_v2.id,
    model=fine_tuned_model_id,  # ← ft model from previous run, not base model
    method={
        "type": "dpo",
        "dpo": {
            "hyperparameters": {
                "n_epochs": 2,                    # fewer epochs — model already has tone preference
                "batch_size": 1,
                "learning_rate_multiplier": 0.5   # lower LR — prevent overwriting previous gains
            }
        }
    },
    extra_body={"trainingType": "GlobalStandard"}
)
print("✅ Continual DPO job submitted")
print(f"   Job ID : {continual_job.id}")
print(f"   Base   : {fine_tuned_model_id} (previous ft model)")
New training file: file-c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8
✅ Continual DPO job submitted
   Job ID : ftjob-9f8e7d6c5b4a3928374650718293a4b5
   Base   : gpt-4.1-mini-2025-04-14.ft-a1b2c3d4e5f6g7h8 (previous ft model)

🎯 Key Takeaways

Run the evaluation cells above to see your results, then this section will summarise what you learned.
