Optimize AI agents with fine-tuning
This exercise takes approximately 15 minutes.
Introduction
In this interactive lab, you analyze real agent quality problems at Adventure Works and select the right fine-tuning method to solve them. You compare three approaches—supervised fine-tuning (SFT), reinforcement fine-tuning (RFT), and direct preference optimization (DPO)—across the dimensions that determine whether fine-tuning succeeds: data requirements, cost structure, and the type of quality problem each method solves.
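The data-requirement differences between the three methods can be sketched with toy training records. The record shapes below follow common chat fine-tuning conventions and the Adventure Works examples are invented for illustration; they are assumptions, not taken from the lab itself.

```python
# Supervised fine-tuning (SFT): each record pairs a prompt with one
# "gold" completion the model should learn to imitate.
sft_record = {
    "messages": [
        {"role": "user", "content": "Recommend a beginner hiking trail."},
        {"role": "assistant", "content": "Try the Lakeside Loop: 3 km, flat, well marked."},
    ]
}

# Direct preference optimization (DPO): each record pairs the same prompt
# with a preferred ("chosen") and a dispreferred ("rejected") response,
# so the model learns which answer to favor rather than an exact target.
dpo_record = {
    "prompt": "Recommend a beginner hiking trail.",
    "chosen": "Try the Lakeside Loop: 3 km, flat, well marked.",
    "rejected": "Summit Ridge is great if you like scrambling over boulders.",
}

# Reinforcement fine-tuning (RFT): instead of fixed target text, each
# record carries a prompt plus a grading rule; a grader scores model
# outputs during training. Here the grader is a toy keyword check.
rft_record = {
    "prompt": "Recommend a beginner hiking trail.",
    "grader": lambda response: 1.0
    if "beginner" in response.lower() or "flat" in response.lower()
    else 0.0,
}

print(rft_record["grader"]("A flat, easy trail."))  # scores 1.0
```

The practical consequence: SFT needs many labeled gold answers, DPO needs pairs of ranked responses (often cheaper to collect than gold answers), and RFT needs a reliable way to score outputs rather than example text at all.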
You can work through the scenarios in any order. Each scenario presents an agent quality problem with evaluation metrics, asks you to match it to the right method, and explains why that method fits the root cause.