ai-apps

Microsoft Learning AI Apps

This repository contains source code and published web apps for educational use. The apps are designed to support training modules on Microsoft Learn and are not intended (or supported) for use in production solutions. They are not supported Microsoft services or products, and are provided as-is without warranty of any kind.

Most of the apps (with two Azure-based exceptions) are designed to run locally in-browser. No data is uploaded to Microsoft, though some apps make use of external web services for speech support. To run the apps successfully, you need a modern browser, such as Microsoft Edge. In some cases, the full app functionality is only available on computers that include a GPU (integrated or dedicated). When using Windows on ARM64 computers, you may need to enable WebGPU in your browser flag settings (for example at edge://flags or chrome://flags). The GPU-based apps are designed to use a “fallback” mode with some functionality restrictions when no GPU is available.
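The GPU/fallback decision described above can be sketched as a simple capability check. This is an illustrative function, not the apps' actual code; in supporting browsers, WebGPU is exposed as `navigator.gpu`.

```javascript
// Minimal sketch of runtime-mode selection based on WebGPU availability.
// The function name and mode labels are illustrative only.
function selectRuntimeMode(nav) {
  // navigator.gpu is only present in browsers with WebGPU enabled.
  return nav && nav.gpu ? "webgpu" : "cpu-fallback";
}

// In a browser, an app would call: selectRuntimeMode(navigator)
```

Passing a `navigator`-like object (rather than reading the global directly) keeps the check easy to exercise outside a browser.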

Apps

Transparency Notes

The AI functionality in these apps was developed with Microsoft’s principles for responsible AI in mind. Models and prompts have been chosen to minimize the risk of harmful content generation, and ongoing automated code quality reviews are in place to mitigate potential abuse or accidental security issues. If you do encounter an issue, we encourage you to report it at https://github.com/MicrosoftLearning/ai-apps/issues.

Data privacy

The apps, including AI models (other than in Ask Azure and the Azure-based version of Computing History), run in your local browser and no data is shared with Microsoft. No data from your browser, such as cookies or configuration data, is collected by any of these apps.

In some cases, depending on the app mode configuration, input to models (i.e. prompts) may be sent to third-party APIs (for example, to external web services that provide speech support).

Generative AI

Many of the apps use generative AI models. Reasonable precautions have been taken to mitigate any potential harmful output from these models, but it’s important to note that LLMs can produce unpredictable results.

IMPORTANT: Generative AI functionality in these apps is designed exclusively for educational use. Do not rely on the output from these apps for any real-world application, decision, or action.

Azure-based models (in Microsoft Foundry)

The Ask Azure and Azure-based Computing History apps use a model that you choose to deploy in your Microsoft Foundry resource. We recommend deploying a GPT 4.1 Mini model. When the model is deployed in Microsoft Foundry, default content safety guardrails are applied to mitigate the risk of offensive or harmful content generation.

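As a rough illustration of what calling a deployed model involves, the sketch below issues a chat completions request to an Azure OpenAI-style endpoint. The endpoint, deployment name, API version, and function names are placeholders, not values from these apps.

```javascript
// Hypothetical sketch of calling a model deployed in a Microsoft Foundry
// (Azure OpenAI) resource. All identifiers here are illustrative.
function chatCompletionsUrl(endpoint, deployment, apiVersion) {
  return `${endpoint}/openai/deployments/${deployment}/chat/completions?api-version=${apiVersion}`;
}

async function askDeployedModel(endpoint, deployment, apiKey, question) {
  const res = await fetch(chatCompletionsUrl(endpoint, deployment, "2024-06-01"), {
    method: "POST",
    headers: { "Content-Type": "application/json", "api-key": apiKey },
    body: JSON.stringify({ messages: [{ role: "user", content: question }] }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```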

Local (in-browser) LLMs

Some apps use the Microsoft Phi-3-mini-4k-instruct generative AI model (specifically Microsoft Phi-3-mini-4k-instruct-q4f16_1-MLC). No additional training or fine-tuning has been performed on the model. You can view the model card for details, including considerations for responsible use. The model runs in-browser using the WebLLM JavaScript module, with no server-side processing.

When no GPU is available, or WebGPU is not supported, the apps fall back to the smollm2 model running in the WLLAMA CPU-based runtime.

All in-browser LLM-based apps include a minimal content moderation solution in which the app validates input for common potentially offensive or harmful terms, and returns an appropriate message without submitting the prompt to the model. In some cases, legitimately non-offensive and non-harmful prompts may be blocked by this mechanism.
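The screening described above can be sketched as a simple blocklist check performed before any prompt reaches the model. The term list, function name, and response message below are illustrative only, not the apps' actual moderation data or code.

```javascript
// Minimal sketch of client-side input screening: prompts containing
// blocklisted terms are rejected before being submitted to the model.
// The terms and message here are placeholders.
const BLOCKED_TERMS = ["exampleBadTerm1", "exampleBadTerm2"];

function screenPrompt(prompt) {
  const lower = prompt.toLowerCase();
  const blocked = BLOCKED_TERMS.some((term) => lower.includes(term.toLowerCase()));
  return blocked
    ? { allowed: false, message: "Sorry, I can't help with that request." }
    : { allowed: true };
}
```

A substring match like this is deliberately conservative, which is why legitimate prompts can occasionally be blocked, as noted above.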

Other AI models and technologies

In addition to WebLLM and the Microsoft Phi model described above for generative AI, the apps make use of other models and technologies under the terms of their respective licenses.

The “OpenAI” library provided in the Model Coder app is not the real OpenAI Python library. Instead, it’s a set of Python classes that expose commonly used objects and methods of the OpenAI API as abstractions over a local JavaScript layer, which handles prompt submission to the smollm2 model in the local WLLAMA environment. From the learner’s perspective, the experience is authentic: you write and run real Python code using the same syntax as the OpenAI library, and you interact with a real LLM back-end.