Experiment with Azure Machine Learning
- Introduction
- Preprocess data and configure featurisation
- Run an automated machine learning experiment
- Evaluate and compare models
- Configure MLflow for model tracking in notebooks
- Train and track models in notebooks
- Evaluate models with the Responsible AI dashboard
- Exercise: Find the best classification model with Azure Machine Learning
Perform Hyperparameter Tuning with Azure Machine Learning
- Introduction
- Define a search space
- Configure a sampling method
- Configure early termination
- Use a sweep job for hyperparameter tuning
- Exercise: Run a sweep job
Run Pipelines in Azure Machine Learning
- Introduction
- Create components
- Create a pipeline
- Run a pipeline job
- Exercise: Run a pipeline job
Trigger Azure Machine Learning Jobs with GitHub Actions
- Introduction
- Understand the business problem
- Explore the solution architecture
- Use GitHub Actions for model training
- Exercise
Trigger GitHub Actions with Feature-Based Development
- Introduction
- Understand the business problem
- Explore the solution architecture
- Trigger a workflow
- Exercise
Work with Environments in GitHub Actions
- Introduction
- Understand the business problem
- Explore the solution architecture
- Set up environments
- Exercise
Deploy a Model with GitHub Actions
- Introduction
- Understand the business problem
- Explore the solution architecture
- Deploy the model
- Exercise
Plan and Prepare a GenAIOps Solution
- Introduction
- Explore use cases for GenAIOps
- Select the right generative AI model
- Understand the development lifecycle of a language model application
- Explore available tools and frameworks to implement GenAIOps
- Exercise: Compare language models from the model catalog
Manage Prompts for Agents in Microsoft Foundry with GitHub
- Introduction
- Apply version control to prompts
- Understand Microsoft Foundry agents and prompt versioning
- Organise prompts in GitHub repositories
- Develop safe prompt deployment workflows
- Exercise: Develop prompt and agent versions
Evaluate and Optimise AI Agents Through Structured Experiments
- Introduction
- Design evaluation experiments
- Apply Git-based workflows to optimisation experiments
- Apply evaluation rubrics for consistent scoring
- Exercise: Evaluate and compare AI agent versions
Automate AI Evaluations with Microsoft Foundry and GitHub Actions
- Introduction
- Understand why automated evaluations matter
- Align evaluators with human criteria
- Create evaluation datasets
- Implement batch evaluations with Python
- Integrate evaluations into GitHub Actions
- Exercise: Set up automated evaluations
Monitor Your Generative AI Application
- Introduction
- Why monitoring matters
- Understand key metrics to monitor
- Explore monitoring with Azure
- Integrate monitoring into your application
- Interpret monitoring results
- Exercise: Enable monitoring for a generative AI application
Analyse and Debug Your Generative AI Application with Tracing
- Introduction
- Why tracing matters
- Identify what to trace in generative AI applications
- Implement tracing in generative AI applications
- Debug complex workflows with advanced tracing patterns
- Analyse trace data to inform decisions
- Exercise: Enable tracing for a generative AI application