Part One: Strategy
1. Framing AI as an enterprise risk (Ajit Jaokar)
You will begin by examining how artificial intelligence reshapes enterprise risk and what this means for existing governance and oversight models. You will look at how AI systems introduce new forms of uncertainty, external dependency and risks that evolve over time, and how traditional enterprise risk management (ERM) approaches must therefore adapt to govern AI effectively.
This session will examine the central role of AI Governance and how it relates to data governance, strategic, operational and cyber risk, and AI system provenance. This framing provides the foundation for the subsequent stages of the programme.
For more details about the standards used in this course, please see here.
2. NIST AI Risk Management Framework as operating system (Vikram Tegginamath)
This session will explore how enterprises manage AI risks using the NIST AI Risk Management Framework as a unifying operating system. You will gain an understanding of how the four core functions (Govern, Map, Measure and Manage) map directly onto strategic, operational and cyber risk categories, aligning Responsible AI principles with day-to-day decision-making.
3. Risk management overview (Yemi Adeniran)
This session provides an overview of risk management programmes and frameworks, along with key controls for reducing risk. You will learn how adopting a standard framework improves organisational resilience and mitigates AI risks.
4. Risk management controls and AI oversight: accountability, risk appetite and reporting (Steven Alexander Kok)
This session focuses on how senior leaders design and operationalise AI governance within an enterprise risk framework, equipping you to structure AI governance as a decision architecture that integrates strategy, operations, cyber and regulatory risk into a coherent oversight model.
Specifically, the session will cover lifecycle-based accountability using a RACI model aligned to the Three Lines of Defence, define AI risk appetite across areas such as autonomy, model uncertainty, customer impact and vendor dependency, and establish escalation triggers and decision rights for executive and board oversight. You will also be introduced to key considerations in developing an enterprise AI risk register and board-ready dashboard that highlights residual risk, emerging trends and material exposures.
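To make the idea of a risk register entry concrete, here is a minimal sketch in Python. The field names, scales and example values are illustrative assumptions, not the programme's official register template:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names, 1-5 scales and the example entry
# are assumptions, not the programme's official risk register template.
@dataclass
class AIRiskEntry:
    risk_id: str
    description: str
    category: str               # e.g. strategic, operational, cyber, regulatory
    owner: str                  # accountable role from the RACI model
    likelihood: int             # 1 (rare) .. 5 (almost certain)
    impact: int                 # 1 (minor) .. 5 (severe)
    controls: list = field(default_factory=list)

    @property
    def inherent_score(self) -> int:
        """Simple likelihood x impact score before controls are applied."""
        return self.likelihood * self.impact

entry = AIRiskEntry(
    risk_id="AI-001",
    description="LLM vendor API deprecation breaks customer-facing chatbot",
    category="operational",
    owner="Head of AI Platform",
    likelihood=3,
    impact=4,
    controls=["multi-vendor abstraction layer", "contractual SLAs"],
)
```

A board-ready dashboard would typically aggregate such entries by category and residual score; the session explores what those aggregations should highlight.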
5. Risk management: end-to-end lifecycle management for new and emerging situations (Vignesh Manikam)
This session focuses on end-to-end lifecycle processes, covering risk identification, generative AI model risk, Zero Trust principles, security tools such as Microsoft Security Copilot, and risks associated with AI agents.
6. AI risk analysis, monitoring and reporting (Yemi Adeniran)
In this session, you will learn how to analyse and prioritise AI risks using both qualitative and quantitative methods, assessing likelihood, impact and uncertainty. You will explore AI-specific metrics, risk levels and treatment approaches, enabling you to make defensible, risk-based decisions within your enterprise framework.
You will also consider effective monitoring and reporting processes, including performance indicators, incident tracking, audit review and board-level reporting. A practical workshop enables you to apply these tools to identify strategic, operational, cyber and regulatory risks in a real-world scenario.
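The quantitative side of this analysis can be sketched with a simple likelihood-impact matrix. The thresholds and example risks below are assumptions for illustration; real risk appetite bands vary by organisation:

```python
# Hedged sketch: a 5x5 likelihood-impact scoring scheme with illustrative
# thresholds. Band boundaries are assumptions, not the session's method.
def risk_level(likelihood: int, impact: int) -> str:
    score = likelihood * impact
    if score >= 15:
        return "high"      # escalate to executive/board oversight
    if score >= 8:
        return "medium"    # treat and monitor
    return "low"           # accept and review periodically

# Hypothetical risks scored as (likelihood, impact) on 1-5 scales
risks = {
    "prompt injection in customer chatbot": (4, 4),
    "model drift in credit scoring": (3, 5),
    "training data licence breach": (2, 3),
}
prioritised = sorted(risks.items(),
                     key=lambda kv: kv[1][0] * kv[1][1],
                     reverse=True)
```

Sorting by score gives a defensible ordering for treatment decisions, which is the kind of prioritisation the workshop scenario exercises.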
7. Cyber risks and AI-enabled systems (Yemi Adeniran)
This session examines how AI reshapes the cyber threat landscape and introduces new attack surfaces. Through real-world cases, you will explore risks such as data poisoning, prompt injection, model extraction, adversarial attacks, supply chain compromise, insider threats, fraud and deepfakes.
You will learn how to assess these threats in enterprise terms and integrate AI-specific cyber risks into existing security and governance structures.
Part Two: Implementation
8. A practical, principled way to classify AI system risk (Chris Fong)
You will learn how to classify AI systems using a structured, principle-based framework designed for enterprise deployment. After reviewing leading classification approaches and their operational challenges, you will apply a use-case agnostic methodology based on Responsible AI risk dimensions, scoring and control alignment.
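The shape of a scoring-based classification can be sketched as follows. The dimension names, 0-3 scale and tier thresholds are assumptions chosen to show the mechanism, not the session's actual methodology:

```python
# Illustrative only: dimensions and thresholds are assumptions showing how
# a score-based, use-case-agnostic classification might work.
DIMENSIONS = ["autonomy", "customer_impact", "data_sensitivity", "explainability_gap"]

def classify(scores: dict) -> str:
    """Map per-dimension scores (0-3 each) to a risk tier and control set."""
    total = sum(scores[d] for d in DIMENSIONS)
    # Any single maximal dimension, or a high total, forces the top tier.
    if total >= 9 or max(scores.values()) == 3:
        return "high-risk: enhanced controls and senior sign-off"
    if total >= 5:
        return "medium-risk: standard control set"
    return "low-risk: baseline monitoring"

tier = classify({"autonomy": 2, "customer_impact": 3,
                 "data_sensitivity": 1, "explainability_gap": 1})
```

The point of scoring per dimension rather than per use case is that the same rubric can be applied to any AI system, which is what makes the framework use-case agnostic.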
9. Using Claude to create cybersecurity tools: a CISO's perspective (Clyde Johnson)
Following the rapid recent uptake of AI for building tools, you will see how large language models can support cybersecurity governance and strategic decision-making. Through live demonstrations using Claude Code, you will explore structured workflows for risk assessment, compliance mapping and policy development.
From the perspective of a Chief Information Security Officer (CISO), the session focuses on creating transparent, auditable and explainable governance processes, including an AI-enabled risk register tool that you may use in the capstone exercise.
10. Agentic AI and generative AI risk (Nicole Königstein)
This session outlines how risk changes when AI systems become autonomous agents rather than static models. You will explore agentic threat types such as reasoning manipulation, memory poisoning, tool misuse and cascading failures, and why traditional AI and cyber controls are often insufficient.
You will assess enterprise controls including human-in-the-loop governance, constrained autonomy, secure execution and auditability to govern increasingly autonomous systems responsibly.
11. Advanced AI risks (Ajit Jaokar)
This session moves on to more advanced AI risk topics, including how to define and monitor meaningful metrics such as model accuracy, thresholds, drift, incident rates, false positives, regulatory breaches and security events. You will learn when escalation is required and how to embed measurement discipline into governance.
You will also evaluate vendor and foundation model risk, including lock-in, API dependency, model updates, geopolitical exposure, data leakage and liability from hallucination or error.
12. Capstone application (Onur Bıçakçı)
The programme concludes with a practical capstone in which you build an enterprise-ready AI risk register within a financial services mergers and acquisitions scenario. You will integrate strategic, operational and cyber perspectives into a single governance artefact suitable for executive and board review, mirroring how mature organisations operationalise AI risk management in practice.
You will have two weeks after the end of the taught sessions to complete the project. You will do so as part of a group, with each group expected to commit approximately 10 hours of work to the project outside of the live sessions.
Course Delivery
This course will run over six live online sessions on Mondays, Wednesdays and Fridays.
- Monday 11 May 2026
- Wednesday 13 May 2026
- Friday 15 May 2026
- Monday 18 May 2026
- Wednesday 20 May 2026
- Friday 22 May 2026
Sessions will run from 14:00 to 18:00 (UK time) and will be delivered online via Microsoft Teams.
A world clock and time zone converter can be found here: https://bit.ly/3bSPu6D
No attendance at Oxford is required and you do not need to purchase any software.
Accessing Your Online Course
Details about accessing the private MS Teams course site will be emailed to you during the week prior to the course commencing.
If you have not received your joining instructions three working days before the course start date, please get in touch.