Managing Enterprise AI Risks: Governance, Operational, Cyber (online)

Overview

This course equips you to oversee Artificial Intelligence initiatives as a strategic asset and enterprise risk, enabling confident senior-level oversight without requiring a technical background. It is designed for risk, cyber and technology leaders, consultants, and senior executives responsible for AI adoption and oversight.

As AI becomes embedded in decision-making, operations and customer-facing systems, accountability increasingly sits at board level. This course focuses on how to recognise AI-related risks and opportunities, and how to oversee and manage them within your organisation’s enterprise risk management framework.

Through structured teaching, case studies and applied discussion, you will learn how to integrate AI into existing Enterprise Risk Management (ERM) frameworks, governing it with the same discipline as other material enterprise risks. The course examines AI across strategic, operational and cyber risk domains – from vendor dependency, regulatory change and reputational exposure, to data quality failures, model drift, human–AI handoffs, and emerging cyber threats such as data poisoning, prompt injection and model extraction.

The course analyses AI risk through the lens of AI governance, which ensures that AI systems are built and used responsibly, transparently and in alignment with organisational goals and societal values.

By the end of the programme, you will be able to:

  1. Explain how AI changes enterprise risk across strategy, operations and cyber security
  2. Map AI risks across the AI lifecycle (data → model → deployment → monitoring)
  3. Apply National Institute of Standards and Technology (NIST) AI Risk Management Frameworks to real organisational scenarios
  4. Evaluate AI vendor and foundation model dependency risks
  5. Design an AI risk register aligned with ERM frameworks
  6. Communicate AI risk trade-offs to executive leadership and boards

Drawing on examples from financial services while remaining applicable across sectors, this intensive course provides up-to-date tools and frameworks that you can apply immediately to strengthen AI governance and organisational resilience.

Programme details

Part One: Strategy

1. Framing AI as an enterprise risk (Ajit Jaokar)

You will begin by examining how artificial intelligence reshapes enterprise risk and what this means for existing governance and oversight models. You will look at how AI systems introduce new forms of uncertainty, external dependency and risks that evolve over time, and how traditional ERM approaches must therefore adapt to govern AI effectively.

This session will examine the central role of AI Governance and how it relates to data governance, strategic, operational and cyber risk, and AI system provenance. This framing provides the foundation for the subsequent stages of the programme.

For more details about the standards used in this course, please see here.

2. NIST AI Risk Management Framework as operating system (Vikram Tegginamath)

This session will explore how enterprises manage AI risks using the NIST AI Risk Management Framework as a unifying operating system. You will gain an understanding of how the four core functions (Govern, Map, Measure and Manage) map directly onto strategic, operational and cyber risk categories, aligning Responsible AI principles with day-to-day decision-making.
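As an illustrative sketch only (not course material or an official NIST taxonomy), the idea of using the four core functions as an "operating system" for AI risk can be pictured as a simple mapping from each function to example activities, which can then be queried by topic. The activity names below are assumptions chosen from themes mentioned in this course description.

```python
# Illustrative sketch: the four NIST AI RMF core functions mapped to example
# activities drawn from this course's themes. The activity lists are
# hypothetical examples, not an official NIST taxonomy.
NIST_AI_RMF = {
    "Govern":  ["risk appetite", "accountability", "policies"],
    "Map":     ["context", "AI lifecycle stages", "vendor dependencies"],
    "Measure": ["model drift", "incident rates", "false positives"],
    "Manage":  ["treatment plans", "escalation triggers", "monitoring"],
}

def functions_covering(topic: str) -> list:
    """Return the RMF functions whose example activities mention a topic."""
    return [fn for fn, items in NIST_AI_RMF.items()
            if any(topic in item for item in items)]

print(functions_covering("drift"))  # -> ['Measure']
```

A lookup like this shows why the four functions act as a unifying layer: any concrete risk topic can be traced back to the function responsible for it.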

3. Risk management overview (Yemi Adeniran)

This session provides an overview of risk management programmes and frameworks, together with the key controls used to reduce risk. You will learn how adopting a standard framework improves organisational resilience and mitigates AI risks.

4. Risk management controls and AI oversight: accountability, risk appetite and reporting (Steven Alexander Kok)

This session focuses on how senior leaders design and operationalise AI governance within an enterprise risk framework, equipping you to structure AI governance as a decision architecture that integrates strategy, operations, cyber and regulatory risk into a coherent oversight model.

Specifically, the session will cover lifecycle-based accountability using a RACI model aligned to the Three Lines of Defence, define AI risk appetite across areas such as autonomy, model uncertainty, customer impact and vendor dependency, and establish escalation triggers and decision rights for executive and board oversight. You will also be introduced to key considerations in developing an enterprise AI risk register and board-ready dashboard that highlights residual risk, emerging trends and material exposures.
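To make the idea of an enterprise AI risk register concrete, here is a minimal hypothetical sketch of what one entry might look like, with an owner (the accountable "A" in a RACI model) and a simple appetite threshold that triggers escalation. The field names, scales and example values are assumptions for illustration, not a prescribed course template.

```python
from dataclasses import dataclass, field

# Minimal, hypothetical AI risk register entry. Field names, 1-5 scales
# and the example values below are illustrative assumptions only.
@dataclass
class AIRiskEntry:
    risk_id: str
    description: str
    domain: str                 # e.g. "strategic", "operational", "cyber"
    likelihood: int             # 1 (rare) .. 5 (almost certain)
    impact: int                 # 1 (minor) .. 5 (severe)
    owner: str                  # accountable party (the RACI "A")
    controls: list = field(default_factory=list)

    @property
    def inherent_score(self) -> int:
        # Classic likelihood x impact scoring
        return self.likelihood * self.impact

    def exceeds_appetite(self, threshold: int) -> bool:
        """Flag for escalation when the score breaches the stated appetite."""
        return self.inherent_score > threshold

entry = AIRiskEntry("AI-001", "Vendor model update changes behaviour",
                    "operational", likelihood=4, impact=3,
                    owner="Head of Model Risk",
                    controls=["version pinning", "regression testing"])
print(entry.inherent_score, entry.exceeds_appetite(10))  # -> 12 True
```

Structuring entries this way is what makes a board-ready dashboard possible: residual risk and escalation flags can be computed consistently across the register rather than judged ad hoc.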

5. Risk management: end-to-end lifecycle management for new and emerging situations (Vignesh Manikam)

This session focuses on end-to-end lifecycle processes, covering risk identification, generative AI model risk, Zero Trust principles, security tools such as Microsoft Security Copilot, and risks associated with AI agents.

6. AI risk analysis, monitoring and reporting (Yemi Adeniran)

In this session, you will learn how to analyse and prioritise AI risks using both qualitative and quantitative methods, assessing likelihood, impact and uncertainty. You will explore AI-specific metrics, risk levels and treatment approaches, enabling you to make defensible, risk-based decisions within your enterprise framework.

You will also consider effective monitoring and reporting processes, including performance indicators, incident tracking, audit review and board-level reporting. A practical workshop enables you to apply these tools to identify strategic, operational, cyber and regulatory risks in a real-world scenario.  
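The qualitative rating step described above can be sketched in a few lines: score each risk by likelihood and impact, rank the results, and map scores onto risk levels. The banding thresholds and the example risks below are illustrative assumptions, not the course's prescribed methodology.

```python
# Illustrative sketch of a simple qualitative risk-rating step: map a
# likelihood x impact score (both on a 1-5 scale) onto a risk level.
# The band thresholds here are example assumptions only.
def risk_level(likelihood: int, impact: int) -> str:
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example risks as (name, likelihood, impact); values are hypothetical.
risks = [("prompt injection", 4, 4),
         ("model drift", 3, 3),
         ("vendor lock-in", 2, 3)]

# Rank by score, highest first, to drive prioritisation and treatment.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, likelihood, impact in ranked:
    print(name, risk_level(likelihood, impact))
```

In practice the banding would reflect the organisation's own risk appetite, and quantitative methods would refine these coarse scores where data permits.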

7. Cyber risks and AI-enabled systems (Yemi Adeniran)

This session examines how AI reshapes the cyber threat landscape and introduces new attack surfaces. Through real-world cases, you will explore risks such as data poisoning, prompt injection, model extraction, adversarial attacks, supply chain compromise, insider threats, fraud and deepfakes.

You will learn how to assess these threats in enterprise terms and integrate AI-specific cyber risks into existing security and governance structures.

Part Two: Implementation

8. A practical, principled way to classify AI system risk (Chris Fong)

You will learn how to classify AI systems using a structured, principle-based framework designed for enterprise deployment. After reviewing leading classification approaches and their operational challenges, you will apply a use-case agnostic methodology based on Responsible AI risk dimensions, scoring and control alignment.
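A principle-based classification of the kind described above can be sketched as scoring an AI system on several Responsible AI dimensions, totalling the scores, and mapping the total to a risk tier with aligned controls. The dimensions, bands, controls and example system below are hypothetical assumptions, not the methodology taught in the session.

```python
# Hypothetical sketch of principle-based AI system classification:
# score each Responsible AI dimension (1-3), total, map to a tier,
# and align controls per tier. All names and bands are examples.
DIMENSIONS = ["autonomy", "customer impact", "data sensitivity",
              "explainability gap"]

TIER_CONTROLS = {
    "tier-1 (high)":   ["human-in-the-loop", "pre-deployment review",
                        "continuous monitoring"],
    "tier-2 (medium)": ["periodic review", "logging and audit trail"],
    "tier-3 (low)":    ["standard change management"],
}

def classify(scores: dict) -> str:
    """Higher dimension totals map to higher-risk tiers."""
    total = sum(scores[d] for d in DIMENSIONS)
    if total >= 10:
        return "tier-1 (high)"
    if total >= 7:
        return "tier-2 (medium)"
    return "tier-3 (low)"

# Example: a customer-facing chatbot handling sensitive data.
chatbot = {"autonomy": 2, "customer impact": 3,
           "data sensitivity": 3, "explainability gap": 3}
tier = classify(chatbot)
print(tier, TIER_CONTROLS[tier])
```

The point of a use-case agnostic scheme like this is consistency: two very different systems with the same scored risk profile receive the same tier and the same baseline controls.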

9. Using Claude to create Cybersecurity tools: a CISO's perspective (Clyde Johnson)

Following the dramatic recent uptake of AI for tool creation, you will see how large language models can support cybersecurity governance and strategic decision-making. Through live demonstrations using Claude Code, you will explore structured workflows for risk assessment, compliance mapping and policy development.

From the perspective of a Chief Information Security Officer (CISO), the session focuses on creating transparent, auditable and explainable governance processes, including an AI-enabled risk register tool that you may use in the capstone exercise.

10. Agentic AI and generative AI risk (Nicole Königstein)

This session outlines how risk changes when AI systems become autonomous agents rather than static models. You will explore agentic threat types such as reasoning manipulation, memory poisoning, tool misuse and cascading failures, and why traditional AI and cyber controls are often insufficient.

You will assess enterprise controls including human-in-the-loop governance, constrained autonomy, secure execution and auditability to govern increasingly autonomous systems responsibly.

11. Advanced AI risks (Ajit Jaokar)

This session moves on to more advanced AI risk topics, including how to define and monitor meaningful metrics such as model accuracy, thresholds, drift, incident rates, false positives, regulatory breaches and security events. You will learn when escalation is required and how to embed measurement discipline into governance.
You will also evaluate vendor and foundation model risk, including lock-in, API dependency, model updates, geopolitical exposure, data leakage and liability from hallucination or error.

12. Capstone application (Onur Bıçakçı)

The programme concludes with a practical capstone in which you build an enterprise-ready AI risk register within a financial services mergers and acquisitions scenario. You will integrate strategic, operational and cyber perspectives into a single governance artefact suitable for executive and board review, mirroring how mature organisations operationalise AI risk management in practice.

You will have two weeks after the end of the taught sessions to complete the project. You will do so as part of a group, with each group expected to commit approximately 10 hours of work to the project outside of the live sessions.

Course Delivery

This course will run over six live online sessions on Mondays, Wednesdays and Fridays.

  • Monday 11 May 2026

  • Wednesday 13 May 2026

  • Friday 15 May 2026

  • Monday 18 May 2026

  • Wednesday 20 May 2026

  • Friday 22 May 2026

Sessions will be 14:00 to 18:00 (UK time) and delivered online via Microsoft Teams.

A world clock and time zone converter can be found here: https://bit.ly/3bSPu6D

No attendance at Oxford is required and you do not need to purchase any software.

Accessing Your Online Course 

Details about accessing the private MS Teams course site will be emailed to you during the week prior to the course commencing.  

If you have not received your joining instructions three working days before the course start date, please get in touch. 

Digital Certification

Participants who satisfy the course requirements will receive a University of Oxford digital certificate of completion. To receive a certificate at the end of the course you will need to:

  1. Achieve a minimum attendance at online sessions of 75%.
  2. Submit completed work as part of the capstone project after the end of the taught sessions.

Participants who meet these criteria will be emailed after the end of the course with a link and instructions on how to access their University of Oxford digital certificate.

The certificate will show your name, the course title and the dates of the course you attended. You will also be able to download your certificate or share it on social media if you choose to do so.

Fees

Course fee: £1,490.00

Tutors

Ajit Jaokar - Course Director

Ajit combines AI roles in research, industry and education.

Ajit is a researcher and teacher of Applied Artificial Intelligence at the University of Oxford. He also works in senior AI research and advisory roles, most recently for the UK Ministry of Justice as an applied AI researcher and fellow.

Currently, he serves as the Course Director for several AI programmes at the University of Oxford, spanning AI Engineering, Low-code AI, and AI in Cyber Security and Risk. He is also a Visiting Fellow for AI in Engineering Sciences at the University of Oxford. His work is rooted in the interdisciplinary aspects of AI.

Ajit’s applied research and consulting activities include AI in cyber security and risk management, human–AI collaboration, data governance for AI agents, and AI for engineering sciences. These areas are also related to his teaching.

He works globally in a consulting and advisory capacity.

Ajit is passionate about democratising the teaching and learning of AI using large language models and is doing some pioneering work in this area at the University of Oxford.

He has presented on AI at Capitol Hill, the White House, the G7 Summit, the World Economic Forum and the European Parliament. At the UK Ministry of Justice, Ajit focused on the use of AI in cyber and risk management, working with senior government leadership in the risk domain.

He is currently writing a book aimed at teaching AI through mathematical foundations at the high school level.

Ajit resides in London, UK, and holds British citizenship. He is actively engaged in advancing AI education and innovation both locally and globally. He is neurodiverse, being on the high-functioning autism spectrum.

Sample consulting clients: UK Ministry of Justice, Verizon, Nvidia, Microsoft, European Internet Foundation.
His LinkedIn newsletter on AI has a wide following.

Vikram Tegginamath - Course tutor

Vikram Tegginamath is a Cyber Security leader and a technologist with over two decades of experience in developing, managing and securing information systems for high-growth consumer electronics, broadcast, data science and artificial intelligence organisations. In addition to leading cyber security for the Global Operations Practice, he also leads the AI Security domain at McKinsey & Company, serving clients across diverse industries.

As a trusted advisor to senior business and security leadership, his current role involves securely adopting new technologies (such as GenAI and Agentic AI), building successful internal security teams, creating security programmes and taking the organisation’s security capability forward in an accelerated time-frame.

With extensive experience in DevSecOps and cloud security, he is actively engaged in research on emerging AI security challenges and contributes to the broader security community. He brings deep technical expertise and substantial industry experience to this course. He holds an MSc in Cyber Security (GCHQ certified) from the University of Oxford and BEng in Electronics and Communication.

Mr Yemi Adeniran - Course Tutor

Yemi is a senior cyber security and AI risk & governance professional with more than 25 years’ experience in cyber security, AI governance, risk and compliance. He has led and advised on multiple high-profile technology transformation programmes across global markets, bringing together deep technical expertise and business solutions.

He holds an MSc in Cyber Security and an MBA from leading UK institutions, providing a solid academic foundation that complements his extensive hands-on experience.

Throughout his career, Yemi has helped organisations strengthen their cyber resilience and AI risk posture by designing and implementing cyber risk profiles in line with ISO and NIST frameworks. His strategic guidance has supported the development of comprehensive cybersecurity improvement programmes, AI governance frameworks, and regulatory compliance roadmaps.

A committed advocate for cyber security and responsible AI, he regularly delivers executive workshops on topics ranging from global cyber security to AI governance and risk.

Onur Bicakci - Course Tutor

Onur Bıçakçı is a Vice President at Alysian, with extensive experience in technology transformations and M&A transactions—from tech due diligence and Day 1 readiness to post-merger integration. As Senior Manager at Motive Create (formerly Motive Partners) and previously Assistant Director at EY-Parthenon, he has specialised in IT strategy, cloud migrations (AWS, Azure, GCP), and enterprise AI implementations across sectors including telecoms, life sciences, healthcare, finance and private equity. Onur holds an MBA from the University of Oxford’s Saïd Business School and a BSc in Industrial Engineering from Bilkent University.

Chris Fong - Course Tutor

Chris Fong has over 12 years of direct experience in Governance, Risk and Compliance (GRC) and has worked for Big Four advisory firms KPMG and PwC, as well as major financial institutions such as Standard Chartered Bank, United Overseas Bank, and Singapore's sovereign wealth fund, GIC. Chris subsequently co-founded a VC-backed agritech startup and stepped away from active management after six years leading it from inception to a partial acquisition. He is now fully focused on helping enterprises operationalise AI governance and provides training and consulting services around an operational framework and toolkit that he developed.
Chris holds a BComp in Computing and an MSc in Technopreneurship and Innovation from Singapore's top universities, NUS and NTU. He is also a certified AI Governance Professional (AIGP) and AI Governance Architect (AIGA), a Certified Information Systems Security Professional (CISSP) and Certified Information Systems Auditor (CISA), an AWS Certified AI Practitioner, and trained in IBM watsonx.governance: Technical Essentials.
 

Clyde Johnson - Course Tutor

Clyde Johnson has over 30 years of experience at the intersection of cyber security leadership and AI governance, with credentials spanning information security, assurance frameworks and UK Government (CESG) security accreditation. Drawing on experience across global financial services, PCI-DSS regulated payment environments and secure UK Government systems, he examines how emerging AI systems are reshaping accountability, oversight and executive responsibility within high-assurance contexts.
His work focuses on developing disciplined approaches to governing agentic AI workflows within security operations, strengthening transparency, defensibility and executive accountability as autonomous systems assume greater decision-making roles.
 

Nicole Königstein - Course Tutor

Nicole Königstein is an AI Researcher and Practitioner in Agentic Systems, working across research, consulting, teaching and direct system implementation to build reliable, production-ready AI systems. Her work focuses on multi-agent architectures, evaluation, safety and long-term system behaviour.

She served as an external evaluator for a European Commission AI Grand Challenge and has advised the International Organization of Securities Commissions (IOSCO) on generative AI in regulated environments. She also serves on advisory boards for leading AI and quantitative finance conferences. Nicole regularly delivers invited talks and technical workshops across academia, industry and international events.

She is the author of Math for Machine Learning and Transformers in Action (Manning Publications).

Her forthcoming books, Transformers: The Definitive Guide – Applications Beyond NLP and AI Agents: The Definitive Guide, will be published by O’Reilly Media.

Application

How to apply for this course

To apply for this course, please complete the online application form.

If you would like to discuss your application or any part of the application process before applying, please click 'Ask a question' at the top of this page.

Payment

Places will only be confirmed upon receipt of payment.

Fees include electronic copies of all course materials and tuition.

Course fees are VAT exempt.

IT requirements

This course is delivered online using Microsoft Teams. You will be required to follow and implement the instructions we send you to fully access Microsoft Teams on the University of Oxford's secure IT network.

This course is delivered online; to participate you will need regular access to the Internet and a computer meeting our recommended Minimum computer specification.

We advise using headphones and checking that your speakers and microphone are working.