Language: English
10-18, 13:30–13:50 (America/New_York), Track 2 (206a)
The NIST AI RMF provides a structured approach to managing AI risks. This session explores how the framework offers practical tools for identifying, assessing, and mitigating risks associated with AI. By adopting the NIST AI RMF, organizations can promote ethical and safe practices in their AI operations while building resilience to potential threats. Join this session to discover practical strategies and case studies illustrating the effective application of the framework in various business contexts.
Section 1: Introduction & The Modern AI Risk Landscape (3 minutes)
This section will quickly set the stage. I'll introduce myself and immediately dive into the rapid growth of AI and generative AI, emphasizing that understanding and managing its inherent risks is crucial. I'll provide a high-level overview of key risk categories: malicious use, cybersecurity/privacy, intellectual property, fake content, and algorithmic bias. The goal is to establish the "why" for the talk, highlighting the fundamental need for risk management in today's tech landscape.
Section 2: Introducing the NIST AI Risk Management Framework (RMF) (4 minutes)
Here, I'll introduce the NIST AI RMF as a structured, voluntary framework designed to address AI's unique challenges. I'll briefly mention NIST's reputation and then explain the RMF's primary goals: improving AI trustworthiness, promoting responsible innovation, and aiding compliance. I'll concisely cover its foundational concept of trustworthy AI, listing its seven key characteristics: safety, security and resilience, validity and reliability, accountability and transparency, explainability and interpretability, privacy, and fairness (with harmful bias managed).
Section 3: Core Functions of the AI RMF (10 minutes)
This segment is the technical core, moving from theory to practical application. I'll provide a concise walkthrough of the four functions that comprise the RMF lifecycle, dedicating about 2.5 minutes to each.
Govern (2.5 minutes): This foundational layer is about establishing an organizational culture of AI risk management. I'll explain that it involves setting clear policies, defining accountability, and ensuring continuous stakeholder engagement.
Map (2.5 minutes): Once governance is in place, we "Map." This function focuses on identifying the specific context of an AI system and discovering its potential risks. I'll briefly cover categorizing the system, understanding its capabilities, and mapping its potential impacts on individuals and society.
Measure (2.5 minutes): After identifying risks, we "Measure" them. This involves assessing, analyzing, and tracking AI risks. I'll highlight the importance of selecting appropriate methods and metrics to evaluate AI systems against the trustworthy characteristics, including continuous monitoring.
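To make the "Measure" step concrete, one widely used metric for the fairness characteristic is the demographic parity difference: the gap in positive-prediction rates between two groups. The function name and sample data below are hypothetical, offered only as a minimal sketch of what measuring one trustworthy characteristic might look like in practice.

```python
# Illustrative sketch (hypothetical names/data): demographic parity
# difference, one possible metric for the "fairness" characteristic.
# Assumes exactly two groups; predictions are 0/1 labels.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between the two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Tracking such a metric over time, not just at deployment, is what the RMF's continuous-monitoring emphasis asks for.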
Manage (2.5 minutes): The final function is to "Manage" the identified and measured risks. This involves prioritizing risks and developing response plans. I'll touch upon different risk treatment strategies (mitigation, transfer, avoidance, acceptance) and the importance of documenting these strategies and creating a communication plan for incidents.
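The prioritization and treatment ideas in "Manage" can be sketched as a minimal risk register. The fields, scoring scheme, and example entries below are hypothetical illustrations, not something prescribed by the RMF itself.

```python
# Illustrative sketch (hypothetical fields/scores): a minimal AI risk
# register that prioritizes by likelihood x impact and records the
# chosen treatment (mitigate / transfer / avoid / accept).
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)
    treatment: str    # mitigate | transfer | avoid | accept

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Training-data privacy leak", 3, 5, "mitigate"),
    Risk("Model misuse for fake content", 4, 4, "mitigate"),
    Risk("Vendor model outage", 2, 3, "transfer"),
]

# Highest-scoring risks get response plans first.
for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{r.score:2d}  {r.treatment:<9} {r.name}")
```

A register like this also gives the documentation and incident-communication plan a single place to point to.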
Section 4: Conclusion & Actionable Next Steps (3 minutes)
To conclude, I'll summarize the key pillars of the NIST AI RMF and reiterate its value as a flexible guide. The final message will be a call to action, encouraging proactive integration of risk management into AI development. I'll provide one or two simple, actionable first steps an organization can take to begin their RMF journey. The presentation will end with a brief Q&A session.
A seasoned full-stack software developer specializing in cybersecurity and DevOps, Jean-François is also CEO and co-founder of BrightOnLABS, a company that will soon market a range of agentless, AI-powered cybersecurity software to protect your cloud infrastructure.