Excerpt from Global Compliance News Article, Published on Oct 22, 2024.
As artificial intelligence (AI) continues to evolve, the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF) is emerging as a critical tool for managing AI-related risks. First introduced in January 2023, the framework is gaining traction as an essential guide for both public and private organizations aiming to develop AI systems responsibly.
In recent months, the framework has become a key reference point for US legislation. For example, in October 2023, the White House issued an executive order on AI governance that directly referenced NIST’s AI RMF. Similarly, California’s 2023 executive order on responsible AI use directed state agencies to base AI procurement guidelines on the NIST framework, with the state’s public sector AI guidelines now incorporating its principles.
NIST's guidance is also shaping obligations for private-sector AI developers. California's proposed legislation—the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act—would require covered AI developers to follow NIST's recommendations. Meanwhile, Colorado's forthcoming AI law will offer an affirmative defense to companies that adhere to the AI RMF, further underlining the framework's growing influence.
NIST's AI RMF is designed to help organizations mitigate risks and increase trustworthiness in AI systems. It outlines key characteristics of trustworthy AI, including validity and reliability, safety, security and resilience, and accountability and transparency, while encouraging organizations to implement robust governance measures. These characteristics serve as the foundation for AI governance, risk assessment, and risk management.
As AI becomes more deeply embedded in society, the NIST AI RMF is set to play a crucial role in shaping how AI is developed, governed, and deployed—both in the US and beyond. It offers a flexible yet comprehensive approach to managing AI risks and building trustworthy systems.
To delve deeper into this topic, please read the full article on Global Compliance News.