The National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, has been an American institution since 1901. Its mission is to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.
NIST has been developing the AI Risk Management Framework (RMF) to “better manage risks to individuals, organizations, and society associated with artificial intelligence (AI).” While NIST’s mission is broad and far-reaching, its recent contributions to AI technology have been strategic and specific.
NIST is crafting the framework through a community approach: more than 85 entities contributed comments and recommendations on the March 2022 draft. The organizations contributing to the framework’s development, and the research behind it, range from higher education institutions and Fortune 500 companies to research labs and AI technology companies (including Monitaur - read our NIST AI RMF comments here).
NIST’s framework for AI risk management is currently in draft form, and the organization aims to release an official version 1.0 in early 2023. In both its draft and final forms, the framework is meant to be a voluntary resource that helps organizations use AI technology more effectively while:
NIST lists four intended audiences for the AI RMF:
The overall outline of the NIST AI Risk Management Framework Draft is:
Additional information in the draft includes the scope of the framework, intended audiences, and a practice guide.
NIST compiled information on understanding the risks and adverse impacts of AI, along with the challenges of AI risk management. Key points include the potential harms (and benefits) that can stem from AI, the ability to measure and track AI initiatives and their associated risks, and organizational integration.
AI Risks and Trustworthiness
In this section of the AI RMF, NIST defines “characteristics that should be considered in comprehensive approaches for identifying and managing risk related to AI systems: technical characteristics, socio-technical characteristics, and guiding principles.”
Read the full AI Trust section to learn more.
Core RMF Components
The ultimate objective of the AI RMF Core is to enable functions within an organization to “organize AI risk management activities at their highest level to map, measure, manage, and govern AI risks.”
NIST includes the following functions as essential to AI governance:
This vital project from NIST has the potential to accelerate effective governance and assurance of AI and ML systems.
At Monitaur, we believe that, by creating more trust and confidence in how these technologies are applied and managed, all stakeholders – corporations, regulators, and consumers – can benefit from extraordinary innovations that will improve our lives. We also believe that good AI requires great governance to ensure that these systems are more fair, safe, compliant, and robust than the human processes that they replace or enhance.
We recognize that NIST is at its core a technical organization seeking to provide clarity on the use of AI technologies, and the AI RMF achieves that aim. However, the risks associated with AI are not solely technical in nature, nor are we at a time in its maturity when we can mitigate those risks effectively with purely technical solutions. Recognizing those limitations, we encourage NIST to consider a holistic, lifecycle approach that incorporates oversight of the people and processes involved, in addition to the model and data risk management.
NIST previously delivered just such a comprehensive approach in its Cybersecurity Framework. There, inherently technical activities (e.g., Detect) are complemented by human- and process-driven activities (e.g., Identify), along with a recognition that technical activities must be supported by effective human effort. This combination of people, process, and technology enables organizations to mitigate risks, and we believe it should serve as a model for the AI RMF to create direction, clarity, and accountability for organizations that wish to use AI systems now and in the future.
Read Monitaur’s full response to the NIST AI RMF draft.
Earlier this year, NIST also published a special report: Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.
This special report from NIST is aligned with what we’ve said at Monitaur from day one: bias is a human problem, not a machine problem. Also, “context is everything.”
In this report, NIST suggests a “socio-technical” approach to mitigate bias in AI by acknowledging that AI operates in a larger social context.
Some of the key takeaways from the report include:
“This document has provided a broad overview of the complex challenge of addressing and managing risks associated with AI bias. It is clear that developing detailed technical guidance to address this challenging area will take time and input from diverse stakeholders, within and beyond those groups who design, develop, and deploy AI applications, and including members of communities that may be impacted by the deployment of AI systems.” Read the full report.