Trustworthy AI? National Institute of Standards and Technology Releases Artificial Intelligence Risk Management Framework

Alessia Woolfe

What could this mean for upcoming legislation in Canada?

Exciting new advancements in artificial intelligence (AI) promise benefits in industries ranging from finance to healthcare. AI can solve complex problems faster than humans, and it is quickly shaping up to be the next step in automation. On the flip side, AI-powered content-generation tools like ChatGPT and Google's Bard present new threats ranging from plagiarism to fraud. In Canada and the United States, there is currently no comprehensive legislation governing the use and development of AI.

However, regulatory initiatives are emerging: on January 26, 2023, the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (AI RMF). NIST is an agency of the U.S. Department of Commerce that aims to "promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life." Although NIST standards are non-binding, they can become industry standards through widespread adoption. For example, the NIST Cybersecurity Framework is internationally recognized and outlines best practices for organizations seeking to manage cybersecurity risks. The AI RMF presents guidelines for developing "trustworthy" AI: AI that is lawful, ethical, and technically sound. The idea behind the guidelines is that AI will only be adopted when people can trust the technology, and that adherence to the framework will help build this trust.

Framework

The AI RMF is divided into two parts. Part One frames the risks associated with AI and identifies the characteristics that make an AI system trustworthy. Part Two sets out the core of the framework, which guides organizations in developing trustworthy AI. Part One identifies seven characteristics of a trustworthy AI system: (1) valid and reliable; (2) safe; (3) secure and resilient; (4) accountable and transparent; (5) explainable and interpretable; (6) privacy-enhanced; and (7) fair.

Self-driving cars are an example of an AI system whose trustworthiness could be assessed using these characteristics. The car must be valid and reliable in that it must consistently obey traffic rules, avoid obstacles, and not interfere with other cars or pedestrians. Safety is paramount, given the high risks driving poses to human health and life. A secure car would not be vulnerable to cyberattacks, and a resilient car would be able to drive safely during the sudden onset of a snowstorm. In a transparent system, the decision-making mechanism is generally accessible and reviewable; if the system fails to function properly, transparency can assist in determining who should be held accountable. It should also be possible to explain how the car makes decisions, such as when to brake, and to interpret why it made a particular decision in a given scenario. The car will also inevitably collect extensive data on driving habits, where a person spends time, and how much time they spend there, highlighting the need for privacy. Finally, facial recognition technology may be used to differentiate between humans and inanimate objects on the road; if the car cannot recognize the features of some people as well as others, this lack of fairness would be a significant ethical and safety problem. All of these characteristics interact with one another and sometimes must be balanced against each other.

In Part Two, the NIST AI RMF provides guidance on translating the characteristics of trustworthy AI into practice. It does so through four core "functions" that organizations can use: govern, map, measure, and manage. Mapping establishes the context of an AI system, tracking its stakeholders and functionalities to anticipate risks. The seriousness of those risks is then measured so the system can be assessed against the characteristics of trustworthy AI. These measurements are used to manage the risks by prioritizing, monitoring, and responding to them. The govern function cuts across the other three, pointing to the need for a whole-of-organization approach that brings together and coordinates these activities.
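To make the cycle more concrete, the sketch below models how an organization might record this map, measure, manage loop in a simple risk register, with govern represented by an organization-set tolerance threshold. This is a minimal illustration only; the class names, fields, and threshold are hypothetical assumptions, not structures defined by the NIST document.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str      # identified during MAP (e.g., a fairness gap)
    characteristic: str   # the trustworthiness characteristic it threatens
    severity: float = 0.0 # assigned during MEASURE, e.g., on a 0.0-1.0 scale
    mitigation: str = ""  # recorded during MANAGE

@dataclass
class RiskRegister:
    # GOVERN is reflected here as an organization-wide severity tolerance.
    tolerance: float
    risks: list[Risk] = field(default_factory=list)

    def map_risk(self, description: str, characteristic: str) -> Risk:
        """MAP: log an anticipated risk in its context."""
        risk = Risk(description, characteristic)
        self.risks.append(risk)
        return risk

    def measure(self, risk: Risk, severity: float) -> None:
        """MEASURE: quantify how serious the mapped risk is."""
        risk.severity = severity

    def manage(self) -> list[Risk]:
        """MANAGE: surface risks above tolerance for mitigation and monitoring."""
        return [r for r in self.risks if r.severity > self.tolerance]

# Hypothetical usage, echoing the self-driving car example above.
register = RiskRegister(tolerance=0.5)
risk = register.map_risk("face detection underperforms for some pedestrians", "fair")
register.measure(risk, severity=0.8)
for r in register.manage():
    r.mitigation = "retrain on more representative data; monitor in the field"
    print(r.description, "->", r.mitigation)
```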

Implications

The AI Risk Management Framework will be an important tool for organizations internationally. Though NIST standards are voluntary, compliance with them is a requirement for many public sector entities in Canada (for example, the Canadian Centre for Cyber Security adheres to the NIST cybersecurity standards). Given this requirement, the NIST framework can influence how Canada will legislate around AI. The federal government's new private sector privacy bill, Bill C-27, is currently at second reading in the House of Commons. One of the proposed acts in the bill is the Artificial Intelligence and Data Act, which would become Canada's de facto comprehensive AI regulation. As this proposed legislation makes its way through Parliament and goes through changes, the need for NIST compliance in industry may affect the final product.

One of the main benefits of AI is its ability to parse enormous quantities of data and recognize patterns that may be invisible to humans. A degree of inexplicability is often a side effect of this power, but it is also a major source of distrust. The framework calls for AI systems to be explainable, but how far that expectation extends is yet to be determined. Balancing the need for explainability against the ability to perform tasks beyond human capability continues to be a challenge. Only time will tell whether the NIST framework will serve its mission to create safe, secure, and reliable systems, or whether regulation will impede AI's progress and potential.

Visualization of the core functions from the AI RMF document. Credit: NIST

Editor’s Note: Alessia Woolfe and Hannah Rosenberg are both 1L representatives of the student-led Privacy and Cybersecurity Law Group (PCLG).
