Article Summary of:

NIST Asks A.I. to Explain Itself

Published:
August 18, 2020

While not explicitly a regulatory agency, the National Institute of Standards and Technology (NIST) has massive reach and influence, so its request for public comment on newly drafted principles for explainable AI signals growing momentum behind AI governance. The article's title undersells some of the deeper content covered after the principles themselves, which align with many other groups' conceptions: every output should come with supporting evidence; explanations should be understandable to their users; explanations should accurately reflect the system's process; and the system should operate within its design constraints. Jonathon Phillips, one of the authors, expounds on challenges in the field of AI explainability, since different users have different expectations of comprehensibility. He also questions the reliability of human explanations, even imagining a future in which machines strengthen our human capabilities in this area.

Regulation & Legislation