Article Summary

Article Summary of:

Been Kim is building a translator for artificial intelligence

Published: January 10, 2019

Eighteen months on, this interview with AI thought leader and practitioner Been Kim is still worth a read, or a reread. Kim has a gift for translating the drive for responsible AI into relatable and evocative metaphors. Beyond describing the black box problem of machine learning, she explains how she and her team at Google Brain are working to make algorithms interpretable by humans through a method called Testing with Concept Activation Vectors (TCAV). The goal is for explanations to reflect the concepts humans reason with, rather than just the input features the model relies on. As she notes, "You don’t have to understand every single thing about the model. But as long as you can understand just enough to safely use the tool, then that’s our goal."
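
For readers who want a sense of how TCAV works in practice: the published method trains a simple linear classifier to separate a network's internal activations on concept examples (say, images of stripes) from activations on random examples; the classifier's weight vector is the concept activation vector, and the TCAV score is the fraction of a class's examples whose prediction increases when activations move in that concept direction. The sketch below is a minimal illustration of that idea, not Google's implementation; the function names are illustrative, and it assumes activations and class-logit gradients have already been extracted from the model.

```python
# Minimal sketch of the TCAV idea (illustrative, not Google's code).
# Assumes layer activations and class-logit gradients are already extracted
# as NumPy arrays of shape (n_examples, n_features).
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts, random_acts):
    """Train a linear classifier separating concept vs. random activations;
    its weight vector (normal to the separating hyperplane) is the CAV."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

def tcav_score(class_logit_grads, cav):
    """Fraction of examples whose class logit increases along the concept
    direction, i.e. whose directional derivative along the CAV is positive."""
    directional_derivs = class_logit_grads @ cav
    return float(np.mean(directional_derivs > 0))
```

A score near 1 suggests the concept consistently pushes the model toward that class; a score near 0.5 (against random concepts) suggests the concept is not meaningfully used.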

From a broader perspective, she rightly notes that ML practitioners must deliver interpretability, one facet of assurance, so that humans do not abandon the potential of AI: "in the long run, I think that humankind might decide — perhaps out of fear, perhaps out of lack of evidence — that this technology is not for us."

Ethics & Responsibility