In a recent VentureBeat article, Kyle Wiggers examines what Explainable AI (XAI) is and why the promises made about it are not always attainable. Explainable AI is almost always more beneficial than black-box AI models; however, XAI faces several technical barriers. These barriers leave the models uninterpretable to 65% of employees. The explanations XAI provides often do not present data transparently enough to be understood by non-technical stakeholders.
As Wiggers notes, “XAI should give users confidence that a system is an effective tool for the purpose and meet society’s expectations about how people are afforded agency in the decision-making process.” Although XAI often fails to meet these expectations, businesses should continue moving toward more transparent AI models while remaining cautious about how much value explainability actually delivers. Explainability alone does not solve the problem of understanding AI and ML models, but it is a critical step toward doing so.