
Highlights from Towards Data Science podcast

Recently, our CEO Anthony Habayeb had the opportunity to speak with Jeremie Harris on the Towards Data Science podcast.

Together they discussed the importance of AI Governance within corporations as a growing chorus of regulatory bodies, consumers, and lawmakers across the globe calls for increased regulation of artificial intelligence systems. Here are some of the themes and key moments that emerged from their discussion.

The call for AI regulation is here

AI's explosive growth over the past decade has opened the door to new and innovative technologies, but when neither the public nor the corporations using them understand how they work, those technologies are not as effective as they could be.

There’s a heck of a lot of talk about the concerns about AI. I think that’s creating a new friction that is slowing down a lot of innovation. [...] Regulators and the public are saying, "Wait a minute. This is a black box that you can’t understand? How can you see this as being fair? I’m not okay with that."

– Anthony Habayeb

Consumers want to understand how and why algorithms reach decisions on topics that directly impact their lives. In the absence of oversight of such consequential decisions, consumers will resist placing trust in AI. It is therefore in corporations' best interest to implement oversight that demonstrates well-informed and well-intentioned deployment of AI and ML systems.

While the United States government has yet to take legislative action to regulate artificial intelligence, businesses should understand that the FTC and OCC have already stated that established and active regulations do apply to artificial intelligence systems. Businesses can take positive steps today to assure and govern their high-impact AI and ML, and protect themselves from regulatory, strategic, and reputational risks.

The need for third-party oversight

In enterprises today, we expect there to be levels of supervision to avoid risk. Why should AI be any different? 

There are already many understood methodologies for managing risk. And those methodologies already have concepts of people oversight, process oversight, and objective verification. The person who bakes the cake can’t say the cake tastes amazing. The person who writes the software doesn’t necessarily say their software is perfect, right?

– Anthony Habayeb 

Mitigating risk is a process that requires many stakeholders across a business. The creators of a model should not be the ones responsible for overseeing their own work. As the creators, they cannot gain the necessary outsider's perspective and are therefore likely to miss issues that can have a hugely negative impact on people's lives and opportunities. Third-party oversight makes catching these mistakes more feasible because the reviewers are further removed from the model construction process.

Eliminating bias is a myth

Across industries and the larger data science community, more and more people are grasping just how susceptible machine learning and artificial intelligence are to bias in their algorithmic outcomes. AI is made by human beings who have biases themselves that they unintentionally embed within their software. The data on which these systems rely also often reflects systemic biases seen across societies.

You need to go into the use of algorithmic decisioning and AI and ML just recognizing that something is gonna go wrong. Bias against protected classes is one of those things that no matter what guardrails you try to put in place, it will happen. So, really the conversation should be: What guardrails do I have in place to try to mitigate, to try to identify when that happens, and to resolve once I identify that.

– Anthony Habayeb 

Instead of talking about “eliminating” biases, we should talk about mitigating the risk of bias through a comprehensive approach. Through oversight, governance, and auditing, companies can identify biases and resolve them as they are discovered. Just like human decisions, no piece of software will ever be free of bias. Companies need to manage these “intelligent” systems as they have long managed human ones: with additional lines of defense and specific controls, tailored for the new paradigm and scale that AI presents.
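To make the idea of a guardrail concrete, here is a minimal, hypothetical sketch in Python: a monitoring check that compares approval rates across groups and flags a decision log for human review when the ratio between the lowest and highest rates falls below the familiar four-fifths threshold. The function name, threshold, and data are illustrative assumptions, not a reference implementation.

```python
# Illustrative guardrail: flag disparate impact in approve/deny decisions
# using the four-fifths rule as a tripwire. All data here is hypothetical;
# in production these records would come from monitoring logs.
from collections import defaultdict

def disparate_impact_check(decisions, threshold=0.8):
    """decisions: iterable of (group, approved) pairs.
    Returns per-group approval rates, the impact ratio, and a review flag."""
    counts = defaultdict(lambda: [0, 0])          # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    rates = {g: a / t for g, (a, t) in counts.items()}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio, ratio < threshold

# Hypothetical audit sample from a credit model's decision log.
log = [("A", True)] * 60 + [("A", False)] * 40 \
    + [("B", True)] * 35 + [("B", False)] * 65

rates, ratio, needs_review = disparate_impact_check(log)
print(rates)                                       # {'A': 0.6, 'B': 0.35}
print(f"impact ratio = {ratio:.2f}, review needed: {needs_review}")
```

In a real governance program, a check like this would run continuously against production decisions and trigger a documented review: the identify-and-resolve loop Habayeb describes.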

Be cautious of “Explainability”

In the fray of technical debate about the "right" explainability tools for the model at hand, an important dimension fades from view. Who exactly is the audience of an "explanation"?

“Explainability” is a dangerous word. “Understanding” is the right word. We very much need to align with the stakeholders and what they want to know. So, with my credit example, [as a consumer] I’d like to know: Are there things I could do to improve my score? Are there thematically certain areas that impacted what my negative treatment might have been?

– Anthony Habayeb

Today in the realm of Explainable AI tools, “explainability” means explainable to a person with strong technical skills and deep familiarity with how the model works. That does not mean it is explainable to the average consumer wondering why they received one insurance price, line of credit, or medical diagnosis instead of another. In the future, the corporations that provide transparency helping customers understand why the machine made certain decisions will have focused on building trust, and will possess a unique competitive advantage over those that cannot.
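As a toy illustration of that gap, the sketch below (model weights, feature names, and wording all hypothetical) takes the kind of per-feature attributions a data scientist might inspect and reduces them to the plain-language reason codes a consumer actually needs.

```python
# Illustrative sketch: turning a simple linear credit model's internal
# attributions into consumer-facing reason codes. Every weight, feature,
# and phrase below is hypothetical.
weights = {"utilization": -2.0, "late_payments": -1.5, "history_years": 0.8}
applicant = {"utilization": 0.9, "late_payments": 3, "history_years": 2}

plain_language = {
    "utilization": "Your credit utilization is high.",
    "late_payments": "Recent late payments lowered your score.",
    "history_years": "Your credit history is relatively short.",
}

# A technical "explanation": each feature's contribution (weight * value).
contributions = {f: weights[f] * applicant[f] for f in weights}

# A consumer "understanding": the two most negative contributions,
# translated into plain language the applicant can act on.
for feature in sorted(contributions, key=contributions.get)[:2]:
    print(plain_language[feature])
```

Real models are rarely this simple, but the design point carries over: the technical attribution and the consumer-facing explanation are different artifacts, built for different stakeholders.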

Check out Towards Data Science's summary of the session and their excellent educational content.