Don't skip the human side of managing AI

Ethics & Responsibility

Hello, Anthony Habayeb here, co-founder and CEO of Monitaur.

I'm finding that conversations about AI ethics and explainability come up with customers nearly every time we talk these days, and I'm reading more and more articles on the need for both. I was recently speaking with a journalist when a question came up: "Should AI be accountable for AI? Should AI systems get credit for their inventions?"

The human side of AI

Somewhere in all of this excitement about the possibilities of AI explainability and AI ethics, we seem to be losing sight of a fundamental fact: humans build these systems. Humans select the data these systems are trained on. Organizations run by humans allocate the budgets to build the systems that will then impact our lives.

When we talk about explainability, what we should really be discussing is who needs to understand what, and whether we are providing understanding and context for that stakeholder. A consumer impacted by a model: what do they need to know? A regulator evaluating a company or its systems: what do they need to know? A data scientist: what do they need to know? A business leader: what do they need to know?

Each of those people has different needs for understanding, and we should be thinking about how to create understanding, not just trying to "explain" AI decisions. That means framing the explanation through the eyes of the recipient, the person who wants to understand it.

Finding guideposts for AI ethics

When we talk about AI ethics, I think we sometimes discuss it in isolation, without recognizing that governmental and societal principles around fairness and equitable treatment of individuals already exist. Those principles should, in large part, be our ethical guideposts as we think about deploying systems that make decisions about our lives.

We've got some content in this issue of the newsletter that touches on these points, but I really wanted to reflect on something I've been seeing over the last month: a conversation about technical explainability and AI-specific ethics that is isolated and disconnected from context, one that may be missing the fact that we have walked this path before when implementing technologies and systems that impact our lives. We have foundational concepts that we should bring along on the AI journey, because they can really inform how we think going forward.

Hope everyone is having a great day. And please keep an eye out for the next issue of our ML Assurance Newsletter.