Say you hired a human to do a job. They dive into the work, and at some point along the way you ask them to explain why they made a certain decision. You might be asking the question to better understand their process, or you might be asking because of a particular outcome (good or bad) that resulted from their work.
If they repeatedly replied with “I’m not telling you” – what would you do?
Most likely, the person would (at least) be put on a performance improvement plan – or ultimately be fired. This kind of behavior and lack of communication isn’t generally tolerated in the workplace among humans.
So, why would we accept this from our models?
Operating your AI without good governance practices is the equivalent of allowing your models to say “I’m not telling you” when it comes to why, what, and how these automated algorithms make decisions on behalf of your business.
What’s the point of AI? You’re investing in models to do a job. And you should expect accountability for how that job is performed and what impact is made.
Two trends have been growing in parallel over the past decade:
In a recent survey by Deloitte, 79% of respondents say they've fully deployed three or more types of AI compared to just 62% in 2021. In the same report, 94% of business leaders surveyed say AI is critical to success today. Such remarkable growth in a single year underscores the pervasive importance of advanced modeling to businesses worldwide.
Additionally, more and more technology vendors with AI-first offerings are emerging in every market, from the most technically savvy industries to the most lagging. Building AI into software products has begun to move beyond marketing hype to reality. As with the software revolution before it, predictions that models will run the world are coming to fruition. There is every reason to believe that every structured dataset will eventually have learning models attached to it.
With that eventuality in mind, it is not surprising that we are experiencing a great awakening to the risks associated with that influence on our lives. Environmental, social, and governance (ESG) and corporate social responsibility (CSR) efforts have become closely intertwined with AI because it is such an important driver of how responsible and ethical an organization is. Without AI governance, organizations will not be able to demonstrate responsible practices around their most consequential models – those with the capacity to impact consumers’ lives most and those that can create adverse business outcomes.
A report conducted by MIT Sloan in partnership with Boston Consulting Group found that “while 84% of organizations view Responsible AI (RAI) as a top management issue, only a quarter have fully mature RAI programs.” Moreover, those organizations with stronger programs benefited significantly, since they were able to put more models in production with confidence.
Finally, Responsible AI as a job title and business function has grown quickly this year. At a high level, this is likely for three reasons:
So, AI governance is a part of ESG and CSR. It also relates to operational risk management, cybersecurity, privacy policy, and data governance. How companies organize these initiatives varies, but the important thing is that they exist and create transparency across the many stakeholders involved.
It’s time to ditch the intimidation that often rides along with the word “governance.” Think of AI governance as no more than the act of defining and executing good practices related to your machine learning and artificial intelligence models.
Framed in those terms, governance becomes something we should expect every company to do – and something that businesses should all want to do. “I don’t want to adopt best practices” is not a phrase you ever hear pass the lips of business leaders and employees.
Enterprise AI governance is the work of applying tried-and-true business practices to this newer, AI-empowered area of the business. This includes:
AI governance is similar to other business practices in that it involves:
The main difference is that with AI, we are outsourcing business decision-making to a non-human entity. The question is how we govern that effectively and responsibly.
AI has been eye-opening, forcing us to reconsider how we govern our models in general. Historically, model governance in business has been pretty loosey-goosey.
Whether it’s a statistical model or a financial model, we build it around assumptions and the expectation that it will continue to operate successfully. Good governance means documenting, monitoring, and testing those assumptions – whether they relate to a specific model or an overall AI program.
Ironically, then, the emergence of AI governance has brought greater visibility into the overall need to question, monitor, and measure model performance, along with the need to consider the risks associated with your models.
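To make that concrete, here is a minimal sketch of what testing one documented assumption could look like in production. The feature, the reference sample, and the choice of a two-sample Kolmogorov–Smirnov test are illustrative assumptions on our part, not a prescribed standard:

```python
# Minimal sketch: testing a documented distributional assumption in production.
# The feature name, thresholds, and alerting behavior are illustrative only.
from dataclasses import dataclass

import numpy as np
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test


@dataclass
class Assumption:
    """One documented assumption about a model input."""
    feature: str
    reference: np.ndarray       # sample captured and documented at training time
    p_threshold: float = 0.01   # below this, treat the assumption as violated


def check_assumption(assumption: Assumption, production: np.ndarray) -> bool:
    """Return True if production data still resembles the documented reference."""
    result = ks_2samp(assumption.reference, production)
    ok = result.pvalue >= assumption.p_threshold
    if not ok:
        # In a real program this would notify the model owner and be written
        # to the model's system of record, not just printed.
        print(f"Assumption violated for '{assumption.feature}': "
              f"KS={result.statistic:.3f}, p={result.pvalue:.4f}")
    return ok


# Example: applicant income was documented as stable and roughly log-normal.
rng = np.random.default_rng(0)
training_income = rng.lognormal(mean=10.5, sigma=0.6, size=5_000)
production_income = rng.lognormal(mean=10.9, sigma=0.6, size=5_000)  # drifted
check_assumption(Assumption("income", training_income), production_income)
```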
So, the practices of model governance and AI governance are basically the same; the difference is simply the scope and type of model. Both are concerned with:
Context means that the business reasons, scope, risks, limitations, and data are well-defined and fully documented before a model goes into production.
Naturally, as the model operates, the focus of AI governance shifts into an execution phase that centers on monitoring the contextual elements, particularly the key tests and validations that demonstrate unbiased and optimal operation.
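As one illustration of such a recurring validation, the sketch below computes a single fairness metric, the demographic parity ratio, on a model’s predictions. The synthetic data, the single protected attribute, and the 0.8 threshold (borrowed from the “four-fifths rule” used in US employment contexts) are simplifying assumptions; a real program would track many metrics across many segments:

```python
# Minimal sketch: one recurring fairness validation on binary predictions.
# The data, groups, and 0.8 threshold are illustrative assumptions.
import numpy as np


def demographic_parity_ratio(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of positive-outcome rates between two groups (1.0 = parity)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)


# Illustrative data: model approvals for two groups of applicants.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=10_000)
approval_rate = np.where(group == 0, 0.42, 0.31)  # group 1 approved less often
y_pred = (rng.random(10_000) < approval_rate).astype(int)

ratio = demographic_parity_ratio(y_pred, group)
print(f"parity ratio = {ratio:.2f}", "FLAG for review" if ratio < 0.8 else "ok")
```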
Every business and technical decision and step in the model development process should be verifiable and open to interrogation. Verification requires visibility, so it is key to have a central system of record that enables:
The gold standard of governance is when any ML system can be reasonably evaluated and understood by an objective individual or party not involved in the model development. If a machine learning project is built with the prior two principles of context and verifiability, it is far more likely that your business and risk partners can act effectively as those objective second-line and third-line parties, evaluate the work, and greenlight it to go into production.
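To ground the idea of a central system of record, here is a hypothetical sketch of what a single, verifiable decision record might capture. The field names and schema are our own invention; the point is that an objective second- or third-line reviewer can later reconstruct exactly what the model saw and decided, and detect tampering:

```python
# Minimal sketch: one entry in a hypothetical system of record for model
# decisions. Field names are illustrative, not a standard schema.
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    model_name: str
    model_version: str  # the exact artifact that produced the decision
    inputs: dict        # features exactly as seen at inference time
    output: float       # the score or decision returned
    timestamp: str

    def fingerprint(self) -> str:
        """Tamper-evident hash a reviewer can recompute to verify the record."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


record = DecisionRecord(
    model_name="credit_risk",
    model_version="2.3.1",
    inputs={"income": 52_000, "utilization": 0.41},
    output=0.87,
    timestamp=datetime.now(timezone.utc).isoformat(),
)
# Append-only storage (a database, object store, or ledger) would go here.
print(record.fingerprint())
```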
Applying the above principles will help your organization create multiple lines of defense against the outsized risks that AI can pose to the business. Lines of defense that follow best practices are both internal and external. AI governance should cover:
The push for external independent audits of AI is a common thread in regulatory and risk management discussions in the broader world. The EU AI Act, for example, will require external independent audits of high-risk AI systems. The NIST AI Risk Management Framework also states in Measure 1:
“Internal experts who did not serve as front-line developers for the system and/or independent assessors are involved in regular assessments and updates. Domain experts, users, and external stakeholders and affected communities are consulted in support of assessments.”
We should expect this requirement to propagate around the world as more AI-specific regulations and standards frameworks are launched.
In parallel with recent social movements, AI has also brought a new level of awareness around bias in data. Bias is not just an “AI problem” – as a society, we are grappling with equity and fairness, and as we continue to advance those conversations, we should expect our technology to reflect those principles and expectations. Historical data is probably going to be biased because society has been biased. This is not only an ethical and reputational risk for businesses; it’s a huge legal risk.
Once we acknowledge that bias is a HUMAN problem, we can recognize that unfairness in machine learning models is both caused and perpetuated by humans, not the machine. ML uses data that humans provide and performs the functions that we humans assign it. Any malfunctions, maladaptations, or maliciousness are thus extensions of human actions and choices. Wide-scale deployments making consequential decisions in the real world have only underscored the need for more human responsibility and accountability. This puts the onus on human prevention, which ultimately falls under the purview of AI governance.
A holistic approach to lifecycle AI governance helps root out bias so companies can mitigate and manage the problem before it scales. Moreover, a strong program of governance helps companies anticipate areas where bias might emerge and take a more proactive approach earlier in the model lifecycle. The end goals should be to:
The responsible AI that good governance enables addresses three paramount concerns for corporations today: business performance and ROI, government regulations and compliance, and corporate responsibility.
Below are a few of the business benefits of effective AI governance.
With these benefits also comes better ROI on AI programs. In a 2020 study from ESI ThoughtLab, researchers found that “overperformers” had a higher ROI from AI implementations.
AI overperformers were enterprises with a stronger business-practice foundation in these areas:
When you combine the business case, touched on at the highest level above, with the peace of mind that comes from doing the right thing for consumers and society, AI governance starts to look less like a chore and more like a vital business enabler.