Suppose you asked a team member for the reasoning behind a decision they made on the job. You might want to understand how they worked, or you might be following up on a result, good or bad, of their labor.
If they kept replying “I'm not telling you,” what would your next step be?
At the least, the individual will probably be put on a performance improvement plan - or fired. This type of conduct and lack of communication is not typically accepted in the workplace.
What makes this acceptable coming from our models?
Without proper governance practices, deploying AI is like asking your models to keep silent about why, what, and how they're making decisions that affect your business.
The purpose of AI is to create models that achieve desired outcomes—thus, it's important to be mindful of the results they yield and the impact they make.
Both climate change and the adoption of artificial intelligence (AI) have been on the rise over the past decade.
Businesses are increasingly expected to take into account social and environmental issues, as evidenced by the Environmental, Social, and Governance (ESG) and Corporate Social Responsibility (CSR) movements.
A Deloitte survey found that within one year, the percentage of people who had deployed three or more types of AI rose from 62% to 79%, and 94% of business leaders felt that AI was vital for success. This rapid expansion demonstrates just how pervasive advanced modeling has become in businesses around the world.
Moreover, more and more tech businesses with AI-first offerings are appearing in every sector, from the most state-of-the-art markets to the laggards. Embedding AI into software applications has become a reality rather than empty hype. As with the earlier software wave, predictions that models will run the world are coming true. It seems reasonable to assume that every organized dataset will eventually have learning models connected to it.
Given the implications of AI, it is understandable that ESG and CSR are becoming more closely linked to it. To be responsible and ethical, organizations must have proper governance over their AI models, which can shape consumers' lives or lead to negative business results. Without this governance, firms cannot demonstrate responsible behavior.
A joint study by MIT Sloan and Boston Consulting Group showed that "though nearly all companies saw Responsible AI (RAI) as a major concern, just one in four had fully developed RAI programs." Furthermore, those organizations with better programs saw more success as they were able to introduce more models with assurance.
Market demand, technological advancement, and urgent global needs have driven rising demand for the Responsible AI role and its associated business function, as markets seek new opportunities and reliable solutions at a global scale.
Businesses that invest in responsible AI are likely to see an improved return on their investment. A growing expectation from consumers, regulators, investors and society is that businesses conduct themselves responsibly.
Business leaders tend to make the right decision when given the chance.
Let's not be intimidated by the concept of AI governance. It is simply the practice of setting and maintaining good standards for your ML and AI models.
In this context, governance is something that all businesses should aspire to do. There's no excuse not to adhere to the expected standards of corporate behavior.
Executing enterprise-level AI governance requires applying traditional business management principles to this newer technology, including:
Similar to other business practices, AI governance requires a combination of people, structures, and processes to be successful across the entire data science lifecycle. Good governance adheres to the following requirements:
The key distinction is that AI entrusts business decisions to an automated system; the concern then becomes how to govern it effectively and responsibly.
AI has really challenged us to think differently about our model governance, which had previously been quite lax.
When we construct a statistical or financial model, we build it on the expectation that it will keep working well. Good governance means recording, tracking, and validating those expectations, whether for one particular model or for a larger AI program.
Consequently, the rise of AI governance sheds light on the necessity to query, monitor, and evaluate model performance, as well as contemplate risks connected with the models.
Therefore, the scopes of model governance and AI governance are similar in the following ways:
Let's look at three principles for managing responsible AI and establishing governance:
Prior to production, it is important to ensure that the goals of the business, scope, potential risks, existing limitations, and data are properly defined and documented in order to maintain clear context.
When the model is in use, the focus of AI governance moves to monitoring the context around it, particularly testing and validating that the model is operating fairly and efficiently.
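To make the in-production principle concrete, here is a minimal sketch of one common post-deployment check: comparing the live score distribution of a model against the distribution recorded at training time using the Population Stability Index (PSI). The bucket count, the sample data, and the 0.2 alert threshold are all illustrative assumptions, not standards from the original text.

```python
# A minimal sketch of a post-deployment drift check, assuming a model
# that emits scores in [0, 1] and a snapshot of training-time scores.
import math

def psi(expected, actual, buckets=10):
    """Population Stability Index between two score samples in [0, 1]."""
    edges = [i / buckets for i in range(buckets + 1)]

    def frac(sample, lo, hi):
        # Smooth empty buckets with a count of 1 to avoid log(0).
        n = sum(1 for s in sample if lo <= s < hi) or 1
        return n / len(sample)

    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        e, a = frac(expected, lo, hi), frac(actual, lo, hi)
        total += (a - e) * math.log(a / e)
    return total

# Illustrative data: scores logged at training time vs. scores seen live.
train_scores = [0.1, 0.2, 0.25, 0.4, 0.5, 0.6, 0.7, 0.8, 0.85, 0.9]
live_scores  = [0.15, 0.2, 0.3, 0.35, 0.5, 0.55, 0.65, 0.75, 0.8, 0.95]

drift = psi(train_scores, live_scores)
print(f"PSI = {drift:.3f}", "ALERT" if drift > 0.2 else "stable")
```

In practice a check like this would run on a schedule, with the threshold and response (alert, retrain, roll back) set by the governance program rather than hard-coded.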
Every business and technical decision and action taken during model development should be open to verification and scrutiny. A central system of record that provides this visibility helps to ensure that the team is accountable to governance.
Adopting the gold standard of governance ensures that ML models can be evaluated and comprehended by an impartial individual or entity not affiliated with the model's construction. If a machine learning initiative is designed with context and transparency in mind, stakeholders like risk managers have the necessary information to confidently approve its deployment.
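A central system of record like the one described above can be sketched very simply. This is a hypothetical, in-memory illustration, assuming an append-only log of decisions is the first line of defense; the field names (`model_id`, `decision`, `rationale`) and the example entries are inventions for the sketch, not a prescribed schema.

```python
# A minimal sketch of an append-only system of record for model
# decisions, giving an independent reviewer visibility into who
# decided what, and why.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class GovernanceRecord:
    model_id: str
    decision: str       # e.g. "approved for production"
    rationale: str      # why the decision was made
    recorded_by: str    # who is accountable for it
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class SystemOfRecord:
    """Append-only: records can be added and read, never edited."""
    def __init__(self):
        self._log = []

    def record(self, rec: GovernanceRecord) -> None:
        self._log.append(rec)

    def history(self, model_id: str):
        """Everything a reviewer outside the build team needs to see."""
        return [asdict(r) for r in self._log if r.model_id == model_id]

sor = SystemOfRecord()
sor.record(GovernanceRecord("churn-v2", "approved for production",
                            "met fairness and accuracy gates", "risk-team"))
print(sor.history("churn-v2"))
```

A production version would live in a database with access controls and tamper-evident storage, but the design point is the same: decisions and rationale are written down once and never silently rewritten.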
Applying the above principles will help your organization create multiple lines of defense against the outsized risks that AI can pose to the business. Lines of defense that follow best practices are both internal and external. AI governance should cover:
The push for external independent audits of AI is a common thread in regulatory and risk management discussions in the broader world. The EU AI Act, for example, will require external independent audits of high-risk AI systems. The NIST AI Risk Management Framework also states in Measure 1:
“Internal experts who did not serve as front-line developers for the system and/or independent assessors are involved in regular assessments and updates. Domain experts, users, and external stakeholders and affected communities are consulted in support of assessments.”
We should expect this requirement to propagate around the world as more AI-specific regulations and standards frameworks are launched.
In parallel with recent social movements, AI has also brought a new level of awareness around bias in data. Bias is not just an “AI problem” – as a society, we are grappling with equity and fairness, and as we continue to advance those conversations, we should expect our technology to reflect those principles and expectations. Historical data is probably going to be biased because society has been biased. This is not only an ethical and reputational risk for businesses; it’s a huge legal risk.
Once we acknowledge that bias is a HUMAN problem, we can recognize that unfairness in machine learning models is both caused by and perpetuated by humans, not the machine. ML uses data that humans provide and performs those functions that we humans assign it. Any malfunctions, maladaptations, or maliciousness are thus extensions of human actions and choices. Wide-scale deployments making consequential decisions in the real world have only uncovered the need for more human responsibility and accountability. This puts the onus on human prevention, which ultimately falls under the purview of AI governance.
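One concrete way humans take that responsibility is to measure outcomes by group before a model ships. Here is a minimal sketch of a demographic-parity check; the group names, outcome data, and the 0.1 flag threshold are all illustrative assumptions for the sketch, and real fairness review involves more than one metric.

```python
# A minimal sketch of a pre-deployment bias check on binary model
# outcomes (1 = favorable decision), bucketed by group.
def selection_rate(outcomes):
    """Fraction of favorable (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in favorable-outcome rates between groups."""
    rates = {g: selection_rate(o) for g, o in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative data: loan approvals (1) and denials (0) by group.
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],   # 62.5% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],   # 25.0% approved
}
gap, rates = demographic_parity_gap(outcomes)
print(rates, "gap =", gap, "FLAG FOR REVIEW" if gap > 0.1 else "ok")
```

The point is not the specific metric but the habit: a human sets the standard, a check enforces it, and a flagged gap routes back to humans for investigation rather than shipping silently.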
A holistic approach to lifecycle AI governance helps root out bias so companies can mitigate and manage the problem before it scales. Moreover, a strong program of governance helps companies anticipate areas where bias might emerge and take a more proactive approach earlier in the model lifecycle. The end goals should be to:
The responsible AI that good governance enables addresses three paramount concerns for corporations today: business performance and ROI, government regulations and compliance, and corporate responsibility.
Below are a few of the business benefits of effective AI governance.
With these benefits also comes better ROI on AI programs. In a 2020 study from ESI ThoughtLab, researchers found that “overperformers” had a higher ROI from AI implementations.
AI overperformers were enterprises that had more of a business practice foundation in these areas:
When you reflect on the business case, touched on at the highest level above, combined with the peace of mind that comes with doing the right thing for consumers and society, AI governance starts to look less like a chore and more like a vital business enabler.