It increasingly looks like a matter of “when,” not “if,” AI will become a regulated category of technology. Questions about safety, security, and ethics have grown more urgent as ever more sophisticated and consequential cognitive tasks are delegated to machines.
Some industries were already in legislators' crosshairs. A notable example affects life insurers operating in Colorado, where a first-of-its-kind regulation has introduced governance and risk management requirements for the use of algorithms or predictive models. Broader legislation is taking shape at the federal level: the Biden-Harris administration recently signed an executive order on responsible AI that went further than many predicted.
The movement towards stricter regulation of AI is now clear in the U.S. and internationally, and the focus is on the need for dedicated AI governance. However, while rules governing AI are new and pose a threat if left unmanaged, it is important to remember that governance is not a synonym for compliance.
Done properly, enterprise governance aligns the strategic objectives of a business with the assessment and management of risk, and ensures that company resources are used responsibly and efficiently. The objectives of AI governance are similar, and the prospect of a formal process represents an opportunity to improve the quality and expand the impact of AI models.
An alarming statistic for anyone interested in the responsible and efficient use of company resources is that 60 to 80 percent of AI projects fall short of their intended objectives.[1] The study behind these figures attributed the problem to poor internal alignment and a lack of collaboration. Many AI systems cross internal silos in their inputs, their outputs, or both.
Enterprise leaders who see evidence of this statistic in their organizations should already be asking how to turn it on its head. Those who do not should probably be looking more closely. There’s a good argument that the reach of innovation should exceed its grasp. However, when teams fail to work effectively with each other, they not only waste budgets and time but also thwart innovation and diminish future competitiveness.
The business case for AI governance rests on uniting controls for risk with programs for achieving business and stakeholder objectives. Dedicated processes and frameworks can achieve this alignment by setting clear requirements and embedding best practices into the building of complex systems. The bonus is that these same controls also align with compliance needs.
“[Data and analytics] and business strategy are among the main drivers for AI governance. When AI governance is lacking, increased costs are the most common negative impact.” - AI governance frameworks for responsible AI, Gartner Peer Community
It’s no surprise that the tenets of AI governance complement enterprise governance: the objectives of reducing costs and driving revenue are shared. While AI governance needs specialist knowledge, its stakeholders span the business.
If your role is related to data science or AI model building, risk, or governance, or if you’re an executive in a business that uses AI, you are likely among these stakeholders. Sooner rather than later, the outcomes of the AI safety debate will directly affect your organization and your corporate responsibilities. According to a recent AWS survey, few enterprises have established a dedicated AI governance function. But with regulation taking shape and the business impact growing, how this function will be structured, tasked, and resourced is a near-term question.
At Monitaur, we’re ready to help you achieve great AI governance for both the near and long term. Contact us to assess the best place to start in your organization.
________________
[1] Schmelzer, R. (2022). “The One Practice That Is Separating The AI Successes From The Failures”