5 tips for governance in your AI product strategy

Regulation & Legislation
Risks & Liability
Ethics & Responsibility
AI Governance & Assurance

The biggest users of AI tend to be large organizations, but many of the AI-based solutions they deploy are innovations of specialty technology vendors. A common stumbling block for vendors is the need to demonstrate to enterprise buyers that the AI models powering their solutions meet applicable standards for safety and ethics, quality, and performance.

Risk teams are often the first to consider a systematic approach to these demands. The first task is navigating the complex, sometimes conflicting rules and principles that govern AI systems. After that comes the need to balance the workflows of model builders and engineers with the growing expectation that non-technical audiences be able to understand AI models.

We spoke to Susan Bow, General Counsel at CAPE Analytics, a fast-growing vendor of AI solutions for insurance and real estate enterprises, about how her team meets and exceeds customer expectations with an AI governance solution from Monitaur.

A general counsel’s tips for AI governance

1. How did AI governance become a priority?

We could see that AI governance was set to become a priority for our customers. The National Association of Insurance Commissioners (NAIC) had published its Principles on Artificial Intelligence, and we saw increasing regulatory sensitivity. We knew our product had to align with industry and regulatory responsibilities.

Fortunately, our engineering team was open-minded about the need for and value of governance. They already had a desire to standardize their documentation and risk controls, and appreciated the advantages of a formalized process that could give them a single system of record. This meant that what started as a matter of corporate responsibility soon evolved into an approach that could give us and our clients confidence that our technology does what we say it does. 

2. AI governance usually begins with defining controls. Is that where you began?

Yes. We saw the complexity of rules and principles that our customers face, and we knew we didn’t want to start from a blank sheet of paper. 

The function of compliance is to take a rule or a principle and convert it into a process that can be followed and assessed. In practice, that can be very complex and can seem scary, which is why we sought external support.  

With the support and interest of our engineering leadership, I looked for an expert partner who really understood AI technology and the regulatory space, and had built a foundation of controls and evidence environments. That ultimately brought us to Monitaur. I knew we’d need this balance of insight and capabilities if we were to move beyond regular software monitoring and fully respond to what customers and regulators look for and ask about. 


3. What problems has AI governance helped you resolve?

One of the challenges is that the regulatory environment for AI is only beginning to take shape. However, some compliance fundamentals already exist: what data are you using, what models do you have, what monitoring is in place, and so on.

Technology is invariably ahead of regulation, so while the initial push to look at formalizing a governance process came from seeing the regulatory trend, the primary goal was to have a system that could help us build good AI models. That means the suitability, effectiveness, and performance of the models matter in parallel to regulatory compliance.

4. What validation have you had that AI governance is worth the investment?

It’s directly contributed to improved business outcomes – better quality models, confidence in their outputs, trust with clients, and efficiencies such as responding quickly to client requests. Ultimately, we see AI governance as a business driver.

It’s enhanced several key workflows, such as quality checks before a model can progress. The clarity we gain from governance provides consistency as we scale, and it gives us easily accessible, understandable real-time records of how a model has been built and is functioning.

5. Do you have any advice for others considering AI governance?

It’s hard to make governance resonate when it is not core to your business. The value is in positive outcomes, not just compliance, but not everyone sees that, and this is where many get stuck.

Take a step back to understand the business impact of good software and product management. Put your existing controls on a piece of paper, see how well you are covered for risk and quality, and how you show compliance with those controls. When you see it all on paper, the gaps become clear and the governance process starts to flow. 

The collaboration that came from formalizing our model governance operations has been the greatest reward. Governance processes must integrate with daily work. Achieving that is a group effort, both internally and with the unique vision and support provided by Monitaur.

To get the full conversation about risk and innovation with AI, watch an encore presentation of an AI company’s approach to AI governance.

Webinar replay: How an AI company formalized AI governance