Governance as a roadmap for AI transformation in insurance

Regulation & Legislation
AI Governance & Assurance

The word governance often inspires a natural association with compliance and regulation. But when it's designed and incorporated into business processes correctly, governance becomes the enabler that moves new ideas forward despite regulation.

An exclusive interview from Live@ITC Vegas 2025

Join Marissa Buckley from the Live@ITC 2025 stage, where she interviews Anthony Habayeb, CEO and co-founder of Monitaur, about AI, regulation, and what it will take for insurers to move AI forward responsibly.

What is Monitaur? 

Founded in 2019, Monitaur is the leading provider of AI governance for the insurance industry. Its mission is to enable trust in AI by helping carriers, Insurtechs, and regulators achieve confidence, transparency, and compliance.

What motivates your customers to pursue AI governance as a strategy?

While compliance is one motivation, it often becomes table stakes in the context of a well-formed AI strategy. The primary motivation for Monitaur's customers is excitement about the business opportunity AI presents. They're using AI and responsible automation to achieve goals like improving ratios and gaining efficiency in claims processing. They also want visibility so they can overcome unknowns and failures safely as they innovate. Good governance serves as a roadmap and enabler for successful AI transformation.

As an executive, I had to understand that in insurance, we never really know the cost of goods sold. How does that relate to the implementation and governance of AI, especially considering the need for compliance?

Monitaur gives insurers greater governance and control over their technology investments, which means less uncertainty and better-mitigated risk. We help insurers estimate that risk more accurately because we provide greater insight and ensure that usage and outcomes are as expected.

What's the biggest regulatory shift that's happening for insurers right now?

Regulators are there to protect consumers and make sure insurance companies can keep their promises. The shift we're seeing is that they are now focusing on how companies do what they say they do to maintain responsible, open, safe, and fair AI use. For example, regulators are preparing to ask questions about AI during financial exams, requiring a "package of proof" that gives them confidence in a company's handling of AI.

How should insurance leaders think about managing third-party AI risk? 

The biggest AI risk for carriers isn't the AI they build, but the embedded AI in the systems they buy from vendors, which is forecast to make up 85-90% of carriers' AI by the end of next year. Build an overall AI governance program with a clear corporate policy, and then apply those standards to vendors. More specifically:

  1. Do not treat vendors differently from internal AI systems development. Use the same corporate policy and risk assessment (high, medium, or low risk).
  2. Require the vendor to demonstrate its governance using the same program you've applied to your internal builds. Whether you buy it or build it, apply the same governance.

Watch the full interview or contact us if you have any questions.