The Monitaur and PwC Germany alliance is a milestone on the path toward responsible and ethical AI governance

AI Governance & Assurance
Impact & Society
PwC Monitaur Announcement

Along my founding journey, I spent quite a bit of time with auditors, asking questions like: How do you audit complex models? What systems do you use? What are the most helpful tools? What worries you about assuring AI and ML? If you could have anything to help you audit a model, what would it be?

That journey led me to Andrew, who was on his own parallel path, obsessing over how to audit machine learning systems through his work with Capital One and ISACA.

Fast forward to today’s announcement: one of the world’s most respected and impactful audit firms has joined our journey to improve people’s lives by building confidence and trust in AI.

The challenge of auditing AI

Earlier this year, the Algorithmic Justice League published a paper on the complexities and challenges of auditing AI. The paper also demonstrated broad agreement, across a wide range of sectors and stakeholders, on the need for independent, objective audits.

A future world with responsible and ethical AI doesn’t exist without objective audits. PERIOD.

Society depends on independent, objective audits and assurances to create confidence and trust across the industries that most impact our everyday lives. Financial markets, healthcare, transportation, and energy (to name a few) all treat objective, independent audits and inspections as non-negotiable pillars of governance and regulation.

I’ll simplify the challenge of auditing AI into four questions:

  1. How do you audit AI?
  2. What information do you need to have?
  3. How is the information captured and made available for audit and validation?
  4. Who is qualified to perform the audit?

At Monitaur, we’ve been obsessed with the first three questions, and we believe the fourth is almost moot until the first three are solved. We should absolutely be concerned about anyone in the current market offering to fully attest or certify an AI system for fairness or some other measure of “responsible and ethical.” The required information is often not captured at all, and is almost never reliably available for audit and validation. And the methodologies and standards for such audits are still developing.

That doesn’t mean we should do nothing - and that’s why I’m so excited about this news. We need formative relationships that bring together expertise and technology to accelerate our responsible and ethical goals.

Germany is the canary in the coal mine of AI auditing

Depending on how market share is measured, the Big Four represent more than 70% of the audit and assurance market globally. Today, they are unquestionably the firms that the world’s most visible companies trust to conduct independent audits. So as we think about the complex challenge of auditing AI, they are best positioned to see the gaps and challenges.

Germany is a first mover in independent AI audits. With the AIC4 criteria catalogue and its leadership role in the EU, we should all be watching Germany to see how ready (or not) we are for our global ambition of auditing AI.

As much as anyone, PwC Germany has a front-row seat and an opportunity to help Germany’s leading organizations build and manage responsible and ethical AI programs. Perhaps more than anyone, they are going to get the call to perform an audit. Ahead of almost everyone, they see the gaps and want to help close them - and that is why we are forming this partnership.

PwC recognized how intentionally we built Monitaur to enable assurance. They aligned with our audit- and assurance-rooted risk and control methodologies. They appreciated our approach of making every model and model decision discoverable, verifiable, and understood. They recognized our embedded first-, second-, and third-line permissions. They celebrated our methods of independent model validation and automated documentation.

Collaborating to build trust in enterprise AI

Together, through partnerships like Monitaur and PwC Germany, we can bring transparency to AI. We can help break down the fear of black boxes and rogue sentient robots taking over our lives. We can enable robust, reliable, independent audits.

We didn’t build Monitaur to be “the only” solution for AI governance, because that is not an achievable outcome. Building trust and confidence in AI is going to take an ecosystem. It is going to require old guards and new guards coming together. It is going to require greater diversity of contribution and involvement. It is going to require collaboration.

We look forward to solving these challenges with PwC Germany and our future partners.