[Video transcript lightly edited for clarity and grammatical errors.]
Hello, Anthony Habayeb here, the Co-founder and CEO of Monitaur, a software platform building transparency and assurance solutions for machine learning. Welcome to the 4th edition of the ML Assurance Newsletter, a newsletter at the intersection of machine learning, regulation, and risk.
Recently, the White House put out updated guidance on the responsible and appropriate use of artificial intelligence across federal departments. That guidance is also likely to trickle down to other regulatory agencies. The European Union has just released new guidance as well.
A consistent theme across these documents is the concept of principles, which I talked about in our last issue. This newsletter edition also includes an article on turning principles into practice, authored by the former head of AI at Accenture.
As we talk to clients about creating transparency and assurance, one of the most important things for organizations to consider is how they marry the concepts of machine learning monitoring, observability, and explainability with the human element of creating assurances. The European Commission even went as far as to say that any models making high-risk decisions should be human-verifiable.
So I think these articles do an interesting job of prompting thought about what it looks like to establish first principles and then deploy them in an organization-wide framework of assurance and control.
As always, please do share this newsletter. If you have anything you think we should be reading or sharing with this audience, please don’t hesitate to reach out. This is probably our last issue before the new year, so happy holidays to all and a happy new year, and we’ll talk to you soon. Take care.