[Video transcript lightly edited for clarity and grammatical errors.]
Welcome to the Machine Learning Assurance Newsletter from Monitaur. I’m Anthony Habayeb, the CEO of Monitaur, a software platform bringing transparency and assurance to machine learning. This newsletter is meant to bring you closer to the relationships between machine learning, regulatory guidance, and risk, and in this edition we have six pieces of content that we think do that.
The first article is something out of the news last week. Somebody was wrongly accused of a crime by machine learning software. It’s an interesting piece for considering where we are today in using machine learning in high-stakes situations, and hearing the manufacturer say “we don’t specifically test for bias” is somewhat mind-blowing.
But regulators are developing their standards and their policies for how machine learning should be used, and so in this edition we provide a few examples of regulators and their outlooks on machine learning. We have the Food and Drug Administration, the Federal Trade Commission, and the National Association of Insurance Commissioners.
Then we end this issue with two marketplace perspectives. One is about how enterprises should think about trust and transparency of AI and machine learning. The other is about what explainability, transparency, and auditability of machine learning and artificial intelligence systems should look like. How should we think about those concepts as people who will be affected by machine learning?
Thank you for being a part of our first edition. Please share this newsletter with other people you think might be interested. If you ever come across content you think other people might want to know about that’s at the intersection of machine learning, regulation, and risk, please let us know. We’ll try to include it in future issues. Thanks so much, and have a great 4th of July.