ML Assurance Newsletter

Issue 8 - Jun 8, 2021

Trust & AI: Must Reads


The Federal Trade Commission (FTC) issued a surprise blog post this past month clarifying its perspective on the use of algorithms and AI. While the expectations fall directly in line with the emerging consensus around AI principles and guidelines for transparent, fair, and accountable use of AI, the agency also asserted that existing law sufficiently covers any regulatory action. In short, it is a clear shot across the bow for every company under the agency's purview that uses AI.

Our Co-founder and CTO Andrew Clark contributed a piece to BetaNews about how companies can address many of the outstanding problems highlighted by the FTC today. The key takeaway is that "Accountability should be the primary driver of how companies assure their systems," and those that delay should expect the FTC to show up in their lobby (or inbox) in the not-too-distant future.

Regulation & Legislation

As with the FTC's post above, the European Commission's announcement of a robust new legal framework on AI stirred up an immense amount of speculation, analysis, and, in some quarters, consternation. The immediate alarmism has since tempered, but AI builders have a host of new considerations and requirements coming down the pike.

In order to make the legislation easier to digest, Open Ethics founder and AI thought leader Nikita Lukianets wrote this excellent visual guide teasing out the salient themes from the rather confusing structure of titles and topics. In conclusion, he notes that "despite the efforts from the EC to make the regulation future-proof, some of its provisions [may] be overtaken by technological developments before they even apply."

Other valuable perspectives on this new legislation include:

Regulation & Legislation

As a fast follow to the previous two articles, the Harvard Business Review published this piece by Andrew Burt of boutique AI law firm BNH.ai, who was also featured in Issue 6 of this newsletter. He raises many important issues for enterprises to consider as the regulatory landscape for AI continues to emerge, and his thoughts parallel much of how we think about ML Assurance as a best practice here at Monitaur. Pointing to the "high rates of failure" of AI systems, Burt emphasizes that companies deploying AI need a more frequent process of audit and review of the decisions their systems make, increasing in rigor as the risk profile of a particular system rises. For most organizations accustomed to one-time audits, this will require an extensive overhaul of existing practices. He distills the dizzying array of requirements across regulatory frameworks down to two key components for any impact assessment: companies must clearly document the risks created by each AI system, and they must describe how individual risks have been addressed.

Principles & Frameworks

While the headline does catch the eye – and largely reflects similar executive takes we covered in Issue 7 – many of the finer stats captured in this write-up on a survey conducted by Corinium Intelligence and FICO tell us more about ML/AI governance and the monitoring of production environments. Only 39% of organizations are pursuing AI governance programmatically. Core governance capabilities like bias mitigation during model development are pursued by only 38%, while only a third have independent model validation teams. A stark majority of organizations (57%) appear to have no obligation to report even to internal stakeholders on the performance of their consequential systems. And of the models making decisions in production today, only a fifth (20%) are monitored for fairness and ethics.

Clearly, a large number of companies are taking on extraordinary and unnecessary risk by not governing and monitoring their systems. That said, respondents expect change: 90% agree that the lack of monitoring is slowing AI adoption, and 63% expect responsible AI to become a strategic focus within the next two years.

AI Governance & Assurance

In this piece, AI entrepreneur Kareem Saleh connects the broader social milieu to the field of Explainable AI in a compelling and enjoyable Forbes read. Borrowing the neologism "truthiness" coined by Stephen Colbert, Saleh draws a line between conspiracy thinking and explainability: both construct a narrative that "sounds accurate and makes people comfortable but in reality can be far from the truth." In effect, it is a form of algorithmic storytelling, accessible only to domain experts (the developers). He also notes the downstream psychological effect on the scientist: having an explanation at all perpetuates a false sense of trust in the decisions made by AI, regardless of their validity. The right mindset instead starts with a healthy dose of skepticism about a monolithic approach to explainability across models, building on themes we previously covered in Issue 5, Issue 4, and Issue 3.

Ethics & Responsibility

Two recent developments at the state level take the NAIC's guidance on AI, issued last fall, to the next step, as covered in this article by Carlton Fields on JD Supra. The Connecticut Insurance Department issued a memo in the same vein as the FTC post covered above, re-emphasizing that existing law covers the use of Big Data and AI technology. Delving deeper, the agency provided guidance specifically on data governance, including the use of third-party data, as well as how algorithms are “inventoried, risk assessed/ranked, risk managed, validated for technical quality, and governed throughout their life cycle.” Companies doing business in the state should be prepared to "show their work."

The action in Colorado comes in the form of new draft legislation, SB 21-169, which focuses on insurers' use of third-party "external" data and models and broadly falls in line with the new spate of state laws focused on data privacy. However, the requirement that companies effectively assure all of their vendors' systems with respect to their use of data and models is new and challenging, since very little established practice or reporting currently exists in this gray area. Enforceability remains very much a matter of interpretation at this point.

Risks & Liability

In our inaugural issue of this newsletter, we covered the first known case of facial recognition leading to a false arrest, in Detroit. Since then, a number of other unfortunate cases like it have emerged, as well as outright bans on the technology at the national, state, and municipal levels. As this article in MIT's Technology Review shows, the inevitable has occurred: a potentially groundbreaking civil rights lawsuit has been filed by the ACLU in the case of Robert Williams. The lawsuit seeks not only damages, but also transparency about the use of facial recognition technology and an end to the Detroit Police Department's use of it.

Ethics & Responsibility