ML Assurance Newsletter

Issue 9 - Aug 18, 2021

Trust & AI: Must Reads


Along with all of the discussion around AI risk, regulation, ethics, and responsibility, a new question has come into the limelight as of late: how will the US legal system engage with the new paradigm that AI presents? This Harvard Business Review piece, penned by researchers we featured in Issue 5, suggests that if we want to enable innovation and adoption, our current system of liability and approach to adjudication will need a major overhaul. Insurers have an important role to play in encouraging strong governance practices, and the authors propose a tribunal system to take the load off the courts by adjudicating complaints about lower-impact, higher-volume AI systems.

Another examination in Information Week explores AI risk from a number of angles. On the topic of liability, law expert and adjunct professor Cynthia Cole notes that product liability claims related to AI are "gaining traction in judicial and regulatory circles...I think that this notion of 'the machine did it' probably isn't going to fly eventually." AI fairness expert Liz O'Sullivan also wrote in TechCrunch, using the Apple Card debacle to explain the legal peril enterprises will face with biased systems.

Risks & Liability

We previously covered moves by the National Institute of Standards and Technology (NIST) to address AI risks on behalf of the US Department of Commerce, a standards-driving role it has played before to the great benefit of both government and private industry. Following up on that, NIST has launched an open comment period that runs until August 19th. This formal RFI for an AI Risk Management Framework was requested by the Biden Administration as part of its larger engagement with AI strategy and policy for the US, the National Artificial Intelligence Initiative (NAII). It is intended to be an inclusive, multi-stakeholder process to "meet a major need in advancing trustworthy approaches to AI to serve all people in responsible, equitable and beneficial ways," according to NAII director Lynne Parker. We at Monitaur plan to submit comments, and we strongly encourage our community to do likewise.

Risks & Liability
Regulation & Legislation

This article in Fortune (subscription required), as well as other media sources, covered the common practice in the tech world of borrowing techniques from one domain and applying them to another. And so it was perhaps inevitable that the bug bounty programs software developers have deployed so successfully would eventually come to machine learning and AI systems. Microsoft and Nvidia bridged the gap, awarding prizes to those who could break their malware-detection AI; cybersecurity is an existing field that leans heavily on bounties to improve systems.

Twitter awarded $3,500 to its program winner, who edited individuals' faces to be thinner, whiter, and younger to demonstrate the bias in the company's now-shuttered profile image cropping algorithm. In related news, Twitter committed to responsible AI by hiring thought leader Rumman Chowdhury, who discusses how she's building a more proactive approach to AI governance in this interview.
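
For readers curious how such a demonstration works mechanically, here is a minimal sketch of a paired-comparison bias probe for a saliency-based cropper. The saliency_score function, the file paths, and the pairing scheme are hypothetical placeholders, not Twitter's API or the bounty winner's exact method.

```python
# Sketch of a paired-comparison bias probe for a saliency-based image cropper.
# Assumes some way to query the model's saliency for an image; the function
# below is a placeholder for whatever the model under test exposes.
from statistics import mean


def saliency_score(image_path: str) -> float:
    """Placeholder: return the cropping model's peak saliency for an image."""
    raise NotImplementedError("plug in the cropping model under test")


def preference_rate(pairs: list[tuple[str, str]]) -> float:
    """Fraction of pairs where the edited variant out-scores the original.

    Each pair is (edited_path, original_path) for the same base photo, e.g.
    one copy with the face lightened or slimmed. A rate far from 0.5 suggests
    the cropper systematically favors one kind of face.
    """
    wins = [saliency_score(edited) > saliency_score(original)
            for edited, original in pairs]
    return mean(wins)
```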

Ethics & Responsibility

Deloitte AI Institute Executive Director Beena Ammanath and Reid Blackman, CEO of ethical risk consultancy Virtue, authored this excellent piece capturing the need to educate across the organization about the ethical use of AI. They see the core driver as the lack of perspective that three audiences have when it comes to ethics and AI risk: procurement officers, executive leadership, and technical teams. Their step-wise plan advises the following, in paraphrased form:

  1. Remove fear through basic AI literacy
  2. Provide relevant in-industry examples
  3. Align to expressed company values
  4. Make policies actionable instead of abstract
  5. Engage influencers across the enterprise
  6. Make education a habit

Such a consistent internal effort is of course a perfect complement to the larger mission of encouraging civic competence in AI, covered in Issue 6.

Ethics & Responsibility

The rise of ML and AI in insurance decisioning has triggered a larger discussion with regulators and the general public about the less "intelligent" models that have gained prominence over the past couple of decades. That interest has, in turn, shown how prevalent credit scores are in model-based decisions across the industry, which this article covers in the context of emerging state-level regulatory action. Citing the specious connection between credit scores and an individual's driving record (the most obvious variable to measure for auto insurance), states like Colorado, New Jersey, New York, and Oregon are considering joining California, Massachusetts, and Hawaii in outlawing the use of credit scores, in the absence of federal bills that have yet to find their way to a vote in Congress. Given the high correlation between credit score and race, critics have long called the practice a form of "fairwashing" and part of a larger pattern of "economic racism" in the United States, and President Biden seems inclined to agree based on his comments at a February town hall.
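
To make the proxy concern concrete, below is a minimal sketch of the kind of check a model governance team might run, assuming a pandas DataFrame with hypothetical columns credit_score, race, and a binary approved decision. The column names and the four-fifths heuristic are illustrative, not drawn from the article.

```python
# Sketch of a proxy-variable and adverse-impact check on hypothetical data.
import pandas as pd


def proxy_and_impact(df: pd.DataFrame, reference_group: str) -> None:
    """Report how strongly credit score tracks group membership, and compare
    approval rates across groups relative to the best-treated group."""
    # Point-biserial correlation between credit score and membership in
    # the reference group: a high value flags the score as a likely proxy.
    member = df["race"].eq(reference_group).astype(int)
    corr = df["credit_score"].corr(member)
    print(f"credit_score vs. {reference_group} membership correlation: {corr:.2f}")

    # Adverse-impact ratios: values well below 0.8 against the best-treated
    # group are a common "four-fifths rule" red flag.
    rates = df.groupby("race")["approved"].mean()
    print((rates / rates.max()).round(2))
```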

Regulation & Legislation

This opinion piece from the New York Times (subscription required) by law professor Frank Pasquale from the US and associate professor Gianclaudio Malgieri from France argues that the "Biden administration should harmonize the U.S. approach" with the European Commission's recently announced AI rules. In particular, they highlight the delineation of AI applications into tiers of risk, including a level deemed "too harmful to be permitted" and a high-risk category that demands documentation and evidence of governance. As with insurance, inaction from regulators like the Equal Employment Opportunity Commission (EEOC) seems likely to drive state action, which could lead to a patchwork of regulatory requirements that places a higher compliance burden on US companies, not to mention the burden of operating in both the US and the EU, as seen in a previous issue of this newsletter.

Meanwhile, European developers worry about the competitive disadvantage they will face against the large tech platform companies, which have existing regulatory-compliance teams and experience that smaller players lack.

Principles & Frameworks

Adapted from a longer piece in Science, this essay – also by I. Glenn Cohen from earlier in this newsletter – lays out a cogent case for why the FDA should avoid relying on explainable AI tools and focus instead on safety and efficacy as the key measures for Software as a Medical Device (SaMD) and other health offerings. As in our previous coverage of explainability in our last edition and Issue 3, the authors detail the unique challenge of designing interpretable "white box" models in medical fields like radiology, where the sheer number of variables forces a "black box" approach on developers. However, because explainability tools consist of new models predicting what logic the original model may have used, they erect a façade of truth that feeds into our cognitive biases, a "fool's gold" of sorts. Instead, the FDA should attend to accuracy and outcomes as the gold standard.
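
To see why the authors call this a façade, consider a minimal sketch of a post-hoc surrogate explanation: a shallow, interpretable model fit to a black box's outputs on synthetic data. High fidelity only means the surrogate mimics the black box's answers; it does not show that the black box actually reasoned that way. The data, models, and names here are illustrative placeholders, not from the essay.

```python
# Sketch of a post-hoc surrogate "explanation" of a black-box classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))               # stand-in for imaging-derived features
y = (X[:, :5].sum(axis=1) > 0).astype(int)    # synthetic labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box" under scrutiny.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# The "explanation": a shallow tree trained to predict the black box's labels,
# i.e. a second model guessing at the first model's logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: agreement with the black box on held-out data. Even a high score
# only says the guess matches the answers, not that it captures the reasoning.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"surrogate fidelity to black box: {fidelity:.2%}")
```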

AI Governance & Assurance