ML Assurance Newsletter

Issue 1 - Jul 1, 2020

Trust & AI: Must Reads

This unfortunate story about the Detroit Police Department's arrest of Robert Julian-Borchak Williams covers the first known case of an arrest based on a flawed facial recognition match. It highlights the very real, very personal consequences of uncontrolled, unvalidated algorithms.

The software provider's general manager admitted that the company "does not formally measure the systems' accuracy or bias." Without proper ML governance and controls, this is likely to be the first of many such stories. It seems inevitable that high-profile injustices like this one will accelerate the regulatory pathways for machine learning models.
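
What would "formally measuring" accuracy and bias look like in practice? As a minimal illustration (our sketch, not the vendor's practice), one could compute a face matcher's false-match rate separately for each demographic group; the scores, groups, and threshold below are hypothetical:

```python
# Illustrative sketch only: measuring bias as a per-group false-match rate.
# The matcher scores, groups, and threshold are hypothetical, not from the story.
from collections import defaultdict

def false_match_rate_by_group(pairs, threshold):
    """pairs: (group, similarity_score, same_person) tuples.
    A false match is an impostor pair (same_person=False) scored at or
    above the decision threshold."""
    impostors = defaultdict(int)
    false_matches = defaultdict(int)
    for group, score, same_person in pairs:
        if not same_person:
            impostors[group] += 1
            false_matches[group] += int(score >= threshold)
    return {g: false_matches[g] / impostors[g] for g in impostors}

# Hypothetical evaluation pairs: (demographic group, match score, ground truth)
pairs = [("A", 0.91, False), ("A", 0.40, False), ("A", 0.95, True),
         ("B", 0.88, False), ("B", 0.86, False), ("B", 0.97, True)]
print(false_match_rate_by_group(pairs, threshold=0.85))  # {'A': 0.5, 'B': 1.0}
```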

Ethics & Responsibility

In April, the FTC published valuable guidance on how existing law applies to the use of AI and ML. The rules of thumb are:

  1. Be transparent.
  2. Explain your decision to the consumer.
  3. Ensure that your decisions are fair.
  4. Ensure that your data and models are robust and empirically sound.
  5. Hold yourself accountable for compliance, ethics, fairness, and nondiscrimination.

The sub-points in each of those sections are worth a deep read, and we imagine they will provoke robust discussion in the business teams that provide assurance for all of these considerations. There are many implicit areas for future regulatory attention, ranging from the effect of third-party data on consumer reporting to the biases that emerge from apparently neutral data such as zip codes.
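
To make point 2 concrete: explaining a decision to the consumer has a long-standing analogue in the "reason codes" used in credit scoring. Here is a minimal sketch, assuming a simple linear score; the feature names, weights, and baseline values are illustrative assumptions, not anything the FTC prescribes:

```python
# Minimal sketch of adverse-action "reason codes" for a simple linear score.
# Feature names, weights, and baseline values are illustrative assumptions.
WEIGHTS = {"utilization": -2.0, "late_payments": -1.5, "history_years": 0.8}
BASELINE = {"utilization": 0.3, "late_payments": 0.0, "history_years": 10.0}

def reason_codes(applicant, top_n=2):
    """Rank features by how much each pulled this score below baseline."""
    contributions = {
        f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS
    }
    worst_first = sorted(contributions.items(), key=lambda kv: kv[1])
    return [feature for feature, c in worst_first[:top_n] if c < 0]

applicant = {"utilization": 0.9, "late_payments": 2.0, "history_years": 4.0}
print(reason_codes(applicant))  # ['history_years', 'late_payments']
```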

Regulation & Legislation

The NAIC, the state-based standard-setting organization for insurance, held three public meetings of its Accelerated Underwriting Working Group (AUWG) in Q1 2020. Expectations after those meetings include more regulatory focus on data use, algorithm development, consumer transparency, and governance/controls.

The NAIC has a separate working group developing new standards for regulatory approval of predictive models; it is more narrowly focused on Generalized Linear Models (GLMs) but is likely to feed into, and perhaps complement, the AUWG's work. Insurance companies will need to retool their processes and systems to meet emerging regulator expectations, and regulators will need to upskill to effectively evaluate ML systems.

Regulation & Legislation

An excellent summary and analysis of FDA actions on AI over the last year. Most of the attention thus far has centered on premarket certification, while the postmarket pieces are still in motion. Viewed through an assurance lens, a few key takeaways jump out:

  1. Since radiological devices are the tip of the spear, they're likely to shape regulatory thinking across devices and applications in the future.
  2. Groups are focused on ensuring quality inputs, an important issue, but not enough attention has been paid to the transparency and validity of the outputs.
  3. Since safety is less evident premarket, the burden of proof must shift to postmarket reporting.
  4. The FDA will need to consider how to minimize the reporting burden in order to maximize the positive impact of AI/ML on population health.

AI Governance & Assurance
Regulation & Legislation

A quick and valuable read from the Federal Trade Commission covering:

  1. With every business role and every business process under deep scrutiny, it is likely that many more opportunities to deploy AI applications will emerge.
  2. Enterprise CEOs and Boards need to be aware of the risks assumed when a process or role is re-engineered to use an AI product.
  3. Firms need to adopt codes of practice or specific tools/products/services to mitigate such risks. Hybrid approaches are quite likely.

Regulation & Legislation

This wide-ranging and accessible article dives into the movement for "Explainable AI", exploring the practical, psychological, and regulatory dimensions of explainability. Explainability is a prerequisite for ML assurance, and counterfactuals are emphasized as core to the next step: auditability.

Money quote: "In the absence of clear auditing requirements, it will be difficult for individuals affected by automated decisions to know if the explanations they receive are in fact accurate or if they’re masking hidden forms of bias."
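
To make the counterfactual idea concrete, here is a minimal sketch that searches for the smallest single-feature change that would flip a denial into an approval; the toy decision rule and step sizes are our assumptions, not the article's method:

```python
# Illustrative sketch of a counterfactual explanation: find the smallest
# single-feature change that flips a deny into an approve. The decision
# rule and step sizes are hypothetical assumptions.

def approve(applicant):
    """Toy decision rule standing in for a real model."""
    score = 0.5 * applicant["income"] - 2.0 * applicant["debt"]
    return score >= 10.0

def counterfactual(applicant, feature, step, max_steps=100):
    """Nudge one feature until the decision flips; return the changed value."""
    candidate = dict(applicant)
    for _ in range(max_steps):
        candidate[feature] += step
        if approve(candidate):
            return candidate[feature]
    return None  # no flip found within the search budget

applicant = {"income": 30.0, "debt": 4.0}              # score 7.0 -> denied
print(counterfactual(applicant, "income", step=1.0))   # 36.0: approved if income were 36
print(counterfactual(applicant, "debt", step=-0.5))    # 2.5: approved if debt were 2.5
```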

AI Governance & Assurance