This unfortunate story about the arrest of Robert Julian-Borchak Williams by the Detroit Police Department describes the first known case of a wrongful arrest driven by a flawed facial recognition match. It highlights the very real, very personal consequences of uncontrolled, unvalidated algorithms.
The software provider's GM admitted that the company "does not formally measure the systems' accuracy or bias". Without proper ML governance and controls, this is likely to be the first of many such stories. It is inevitable that high-profile injustices like this one will accelerate the regulatory pathways for machine learning models.
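For a sense of what "formally measuring accuracy and bias" involves, here is a minimal sketch in Python: disaggregating a binary matcher's error rates by demographic group. The column names (y_true, y_pred, group) are hypothetical placeholders for a labeled evaluation set.

```python
# Minimal sketch: per-group accuracy and false-positive rate for a binary
# face matcher. y_true, y_pred, and group are hypothetical placeholder
# columns for a labeled evaluation set.
import pandas as pd

def accuracy_and_bias_report(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize match performance separately for each demographic group."""
    def summarize(g: pd.DataFrame) -> pd.Series:
        tp = ((g.y_pred == 1) & (g.y_true == 1)).sum()
        fp = ((g.y_pred == 1) & (g.y_true == 0)).sum()
        tn = ((g.y_pred == 0) & (g.y_true == 0)).sum()
        return pd.Series({
            "n": len(g),
            "accuracy": (tp + tn) / len(g),
            # A false match is what puts the wrong person in handcuffs.
            "false_positive_rate": fp / (fp + tn) if (fp + tn) else float("nan"),
        })
    return df.groupby("group")[["y_true", "y_pred"]].apply(summarize)
```

The disaggregated false-positive rate is the number that matters most here; a false match is precisely the failure mode that leads to a wrongful arrest.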
In April, the FTC published valuable guidance on how existing law applies to the use of AI and ML. The rules of thumb are:

- Be transparent.
- Explain your decision to the consumer.
- Ensure that your decisions are fair.
- Ensure that your data and models are robust and empirically sound.
- Hold yourself accountable for compliance, ethics, fairness, and nondiscrimination.
The sub-points under each of those headings are worth a deep read, and we imagine they will provoke robust discussion in the business teams that provide assurance for all of these considerations. There are many implicit areas for future regulatory attention, ranging from the effect of third-party data on consumer reporting to the implicit biases that emerge from apparently unbiased data like zip codes.
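On that last point, a simple first-pass proxy check is to ask how well the "unbiased" field predicts the protected attribute on its own. A hedged sketch, with hypothetical column names:

```python
# Sketch of a proxy check: if a single "neutral" feature predicts a
# protected attribute well above the base rate, it can reintroduce that
# attribute into any model trained on it. Column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import OneHotEncoder

def proxy_score(df: pd.DataFrame, feature: str, protected: str) -> float:
    """Cross-validated accuracy of predicting `protected` from `feature` alone."""
    X = OneHotEncoder(handle_unknown="ignore").fit_transform(df[[feature]])
    y = df[protected]
    return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

# Compare against the majority-class base rate, e.g.:
# proxy_score(applicants, feature="zip_code", protected="race")
```

Any accuracy meaningfully above the base rate signals that downstream models can learn the protected attribute indirectly, even if it never appears as an input.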
The NAIC, the state-based standard-setting organization for insurance, held three public meetings of its Accelerated Underwriting Working Group (AUWG) in Q1 2020. Expectations coming out of those meetings include more regulatory focus on data use, algorithm development, consumer transparency, and governance/controls.
The NAIC has a separate working group developing new standards for regulatory approval of predictive models. That effort is focused more specifically on Generalized Linear Models (GLMs), but it is likely to feed into, and perhaps complement, the AUWG's work. Insurance companies will need to retool their processes and systems to meet emerging regulatory expectations, and regulators in turn will need to upskill to evaluate ML systems effectively.
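For readers newer to the model class under review: part of the appeal of a GLM filing is that the entire model is a coefficient table. A minimal sketch, assuming statsmodels and hypothetical policy-level columns:

```python
# Minimal sketch of a claim-frequency GLM of the kind a rate filing might
# contain: Poisson family, log link, earned exposure as an offset. The
# dataset and column names (claim_count, territory, vehicle_class,
# driver_age, exposure_years) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_frequency_glm(policies: pd.DataFrame):
    model = smf.glm(
        "claim_count ~ C(territory) + C(vehicle_class) + driver_age",
        data=policies,
        family=sm.families.Poisson(),               # log link by default
        offset=np.log(policies["exposure_years"]),  # earned exposure
    )
    return model.fit()

# fit_frequency_glm(policies).summary() prints the coefficient table --
# the artifact a regulatory review actually examines.
```

Every rating factor is inspectable in that summary, which is precisely the transparency that more opaque ML approaches give up and that emerging governance expectations will have to compensate for.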
An excellent summary and analysis of FDA actions on AI over the last year. Most of the attention thus far has centered on pre-market certification, while the post-market pieces are still in motion. Viewed through an assurance lens, a couple of key takeaways jump out:
A quick and valuable read from the Federal Trade Commission covering:
This wide-ranging and accessible article dives into the movement for "Explainable AI", exploring the practical, psychological, and regulatory dimensions of explainability. Explainability is a prerequisite for ML assurance, and counterfactuals are emphasized as core to the next step: auditability.
Money quote: "In the absence of clear auditing requirements, it will be difficult for individuals affected by automated decisions to know if the explanations they receive are in fact accurate or if they’re masking hidden forms of bias."
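Since the piece leans so heavily on counterfactuals, a toy sketch of the idea may help: the cheapest single-feature change that flips a model's decision. The predict callable and feature grid below are placeholders, and real tooling searches far more carefully.

```python
# Toy counterfactual search: find the cheapest single-feature change that
# flips a model's decision. Brute-force, univariate, and numeric-only for
# clarity; production tools search jointly and enforce plausibility.
import numpy as np

def univariate_counterfactual(predict, x, feature_grid):
    """predict: callable mapping a 1-D array to a class label.
    x: the instance being explained (1-D numpy array).
    feature_grid: {feature_index: iterable of candidate values}.
    Returns (feature_index, new_value), or None if nothing flips."""
    original = predict(x)
    best = None
    for i, values in feature_grid.items():
        for v in values:
            x_cf = x.copy()
            x_cf[i] = v
            if predict(x_cf) != original:
                cost = abs(v - x[i])
                if best is None or cost < best[2]:
                    best = (i, v, cost)
    return None if best is None else best[:2]

# The result reads as "the decision would have changed had feature i been
# v" -- the form of explanation an auditor can actually verify.
```

That verifiability is the point of the quote above: a counterfactual is a claim about the model that someone other than the model's owner can test.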