ML Assurance Newsletter

Issue 10 - Sep 30, 2021

Trust & AI: Must Reads


With increased regulation of AI seeming more and more inevitable, the real question may be – as the authors of this piece from Boston Consulting Group and INSEAD explore – what types of regulatory approaches will prove most effective. Some of the most compelling thinking explores how artificial and human intelligence interact. For example, while much public attention has focused lately on the very real dangers of biased AI decisions, the authors highlight the opportunity for AI to help measure and mitigate bias in human decisions, even those augmented by machines. Because consumers place less trust in more subjective decisions, the authors argue that “companies need to communicate very carefully about the specific nature and scope of decisions they’re applying AI to and why it’s preferable to human judgment in those situations.”

This meaty Harvard Business Review read delves into many of the trade-offs of regulation, such as how the opportunities for scale across geographies that make AI an attractive investment also increase the likelihood of unfairness in more contained localities. The authors also explore the complexities of explainability for AI, a thread we have covered in most issues of this newsletter, most recently in Issue 9. Their position is that companies with “stronger explanatory capabilities will be in a better position to win the trust of consumers and regulators.”

Ethics & Responsibility
Principles & Frameworks

AI and ML systems have the potential to improve and streamline business processes; however, they can also go awry. Without proper oversight, systems can unintentionally reinforce biases in their algorithms that harm marginalized and protected classes, and they can act in unpredictable ways at unpredictable moments. SearchEnterpriseAI showcases the importance of AI governance and oversight while exploring accountability when using these evolving systems, quoting experts like Forrester’s Brandon Purcell and IBM’s AI chief Seth Dobrin.

Who should be held accountable when things go wrong with AI? Nascent legal tests will eventually harden into clearer case law around liability. The answer may turn out to be a pastiche of companies, vendors, and other parties that, oddly enough, mirrors the complex inner workings of the systems themselves. What is clear is that companies must be able to identify when and how algorithms and models have gone wrong in order to demonstrate an accountable approach to AI governance. Blaming the machine may not be an acceptable defense for much longer.

AI Governance & Assurance
Risks & Liability
Ethics & Responsibility

As ML and AI systems become more common in driving the customer experience across industries, consumers are voicing heightened distrust. This piece by Kyle Wiggers in VentureBeat rounded up a couple of surveys of insurance customers: one showing that a paltry 17% would trust AI to review their insurance claims, and another showing little movement in public sentiment about "trading" personal data for more advantageous pricing. The answer may lie in approaches like those enacted by the Dutch Association of Insurers in January, which require companies to be prepared to explain how their systems make decisions before those systems go live.

Recently, Lemonade, a leading InsurTech company in the US, was hit with a lawsuit claiming that the company failed to responsibly respect consumers’ privacy and unethically used consumer data. If successful, this class action would represent a landmark court case as the first major decision related to the use of AI in the insurance industry in the US. Regardless, the case presents an object lesson for business leaders across industries to take positive, immediate steps towards implementing a comprehensive approach to AI governance.

Ethics & Responsibility

Data is at the root of many of the governance challenges raised by AI and ML, and Karen Hao of the MIT Technology Review pulls back the veil on the questionable provenance of some datasets that underlie modern data science, especially for facial recognition. She cues off a study of a dataset created by researchers over 15 years ago using the unethical practice of scraping images and information from the web without the individuals’ knowledge or permission. The dataset was subsequently cited in over 1,000 academic studies and has since “gone wild” in the corporate world, where it is difficult to know how often it is used. Other similar datasets have been “retracted” by researchers, but there is no mechanism to ensure they are no longer used.

How to govern the large datasets required for building effective AI is one of the big puzzles for the AI community to solve over the coming years. The study authors propose independent data stewardship organizations. In a similar vein, the Stanford Institute for Human-Centered Artificial Intelligence’s (HAI) new Center for Research on Foundation Models (CRFM) released a study on the possible ramifications of “foundation” models like Google’s BERT and OpenAI’s DALL-E. Trained at massive scale, these models create unique advantages for developers, but their opacity raises serious concerns about governance, and their close control by vested interests creates high competitive barriers for new entrants.

Ethics & Responsibility
Risks & Liability

For years, major technology companies deployed AI with minimal consideration of the ethical repercussions of these systems. When these companies first launched their AI systems with speech mimicking, photo-tagging, and chatbots, there were few safeguards to protect consumers from harm and biases. In a positive sign of the times, last year industry leaders like Google, Microsoft, and IBM took steps to reject new AI technologies that failed their ethical guidelines. This article highlights a few poignant examples, showing a more sophisticated ethical review process in operation – a muscle that many large companies are developing.

Yet the academic and data science community as a whole remains reluctant to trust the tech giants on ethics. We have covered Google’s mixed record in previous issues, including the firing of ethics leader Timnit Gebru and academics refusing funding from the company on ethical grounds.

Ethics & Responsibility

The last two years have revealed that AI and ML systems often hold immense power over people’s lives in the criminal justice realm. A new Associated Press investigation into an arrest based on evidence from an AI system known as ShotSpotter calls attention to many of the themes of trust and AI. ShotSpotter is a surveillance technology used by law enforcement agencies across the country to combat gun violence in urban centers. However well intended, this technology seems to be flawed in how it distinguishes gunfire from other loud noises, like a car engine backfiring. It operates as a complete black box, with its developer refusing to elaborate on how the system works. Customer support can override the AI’s decisions upon request from police officers. And independent tests have shown it fails at identifying the location of known gunshots, its primary function. As a result, it has been deemed an insufficient source of evidence by several judges, yet it continues to shape life-altering decisions about individuals across the United States.

As seen in Issue 8 and our inaugural issue, this is not the only example of the harms of ungoverned AI in policing. After Robert Williams’s false arrest because of errors in facial recognition technology, the pending lawsuit filed by the ACLU would, if successful, force the Detroit Police Department to stop using facial recognition technology and embrace more transparency in its systems across the board.

AI Governance & Assurance
Ethics & Responsibility

Last week the United Kingdom announced its intention to spur more innovation in AI by scaling back regulatory requirements enforced under the EU’s GDPR. This step leaves many experts asking how UK-based AI vendors can foster innovation when their technology may not be usable across Europe, effectively cutting off a large portion of their addressable market. It will be interesting to watch how UK developers weigh the strategic opportunities and risks of such a lower regulatory bar.

This comes at the same time as the UN has called for more regulation of AI and ML to ensure models are ethical and responsible. While innovation in AI is vitally important to the future, being able to monitor and govern these sometimes unpredictable systems is equally – if not more – important. Without proper safeguards in place, AI is more susceptible to anomalies that can harm the lives of consumers in profound ways.

Regulation & Legislation