Article Summary of:

Technology Can't Fix Algorithmic Injustice

Published: January 9, 2020

A team of political philosophers at Princeton wrote this expansive and thorough essay on how democratic societies should engage with the wide-scale deployment of AI and ML systems. They use numerous real-world examples to illustrate the cognitive dissonance that runs through public, academic, and industry discussions of bias and fairness in the U.S. today, and to question the viability of what they call "quality control" approaches to algorithmic bias. Ultimately, they conclude that only the democratic process, with a deeply engaged public, is capable of addressing the problem: "Rather than allowing tech practitioners to navigate the ethics of AI by themselves, we the public should be included in decisions about whether and how AI will be deployed and to what ends." This perspective aligns with a growing chorus calling for contestability over transparency and recourse over explainability. If developers and system owners do not take a more active leadership role, they may find themselves facing a stricter regulatory regime and greater public concern than they would like.

Ethics & Responsibility