In a recent New York Times article, Steve Lohr reports on the creation of the Data & Trust Alliance. Composed of some of the largest corporations in the US, the group aims to mitigate AI bias in the hiring process. To that end, the Alliance has created a scoring and evaluation system for AI software used in the corporate world. Covering everything from the diversity of training data to the neutrality of datasets, this framework is intended to help corporations identify where the algorithms used in hiring may unfairly treat protected classes.
This evaluation framework comes after the FTC warned companies that those who do not take accountability for the harm their AI systems may unintentionally create will be held liable by the federal government. As the article puts it, “The Data & Trust Alliance seeks to address the potential danger of powerful algorithms being used in work force decisions early rather than after widespread harms are apparent.” This move by the private sector to hold itself accountable should go a long way toward ensuring compliance once the rising calls for AI regulation are codified into law.