In this updated report from the National Institute of Standards and Technology (NIST), the authors recommend a “socio-technical” approach to mitigating bias in AI, one that acknowledges that AI operates in a larger social context.
“Context is everything,” said Reva Schwartz, principal investigator for AI bias and one of the report’s authors. “AI systems do not operate in isolation. They help people make decisions that directly affect other people’s lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point.”
NIST is planning a series of public workshops over the next few months. For more information and to register, visit the AI RMF workshop page.