ML Assurance Newsletter

Issue 16 - Mar 24, 2022

Trust & AI: Must Reads


In this updated report from the National Institute of Standards and Technology (NIST), the authors suggest a “socio-technical” approach to mitigate bias in AI by acknowledging that AI operates in a larger social context.

“Context is everything,” said Reva Schwartz, principal investigator for AI bias and one of the report’s authors. “AI systems do not operate in isolation. They help people make decisions that directly affect other people’s lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point.”

NIST is planning a series of public workshops over the next few months. For more information and to register, visit the AI RMF workshop page.

Ethics & Responsibility

“The threat to civil rights, civil liberties and privacy is one of the biggest considerations made in regulating AI in the U.S.” In this recent VentureBeat article, Monitaur CEO Anthony Habayeb and reporter Chris J. Preimesberger share a roundup of what’s new in AI regulation and what could be in store for the rest of 2022.

Europe and the UK

  • Europe is moving quickly toward comprehensive legislation regulating how AI can be used across industries.

Individual U.S. states

  • The United States has taken a less centralized approach to AI regulation. State legislatures have taken steps to regulate this fast-moving technology, but the federal government has made little progress compared to Europe.

U.S. federal authorities

  • Many U.S. agencies took steps in 2021 and early 2022 toward centralized AI governance, including the National Security Commission on Artificial Intelligence, the Government Accountability Office (GAO), the Federal Trade Commission (FTC), the Department of Commerce, the Food and Drug Administration (FDA), the Equal Employment Opportunity Commission (EEOC), and the National Institute of Standards and Technology (NIST).

Regulation & Legislation
AI Governance & Assurance

Russell Wald, policy director at Stanford University’s Institute for Human-Centered AI, participated in a Q&A with Makenzie Holland for TechTarget last week. Wald provides context and commentary on China’s new AI regulations, their implications for international businesses, whether China will use the rules to advance its own AI surveillance, and how feasible the regulations will be to implement.

China is one of the first countries to enact AI regulations. What’s yet to be seen is how technically difficult it may be for businesses to adhere to the requirements. This will be an issue across the globe as AI regulations continue to roll out. Questions that need to be considered are:

  • How to technically achieve compliance
  • How to handle consumer complaints at scale on a national, and possibly international, level
  • How AI regulations in one country will affect businesses globally

Wald comments on where the EU and U.S. stand comparatively:

"The EU or U.S. system of open debate around this is ultimately driving us in the right space. Not fast enough, but still a better pathway forward. What I do have concerns about in the U.S. is we are woefully behind."

Regulation & Legislation
AI Governance & Assurance

The Stanford Institute for Human-Centered Artificial Intelligence (HAI) released the latest edition of their annual AI Index report in March 2022. The 230-page report provides a comprehensive overview of artificial intelligence in society and business today and aims to provide actionable insights for decision-makers who are working to advance AI that is responsible, ethical, and human-centric.

The 2022 edition leveraged more data sources than ever before. New content includes:

  • An expanded technical performance chapter
  • A new survey of robotics researchers around the world
  • Data on global AI legislation records in 25 countries
  • A new chapter with an in-depth analysis of technical AI ethics metrics

Principles & Frameworks
Ethics & Responsibility

In this recent article in SciTechDaily, writer Adam Zewe from the Massachusetts Institute of Technology (MIT) explains how thinking like a neuroscientist can help address dataset bias in AI. Zewe shares how a group of researchers from MIT, Harvard University, and Fujitsu Ltd., gained insights into how machine learning can overcome bias.

The group’s research focused on neural networks, “a machine-learning model that mimics the human brain in the way it contains layers of interconnected nodes, or ‘neurons’ that process data.” The researchers found that more diverse training datasets enable a network to overcome bias.

“But it is not like more data diversity is always better; there is a tension here. When the neural network gets better at recognizing new things it hasn’t seen, then it will become harder for it to recognize things it has already seen,” states Xavier Boix, a research scientist in the Department of Brain and Cognitive Sciences (BCS) and the Center for Brains, Minds, and Machines (CBMM), and senior author of this research paper.
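The paper’s experiments involve image classifiers; purely as an illustration of the trade-off Boix describes, the hypothetical sketch below (synthetic data and scikit-learn, not the authors’ code) trains the same small network on increasingly diverse sets of training “contexts” and compares accuracy on familiar contexts against a context never seen in training.

```python
# Toy, hypothetical sketch -- not the MIT/Harvard/Fujitsu code. It varies how many
# "contexts" appear in training and measures accuracy on seen vs. never-seen contexts.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_data(n, contexts):
    """Feature 0 carries the true category; feature 1 is a nuisance 'context' signal."""
    cats = rng.integers(0, 2, n)                     # binary category labels
    ctxs = rng.choice(contexts, n)                   # context each sample comes from
    X = np.column_stack([cats + 0.3 * rng.standard_normal(n),
                         ctxs + 0.3 * rng.standard_normal(n)])
    return X, cats

for train_contexts in ([0], [0, 1], [0, 1, 2]):      # increasingly diverse training data
    X_tr, y_tr = make_data(2000, train_contexts)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                        random_state=0).fit(X_tr, y_tr)
    X_seen, y_seen = make_data(500, train_contexts)  # contexts the model has seen
    X_new, y_new = make_data(500, [3])               # a held-out, never-seen context
    print(f"train contexts {train_contexts}: "
          f"seen acc {clf.score(X_seen, y_seen):.2f}, "
          f"novel acc {clf.score(X_new, y_new):.2f}")
```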

Ethics & Responsibility

On February 3, 2022, U.S. Senators Ron Wyden (D-OR) and Cory Booker (D-NJ) introduced the Algorithmic Accountability Act of 2022 alongside Representative Yvette Clarke (D-NY). As the press release published by Senator Wyden’s office states, the goal of the bill is to bring greater transparency and oversight to the software, algorithms, and other automated systems used to make consequential decisions. The bill would require companies to conduct regular impact assessments of their automated systems for bias and effectiveness. It would also create a new public repository of these systems at the FTC, along with 75 new staff positions to enforce the law.

“As algorithms and other automated decision systems take on increasingly prominent roles in our lives, we have a responsibility to ensure that they are adequately assessed for biases that may disadvantage minority or marginalized communities,” said Senator Booker in the release. Artificial intelligence is not neutral or egalitarian; these systems have the power to exacerbate systemic biases. The goal of the bill is to create more accountability among corporations that use AI in their business practices.
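The bill text does not prescribe specific metrics, so purely as an illustration of the kind of check an impact assessment might include, the hypothetical sketch below computes a demographic parity gap: the spread in positive-decision rates across demographic groups for a set of automated decisions.

```python
# Hypothetical illustration only -- the Algorithmic Accountability Act does not
# mandate this (or any specific) metric. Demographic parity gap: the spread in
# positive-decision rates across demographic groups.
import numpy as np

def demographic_parity_gap(decisions, groups):
    """decisions: array of 0/1 automated outcomes; groups: group label per person."""
    rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: approval decisions for two demographic groups.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
gap, rates = demographic_parity_gap(decisions, groups)
print(rates, f"gap = {gap:.2f}")  # a large gap flags a disparity worth auditing
```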

Regulation & Legislation