ML Assurance Newsletter

Issue 11 - Oct 28, 2021

Trust & AI: Must Reads


Following the recent congressional hearing where Facebook whistleblower Frances Haugen shed light on the ways the social media company prioritized growth over implementing safeguards to protect consumers, major public concern has grown around the use of AI without adequate regulation. In response, Nick Clegg, Vice President of Global Affairs at Facebook, spoke out in favor of stronger regulations. As this Huffington Post article outlines, Facebook is willing to create new safeguards to protect children from the harm its algorithms have caused. However, many observers think that online platforms are unlikely to fundamentally change their business models without regulation that incentivizes further adjustments.

Meanwhile, Congress is working to regulate how online platforms are held accountable for the content published by their users. Last week, Congressman Frank Pallone (D-NJ) introduced the Justice Against Malicious Algorithms Act, which would strip websites of some of the liability protections they currently enjoy under Section 230 of the federal Communications Decency Act. Doing so would hold accountable companies that continue to ignore the risks associated with their algorithms.

Ethics & Responsibility
Regulation & Legislation

With adoption of increasingly complex artificial intelligence growing rapidly across industries, the need for executive oversight within corporations is becoming ever more pressing. As a result, the role of chief AI ethics officer (CAIEO) is on the rise. Francesca Rossi of the World Economic Forum outlines five important characteristics of a successful chief AI ethics officer. Here they are, along with the most interesting observations from our perspective: 

  1. Multi-disciplinary knowledge: The CAIEO requires a very diverse skillset and knowledge base. They must have a technical AI background, empathy for ethical considerations, as well as familiarity with social science, technology law, and business strategy. 
  2. Effective and inclusive governance: In large organizations, they will lead top-down, centralized governance initiatives and bottom-up efforts to select the right tools for distinct business units. 
  3. Strategic differentiation and business value: Emphasizing AI ethics can, and should, be a source of strategic differentiation connected to the company’s values and business model. 
  4. Public communication and advocacy: Although the laws and regulations surrounding AI are still emerging, CAIEOs can showcase values and norms around ethical AI use that will benefit the company’s public perception today.
  5. Company-wide engagement: With such a complex issue, putting AI ethics into practice requires a company-wide effort and direct stakeholder engagement, as we covered in Issue 9.
Ethics & Responsibility
Principles & Frameworks

Since 2017, China has strived to be the global leader in artificial intelligence, and it has now released its first set of six guidelines for ethical AI. This article in Verdict outlines the country’s most recent step: the New Generation Artificial Intelligence Ethics Specifications. According to Rebecca Arcesati – an analyst at the Mercator Institute for China Studies – this new framework is “a heavy-handed model, where the state is thinking very seriously about the long-term social transformations that AI will bring, from social alienation to existential risks, and trying to actively manage and guide those transformations.”

However well-intended this framework may appear, some worry that these new rules on user autonomy are an attempt to exercise greater control over China’s tech sector. As we discussed in Issue 6, thought leaders in artificial intelligence worry that China’s rapid advances in AI capabilities will create new barriers to developing more advanced technologies. In the past, Beijing has worked to control the flow of individuals’ data to major technology companies in an attempt to exert control over its citizens.

Regulation & Legislation

In this piece, author Kyle Wiggers uses the EU’s steps toward safer AI to show how far behind the US is in creating effective AI regulation. The European Union has recently held many discussions about the future of AI regulation, focusing heavily on preventing bias and strengthening security. However, the European Citizens’ Initiative raised concerns about policymakers’ focus on debiasing AI systems, claiming that, while policymakers are not expected to be experts on AI, they are making assertions without understanding the technical side of AI discrimination. Realistically, eliminating bias is not possible in most AI systems because of the unintended biases of their human creators; instead, policymakers should focus on how to mitigate bias through procedural processes.

Further, the ECI recommends that policymakers focus on banning the most biased practices. Mass surveillance, for example, has been shown time and again to disproportionately harm protected classes because the underlying technology is prone to developing biases. Looking back to Issues 10 and 8, it is clear that surveillance systems have misidentified individuals from marginalized groups, leading to class action lawsuits in the United States.

Regulation & Legislation

This WIRED piece discusses the steps the White House Office of Science and Technology Policy is taking to regulate AI while protecting consumer privacy and safety. In the US, few regulations govern how data is used, what consumer privacy should look like, or how to mitigate bias in algorithms. However, Eric Lander and Alondra Nelson propose a bill of rights for Americans grounded in the precept that “Powerful technologies should be required to respect our democratic values and abide by the central tenet that everyone should be treated fairly. Codifying these ideas can help ensure that.” Currently, several tech companies are working to monitor their own artificial intelligence systems, what Raj Seshadri, President of Data & Services at Mastercard, called “a disciplined approach” to creating safeguards and ensuring ethical use of algorithms. However, clear regulations codified into law would make it much faster and easier for companies to know what steps to take.

In the coming months, the Office of Science and Technology Policy will be working with experts across industry, academia, and government agencies to create an informed and holistic approach to regulating AI. If you are interested in sharing your views, the Office is accepting responses through a public interest form.

Regulation & Legislation

On October 14th, the FDA held a public workshop to discuss better methodologies for identifying and improving algorithms prone to mirroring systemic biases in healthcare. Through these discussions, the FDA concluded that clinical trials should enroll more racially and ethnically diverse populations.

According to Jack Resneck, president-elect of the American Medical Association, the FDA should focus on patient outcomes and on clinical validation with published, peer-reviewed data to build trust in AI- and ML-based medical devices. Another important area of focus is safeguarding these devices against bias that can exacerbate pre-existing disparities in healthcare. To keep these inequities from growing and to protect against AI- and ML-related risks over time, the FDA intends to publish draft guidance in 2021 on what should be included in SaMD Pre-Specifications (SPS) and an Algorithm Change Protocol (ACP).

Risks & Liability
Ethics & Responsibility

A continuing thread in our newsletter is the promise and the problems of explainable AI for tackling AI’s well-known issues with transparency, fairness, safety, and trust. Scott Clark in CMSWire focuses on the positive side, interviewing experts on tools that make AI “explainable, transparent, and understandable in order to be trusted, reliable, and consistent.” In doing so, explainable AI can serve four core purposes: 

  • Build trustworthiness
  • Satisfy developing legal requirements
  • Provide ethics justifications
  • Derive actionable and robust insights into AI decision-making

Though explainable AI does shed light on some of the decisions that black-box models make, it is not a perfect solution. Boris Babic and Sara Gerke, writing in Stat, outline how explainable AI typically cannot reveal how the original black-box model actually arrives at its decisions. Instead, it fits a somewhat similar "white box" model that is fully transparent; however, that white box will never behave identically to the original model. Thus, as Kareem Saleh notes, the explanation provided is just an approximation that does not fully represent reality.
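To make the "white box" idea above concrete, here is a minimal sketch in Python. It assumes scikit-learn and synthetic data; the specific models and the fidelity check are illustrative choices on our part, not anything prescribed by the pieces cited above.

```python
# Minimal sketch of a post-hoc "white box" surrogate explanation:
# fit a transparent model to mimic a black-box model's predictions,
# then measure how faithfully it reproduces them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The opaque "black box" model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# The transparent surrogate is trained on the black box's *outputs*,
# not the true labels, so it imitates the model rather than the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# "Fidelity": how often the surrogate agrees with the black box on new data.
fidelity = np.mean(surrogate.predict(X_test) == black_box.predict(X_test))
print(f"Surrogate fidelity: {fidelity:.2%}")  # typically below 100%
```

The shallow decision tree is fully inspectable, but its fidelity score will generally fall short of 100% agreement with the black box, which is exactly the approximation gap Saleh describes.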

AI Governance & Assurance
Ethics & Responsibility