ML Assurance Newsletter

Issue 12 - Nov 18, 2021

Trust & AI: Must Reads


According to an Axios report, a bipartisan group of members of the US House of Representatives introduced the Filter Bubble Transparency Act. The bill would require internet platforms to give consumers the option to use services that do not rely on algorithms for content selection. It comes to the House following recent revelations about Facebook’s internal research findings, which have prompted lawmakers to seek to grant people more autonomy over how algorithms shape their online experiences. Ken Buck (R-CO), one of the bill’s original sponsors, commented on the legislation, saying, “Consumers should have the option to engage with internet platforms without being manipulated by secret algorithms driven by user-specific data.” Giving the public options in how their data is used has the potential to bolster public trust in the business practices of major online platforms. The bill also arrives as the US continues to lag behind other governing bodies, including the EU, in regulating artificial intelligence.

Regulation & Legislation

In this WIRED piece, Max G. Levy recaps the conversation Timnit Gebru had on November 9th with WIRED senior writer Tom Simonite on how AI research can work to mitigate biases. The former Google researcher was ousted from the major technology company after voicing concerns over the speed at which AI was being produced without adequate controls for biases. Fundamentally, Gebru believes AI should slow down so researchers can “try to understand and try to limit the negative societal impacts of AI.” 

As the article discusses, one of the largest barriers facing AI governance and oversight is the question of where to place responsibility for biases and abnormalities. As discussed in Issue 10 of our newsletter, conversations about corporate responsibility and artificial intelligence have been on the rise. Across industries, regulators are beginning to recognize that blaming the machine may no longer be acceptable.

Ethics & Responsibility

AI bias continues to be a prominent topic of discussion in the marketplace. As mentioned in Issues 11, 10, and 8, facial recognition remains one of the most hotly debated applications of artificial intelligence. However, the central question of Aaron Raj’s Tech Wire Asia article is “can ethical AI help overcome AI bias?” The short answer is yes, but only with adequate human oversight and monitoring systems: “When done right trustworthy AI can counter our human biases and ensure fairer outcomes for decisions that matter – employment, health, and wealth.”

However, another integral piece of the ethical AI puzzle is for government policy to establish clear regulations and guidelines that promote more trustworthy behavior. From Malaysia to China to Europe, there has been a boom in ethical AI policies and regulations in recent months. With an appropriate framework in place, businesses will be able to implement more robust and thoughtful practices surrounding their use of artificial intelligence and machine learning.

Ethics & Responsibility

Cynthia Rudin is the second recipient of the AAAI Squirrel AI Award for “pioneering socially responsible AI.” The computer science and engineering professor’s work focuses on creating and deploying interpretable AI systems to address societal problems. Her models have been deployed across healthcare and the criminal justice system to improve people’s daily lives.

Her interpretable approach to machine learning is the opposite of the black box models that so often make headlines. This transparent approach to algorithmic decision-making ensures that non-technical stakeholders are able to understand AI processes without any loss of model effectiveness. As Professor Rudin puts it, “We’ve been systematically showing that for high-stakes applications, there’s no loss in accuracy to gain interpretability, as long as we optimize our models carefully.” Building interpretable models from the start is a valuable alternative to using explainable AI tools to approximate the decision-making of more opaque black box models.
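To make that distinction concrete, here is a minimal sketch in Python with scikit-learn; the synthetic dataset and model choices are illustrative assumptions, not drawn from Rudin’s work. It contrasts a model that is interpretable by design with a post-hoc surrogate fit to approximate a black box:

    # Interpretable by design vs. post-hoc approximation of a black box.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

    # Interpretable by design: the fitted coefficients ARE the explanation.
    interpretable = LogisticRegression().fit(X, y)
    print("coefficients:", interpretable.coef_)

    # Post-hoc explainability: train an opaque model, then fit a shallow
    # surrogate tree that only approximates its decisions; Rudin argues
    # this gap is unacceptable in high-stakes settings.
    black_box = RandomForestClassifier(random_state=0).fit(X, y)
    surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))
    print("surrogate fidelity:", surrogate.score(X, black_box.predict(X)))

The surrogate’s fidelity score measures how well the explanation mimics the black box, not how the black box actually decides; an interpretable model has no such gap.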

AI Governance & Assurance

Ehsan Foroughi's latest article in Help Net Security notes that the current debate surrounding AI and ML systems centers on security, but argues that enterprises must also prioritize the ethical implications of AI to remain competitive in the marketplace. Ultimately, artificial intelligence is biased because the humans who create these systems are innately biased. Many enterprises do not understand the decision-making processes of their algorithms, which compounds these disparities because they cannot be properly addressed or mitigated.

As discussed in Issue 10 of our newsletter, major tech companies have ignored the ethical implications of their AI systems. As pressure mounts for reforms to everything from how algorithms are used to how data is stored, the ethical stakes of these practices are growing alongside them.

Ethics & Responsibility

In this recent InformationWeek interview, Jessica Davis sits down with Brandon Purcell, VP at Forrester, to discuss his predictions for artificial intelligence in the coming year. Most notably, Forrester anticipates that the market for responsible AI solutions will double in 2022. “Work in this area can dovetail with efforts in AI governance,” he says, and organizations will be looking to MLOps and ModelOps to help “govern, monitor, and manage” AI systems over their lifecycles. With growing concerns about privacy and regulations emerging across the globe, enterprises will adjust their AI-related business models to reflect more responsible and ethical practices.
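As a rough illustration of what “monitoring over the lifecycle” can look like in practice, here is a minimal sketch of the kind of input-drift check that MLOps tooling automates; the test, threshold, and data below are illustrative assumptions, not from the interview:

    # Minimal MLOps-style monitoring task: flag input drift between the
    # training baseline and live production traffic for one feature.
    import numpy as np
    from scipy.stats import ks_2samp

    def feature_drifted(train_col, live_col, alpha=0.01):
        """Kolmogorov-Smirnov test: has the live distribution shifted?"""
        stat, p_value = ks_2samp(train_col, live_col)
        return p_value < alpha

    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, size=5000)  # baseline captured at training time
    live = rng.normal(0.4, 1.0, size=5000)   # simulated shifted production data

    print("drift detected:", feature_drifted(train, live))  # True

In a governed pipeline, a flag like this would trigger review or retraining rather than letting the model silently continue operating on shifted data.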

Alongside this explosive growth, AI more generally will continue to become instrumental in business models. Among Forrester’s predictions are: 

  • 15% of non-technology companies will implement AI teams to design and test new technologies, 
  • 5% of Fortune 500 companies will adopt more automation to promote innovation, and 
  • Patents will be awarded to dozens of clever AI systems.

Principles & Frameworks