ML Assurance Newsletter

Issue 15 - Jan 27, 2022

Trust & AI: Must Reads


While the EU and China have moved forward with AI regulation, the US has not yet enacted a comprehensive framework for regulating AI. In her recent Jurist article, Uche Ewelukwa Ofodile outlines the federal government’s attempts to address the issues posed by artificial intelligence. However, critics believe the bills proposed by Congress may not go far enough to address the complex problems created by AI, and that complexity has contributed to hesitancy to regulate the technology. Ewelukwa Ofodile suggests three things to consider when deciding whether the US should regulate AI:

  1. A consensus to regulate AI already exists in the US.
  2. As the leader in creating these technologies, the US is obligated to write the rules that ensure their ethical and responsible deployment and use.
  3. If the federal government does not act, the patchwork of laws and regulations created by state and local governments will lead to greater confusion across the AI landscape.

Regulation & Legislation

Artificial intelligence has become an integral part of business strategy. However, this fast-evolving technology can expose a company to reputational, legal, ethical, and privacy risks. In a contributed article for VentureBeat, David Ellison of Lenovo outlines the seven values Responsible AI Committees should consider when building a company’s AI plan:

  1. Human oversight
  2. Technical robustness
  3. Privacy and data protection
  4. Transparency
  5. Bias mitigation
  6. Management of societal and environmental harm
  7. Accountability

Ellison believes that by adopting these values, organizations will protect themselves from legal, ethical, and reputational risks.

Risks & Liability

The global AI market is expected to grow from $47.7 billion in 2021 to $360 billion by 2028. Despite this projected growth, enterprises that use AI and machine learning models still struggle to provide their systems with adequate oversight and governance to mitigate risk. To address this problem, Alankrita Priya recommends introducing MLOps into an enterprise’s business model.

In her Inside Big Data article, Priya emphasizes the collaboration and communication that MLOps facilitates between those who create machine learning models and business leaders, enabling a more holistic lifecycle governance strategy. Through collaboration, businesses can narrow the gap between functional teams and those putting the models to work in the business, fostering greater accountability and responsibility in how organizations build and deploy complex AI and machine learning models.
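What such lifecycle governance looks like in practice varies widely; as one loose illustration (the record fields, function name, and file path below are hypothetical, not drawn from Priya’s article), a team might keep an auditable release log that both ML engineers and business stakeholders can review:

```python
import json
from datetime import datetime, timezone

def record_model_release(log_path, model_name, version, owner, metrics):
    """Append one governance record per model release to a shared audit log."""
    entry = {
        "model": model_name,
        "version": version,
        "owner": owner,          # the team accountable for this model
        "metrics": metrics,      # evaluation results reviewed before release
        "released_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: a data scientist records a release that stakeholders can later audit.
record_model_release(
    "model_audit_log.jsonl",
    model_name="churn_predictor",
    version="1.3.0",
    owner="risk-analytics",
    metrics={"auc": 0.87},
)
```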

Ethics & Responsibility
AI Governance & Assurance

In its latest step toward becoming the world leader in AI governance, the UK introduced the AI Standards Hub to help businesses adopt more ethical and responsible AI governance frameworks. As outlined by Sabina Weston in IT Pro, the new body will be led by the Alan Turing Institute, in collaboration with the British Standards Institution and the National Physical Laboratory, to strengthen AI governance following Brexit.

Although the activities of the AI Standards Hub are not yet defined, they will be developed through a series of workshops and roundtable discussions led by the Alan Turing Institute. With stakeholders from a variety of industries participating in these talks, the Institute’s chief executive, Adrian Smith, believes the Hub will create the most comprehensive framework for AI governance.

AI Governance & Assurance

In her most recent article in InformationWeek, Jessica Davis outlines the growing concern over AI bias among companies, based on a survey conducted by DataRobot. Of the 36% of organizations that reported a negative impact due to AI bias, 62% reported lost revenue and 61% reported lost customers. The DataRobot report also found that 54% of technology leaders are very or extremely concerned about AI bias, a 12% increase over those surveyed in 2019.

Concerns surrounding AI bias have grown substantially over the past few years, raising questions about how to make AI more ethical, who should be held responsible for bias, and how to mitigate bias in datasets. Most respondents believed the responsibility should fall to data scientists, third-party consultants, and C-suite executives. Among the solutions most commonly discussed are monitoring AI systems, deploying algorithms that help reduce bias in training data sets, and incorporating tools that make AI more explainable.
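To make the first of those solutions concrete, here is a minimal sketch of one common monitoring check, a demographic parity gap (the alert threshold and the binary encoding of the protected attribute are illustrative assumptions, not details from the DataRobot report):

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: 0/1 model predictions; group: 0/1 protected-attribute labels.
    A value near 0 suggests similar treatment across groups.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate for group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_0 - rate_1)

# Example: flag a deployed model whose predictions diverge across groups.
gap = demographic_parity_difference([1, 0, 1, 1, 0, 1, 0, 0],
                                    [0, 0, 0, 0, 1, 1, 1, 1])
if gap > 0.2:  # alert threshold chosen purely for illustration
    print(f"Potential bias: positive-rate gap of {gap:.2f} across groups")
```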

Ethics & Responsibility
Risks & Liability

On January 18, 2022, the U.S. Chamber of Commerce announced the launch of the Artificial Intelligence Commission on Competition, Inclusion, and Innovation. According to Suzanne P. Clark, president and CEO of the U.S. Chamber of Commerce, the goal of the Commission is to “leverage AI to compete globally, and [create] reasonable and responsible rules governing the use of AI that harness its potential while effectively mitigating its risks.”

The Commission will be co-chaired by former representatives John Delaney (D-MD) and Mike Ferguson (R-NJ) and will draw on the views of leaders in government, industry, and civil society to build a framework for the responsible use and deployment of AI. Guided by that input and the Chamber’s AI policy principles, the Commission will recommend sustainable, bipartisan AI policies intended to establish the United States as a leader in ethical and fair AI innovation. The announcement follows the UK’s creation of its own standards effort in its bid to become the global leader in AI ethics and responsibility.

Regulation & Legislation