ML Assurance Newsletter

Issue 13 - Dec 9, 2021

Trust & AI: Must Reads


This month, the UK Central Digital and Data Office, with the assistance of the Centre for Data Ethics and Innovation, published a framework to bring greater algorithmic transparency to government departments and public sector bodies. The framework will be enforced beginning in 2022. This step towards more transparency in algorithmic models makes the United Kingdom one of the first countries to create comprehensive guidelines on the matter, strengthening its position as a leader in AI governance.

With the assistance of the Centre for Data Ethics and Innovation’s advisory council and input from the public, the Central Digital and Data Office designed a standard with two tiers. The first tier requires organizations to publish a short description of their algorithmic tools, “including how and why it is being used.” The second tier provides more detail about how the algorithm operates, the datasets used to train the model, and the level of human oversight. The Office hopes greater transparency will “promote trustworthy innovation by providing better visibility of the use of algorithms across the public sector, and enabling unintended consequences to be mitigated early on.”

Regulation & Legislation

In his most recent WIRED piece, Tom Simonite outlines the next step for former Google researcher Timnit Gebru as she continues her work promoting more ethical and responsible AI: the Distributed Artificial Intelligence Research (DAIR) Institute. The institute is currently a project of the nonprofit Code for Science and Society, but will become a nonprofit in its own right in the near future.
“Instead of fighting from the inside, I want to show a model for an independent institution with a different set of incentive structures,” Gebru says of her new project. By moving away from the corporate world, which continues to struggle to develop comprehensive AI risk and accountability frameworks, Gebru hopes to better understand the influence artificial intelligence has on society. Through this work, she intends to promote more responsible development and deployment of AI systems across industries.

Ethics & Responsibility

The Associated Press reports that the New York City Council passed legislation on November 10, 2021, placing limits on the use of artificial intelligence in hiring. The bill addresses the opaque nature of the AI tools employers use by requiring them to implement an auditing system, a requirement aimed at protecting against the racial and gender biases that are a major risk in AI-driven hiring.
Echoing our own beliefs at Monitaur about artificial intelligence, Frida Polli, an advocate for the bill, said, “I believe this technology is incredibly positive but it can produce a lot of harms if there isn’t more transparency.” The bill is the first major step towards regulating AI in employment and hiring, and similar legislation is under consideration by the White House, the Equal Employment Opportunity Commission, and the European Union.

Regulation & Legislation
Ethics & Responsibility

In this recent VentureBeat article, Kyle Wiggers explains what Explainable AI (XAI) is and why the promises made about it are not always attainable. Explainable AI is almost always preferable to black-box models; however, XAI faces several technical barriers. These barriers leave the models uninterpretable to 65% of employees. The explanations XAI provides often do not present data transparently enough to be understood by non-technical stakeholders.

As Wiggers notes, “XAI should give users confidence that a system is an effective tool for the purpose and meet society’s expectations about how people are afforded agency in the decision-making process.” Even though XAI often fails to meet these expectations, businesses should continue moving towards more transparent AI models, while remaining cautious about how much explainability alone can deliver. Explainability is not the whole solution to understanding AI and ML models, but it is a critical step towards it.

Risks & Liability

Melissa Heikkilä’s recent piece in Politico examines the UN Educational, Scientific and Cultural Organization’s (UNESCO) recommendation that its member states stop using AI technology for social scoring systems – a practice used by Beijing “to score Chinese citizens based on their perceived trustworthiness.” Most notably, this is the first time China has signed onto an international organization’s set of principles calling for an end to the use of AI for mass surveillance. Because the recommendations are voluntary, UNESCO’s Assistant Director-General for Social and Human Sciences, Gabriela Ramos, declined to say whether she believes China will abide by them. However, China’s signature comes shortly after the country released its own framework on AI ethics.
Ramos recognizes that the framework will provide a stepping stone for other organizations across the globe – including the EU and the US, which is not a UNESCO member state – to take steps to regulate AI. The recommendation matters for the growth of AI regulation worldwide because it gives governments and governing bodies a concrete framework to model their own rules on as calls for regulation continue to grow.

Regulation & Legislation
Ethics & Responsibility

In his most recent CMSWire piece, Phil Britt summarizes the predictions of Moutusi Sau, vice president and analyst at Gartner, for Explainable AI (XAI) in financial services. Sau anticipates that the adoption of Explainable AI will drive the incorporation of AI systems from the current rate of 30% to approximately 50%, and attributes this growth to an increased understanding of how AI models make decisions.
The financial services industry’s slow adoption of explainable AI is nothing new. Historically, the industry was apprehensive about integrating the internet, mobile services, and other technological advances because of its dependence on legacy infrastructure. As a highly regulated industry, it also had to consider how regulation would apply to new technologies. Yet as regulatory demands continue to evolve, enterprises are beginning to see the value of more transparent models. Still, organizations should be careful not to oversell what explainable AI can offer: explainability alone will not make AI systems understandable to all stakeholders, but it is an important step along the journey.

AI Governance & Assurance