The complexity of algorithmic bias is one of the reasons it is a major discussion point in business today. More and more decisions in our lives are being made by algorithms - from whether or not we’re qualified for a job, to what clothes we buy, to the medical treatment we receive.
Ensuring that these automated decisions are fair and ethical is becoming increasingly urgent. Most of us probably agree on that, yet how we actually go about ensuring fair and ethical algorithmic decision-making remains a gray area. What does an ethical decision really mean?
Dive into this post from Monitaur's Dr. Andrew Clark to learn more.
Employment software that uses AI decision-making could be a new source of legal liability for organizations in California, according to a recent article by Brandon Vigliarolo in The Register.
A recently proposed amendment to California's hiring discrimination laws would make it “illegal for businesses and employment agencies to use automated-decision systems to screen out applicants who are considered a protected class by the California Department of Fair Employment and Housing.”
The broad language of the proposal could make the law easily applicable to other software and methods used in employment decisions.
More than 90% of businesses use software to rank and filter job candidates - so the implications of the law would be far-reaching.
A new study from the IBM Institute for Business Value (IBV) uncovered a shift in who respondents say is responsible for leading and upholding AI ethics within an organization.
“When asked which function is primarily accountable for AI ethics, 80% of respondents pointed to a non-technical executive, such as a CEO, as the primary ‘champion’ for AI ethics, a sharp uptick from 15% in 2018.”
The global study also affirmed a clear call for “trustworthy AI,” but found that a gap remains between organizational leaders’ intentions and meaningful action on AI ethics.
Read the full release on PR Newswire.
This month, the European Parliament (EP) approved the European Data Governance Act (DGA). The regulation applies to public sector institutions and companies.
“The purpose of the bill is to make public data accessible to companies and citizens to unlock the potential of artificial intelligence. However, the regulation does not create any obligation for public sector bodies to allow the reuse of data — it encourages them to share the data on a voluntary basis, in an anonymized format (including synthetic data) and respecting other laws such as the General Data Protection Regulation or Copyright legislation.”
The Data Governance Act is the first data regulation approved under the EU Data Strategy. One of the goals of the act is to create a foundation for a data economy that is fair and trustworthy for both citizens and businesses.
Lead Member of the European Parliament (MEP) Angelika Niebler commented:
“We are at the beginning of the age of AI, and Europe will require more and more data. This legislation should make it easy and safe to tap into the rich data silos spread all over the EU. The data revolution will not wait for Europe. We need to act now if European digital companies want to have a place among the world’s top digital innovators.”
Recent revenue success at LinkedIn was the result of a focus on Explainable AI (XAI).
Microsoft Corp’s LinkedIn boosted subscription revenue by 8% after arming its sales team with artificial intelligence software that not only predicts clients at risk of canceling but also explains how it arrived at its conclusion.
“While AI scientists have no problem designing systems that make accurate predictions on all sorts of business outcomes, they are discovering that to make those tools more effective for human operators, the AI may need to explain itself through another algorithm.”
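What might “explaining itself through another algorithm” look like in practice? Here is a minimal, hypothetical sketch in Python - a toy churn model whose score is decomposed into per-feature contributions a salesperson could read. The feature names and data are invented for illustration; this is not LinkedIn’s actual system.

```python
# Minimal, hypothetical sketch of an explainable churn prediction:
# a model scores an account, then a second step decomposes that score
# into per-feature contributions. Invented data; not LinkedIn's system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=42)
features = ["logins_per_week", "seats_used", "support_tickets"]

# Hypothetical history: 500 accounts, churn loosely tied to low usage.
X = rng.normal(size=(500, 3))
y = (X @ np.array([-1.5, -1.0, 0.8]) + rng.normal(0, 0.5, 500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# For a linear model, coefficient * standardized feature value is that
# feature's additive contribution to the churn log-odds - the raw
# material for a plain-language explanation of one account's risk.
account = scaler.transform(X[:1])
risk = model.predict_proba(account)[0, 1]
contributions = model.coef_[0] * account[0]

print(f"Churn risk: {risk:.0%}")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.2f} to log-odds")
```

Real systems typically use more powerful models plus a separate explanation algorithm (the “another algorithm” in the quote above), but the output is the same in spirit: a ranked list of reasons a human can act on.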
Regulators, including the Federal Trade Commission, have been increasingly vocal about the need for AI to be explainable (or run the risk of investigation). The EU is also currently considering an Artificial Intelligence Act, which could pass as early as next year. The act includes “a set of comprehensive requirements including that users be able to interpret automated predictions.”
Watch (or listen) to this 37-minute interview with Beena Ammanath on the Keen On show with host Andrew Keen. Ammanath is the Executive Director of the Global Deloitte AI Institute and Founder of Humans For AI and is considered a global thought leader on AI ethics.
In the interview, Ammanath discusses a range of topics - including some of the ideas from her book: Trustworthy AI: A Business Guide for Navigating Trust and Ethics in AI. According to Ammanath, we need to “move beyond personal morality or personal ethics, [and] make it at the organizational level.”
Ammanath also discusses how AI technology presents an opportunity to attract more diverse talent to the field. Unlike many other technology fields, working in AI doesn’t necessarily require coding skills: subject matter expertise is needed, but deep technical skills are not always required. This opens up the industry to candidates with diverse backgrounds and experiences.
FutureMakers is part of the Responsible AI for Social Empowerment and Education (RAISE) initiative launched by MIT in 2021, according to Kim Patch of the MIT Media Lab.
FutureMakers offers programs for middle and high school students to learn more about AI as well as the social implications of AI technologies.
“We want to remove as many barriers as we possibly can to support diverse students and teachers,” says Cynthia Breazeal, a professor of media arts and sciences at MIT who founded the Media Lab’s Personal Robots Group and also heads up the RAISE initiative. “All RAISE programs are free for educators and students. The courses are designed to meet students and teachers where they are in terms of resources, comfort with technology, and interests.”
FutureMakers currently has two programs:
“AI is shaping our behaviors, it’s shaping the way we think, it’s shaping the way we learn, and a lot of people aren’t even aware of that,” says Breazeal. “People now need to be AI literate given how AI is rapidly changing digital literacy and digital citizenship.”