How does bias happen, technically?

Dealing with data bias

As we touched on in our previous post Breaking down bias in AI, algorithmic bias is a complicated topic. Below, we examine the common causes of algorithmic bias.

To understand what causes bias, we first must understand: what do we mean by bias? Our working definition is as follows:

  • Bias is prejudice or unfair treatment towards a group of people.
  • Bias can occur on several factors such as gender, race, disability, sexuality, and/or age.
  • Algorithmic bias falls into three broad categories: statistical data bias, historical data bias, and algorithm training bias.

A common error is focusing on algorithms as the cause of bias. There is some truth to this, but the picture is more complex than faulty models alone. Blaming an algorithm for model bias is analogous to blaming a child who burns themselves on an unattended stove: is the child entirely to blame?

Machine Learning (ML) models are incredibly unintelligent, despite contrary representations in the media. Algorithms are simply equations that optimize a loss function¹ and make a prediction (such as approve or deny) based on the algorithm’s internal representation of reality. If the algorithm is given many data points of the form “Black woman, poor credit, deny loan” and “White man, good credit, approve loan,” it will mirror this behavior in its predictions.
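To make the point concrete, here is a minimal, fully synthetic sketch (not tied to any real lending data): a logistic regression trained on labels that depend heavily on a protected attribute will simply reproduce that dependence in its predictions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

protected = rng.integers(0, 2, size=n)          # 0 or 1, e.g. a demographic group
credit_score = rng.normal(650, 80, size=n)

# Biased historical decisions: approval depends heavily on the protected attribute.
approved = ((credit_score > 600) & (protected == 1)).astype(int)

X = np.column_stack([protected, credit_score])
model = LogisticRegression().fit(X, approved)

# The model reproduces the pattern it was shown: near-zero approval
# rates for group 0, even at identical credit scores.
for group in (0, 1):
    mask = protected == group
    rate = model.predict(X[mask]).mean()
    print(f"group {group}: predicted approval rate = {rate:.2f}")
```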

The root cause of a biased algorithm is often not the algorithm but the data it was trained on.

Statistical bias

Statistical bias is the difference between the estimated value and the true value of a measure (such as in the credit example above). For algorithmic bias specifically, it comes down to three types of statistical bias:

  • Data used for training is not representative of the target population
  • Measurement errors
  • Unbalanced data

“Data used for training is not representative of the target population” means that if we are building a model for insurance policy claims for individuals in Wyoming, we need to ensure that the data we’ve selected for training is representative of, or similar in all material respects to, the true population data from Wyoming.

This means that if, for instance, 50% of Wyomingites drive pickup trucks (disclaimer: all data is entirely fictitious and used as an example only), and 5% of pickups get into an accident every 5 years, then we need to ensure that our data is reflective of these statistical properties.
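As a rough sketch of such a representativeness check (using the fictitious Wyoming numbers above and a made-up `training_df` DataFrame), we can simply compare sample proportions against the known population statistics:

```python
import pandas as pd

# Hypothetical training data; in practice this would be the full claims dataset.
training_df = pd.DataFrame({
    "drives_pickup": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    "accident_last_5y": [0, 0, 1, 0, 0, 0, 0, 1, 0, 0],
})

# Fictitious population-level statistics for Wyoming (from the example above).
population_stats = {"drives_pickup": 0.50, "accident_last_5y": 0.05}

for column, expected in population_stats.items():
    observed = training_df[column].mean()
    gap = observed - expected
    print(f"{column}: sample={observed:.2f}, population={expected:.2f}, gap={gap:+.2f}")
```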

Measurement errors compound the matter further. Assume we think we understand our target population (50% of Wyomingites drive pickups) when, in fact, only 40% do, and our population-level statistics were wrong. That gap is the result of measurement error. Measurement error is a pernicious issue, affecting even the top levels of statistical practice[1]. Accurate surveying or polling is extremely difficult, as we find out every election cycle, because of the difficulty of generalizing from a sample to the entire population of interest. Measurement error comes in two flavors: random error, which stems from factors that cannot be controlled (and/or inaccurate statistical assumptions), and systematic error, which arises from the way we collect population data and can, in theory, be accounted for. To expound on our polling example, political polls are notoriously inaccurate because of their statistical assumptions, sample calculations, and the sheer difficulty of executing a representative survey. Cold calling or knocking on doors at 1 pm on a weekday, tabulating the responses, and assuming they are representative of the entire region is not a valid approach, for instance.

Here is where it gets complicated and where a lot of model builders trip up. Even if the data is free of measurement errors, and even if it is representative of the target population, unbalanced data is still an issue for the algorithm. Suppose, hypothetically, that our dataset of Wyomingites contains 100 millionaires with pickups, 90 of them White Americans and 10 of them Black Americans. This may be representative and free from measurement errors; however, this is where the unintelligence of models comes into play: algorithms rely on seeing enough similar examples to know how to handle them.

If we were concerned about our algorithm basing its decisions on race, we could upsample the number of Black American millionaires so that the algorithm doesn’t learn to equate White American = rich, Black American = not rich. This tendency of models to over-generalize from the data they encounter is where the thorny ethical and fairness considerations we spoke about in our last post come into play.

We can very successfully upsample parts of our data while keeping it representative of the target population, for example by adding a few additional observations so that the algorithm optimizes on the appropriate factors, such as frequency of accidents, rather than on something like ethnicity or credit score.
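As a minimal sketch of that kind of rebalancing (using a made-up DataFrame with hypothetical `race` and `millionaire` columns mirroring the example above), upsampling the under-represented group might look like this:

```python
import pandas as pd
from sklearn.utils import resample

# Entirely fictitious data matching the 90/10 millionaire example above.
df = pd.DataFrame({
    "race": ["White"] * 90 + ["Black"] * 10,
    "millionaire": [1] * 100,
    "pickup": [1] * 100,
})

majority = df[df["race"] == "White"]
minority = df[df["race"] == "Black"]

# Sample the minority group with replacement until the groups are balanced.
minority_upsampled = resample(
    minority,
    replace=True,
    n_samples=len(majority),
    random_state=42,
)

balanced = pd.concat([majority, minority_upsampled])
print(balanced["race"].value_counts())
```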

By judiciously rebalancing parts of the data, we can ensure that the algorithm encodes the proper factors. Although a full treatment is outside the scope of this post, techniques such as Sobol analysis[2] can be used to ascertain the first- and second-order effects of which discriminating factors are encoded in the model, confirming that the algorithm is doing what it should.
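As a rough sketch of what that could look like in practice (assuming the SALib package; the toy model and input ranges here are purely illustrative stand-ins for a fitted model’s predictions):

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical input space for a toy pricing model.
problem = {
    "num_vars": 3,
    "names": ["accident_frequency", "credit_score", "vehicle_age"],
    "bounds": [[0.0, 1.0], [300.0, 850.0], [0.0, 25.0]],
}

def toy_model(x):
    # Stand-in for a fitted model; a real analysis would call model.predict(x)
    # on the trained algorithm instead.
    return 100 * x[0] - 0.05 * x[1] + 2 * x[2]

# Saltelli sampling, model evaluation, and Sobol index estimation.
X = saltelli.sample(problem, 1024)
Y = np.apply_along_axis(toy_model, 1, X)

Si = sobol.analyze(problem, Y)
print(Si["S1"])  # first-order effect of each input
print(Si["ST"])  # total effect, including interactions
```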

Historical bias

Another major cause of bias is historical or societal bias: bias already embedded in the data by past human decisions, such as historical loan approval data in which loan officers exhibited racial bias or in which redlining shaped outcomes. To build effective models, and ML models specifically, we need large amounts of data so that our model can achieve high predictive accuracy.

Using the wealth of historical data available to companies that have been in business for many years is normally a major asset and a huge business advantage. Yet when we are building fair and ethical models, we need to specifically evaluate these datasets to ensure that they are representative of the current target population and do not have historical biases embedded, as data collected during the redlining era would.

Discrimination in financial services was ‘legalized’ under President Franklin Roosevelt’s New Deal with the creation of the Home Owners’ Loan Corporation (HOLC) in 1933[3]. The HOLC created color-coded maps in which affluent, often white, areas were categorized as low risk, while poor neighborhoods and neighborhoods with high concentrations of minorities were categorized as high risk. Areas of ‘high risk’ were often highlighted in red, hence the term ‘redlining’. As a result, minorities and low-income areas faced an unfair catch-22 feedback loop: ‘you live in a poor neighborhood, so you don’t get access to credit services’ and ‘you don’t have a history of credit, so I can’t give you financial services’. This is a long topic in and of itself, but for our purposes here, redlining was a major contributor to historically biased data, and its effects still linger in our efforts to make data and algorithms fair today, even though the practice was formally ended, culminating in the Home Mortgage Disclosure Act of 1975 under President Gerald Ford.

Algorithm training bias

Now that we have examined the two main data-related causes of bias, we still need to cover algorithm training bias. As discussed above, this cause is often overemphasized, but it remains a critical component of algorithmic bias.

Algorithm training involves showing an algorithm a large dataset with labeled outcomes, such as approve and deny. The algorithm iterates over the individual rows, optimizing its loss function for the best performance it can achieve. Usually accuracy, precision, recall, or a combination thereof is used as the criterion for model optimization.
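As a small illustrative sketch (with made-up labels and predictions), these criteria can be computed directly with scikit-learn:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Made-up labeled outcomes (1 = approve, 0 = deny) and model predictions.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
```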

Unless our data is pristine and representative of a socioeconomically diverse target population (having both is highly improbable), we can either resample the data to be more representative or introduce the notion of multi-objective optimization.

With multi-objective optimization, we ask our algorithm not only to produce the most accurate model, but the most accurate model that is also fair. The most common fairness criterion for model training is equalized odds². As discussed in our previous post, equalized odds requires that, for example, males and females with the same actual outcome are approved for a loan at the same rate, i.e., that true positive and false positive rates are equal across groups.

When we optimize our algorithm to be the most accurate it can be, while meeting equalized odds, we are training our algorithm to be fair. 

Fairlearn, by Bird et al.[4], offers several fantastic model-agnostic tools, such as a grid search tool that lets you train a model under this notion of multi-objective optimization.
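As a minimal sketch of that idea (on fully synthetic data, and assuming the fairlearn package; the parameter choices are illustrative, not a recommendation), a grid search under an equalized odds constraint might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import GridSearch, EqualizedOdds
from fairlearn.metrics import equalized_odds_difference

rng = np.random.default_rng(0)
n = 2_000

sensitive = rng.integers(0, 2, size=n)                # e.g. a binary protected attribute
X = np.column_stack([rng.normal(size=n), sensitive])  # toy features
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Trade accuracy off against an equalized odds constraint across a grid of models.
sweep = GridSearch(
    LogisticRegression(),
    constraints=EqualizedOdds(),
    grid_size=20,
)
sweep.fit(X, y, sensitive_features=sensitive)

# Inspect each candidate model's equalized odds gap; smaller is fairer.
for predictor in sweep.predictors_:
    gap = equalized_odds_difference(
        y, predictor.predict(X), sensitive_features=sensitive
    )
    print(f"equalized odds difference: {gap:.3f}")
```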

Although more work needs to be done on multi-objective optimization to increase the accessibility of the concept, it is possible to train your model to explicitly not be biased. 

However effective this method is, focusing on the quality and representativeness of one's data should be the first line of defense against algorithmic bias.

Recap and conclusion

In this post, we’ve discussed the two data-related root causes of algorithmic bias, statistical and historical bias, and provided a high-level overview of algorithm training bias.

Focusing on data quality and its representativeness of the target population should be organizations’ main priority, with multi-objective optimization later instituted as a preventative “safety” measure. Despite the improvements in multi-objective modeling, the old adage still applies: “garbage in, garbage out”. No fancy algorithmic optimization technique can create fair models when the data is rife with historical bias and stale with outdated population representations.

The tenets of statistical analysis and independent statistical surveying are still very relevant in today’s Big Data age. Representative data and judicious data rebalancing are the most effective means to ensure our models are fair and ethical.

About the author

Dr. Andrew Clark is Monitaur’s co-founder and Chief Technology Officer. A trusted domain expert on the topic of ML auditing and assurance, Andrew built and deployed ML auditing solutions at Capital One. He has contributed to ML auditing education and standards at organizations including ISACA and ICO in the UK. He currently serves as a key contributor to ISO AI Standards and the NIST AI Risk Management framework. Prior to Monitaur, he also served as an economist and modeling advisor for several very prominent crypto-economic projects while at Block Science.

Andrew received a B.S. in Business Administration with a concentration in Accounting, Summa Cum Laude, from the University of Tennessee at Chattanooga, an M.S. in Data Science from Southern Methodist University, and a Ph.D. in Economics from the University of Reading. He also holds the Certified Analytics Professional and American Statistical Association Graduate Statistician certifications. Andrew is a professionally trained concert trumpeter and Team USA triathlete.


Footnotes

1. Example loss function: the binary logistic regression log loss:

$L=-\frac{1}{n}\sum_{i=1}^{n}\left(y_i\log(\hat y_i) + (1-y_i)\log(1-\hat y_i)\right)$

Where:

  • $n = \textrm{number of training values}$
  • $i = \textrm{index of the ith training value}$
  • $y_i = \textrm{correct value for the ith training value}$
  • $\hat{y}_i = \textrm{predicted value for the ith training value}$

Loss functions are usually minimized using gradient descent.
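A tiny sketch of this formula in code (with made-up labels and predicted probabilities), for readers who prefer code to notation:

```python
import numpy as np

def log_loss(y_true, y_pred):
    # Average binary cross-entropy, matching the formula above.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(log_loss([1, 0, 1], [0.9, 0.2, 0.7]))
```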

2. We say that a model satisfies Equalized Odds with respect to a protected attribute and an outcome if the prediction and the protected attribute are independent conditional on the outcome. The equation for equalized odds in binary classification with a binary protected class is:

$P(\hat Y=1 \mid A=0, Y=y)=P(\hat Y=1 \mid A=1, Y=y), \quad y \in \{0,1\}$

Where:

  • $P = \textrm{Probability}$
  • $\hat Y = \textrm{Predicted outcome}$
  • $Y = \textrm{Actual outcome}$
  • $A = \textrm{Protected class attribute}$
  • $y=1$: the condition for equal true positive rates
  • $y=0$: the condition for equal false positive rates
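A small sketch of checking this condition empirically (with made-up outcomes, predictions, and groups; `confusion_matrix` is from scikit-learn):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Made-up outcomes, predictions, and a binary protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Equalized odds holds when TPR and FPR match across the groups.
for a in (0, 1):
    mask = group == a
    tn, fp, fn, tp = confusion_matrix(y_true[mask], y_pred[mask], labels=[0, 1]).ravel()
    print(f"A={a}: TPR={tp / (tp + fn):.2f}, FPR={fp / (fp + tn):.2f}")
```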

References

  1. Overberg, Paul. “U.S. Census Undercounted Blacks, Hispanics in 2020.” Wall Street Journal, March 10, 2022, sec. Politics.
  2. Zhang XY, Trame MN, Lesko LJ, Schmidt S. Sobol Sensitivity Analysis: A Tool to Guide the Development and Evaluation of Systems Pharmacology Models. CPT Pharmacometrics Syst Pharmacol. 2015 Feb;4(2):69-79. doi: 10.1002/psp4.6. PMID: 27548289; PMCID: PMC5006244.
  3. Mosley, Roosevelt, and Radost Wenman. Methods for Quantifying Discriminatory Effects on Protected Classes in Insurance. CAS Research Paper Series on Race and Insurance Pricing. Casualty Actuarial Society, 2022.
  4. Bird, Sarah, Miro Dudík, Richard Edgar, Brandon Horn, Roman Lutz, Vanessa Milan, Mehrnoosh Sameki, Hanna Wallach, and Kathleen Walker. “Fairlearn: A Toolkit for Assessing and Improving Fairness in AI,” May 18, 2020.