Bias in Algorithms

K Murray
2 min read · Mar 13, 2021

Algorithms follow quickly after any implementation of computerization; people often want things to be automated without realizing the complexities and potential consequences of implementing AI. Even for seemingly simple tasks, AI can have deep implicit biases (Hauser 2018). These biases can come from many places, but more often than not the data set the AI is trained on is skewed: garbage in, garbage out. These AI systems need huge pools of data to learn from, and that learning must be tested and monitored to make sure the AI isn't learning something way off base. This is why 'black box' algorithms are so scary: we don't know how they run.
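As a toy sketch of "garbage in, garbage out" (the data and the trivially simple "model" below are entirely hypothetical, not from any cited study): if historical outcomes are skewed against one group, even the simplest learner trained on them will faithfully reproduce that skew.

```python
from collections import defaultdict

# Hypothetical, deliberately skewed historical loan decisions:
# similar credit scores, but very different approval rates by group.
training_data = [
    # (group, credit_score, approved)
    ("A", 700, True), ("A", 650, True), ("A", 600, True), ("A", 550, False),
    ("B", 700, False), ("B", 650, False), ("B", 600, False), ("B", 550, False),
]

def train_majority_model(rows):
    """Learn each group's majority outcome -- the simplest possible 'model'."""
    counts = defaultdict(lambda: [0, 0])  # group -> [denied, approved]
    for group, _score, approved in rows:
        counts[group][int(approved)] += 1
    return {g: c[1] >= c[0] for g, c in counts.items()}

model = train_majority_model(training_data)
# The model has learned nothing about credit scores -- only the
# historical bias baked into its training labels:
print(model)  # {'A': True, 'B': False}
```

A real system would use far more features and a far more complex model, but the failure mode is the same: the model can only be as fair as the data it learns from.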

And AI doesn't just determine what ads end up on your screen; it can decide whether you get that job interview, whether you receive the loan you applied for, whether you get into college, and even how long a prison sentence is. That last one is especially suspect, as recidivism-risk algorithms weigh factors including race, gender, and background (Buolamwini 2018; Sharma 2020). These factors can be incredibly biased; even human arrest rates for different races are known to be skewed, and the AI should prevent that bias, not enforce it. This is why AI must be used not flippantly but in a well-regulated manner, watched over by capable human eyes. As O'Neil (2017) said, "There's a lot of money to be made in unfairness"; corporations and ignorant people in power should not be solely in charge of unknowable AI that can determine your quality of life.
