Racist Algorithms: How Code Is Written Can Reinforce Systemic Racism

This op-ed explores how algorithmic bias can be found in everything from standardized testing to policing tactics.

This summer, my peers marched and spoke out against blatant acts of racial injustice. Meanwhile, as a 17-year-old student who dabbles in computer programming, I’ve been stewing about a newfangled, less-overt threat that also relates to systemic racism. What I did not realize until this summer was that my generation is already experiencing bias from our most trusted ally: the computer.

If you are a student, you may have already been the target of some sort of algorithmic bias, even if you don’t know it. Consider one telling fact: for a good number of high schoolers like myself who take state standardized tests, written essays might be graded not by an English teacher, but by a robot! My first reaction to learning this was simple surprise; I had never thought that my essays might be graded by inanimate objects. The more I thought about it, the more incredulous I became. My experience with computers made me doubtful that such an algorithm could be accurate and unbiased. It turns out I was right: the programs are primarily concerned with a programmed set of vocabulary terms, not the expression of ideas, and as a result, they have a pattern of punishing many Black and other minority students.

Students in Britain might be a bit more aware of algorithmic bias because of their government’s decision this summer to let a computer program predict and determine test scores. That’s right; after examinations were cancelled due to COVID-19, the British government literally let an algorithm estimate what grades high schoolers would have earned on their college entrance tests. The algorithm’s two data points, the overall performance of a student’s school and each student’s classroom grades, massively inflated the grades of private school students while deflating those of students from less prestigious schools. And while this particular program is no longer in use, it is a harbinger of the even more harmful algorithmic bias that is likely to come.

While flawed algorithms have impacted our lives as students, for others, the consequences can be more severe, maybe even a matter of life and death. Consider the field of criminal justice. Today, algorithms are being used to predict a defendant’s risk of recidivism. These programs use factors such as employment status, age, and a plethora of other data points to provide courts with reports classifying defendants as low, medium, or high risk. In multiple states, these reports are then considered when determining the length of a person’s sentence. At first glance, the computer seems objective, so how exactly is it perpetuating bias and contributing to unjust incarceration?

The answer is that some data sets used to create these programs are themselves biased due to historical inequities and current socioeconomic disparities. The nature of artificial intelligence (AI) is to disregard these inequalities and simply look for patterns. But that pattern recognition can and will lead to false and oversimplified conclusions when the underlying data is flawed from the start. That is why some risk-assessment algorithms have miscategorized Black Americans as high risk almost twice as often as white Americans of similar backgrounds.

Of course, individual human decisions are often biased too. But AI has the veneer of objectivity and the power to reify bias on a massive scale. Making matters worse, the public cannot scrutinize many of these algorithms because the formulas are often proprietary business secrets. For someone like me, who has spent hours programming and knows firsthand the deep harm that can arise from a single line of code, this secrecy is deeply worrisome. Without transparency, there is no way for anyone, from a criminal defendant to a college applicant, to understand how an algorithm arrived at a particular conclusion. It means that, in many ways, we are powerless, subordinated to the computer’s judgment.

Sometimes, a computer can just be downright wrong. For example, in August, four Black girls in Aurora, Colorado, one of whom was just six years old, were forced out of their car onto the ground at gunpoint after police officers pulled them over. It turned out that a law enforcement surveillance algorithm mixed up the license plate on the girls’ car with that of a stolen motorcycle. The episode is a poignant reminder to us teenagers that flawed algorithms don’t just affect test scores. The ramifications can be far worse.

There are solutions, though. While we should carefully monitor data sets for biases that arise from historical inequities, as well as encourage diversity in the field of AI, there is one simple thing we can demand: less secrecy shrouding the algorithms themselves. All of us have had the harrowing experience of being told, at a store or government office, “That’s what the computer says,” signifying the end of the conversation. Human decision-making suddenly leaves the equation. Algorithmic decision-making risks something much worse: unknown formulas that may determine the course of our lives. We need to understand how these algorithms make decisions and expose the programming to sunlight, which the late Supreme Court Justice Louis Brandeis once called “the best of disinfectants.”

Algorithmic bias is not just a thing of the future — it is already with us. Teenagers are uniquely situated to both understand and deal with flawed programs because we will have to live with them for the rest of our lives. The attention generated by protests around police brutality is a springboard to start thinking about how discrimination can seep into almost everything — including our technology. Black lives matter in code too.
