How Coronavirus and Protests Broke Artificial Intelligence And Why It’s A Good Thing

"AI researchers are grappling with what our role should be in dismantling systemic racism, economic oppression, and other forms of injustice and discrimination."

Visitors at the Jiangdu Digital Economy Exhibition Center in Yangzhou, Jiangsu Province, China, on April 28, 2020, during the opening of the 2020 China Yangzhou (Jiangdu) digital economy development conference. Costfoto/Barcroft Media via Getty Images

Until February 2020, Amazon (AMZN) thought that the algorithms controlling everything from its shelf space to its promoted products were practically unbreakable. For years it had used simple and effective artificial intelligence (AI) to predict buying patterns, planning its stock levels, marketing, and much more around a simple question: who usually buys what?


Yet as COVID-19 swept the globe, Amazon found that the technology it relied on was far more fragile than it had assumed. As sales of hand sanitizer, face masks, and toilet paper soared, its automated systems were rendered almost useless, with AI models thrown into utter disarray.
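
To see how this kind of failure happens, consider a deliberately simplified sketch (simulated data and a toy model, not Amazon's actual system): a forecaster fit to years of stable weekly demand looks excellent right up until panic buying multiplies demand overnight, at which point its predictions fall apart.

```python
# A toy demand forecaster hitting distribution shift.
# All data is simulated; this is not Amazon's actual system.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Two years of stable weekly demand: mild seasonality plus noise.
weeks = np.arange(104)
demand = 100 + 10 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 5, size=104)

# "Who usually buys what": last week's demand predicts this week's.
X_train, y_train = demand[:-1].reshape(-1, 1), demand[1:]
model = LinearRegression().fit(X_train, y_train)
print("pre-pandemic MAE:", np.abs(model.predict(X_train) - y_train).mean())

# March 2020: panic buying multiplies demand tenfold. The model has only
# ever seen values near 100 and extrapolates badly in the new regime.
panic = demand[-9:] * 10
X_new, y_new = panic[:-1].reshape(-1, 1), panic[1:]
print("panic-buying MAE:", np.abs(model.predict(X_new) - y_new).mean())
```

The point is not this particular model but the pattern: a system trained only on "normal" behavior has no basis for reasoning about the abnormal.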

Elsewhere, the use of AI in everything from journalism to policing has been called into question. As demands for long-overdue action on racial inequality in the US have intensified in recent weeks, companies have been challenged over their use of technology that regularly displays ethnic biases, sometimes with catastrophic results.

Microsoft (MSFT) was recently held to account after the AI algorithms it used on its MSN news website confused two mixed-race members of the girl band Little Mix. Meanwhile, many companies have suspended the sale of facial recognition technologies to law enforcement agencies after it was revealed that the systems are significantly less effective at identifying images of minority individuals, leading police to pursue potentially inaccurate leads.
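
The disparity is easiest to see in the evaluation itself. What follows is a generic fairness-audit sketch on simulated scores, not any vendor's actual pipeline: when a model is noisier on an under-represented group, a single matching threshold produces a far higher false match rate for that group, and with it more bad leads.

```python
# Minimal sketch of a per-group error audit for a face-matching model.
# Scores, groups, and the threshold are all simulated and illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000

df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n, p=[0.8, 0.2]),
    "is_true_match": rng.random(n) < 0.5,
})

# Assume the model is noisier on the under-represented group B,
# e.g. because group B is scarce in the training data.
noise = np.where(df["group"] == "B", 0.35, 0.15)
df["score"] = df["is_true_match"].astype(float) + rng.normal(0, noise)
df["predicted_match"] = df["score"] > 0.5

# False match rate per group: non-matches the system wrongly flags.
fmr = df[~df["is_true_match"]].groupby("group")["predicted_match"].mean()
print(fmr)  # group B's rate comes out far higher
```

A gap like this is invisible in a single aggregate accuracy figure, which is why researchers advocate slicing error rates by demographic group before systems like these are deployed.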

“The past month has brought many issues of racial and economic injustice into sharp relief,” says Rediet Abebe, an incoming assistant professor of computer science at the University of California, Berkeley. “AI researchers are grappling with what our role should be in dismantling systemic racism, economic oppression, and other forms of injustice and discrimination. This has been an opportunity to reflect more deeply on our research practices, on whose problems we deem to be important, whom we aim to serve, whom we center, and how we conduct our research.”


From the COVID-19 pandemic to the Black Lives Matter protests, 2020 has been a year characterized by global unpredictability and social upheaval. Technology has been a crucial tool for effecting change and keeping people safe, from test-and-trace apps to the widespread use of social media to spread the word about protests and petitions. But amidst this, machine learning has sometimes failed to meet its remit, lagging behind rapid changes in social behavior and falling short on the very thing it is supposed to do best: gauging the data fed into it and making smart choices.

The problem often lies not with the technology itself but with a lack of data used to build the algorithms, which leaves them unable to reflect the breadth of our society and the unpredictable nature of events and human behavior.

“Most of the challenges to AI that have been identified by the pandemic relate to the substantial changes in behavior of people, and therefore in the accuracy of AI models of human behavior,” says Douglas Fisher, an associate professor of computer science at Vanderbilt University. “Right now, AI and machine learning systems are stovepiped, so that although a current machine learning system can make accurate predictions about behaviors under the conditions under which it learned them, the system has no broader knowledge.”
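
One pragmatic response to that stovepiping is to monitor whether live inputs still resemble the training data at all, and to raise a flag instead of silently serving stale predictions. Below is a minimal, hypothetical sketch using a two-sample Kolmogorov-Smirnov test; the feature, the data, and the threshold are all illustrative assumptions.

```python
# Detecting input drift with a two-sample Kolmogorov-Smirnov test.
# The feature, data, and significance threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

train_basket_size = rng.normal(loc=30, scale=8, size=5_000)   # pre-pandemic
live_basket_size = rng.normal(loc=55, scale=20, size=1_000)   # panic buying

stat, p_value = ks_2samp(train_basket_size, live_basket_size)
if p_value < 0.01:
    print(f"Input drift detected (KS statistic {stat:.2f}): retrain, or "
          "fall back to human review rather than trusting the model.")
```

Drift monitoring does not give a model "broader knowledge", but it at least tells its operators when the conditions it learned under no longer hold.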

The last few months have highlighted the need for greater nuance in AI; in short, we need technology that can be more human. But in a society increasingly experimenting with AI for such crucial roles as identifying criminal suspects and managing food supply chains, how can we ensure that machine learning models are sufficiently knowledgeable?

“Most challenges related to machine learning over the past months result from change in data being fed into algorithms,” explains Kasia Borowska, Managing Director of AI consultancy Brainpool.ai. “What we see a lot of these days is companies building algorithms that just about do the job. They are not robust, not scalable, and prone to bias… this has often been due to negligence or trying to cut costs—businesses have clear objectives and these are often to do with saving money or simply automating manual processes, and often the ethical side—removing biases or being prepared for change—isn’t seen as the primary objective.”


Borowska believes that biases in AI algorithms and an inability to adapt to change and crisis stem from the same problem, and that both present an opportunity to build better technology in the future. She argues that investing in better algorithms can eliminate issues such as bias and the failure to anticipate user behavior in times of crisis.

Although companies might previously have been loath to invest time and money in building datasets that did much more than the minimum they needed to operate, she hopes that the combination of COVID-19 and an increased awareness of machine learning biases might be the push they need.

“I think that a lot of businesses that have seen their machine learning struggle will now think twice before they try and deploy a solution that isn’t robust and hasn’t been tested enough,” she says. “Hopefully the failure of some AI systems will motivate data scientists as well as corporations to invest time and resources in the background work ahead of jumping into the development of AI solutions… we will see more effort being put into ensuring that AI products are robust and bias-free.”
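
What "tested enough" might look like in practice: before deployment, stress the model on perturbed and shifted inputs, not just a random held-out split. The sketch below is a hypothetical illustration of that idea; the model, scenarios, and failure threshold are placeholders, not any real product's test suite.

```python
# Hypothetical pre-deployment stress test: compare error under shifted
# inputs against a held-out baseline and flag scenarios that blow up.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
w = np.array([3.0, -2.0, 0.5, 0.0])          # the true underlying signal
X = rng.normal(size=(2_000, 4))
y = X @ w + rng.normal(0, 0.5, size=2_000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

def mae(X_eval, y_eval):
    return np.abs(model.predict(X_eval) - y_eval).mean()

baseline = mae(X_te, y_te)
scenarios = {
    "noisy inputs": X_te + rng.normal(0, 0.5, size=X_te.shape),
    "one feature drifts": X_te + np.array([2.0, 0.0, 0.0, 0.0]),
    "crisis-scale values": X_te * 3,
}
for name, X_shift in scenarios.items():
    ratio = mae(X_shift, X_shift @ w) / baseline
    verdict = "FAIL - investigate before shipping" if ratio > 3 else "ok"
    print(f"{name}: error x{ratio:.1f} vs baseline [{verdict}]")
```

Tree-based models like the one here cannot extrapolate beyond the value ranges they were trained on, so the crisis-scale scenario fails loudly; that is exactly the kind of weakness worth discovering before a pandemic does it for you.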

The failures of AI have been undeniably problematic, but perhaps they present an opportunity to build a smarter future. After all, recent months have also shown the potential of AI, with new outbreak risk software and deep learning models that help the medical community identify promising drugs and treatments and develop prototype vaccines. These strides demonstrate the power of combining smart technology with human intervention, and show that with the right data, AI can effect massive positive change.

This year has revealed the full scope of AI, laying bare the challenges developers face alongside the potential for tremendous benefits. Building datasets that encompass the broadest scope of human experience may be difficult, but doing so will make machine learning more equitable, more useful, and far more powerful. It’s an opportunity that those in the field should be keen to seize.
