Forget about artificial intelligence, extended intelligence is the future

We should challenge the cult of Singularity. AI won't take over the world

Last year, I participated in a discussion of The Human Use of Human Beings, Norbert Wiener’s groundbreaking book on cybernetics. Out of that discussion grew what I now consider a manifesto against the growing singularity movement, which posits that artificial intelligence, or AI, will supersede and eventually displace us humans.

The notion of singularity – which includes the idea that AI will supersede humans through exponential growth, making everything we humans have done and will do insignificant – is a religion created mostly by people who have designed and successfully deployed computation to solve problems previously considered impossibly complex for machines.

They have found a perfect partner in digital computation, a seemingly knowable, controllable, machine-based system of thinking and creating that is rapidly increasing in its ability to harness and process complexity and, in the process, bestowing wealth and power on those who have mastered it.

In Silicon Valley, the combination of groupthink and the financial success of this cult of technology has created a feedback loop, lacking in self-regulation (although #techwontbuild, #metoo and #timesup are forcing some reflection).

On an S-curve or a bell curve, the beginning of the slope looks a lot like an exponential curve. To systems-dynamics people, however, a true exponential curve signals positive feedback without limits: self-reinforcing and dangerous.
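
To make that resemblance concrete, here is a minimal sketch in Python comparing the two curves; the growth rate, carrying capacity and starting value are illustrative assumptions, not figures from this essay.

import math

# Illustrative parameters (assumptions, not from the essay):
# growth rate k, carrying capacity L, starting value x0.
k, L, x0 = 0.5, 1000.0, 1.0

def exponential(t):
    # Positive feedback without limits: growth feeds on itself forever.
    return x0 * math.exp(k * t)

def logistic(t):
    # An S-curve: the same early growth, but it self-regulates toward L.
    return L / (1.0 + (L / x0 - 1.0) * math.exp(-k * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):8.1f}")

For small t the two columns are nearly identical; past the inflection point the logistic flattens toward its limit while the exponential keeps climbing – which is exactly why the early slope alone cannot tell you which curve you are on.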

In exponential curves, Singularitarians see super-intelligence and abundance. Most people outside the Singularity bubble believe that natural systems behave like S-curves, where systems respond and self-regulate. When a pandemic has run its course, for example, its spread slows and the world settles into a new equilibrium. The world may not be in the same state as before the pandemic or other runaway change, but the notion of singularity – especially as some sort of saviour or judgment day that will allow us to transcend the messy, mortal suffering of our human existence – is fundamentally flawed.

This sort of reductionist thinking isn’t new. When the psychologist BF Skinner discovered the principle of reinforcement and was able to describe it, we designed education around his theories.

Scientists who study learning now, however, know that behaviourist approaches like Skinner’s only work for a narrow range of learning – but many schools nonetheless continue to rely on drill and practice and other pillars of reinforcement. Take, as another example, the field of eugenics, which incorrectly over-simplified the role of genetics in society. This movement helped fuel the Nazi genocide by providing a reductionist scientific view that we could “fix humanity” by steering natural selection ourselves. The echoes of that horror persist today, making taboo almost any research that would link genetics with, say, intelligence.

While one of the key drivers of science is to elegantly explain the complex and increase our ability to understand, we must also remember what Albert Einstein said: “Everything should be made as simple as possible, but no simpler.” We need to embrace the unknowability – the irreducibility – of the real world that artists, biologists and those who work in the messy world of liberal arts and humanities are familiar and comfortable with.

Today, it is obvious that most of our problems – for instance, climate change, poverty, chronic disease or modern terrorism – are the result of our pursuit of the Singularity dream: exponential growth. They are extremely complex problems produced by tools used to solve past problems, such as endlessly pushing to increase productivity or to exert control over systems that have, in fact, become too complex to control.

In order to effectively respond to the significant scientific challenges of our times, I believe we must respect the many interconnected, complex, self-adaptive systems across scales and dimensions that cannot be fully known by, or separated from, the observer and designer.

In other words, we are all participants in multiple evolutionary systems with different fitness landscapes at different scales, from our microbes to our individual identities to society and our species. Individuals themselves are systems composed of systems of systems, such as the cells in our bodies that behave more like system-level designers than we do. As Kevin Slavin says in his 2016 essay Design as Participation: “You’re not stuck in traffic, you are traffic.”

Biological evolution of individual species (genetic evolution) has been driven by reproduction and survival, instilling in us goals and yearnings to procreate and grow. That system continually evolves to regulate growth, increase diversity and complexity, and enhance its own resilience, adaptability and sustainability. We could call this “participant design” – the design of systems as and by participants – an approach more akin to increasing a flourishing function, where flourishing is a measure of vigour and health rather than scale, money or power.

Machines with emergent intelligence, however, have discernibly different goals and methodologies. As we introduce such machines into complex adaptive systems such as the economy, the environment or health, I see them augmenting, not replacing, individual humans and, more importantly, augmenting such systems.

Here is where the problematic formulation of “artificial intelligence” as defined by many Singularitarians becomes evident, as it suggests forms, goals and methods that stand outside of interaction with other complex adaptive systems.

Instead of thinking about machine intelligence in terms of humans vs machines, we should consider the system that integrates humans and machines – not artificial intelligence but extended intelligence. Instead of trying to control or design or even understand systems, it is more important to design systems that participate as responsible, aware and robust elements of even more complex systems.

We must question and adapt our own purpose and sensibilities as observers and designers within systems, taking a much more humble approach: humility over control.

Joi Ito is director of MIT’s Media Lab. This is an edited excerpt of an essay originally published in MIT’s Journal of Design and Science, where it became a call for responses.

This article was originally published by WIRED UK