Could we ever create an AI as smart as the human brain?

Google's DeepMind is attempting to create artificial intelligence modeled on human brains Credit: Telegraph

In 1990 a paper by Australian roboticist Rodney Brooks, curiously titled “Elephants Don’t Play Chess”, ushered in the idea that artificial intelligence could become smarter by learning as the human brain does. Building simple connections that gradually become more complex could help AI emulate the way we think.

But whether AI can replicate the workings of the brain has been a subject of great debate, with researchers on both sides attempting to learn from each other to see if machines can one day have minds like ours. One company that has taken up the task of solving intelligence is DeepMind, the British AI firm bought by Google in 2014.

“It has been part of the company's roadmap from the beginning to include neuroscience and cognitive science in our research agenda and the reason is pretty simple: the company is seeking to solve artificial general intelligence (AGI),” says Matt Botvinick, director of neuroscience research at DeepMind.

AGI is the holy grail of the field, the point at which machines have the same intellectual capacity as humans, but experts warn that getting there will be no easy feat. 

“AGI is a really tough problem, making something that is as flexible and efficient across a wide range of domains as a mammalian brain is a tough challenge,” says Prof Caswell Barry, chair of UCL’s neuroscience-AI committee.

To create an AI system that is as versatile, quick-witted and adept at learning as the human brain, DeepMind has been looking for clues that might uncover more detail on how the brain learns. It may have just found some.

In a new paper published in Nature on Wednesday, DeepMind unveiled how an area of AI, known as reinforcement learning, has shed new light on the way the brain learns. But the research also hints at just how monumental a challenge lies ahead for scientists hoping to create AI as smart as the human brain. 

At the heart of the paper is a new idea of how dopamine works. Known as the “motivation molecule” or “surprise signal”, dopamine has come to be of significant interest to researchers given its role in learning in the brain. 

When an experience turns out better than expected, more dopamine is released in the brain, giving a sense of pleasure. If it turns out worse than expected, dopamine is suppressed. The brain, therefore, uses dopamine to teach us what is satisfying and what is not. Reward someone for a behaviour, and the behaviour is reinforced.
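In reinforcement-learning terms, that feedback loop is a "reward prediction error": the gap between the reward an agent expected and the reward it actually received, used to nudge its expectations up or down. The short Python sketch below is purely illustrative, with made-up numbers and a hypothetical update_value function, rather than anything from DeepMind's paper:

```python
# Illustrative sketch of a reward prediction error update; not code from the
# paper. The function name, learning rate and outcomes are invented.

def update_value(expected_reward, actual_reward, learning_rate=0.1):
    """Nudge a single reward estimate by the gap between expectation and outcome."""
    prediction_error = actual_reward - expected_reward  # positive = better than expected
    return expected_reward + learning_rate * prediction_error

estimate = 0.0
for outcome in [1.0, 1.0, 0.0, 1.0]:  # a run of mostly rewarding experiences
    estimate = update_value(estimate, outcome)

print(round(estimate, 3))  # the estimate creeps towards the average outcome
```

Run for long enough, a learner like this settles on the average reward, which is exactly the assumption DeepMind's new findings call into question.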

For years, scientists assumed dopamine acts as a sort of “global signal” in the brain, with dopamine neurons falling in line with one another to emit the same siren. But according to DeepMind’s findings, says scientist Will Dabney, cells releasing dopamine act “more like a choir, all singing different notes harmonising together”.

“For the last three decades our best models of reinforcement learning in AI and neuroscience have focused almost entirely on learning to predict the average future reward,” he says. 

“But this doesn’t reflect real life – when playing the lottery, for example, people expect to either win big, or win nothing – no one is thinking about getting the average outcome.”
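A distributional version of the same idea, which is roughly what the paper maps onto dopamine neurons, keeps a population of predictors rather than one average. The sketch below is an illustrative approximation under assumed parameters (the taus, learning rate and lottery payout are invented), not the authors' implementation:

```python
# Illustrative sketch, not the paper's code: a small population of predictors,
# each with its own balance of "optimistic" and "pessimistic" learning rates,
# together traces out the shape of the reward distribution instead of its mean.

import random

def train_predictors(rewards, taus, lr=0.05, steps=20000):
    """Each tau in (0, 1) sets how strongly a predictor reacts to good vs bad surprises."""
    values = [0.0 for _ in taus]
    for _ in range(steps):
        reward = random.choice(rewards)
        for i, tau in enumerate(taus):
            error = reward - values[i]
            # Optimists (high tau) weight pleasant surprises more; pessimists the reverse.
            rate = lr * (tau if error > 0 else (1 - tau))
            values[i] += rate * error
    return values

# A lottery-like reward: usually nothing, occasionally a big win.
lottery = [0.0] * 9 + [10.0]
print(train_predictors(lottery, taus=[0.1, 0.5, 0.9]))
# The pessimistic predictor hugs the common zero outcome, the middle one lands
# near the plain average, and the optimistic one is pulled towards the jackpot.
```

No single predictor reports the bland average on its own; read together, though, the ensemble describes the whole spread of possible outcomes, much like the “choir” Dabney describes.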

The work has significance in some key areas. For one, it could lift the lid on what’s happening neurologically with conditions like addiction and depression.

If some neurons are reducing dopamine, or, as Dabney puts it, thinking in “pessimistic terms”, there may be a situation in which they “take the reins”, shifting the brain’s entire outlook to a pessimistic one. “Seems like a pretty good characterisation of what depression involves,” he says.

The finding is also a rare example of AI shedding light on the way the brain works, validating the work being done by researchers to get AI to work just as the brain does. And with researchers expecting this to be a more common occurrence over the next decade, there could be an acceleration in the understanding of the way the brain works, and the subsequent advancement of AI. 

“The brain is also using this kind of technique… it tells us that this is a computational technique that scales in real world situations,” says Dabney.

In isolation, the findings are interesting, and could act as a springboard for future research. DeepMind researchers confirmed that scientists they had spoken to would harness these findings for follow-up experiments. But ultimately, this is an incremental step in the wider race towards AGI.

The research involved a collaboration with Harvard University scientists who measured dopamine in the brains of mice, rather than humans. Many mouse studies fail to translate to humans, calling into question how much carryover this research will have.

More broadly, the “virtuous circle” that brings neuroscience and AI together could also prove an Achilles heel in the push towards AGI. Much about the brain remains unknown, and DeepMind’s race to a supremely intelligent machine is bound in part to how far our understanding of the brain has come. Though the study advances the understanding of one mode of learning in humans, there are many more still to dig into.

DeepMind is putting its AI through a whole host of trials to see if it can learn to outmanoeuvre opponents in Atari games, Go and StarCraft. Some of its AI is also being used in the self-driving cars of Waymo, a division of DeepMind's parent company Alphabet.

Sure, AI has come some way in recognising speech, images and more, but amalgamating these abilities into a single system will be tough work. That is what makes the brain so unique: in its mix of grey and white matter, it has a kind of agility, connectivity and responsiveness that is tough to replicate.

DeepMind and its AI seem to be on the right track, but as its researchers admit, “we know that animals and human brains in general can do some things that AI still can’t do at all”.
