Melanie Mitchell: ‘The big leap in artificial intelligence will come when it is inserted into robots that experience the world like a child’

In her latest book, the American researcher analyzes the real capabilities of this technology, which is incapable of human reasoning: ‘There are a lot of things about knowledge that aren’t encoded in language’

Professor Melanie Mitchell is an expert in analogical reasoning and complex systems. Photo: Kate Joyce
Manuel G. Pascual

Are we overstating the potential of artificial intelligence (AI)? How intelligent is it? Will it ever reach the level of human intelligence? These are some of the questions that Melanie Mitchell, 55, asks in her book Artificial Intelligence: A Guide for Thinking Humans. Her answer is clear: we are very far from creating a superintelligence, no matter how much some companies may say otherwise. One of the fundamental reasons is that machines do not reason the way we do. They can perform many tasks better than any human, yet they understand the world worse than a one-year-old baby.

Mitchell provides crucial context for gauging the AI phenomenon, a technology that has dominated public discussion since tools like ChatGPT appeared two years ago. Politicians, business people and academics have recently warned about the dangers of these systems, which have dazzled the world by generating elaborate texts and hyperrealistic images and videos.

In her book, Mitchell — Davis Professor of Complexity at the Santa Fe Institute and professor at Portland State University — describes how the most advanced AI systems work and contrasts them with human reasoning. Her conclusion: key aspects such as intuition or awareness of one’s surroundings are, for now, beyond the reach of any machine. Mitchell spoke to EL PAÍS by video call from her home in Santa Fe, New Mexico.

Question. What is AI capable of today?

Answer. There was really a big jump in capabilities a couple of years ago with the advent of generative AI, which includes things like ChatGPT and Dall-E. But these systems, while they can do many things, do not have the same kind of understanding of the world as we do. They lack reliability, they have certain kinds of limitations that are often hard to predict. So I think that while these systems can be very useful, and I use them all the time, we have to be careful about how much we trust them, especially if there’s no human in the loop.

Q. Why?

A. They can make harmful mistakes. A clear example is self-driving cars. One of the reasons they are not with us yet is that they make mistakes that a human rarely would, such as failing to identify a pedestrian or an obstacle. Another example is automatic facial recognition systems. The machines are extremely good at recognizing faces in images, but they are worse at identifying women and people with darker skin. With ChatGPT, we’ve seen countless cases where it simply makes things up.

Professor Mitchell uses AI tools daily, but recognizes their limitations and always monitors her results. Photo: Kate Joyce

Q. Does the boom in generative AI help or harm the development of the discipline?

A. In a way, this hype raises people’s expectations, and that then causes disappointment. This is something that has happened throughout the history of artificial intelligence. In the 1950s and 1960s, people were claiming that we’d have AI machines with human-level intelligence within a few years. That didn’t happen. The so-called AI winter arrived: funding for research dried up and companies went out of business. We are now in a period of great expectation. Is this really going to be the time the optimists are right? Or is there going to be another big disappointment? It’s hard to predict.

Q. Just three years ago, the future was going to be the metaverse. Today, no one is talking about it anymore. Do you think something similar could happen with AI?

A. It happens all the time with great technological innovations: there is a kind of big hype bubble, then expectations are not met and people are disappointed, and finally the technology comes out ahead. The development turns out to be useful, but not as brilliant as people expected. That’s likely what’s going to happen with AI.

Q. You argue that AI systems lack semantic understanding or common sense and therefore cannot be truly intelligent. Do you think that will change at some point?

A. It’s possible. There is no reason why we couldn’t have such a machine. The question is, how do we get there? ChatGPT has been trained on all available digital books and texts, as well as all videos and images on the internet. But there are a lot of things about common sense and knowledge that aren’t encoded in language, that just come through experience. It may be that to get a machine to really have a human-like understanding, it will have to actually experience the world in the way we do. This is the subject of a big debate in the world of AI. I suspect the big leap will come when a machine is not just passively trained on language, but also actively experiences the world like a child does.

Q. When they are in robot form.

A. Yes. An AI inserted into a robot could have the same sort of education or development as a child. It is something that Alan Turing, one of the fathers of computing, speculated about in the 1950s. That idea makes more sense now.

Q. In the book, you describe how AI works and how little this process has to do with our own way of reasoning. Does the process matter if it fulfills its function?

A. It depends on what you want to use the system for. My car’s GPS can find a route to and from wherever I want to go. It doesn’t understand the concept of a road or of traffic, but it does a fantastic job. The question is, if we really want systems to interact more generally with our human world, to what extent will they need to understand it? There was a case where a self-driving car slammed on the brakes at a certain moment, and the driver didn’t know why. It turned out that there was a billboard with an ad that had a stop sign on it. Can you avoid mistakes like that? Only when you understand the world as we do.

Q. How far do you think AI can go?

A. I don’t think there’s any reason why we can’t have machines with human-level intelligence. But it’s going to be difficult to get there; I don’t think we’re that close right now. Back in the 1970s, people thought that if a machine could play chess at a grandmaster level, that would require human-level intelligence. It turned out that it didn’t. Then it was said that translating texts or maintaining conversations would require it. It didn’t either. The whole history of AI has shown that our intuitions about our own intelligence are often wrong, that it’s actually a lot more complex than we thought. And I think that will continue to be the case. We’re going to learn a lot more about what it really means to be intelligent.

Saying that AI systems could get out of control and destroy humanity is, at the very least, a highly improbable and speculative claim.

Q. Then it will have been worth it.

A. One of the goals of AI is to shed light on what we mean by intelligence. And, when we try to implement it in machines, we often realize that intelligence includes a lot of things we never thought of.

Q. Some AI pioneers, such as Geoffrey Hinton, believe that the technology may become difficult to control. What do you think?

A. There are many kinds of dangers with AI. It can be used to produce disinformation and deepfakes. There are algorithmic biases, like the one I mentioned in the case of facial recognition. Hinton and others go further and say these systems could actually get out of control and destroy humanity. That claim is, to say the least, highly unlikely and speculative. If we ever develop a superintelligent system, I don’t believe it would disregard our values, such as the fact that killing all humans is wrong. Putting all the focus on this dramatic idea of existential threats to humanity only draws attention away from things that are really important right now.

Q. Do you think that, as a society, we are adequately addressing those threats we face today?

A. Yes, although it’s hard for regulation and legislation to keep up with technology. The EU has taken a first step with the European AI Act. In the U.S., we are seeing a lot of lawsuits for copyright infringement. All of these systems are trained on huge amounts of text and images. If the companies have not paid for that material, is this copyright infringement? The law is unclear because the technology didn’t exist when it was enacted. We’ll see how this is resolved.

Q. What is the most impressive AI application you have seen lately?

A. What excites me most is the application of these systems to scientific problems. DeepMind, for instance, is working on using AI to predict the structure of proteins. It is also being used to develop new bioengineering techniques and medicines. We are in a sort of new era of science, perhaps as important as the one that began when computers were invented.

Q. You say in your book that those who calibrate deep learning systems, the most advanced AI technique, seem more like alchemists than scientists, because they adjust parameters in the machines without knowing exactly what they are doing.

A. Shortly after I wrote the book, people began to talk about prompt engineers [people who craft the instructions given to generative AI tools]. Their job is to try to make the system perform as well as possible. It turns out that there are people making a lot of money doing that work. But it’s pure alchemy; there is no science behind it. It’s just about trying things. Some things work and some don’t, and we have no idea why.

Q. It is ironic that the people who are trying to optimize one of the most sophisticated technologies in the history of humanity are doing so blindly.

A. These systems are in some sense black boxes. They are enormously complex software systems that have not been programmed explicitly to do things; rather, they have been trained, they have learned from data, and nobody can figure out why they work the way they do. Neuroscientists also don’t understand how the brain works, and they do experiments to try to make sense of it. That’s what’s happening now with generative AI.
