
Do we have moral obligations to a machine that achieves consciousness?

Review by Aziz Huq

Aziz Huq teaches law at the University of Chicago and is presently a visiting professor at Stanford.

Machine learning is a kind of artificial intelligence that crunches gigabytes or petabytes of data to isolate relationships no human could discern. It helps scientists working in high-energy physics and population-level genetics pick apart huge data sets. Closer to home, you benefit from machine learning when Amazon recommends a new book, a bank flags a suspicious transaction on your account or your phone translates from another language. Machine learning tools now beat humans at chess, Go and even cooperative computer games like Quake III and Dota 2.

What could possibly go wrong?

Plenty, worries Susan Schneider, a philosopher of science and consciousness who has held positions at the Library of Congress, NASA and the University of Connecticut. Schneider wants us to grapple now with artificial intelligence’s evolution beyond current uses of machine learning. Her new book, “Artificial You: AI and the Future of Your Mind,” catalogues a bushel of tough questions that arise when machines become not only smarter than us (this has already happened) but also conscious.

Schneider envisages three main pathways to machine consciousness. First, it might be engineered, say by mapping neural activity and replicating it in silicon. Second, the boundaries of mind and machine might become increasingly porous as machine components link with and even replace pieces of the brain. Finally, she posits that we might encounter extraterrestrial machine intelligence. Since contact has eluded us so far, and it’s hard to see why we should expect visitors right now, this last possibility is hard to find interesting.

Both of the other pathways, however, are already being explored. Microsoft just made a billion-dollar investment in OpenAI’s effort to construct a general artificial intelligence. Elon Musk’s Neuralink company, meanwhile, announced the invention of micron-wide electrode “threads” to enable high-volume information transfers between machines and minds.

Even if you think machine brains a remote prospect, Schneider contends that it is still worth figuring out now whether machines can become self-aware and, if so, how to test for consciousness. This is because consciousness in her view is a watershed that demarcates “special legal and ethical obligations” for its makers and users. Ignoring those obligations, Schneider warns, may have catastrophic effects later.

Her idea of catastrophe, though, isn’t a lurid and bloody fantasy of “Westworld” or Skynet. Rather, it is that we will fail to recognize consciousness in a novel and alien machine form, and hence fail to give it due moral regard. Or, she worries, we will inadvertently lose our own distinctive consciousness by chipping away piecemeal at the brain-machine barrier. She urges a precautionary approach that avoids technologies that flirt with these risks.

Schneider, though, helps herself to a critical assumption — that consciousness is pivotal to our moral lives. In ordinary practice, consciousness seems neither necessary nor sufficient for moral concern. Most obviously, livestock are conscious. Yet they are raised and slaughtered without (much) compunction. For many, qualities of the natural world threatened by pollution, exploitation or climate change are valued objects of moral concern. Yet they plainly have no self-awareness. Our moral system thus treats consciousness as relevant to — but not a defining characteristic of — ethical concern.

Even if you think consciousness is self-evidently important, it remains too mysterious to be easily used as a marker of moral significance. To be sure, we have ruled out René Descartes’ notion that the pineal gland linked body and mind. But that doesn’t mean we understand how consciousness arises, what it comprises or how it relates to physical phenomena.

More on point is the German philosopher Gottfried Leibniz, who in 1714 invited readers to imagine walking around a brain as they would walk around a mill. Nowhere, he suggested, would they see any conscious thoughts. Leibniz’s pessimism about our ability to pinpoint consciousness has proved prescient.

Schneider recognizes that since we lack a firm grasp on the how and what of human consciousness, we need help even recognizing nonhuman consciousness.

This point becomes clearer if we think about octopuses. As naturalist Sy Montgomery and philosopher of science Peter Godfrey-Smith have eloquently explained, these cephalopods can navigate mazes, solve puzzles, play pranks and engineer escapes from aquarium tanks. Yet we have no way of knowing what it feels like to have a beak, a baggy, boneless body and skin that can taste food. We ascribe consciousness to the octopus because it does things that in our experience require self-awareness. When silicon replaces carbon, this kind of recognition by analogy gets much harder.

Schneider has spent an important part of her career designing practical tests for machine consciousness. Her most important idea builds on a famous 1950 suggestion by the late British mathematician Alan Turing: She suggests we look for consciousness by asking if a machine grasps and uses concepts linked to internal experiences, which we associate with consciousness.

This test, as Schneider concedes, is radically under-inclusive. Machine consciousness may be so alien and unplumbable that our concepts of interiority are irrelevant. Or the machine might just fake us out.

Despite this shortfall, Schneider is a sure-footed and witty guide to slippery ethical terrain. Her exposition of the consciousness problem is laced with helpful examples. It pries clarity from the essential opacity of its central concepts, most importantly consciousness itself. And it is refreshingly candid about when her intuitions (as she says) “crap out.”

It would, however, have helped her argument to acknowledge that for many, the catastrophic harms of artificial intelligence aren’t hypothetical. Many today ask of machine learning: Can it be trusted to handle autonomous vehicles? Will it destroy our jobs? Will it discriminate against women or minorities? Or will it supercharge a surveillance state of the sort China is building to keep political dissidents on a tight leash?

The ethical crisis of artificial intelligence, in short, is already with us.

Artificial You

AI and the Future of Your Mind

By Susan Schneider

Princeton. 180 pp. $24.95