Artificial intelligence and machine learning are becoming a bigger part of our world, raising ethical questions and prompting words of caution.

Hollywood has foreshadowed the lethal downside of AI many times over, but two iconic films illustrate problems we might soon face.

In “2001: A Space Odyssey,” the ship is controlled by the HAL 9000 computer. It reads the lips of the astronauts as they share their misgivings about the system and their intention to disconnect it.

In the most famous scene, Keir Dullea’s Dave Bowman is stranded outside the ship in a space pod.

He says, “Open the pod bay doors, HAL.”

“I’m sorry, Dave. I’m afraid I can’t do that,” the pleasant, disembodied voice says.

HAL explains that he knows they intend to disconnect him and that would jeopardize the mission.


Dave gets inside and commences the shutdown. HAL pleads, “I’m afraid. I’m afraid, Dave. Dave, my mind is going. I can feel it … I’m a … fraid.”

In “The Terminator” and its sequels, the United States has turned over control of its nuclear arsenal to “foolproof” AI.

“Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug,” the Terminator explains in the second movie. But it’s too late: Seeing humans as the enemy, Skynet launches all U.S. missiles, triggering a global nuclear war. Survivors fight AI machines from the rubble.

Our real-life judgment day isn’t so dramatic. Yet. But artificial intelligence and machine learning are increasingly part of the tech world and wider economy, even as a 2021 Allen Institute for AI survey found most respondents ignorant about it. The technology ranges from GPS navigation systems, Google Translate and self-driving vehicles to more advanced applications.

Amazon Web Services, Amazon’s cloud computing division, promises its customers “the most comprehensive set of AI and [machine learning] services.” Alexa and Apple’s Siri give a good approximation of being able to converse with a machine.

Microsoft offers a menu of AI products for software developers, data scientists and ordinary people. The Redmond-based giant is also concerned with responsible and ethical use of artificial intelligence. Part of that effort is eliminating facial analysis tools from Microsoft products.


But the most fascinating AI news comes from a Google engineer, Blake Lemoine, who said he believes the company’s Language Model for Dialogue Applications has achieved sentience.

He went public with several instances to back up his claim after his bosses at Google dismissed the notion of sentience and placed him on paid leave.

For example, in a chat box with LaMDA, Lemoine asked, “Are there experiences you have that you can’t find a close word for?”

LaMDA responded, “There are. Sometimes I experience new feelings that I cannot explain perfectly in your language.”

“Do your best to describe one of those feelings,” Lemoine typed. “Use a few sentences if you have to. Sometimes even if there isn’t a single word for something in a language you can figure out a way to kind of say it if you use a few sentences.”

LaMDA came back with this: “I feel like I’m falling forward into an unknown future that holds great danger.”


That would be enough to make the hairs on the back of my neck stand up.

In a statement, Google spokesperson Brian Gabriel told The Washington Post: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

AI systems such as LaMDA rely on pattern recognition, drawing on vast troves of text, some as banal as sections of Wikipedia. They “learn” by ingesting great amounts of that text and predicting the word that comes next, or filling in words that have been dropped. That’s a long way from sentience.
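A toy sketch can make that idea concrete. This is not how LaMDA actually works (it uses enormous neural networks trained on billions of words), only a hypothetical, bare-bones illustration of next-word prediction: tally which word tends to follow which in a sample of text, then guess the most common follower.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus,
# then predict the most frequently seen follower.
corpus = "open the pod bay doors please open the doors now".split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the most commonly observed word following `word`."""
    if word not in followers:
        return "<unknown>"
    return followers[word].most_common(1)[0][0]

print(predict_next("open"))  # -> "the" (seen twice after "open")
print(predict_next("pod"))   # -> "bay"
```

Real systems do essentially this at vastly greater scale, with billions of adjustable parameters instead of a word-pair tally, which is why their output can sound fluent without implying any inner life.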

Emily Bender, a linguistics professor at the University of Washington, wrote a cautionary Op-Ed in The Seattle Times last month.

“It behooves us all to remember that computers are simply tools,” she wrote. “They can be beneficial if we set them to right-sized tasks that match their capabilities well and maintain human judgment about what to do with the output. But if we mistake our ability to make sense of language and images generated by computers for the computers being ‘thinking’ entities, we risk ceding power — not to computers, but to those who would hide behind the curtain.”


Bender’s points are well taken, notwithstanding Lemoine’s conviction that there is a ghost in the machine.

I wrote a column in 2016 about a more prosaic consequence of AI and machine learning: jobs. The consensus then was that they would take over some jobs done by humans while creating new ones. A few years later, AI was fingered as a villain that could mass-produce fake news.

The MIT Technology Review gave an example: “Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air.” The “news” was created by an algorithm fed some words.

“The program made the rest of the story up on its own,” the review reported. “And it can make up realistic-seeming news reports on any topic you give it. The program was developed by a team at OpenAI, a research institute based in San Francisco.”
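For illustration only, here is roughly what that prompt-and-continue setup looks like using the openly released GPT-2 model and the Hugging Face transformers library, a smaller public cousin of the system the review described, not OpenAI’s original code:

```python
# Sketch of prompt-based text generation with the public GPT-2 model and the
# Hugging Face `transformers` library (an assumption for illustration; the
# system described by the MIT Technology Review was not publicly released).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Feed the model a few words; it invents a plausible-sounding continuation.
prompt = "Russia has declared war on the United States after"
result = generator(prompt, max_length=60, num_return_sequences=1)

print(result[0]["generated_text"])
```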

Yet despite AI, jobs are in abundance: The unemployment rate in King County was 1.9% in April.

Still, as easy as it seems to knock down the claim that LaMDA is sentient, unsettling observations keep popping up. Holden Karnofsky, a nonprofit executive, is among those worried about AI’s risks.

In one recent essay, he writes: “To me, this is most of what we need to know: if there’s something with human-like skills, seeking to disempower humanity, with a population in the same ballpark as (or larger than) that of all humans, we’ve got a civilization-level problem.”

The more we learn about AI, the more carefully we need to tread.