Issue 1
Chapter: What does it mean to get AI right?

Much of the current conversation around the rise of artificial intelligence falls into one of two camps: uncritical optimism or dystopian fear. The truth tends to land somewhere in the middle—and the truth is much more interesting. These stories are meant to help you explore, understand, and get even more curious about this technology, and to remind you that as long as we’re willing to confront the complexities, there will always be something new to discover.

Feature

The New Rules of the Road

As autonomous systems take the wheel, they’re raising important questions about how to build trust inside and outside of the car.

By Daniel Oberhaus • Illustrations by Leonie Bos

Over the past five years, fully autonomous vehicles have notched tens of millions of miles on public roads. As they’ve navigated the streets, miles per disengagement, a metric researchers use to track how far autonomous vehicles travel without the need for human intervention, has been climbing steadily. And yet, public trust in autonomous vehicles continues to fall. A survey published by AAA found that the share of respondents who said they were afraid of autonomous vehicles jumped to 68 percent in 2023, 13 points higher than the year before. As automated vehicles become safer, the public seems to trust them less.
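To make the metric concrete, here is a minimal sketch of how miles per disengagement is computed. The function name and the example figures are hypothetical, not drawn from any company’s or regulator’s actual reports.

```python
# Illustrative sketch of the "miles per disengagement" metric described above.
# The example numbers below are hypothetical.

def miles_per_disengagement(total_autonomous_miles: float,
                            disengagement_count: int) -> float:
    """Average autonomous miles driven between human takeovers."""
    if disengagement_count == 0:
        return float("inf")  # no interventions recorded in this period
    return total_autonomous_miles / disengagement_count

# Hypothetical example: 2,500,000 autonomous miles with 85 disengagements
print(miles_per_disengagement(2_500_000, 85))  # ~29,412 miles per disengagement
```

A rising value means the vehicle is going longer, on average, before a human has to step in.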

In an age where people and machines are increasingly sharing the world’s roads, it’s critical that engineers, policymakers, and the general public work together to forge a new social contract—one that will ensure safety without stymieing technological progress. And that includes building autonomous vehicles that empower drivers and pedestrians to accurately calibrate their trust in the underlying technology.

It’s a big ask, but a challenge worth taking seriously given the benefits autonomous vehicles have to offer. They hold the promise of reduced congestion, better fuel economy, fewer parking headaches, and greater accessibility for those unable to drive. But perhaps most important, they have the ability to tremendously improve safety on the road. The vast majority of car crashes in the United States are due to human error and are often the outcome of driving while tired or distracted. Autonomous vehicles, by contrast, never nod off or check their phones, and can often see and react to roadway hazards that may escape the notice of a human driver.

The companies building and operating these autonomous vehicles face the challenge of overcoming a deep skepticism from the millions of drivers they share the road with. This doesn’t mean building systems that never make mistakes, which is an unattainable goal for any automated technology—or, for that matter, any human. Instead, it means building autonomous vehicles that riders, pedestrians, and other drivers can trust.

*

For engineers, trust in this context is difficult to define. But having a precise definition—as well as ways to objectively measure it—is key to building autonomous vehicles that passengers feel safe in. The elements of trust are something that Xi Jessie Yang, director of the Interaction and Collaboration Research Lab at the University of Michigan, thinks about a lot. At her lab, she and her collaborators spend their days studying how drivers and pedestrians interact with autonomous vehicle simulations. To do this, they use frameworks that help them identify which areas of trust are lacking and how they might be improved. The key, she says, is starting with a precise understanding of what, exactly, they’re looking for.

“We define trust as the attitude that autonomous agents will help achieve an individual’s goals in situations characterized by uncertainty and ambiguity,” Yang says. “This uncertainty and ambiguity is a huge part of trust. If there is no uncertainty or ambiguity, you just trust it 100 percent.”

The way a user comes to (dis)trust an autonomous vehicle is known in the research community as “trust calibration.” If users feel—rightly or wrongly—that the technology is unreliable, unpredictable, or high-risk, they reduce their trust in that system accordingly, and vice versa: If they feel it’s reliable, they increase their trust in it. Trust, in other words, is a dynamic variable that can change over time as a user gains more experience interacting with an autonomous system. The goal of Yang and her colleagues in the human-factors research community is to “influence the public to have well-calibrated trust.” This means working to understand the ways that engineering and design decisions, as well as human psychology, converge in the back seat of a fully autonomous vehicle.
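One way to picture trust as a dynamic variable is with a toy update rule: a trust score that drifts toward 1.0 when the system behaves as expected and toward 0.0 when it surprises the user. The sketch below is purely illustrative; the update rule and learning rate are assumptions, not the framework Yang’s lab actually uses.

```python
# Toy model of trust calibration, for illustration only: trust rises after
# interactions where the system meets expectations and falls after surprises.

def update_trust(trust: float, met_expectations: bool,
                 learning_rate: float = 0.1) -> float:
    """Nudge the trust score toward the latest observed outcome."""
    outcome = 1.0 if met_expectations else 0.0
    return trust + learning_rate * (outcome - trust)

trust = 0.5  # start from a neutral attitude
for met_expectations in [True, True, False, True]:  # a short ride history
    trust = update_trust(trust, met_expectations)
print(round(trust, 3))  # ends slightly above neutral after one surprise
```

In this picture, well-calibrated trust simply means the score ends up tracking how reliable the system actually is.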

As Leanne Hirshfield, an associate research professor at the University of Colorado, Boulder’s Institute for Cognitive Science, points out, building trustworthy autonomous systems is not the same as building systems that users should unquestioningly trust. Instead, it’s about creating transparent AI that helps a user understand how much trust they should have in an automated system.

As an example, Hirshfield imagines a driver in a car with semiautonomous features on a highway at night. In many instances, these technologies can perform better than a human driver at night because they use radar and other sensors that don’t depend on light. The driver, in this case, would be justified in trusting the self-driving features to help navigate these conditions. But if the car’s sensors aren’t performing as well as expected, the car’s computer can flag its diminished performance to the driver and encourage them to take the wheel for safety. In this case, even though the system performed worse than the driver might have expected, their trust in it is likely to increase because they know they can count on the vehicle to inform them when the automated system has reached its limits.
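In software terms, that kind of handoff can be as simple as comparing the system’s confidence in its own perception against a threshold and alerting the driver when it drops too low. The sketch below is a hypothetical illustration of the idea; the threshold value, names, and message text are assumptions, not any vehicle’s real control logic.

```python
# Hypothetical handoff check: ask the driver to take over when the system's
# confidence in its own perception falls below a threshold. Values and names
# are illustrative assumptions.

TAKEOVER_THRESHOLD = 0.7

def handoff_message(perception_confidence: float) -> str:
    """Return an alert when confidence is too low, otherwise an all-clear."""
    if perception_confidence < TAKEOVER_THRESHOLD:
        return "Reduced sensor performance detected. Please take the wheel."
    return "Automation operating within expected limits."

print(handoff_message(0.55))  # driver is prompted to take over
```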

“Right now there are things that AI is way better at and things that humans are way better at,” Hirshfield says. Humans, for example, are great at soaking up new information, integrating it into their model of the world, and using it to reason across unfamiliar situations. This is what we might call common sense, and it comes naturally to us, but autonomous vehicles struggle with it even in relatively simple situations. AI systems, however, are great at handling more mundane tasks and situations that require fast reactions. “It’s about combining the two and figuring out how to do augmented intelligence,” Hirshfield says. “Ultimately you want to calibrate trust in the system so that the human knows when to step in and when to rely on the AI.”

An autonomous car’s ability to sense the world around it—and any hazards it might contain—is just one part of the equation, however. All drivers, whether human or machine, must contend with the unpredictability of the road. Human drivers have developed a staggering variety of informal communication methods to telegraph their intentions to other drivers, beyond horns and turn signals. When we navigate the road, we make eye contact, flash our lights, wave our hands, or sometimes lift a middle finger to send messages to other drivers. These modes of communication improve our safety, but they are harder to implement in an autonomous vehicle, making it more difficult for human drivers to understand the car’s intentions. If autonomous vehicles are going to share our roads, they need ways to participate in these communication networks so that other drivers and pedestrians can form a shared understanding of the autonomous vehicle’s intentions.

This is a challenge that Boris Sofman, a senior director of engineering at Waymo, and his colleagues have focused on for years. (Waymo is a subsidiary of Alphabet Inc.) They study “ridership,” or best practices for how Waymo’s autonomous drivers share the road with human drivers and pedestrians. From this research, they have uncovered several principles that help Waymo’s robotaxis integrate themselves into the existing social contract between drivers and pedestrians on U.S. roadways. One key point, says Sofman, is for autonomous vehicles to be predictable, confident, and consistent in their actions so that humans know what to expect from the car. Sofman points to the ways that Waymo’s robotaxis share the road with cyclists, which are based on extensive research into how much distance cyclists want between themselves and a car. If cyclists know that a Waymo robotaxi will give them sufficient space, they’re less likely to be surprised by the vehicle and make an unexpected maneuver that might put themselves or others at risk. The same is true of pedestrians crossing the road, who need to be able to predict when the car is going to proceed through the crosswalk and when it is going to let them cross.

“When a pedestrian crosses the road, you do an unofficial handshake that says, ‘Okay, I’m going to go, then you’re going to go,’” Sofman says. “So we actually used a lot of the inputs from the human autonomous specialists that were supervising the Waymo cars as a key signal and point of comparison so we can effectively try to mimic these very familiar human behaviors and embed them inside the vehicles themselves.”

Operating a vehicle is a high-risk task that puts human lives on the line. The decision of whether to pass off responsibility to an AI driver could have deadly consequences—or it could save your life. So how is a user supposed to decide?

Catherine Burns is a professor of systems design engineering at the University of Waterloo’s Advanced Interface Design Lab, where she and her colleagues study automated decision support in safety-critical systems. These are systems that have a high degree of complexity and automation and operate where human lives are often at stake. Burns is adamant that “people really shouldn’t have to understand how the system works” to make a decision about when and whether to trust AI.

Instead, it should feel effortless. Burns’s research confirms what Sofman and the Waymo team have learned from building their autonomous drivers: Trust mostly comes down to whether the user knows what to expect from the system.

“Trust is tied really closely to reliability and the expectation of whether or not automation is going to surprise you,” Burns says. “Nobody wants a surprise from their vehicle, but people can actually handle quite a bit of automation unreliability if they’re aware of the possibility.”

In the summer of 2023, California’s Public Utilities Commission made history when it approved two companies—Cruise and Waymo—to commercially operate their fully autonomous vehicles around the clock in San Francisco without a human in the driver’s seat. The decision was controversial, but not particularly surprising. The city has been a proving ground for self-driving cars for nearly a decade, and Cruise and Waymo have operated their fleets in a limited commercial capacity for years. Limited fleets of AI drivers from various companies have also hit the streets in Los Angeles, Austin, Miami, Phoenix, and Las Vegas.

The rollout of full-time autonomous vehicles in San Francisco underscores both how far the technology has come since 2018 and the importance of properly integrating autonomous vehicles with human drivers and pedestrians on U.S. roads. Waymo’s user research shows that the level of trust that San Franciscans have in autonomous vehicles has been trending upward for years as locals grow more comfortable with AI drivers roaming their streets. “Once you get the service out and people have experienced it, they realize that there’s all these benefits, and they start to really like it,” Sofman says.

Of course, not everyone in San Francisco has welcomed the new autonomous vehicle fleets. Some have even taken out their frustrations on the cars, placing traffic cones on the vehicles to disable them. For Sofman, these kinds of reactions aren’t particularly surprising, even if they are unfortunate.

“It’s just such a different technology,” says Sofman, who compares distrust of autonomous vehicles to the early skepticism that greeted innovations such as Airbnb and Uber only a decade ago. But once people have tried a ride in one of Waymo’s cars, he says, even the most dubious riders quickly relax. “You see that the numbers have completely flipped around in terms of trust and comfort once people have tried it,” he says. “The average customer gets in our car, and within two minutes, they’re on their phone checking their email or texting a friend.”

One reason riders seem to find it so easy to relax during their first ride in a Waymo robotaxi is that the company has put a great deal of time, effort, and research into understanding rider trust—particularly when it comes to how users perceive the safety of the vehicle. Waymo uses several subtle tactics to get riders to that level of comfort as quickly as it does. For example, there’s a screen inside the car that visualizes a simplified representation of what the car sees in terms of pedestrians, cyclists, traffic lights, and other cars, as well as the path that it’s going to take. “When you have this very simplified but meaningful representation of the world around you, it gives you a lot of confidence,” Sofman says. “It’s consistent. It looks exactly like what I see. It gives you this cue that the car knows what it’s doing.”

Trust between humans and machines is hard won and easily lost. It’s a reality that Sofman and the other autonomous vehicle engineers are acutely aware of and take extremely seriously. Although so much of the autonomous vehicle industry is understandably focused on demonstrating safety to foster trust, human-factors researchers and UX engineers have learned that trust is a complex, multifaceted psychological phenomenon that can’t be reduced to figures and statistics. It requires thoughtful and transparent approaches to the way autonomous vehicles and the companies that build them communicate with their riders, as well as a willingness on the part of riders to understand the capabilities and limitations of the vehicle. Autonomous vehicles are a technology that can have a massive positive impact on the world if passengers can be taught to trust it—and that requires coming together as users, engineers, and policymakers to build systems that are worthy of our trust.

But trust, like our roads, is a two-way street. While engineers like Sofman and his colleagues at Waymo are hard at work building autonomous vehicle systems that foster trust, the rest of us—as drivers, pedestrians, and passengers—must also reevaluate the social contract that defines our relationships on the road, and make space for autonomous vehicle technologies that hold the potential to save tens of thousands of lives every year. It won’t be easy, but it is possible. It requires us to thoughtfully calibrate how and why we trust these vehicles, while accepting the reality that no system—human or machine—can operate perfectly without error. This doesn’t mean lowering our standards of safety on the road; it just means giving AI drivers a fair shot by recognizing their limitations as well as their promise.

“At the end of the day, even when you’ve crossed the safety bar, it’s important that autonomous vehicles are viewed as a positive to society,” Sofman says. “That means positively interacting with the citizens and traffic around you and asking: Are you being a good citizen?”

Daniel Oberhaus is a science writer and the founder of HAUS Biographics, a marketing and communications agency for deep tech organizations. He is the author of The Silicon Shrink, a forthcoming book from MIT Press about the past, present, and future of AI in psychiatry, and was previously a staff writer at Wired magazine.