
Autonomous vehicles must be safe for pedestrians before they become mainstream

Shan Bao
Author

Creating pedestrian-safe, let alone pedestrian-friendly, autonomous vehicles is still very much a work in progress. Bao says current sensor accuracy is still limited when it comes to detecting smaller objects like pedestrians or bicycles. And driverless car systems capable of navigating hundreds of complicated real-world scenarios, like jaywalkers who unpredictably break the rules, are still a long way off. Interestingly, though, solving for so-called “edge cases” can’t simply be approached as a technology problem, according to Bao. If the goal is to create AVs that are at least as safe for pedestrians as human-driven cars, we first need a thorough understanding of how drivers currently interact with pedestrians. Otherwise, the training we provide these systems about what conditions to be prepared for is destined to be incomplete, and bad things are likely to happen.

Bao is currently immersed in such an effort, with hopes that it can provide something like an “edge case library” of future safety testing scenarios for AVs. One of the most interesting aspects of that work is a deep analysis of studies on “naturalistic driving behavior,” which offers insight into how human drivers and pedestrians typically navigate situations when they’re competing for the same space. Bao is indexing hundreds of such scenarios, but even considering a few reveals the complexity of the challenge. “If you’ve ever been to New York City, you know that pedestrians don’t obey the crosswalk signals,” Bao explains. “So even if the traffic light is green, a car won’t be able to go because there are pedestrians in the way. A human driver knows to safely nudge their way out toward the intersection – to indicate their intentions to pedestrians. But an AV today would basically detect the pedestrians and refuse to move. It’d be stuck there forever.”
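The stuck-forever behavior Bao describes can be sketched as a toy decision policy. Everything here is a hypothetical illustration, not code from any real AV stack: the class, function names, and distance thresholds are invented. A naive policy halts whenever any pedestrian is detected, while a slightly less naive one creeps forward, the way a human driver nudges toward the intersection, as long as no pedestrian is in or approaching the conflict zone.

```python
from dataclasses import dataclass

@dataclass
class Pedestrian:
    distance_m: float   # distance from the vehicle's intended path, meters
    approaching: bool   # is the pedestrian heading toward that path?

def naive_policy(pedestrians):
    """Stop whenever any pedestrian is detected: the 'stuck forever' behavior."""
    return "STOP" if pedestrians else "GO"

def nudge_policy(pedestrians, creep_speed_mps=1.0):
    """Creep forward unless a pedestrian is in, or moving into, the conflict zone."""
    in_conflict = any(
        p.distance_m < 2.0 or (p.approaching and p.distance_m < 5.0)
        for p in pedestrians
    )
    if in_conflict:
        return "STOP"
    # Pedestrians nearby but not conflicting: creep slowly to signal intent.
    return f"CREEP at {creep_speed_mps} m/s" if pedestrians else "GO"

crowd = [Pedestrian(distance_m=6.0, approaching=False)]
print(naive_policy(crowd))   # the naive car stops for any detection
print(nudge_policy(crowd))   # the nudging car creeps, signaling intent
```

The difference between the two policies is exactly the negotiation Bao points to: both see the same pedestrians, but only one distinguishes detection from actual conflict.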

This kind of subtle communication and negotiation happens all the time between pedestrians and human drivers. We use eye contact and hand gestures to signal who gets the right-of-way at neighborhood intersections. If you’re driving on a residential street and notice a kid playing in a driveway, you intuitively know to slow down in case their ball unexpectedly rolls out into the street. If you see someone approaching a crosswalk with their eyes glued to their phone, you’re a little more prepared to stop. Since the deployment of test AVs in cities, researchers even have their eye on a new kind of edge case: pedestrians who, trusting that AVs will defer to them, feel empowered to break the rules even more.

It’s a huge challenge to code cars that can mimic all the little techniques we’ve mastered to safely share the road. But with help from researchers like Bao, who are helping account for more and more of them, developers can slowly begin chipping away at solutions. For example, she says researchers in Europe are now experimenting with driverless cars that use LED lights or audible commands to indicate to pedestrians that it’s safe to cross in front of them. And programmers are increasingly aware that their algorithms must account for the fact that humans come in all shapes and sizes, have different physical abilities that impact crossing speeds, and may sometimes be getting around with assistive technologies like wheelchairs.
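The point about crossing speeds can be made concrete with a small sketch. The speed figures and function below are illustrative assumptions (not engineering standards or anyone’s published values): an AV planning how long to yield at a crosswalk cannot assume one universal walking speed, because an older adult or a wheelchair user may need meaningfully more time than an average adult.

```python
# Illustrative walking speeds in meters per second; these are assumed
# round numbers for the sketch, not values from any standard.
WALKING_SPEED_MPS = {
    "adult": 1.4,
    "older_adult": 0.9,
    "wheelchair_user": 0.8,
    "child": 1.1,
}

def yield_time_s(crosswalk_width_m, pedestrian_type, margin_s=2.0):
    """Estimated time to hold for a crossing: crossing time plus a safety margin.
    Unknown pedestrian types fall back to the slowest speed, the cautious choice."""
    speed = WALKING_SPEED_MPS.get(pedestrian_type, min(WALKING_SPEED_MPS.values()))
    return crosswalk_width_m / speed + margin_s

print(round(yield_time_s(7.0, "adult"), 1))            # 7.0 seconds
print(round(yield_time_s(7.0, "wheelchair_user"), 1))  # 10.8 seconds
```

Even this toy version shows why a one-size-fits-all timing assumption would short-change exactly the pedestrians who most need the extra margin.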

Interestingly, a new phase of Bao’s work is applying this kind of anthropological lens to current AI-powered technology. Her team recently got their hands on data from the MCity shuttle project – a low-speed Level 4 autonomous shuttle that operated through 2019, moving people and interacting with pedestrians in a real-world environment on UM-Ann Arbor’s North Campus. Equipped with multiple optical video sensors, the shuttle provides hundreds of hours of video footage that Bao can use to observe how it behaved in the presence of pedestrians. On the whole, she says the shuttle, which still had a human safety monitor onboard, did a pretty great job: its systems for detecting pedestrians were about 95 percent accurate, a figure she says was no doubt aided by a low-density environment and a 25 m.p.h. max speed. But she’s already noticing some areas for improvement. For example, the shuttle sometimes appears to react to pedestrians it has no conflict with, such as a person who has already finished crossing. Even so, many folks would argue it’s far better to have an AV that’s overly cautious than the other way around.
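The “person who has already finished crossing” case comes down to separating detection from conflict prediction. The sketch below is a hypothetical constant-velocity filter, invented for illustration (the coordinate frame, thresholds, and function names are all assumptions): a detected pedestrian only counts as a conflict if they are in, or predicted to enter, the vehicle’s lane corridor ahead of it.

```python
def predict_position(x, y, vx, vy, horizon_s=3.0):
    """Constant-velocity extrapolation of a pedestrian's position.
    Coordinates are vehicle-relative: x is meters ahead, y is lateral offset."""
    return x + vx * horizon_s, y + vy * horizon_s

def is_conflict(ped, lane_half_width=2.0, horizon_s=3.0):
    """True only if the pedestrian is in, or predicted to enter, the lane
    corridor (|y| <= lane_half_width) somewhere ahead of the vehicle (x > 0)."""
    x, y, vx, vy = ped
    fx, fy = predict_position(x, y, vx, vy, horizon_s)
    now_in_lane = abs(y) <= lane_half_width and x > 0
    soon_in_lane = abs(fy) <= lane_half_width and fx > 0
    return now_in_lane or soon_in_lane

# A person who has finished crossing: 4 m off to the side and still walking
# away from the lane. Detected, yes; a reason to brake, no.
finished = (10.0, 4.0, 0.0, 1.0)
# A person stepping off the curb toward the lane: a genuine conflict.
stepping_off = (10.0, 3.0, 0.0, -1.5)

print(is_conflict(finished))      # False
print(is_conflict(stepping_off))  # True
```

Real perception stacks use far richer trajectory models, but the logic mirrors the improvement Bao is pointing at: reacting to predicted paths rather than to every detection.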

In fact, Bao thinks that’s a safety ethos that is likely to shape the AV landscape for years to come. “With currently available features, like crash warning systems, you get false positives pretty frequently,” Bao explains. “But drivers have learned to tolerate them because the expectation is if it’s the real thing, the warning can save your life. I think it will be the same thing as AVs develop. The technology will start out overly safe, overly conservative, and gradually get better and better, step by step. And I think you could say we are still taking our first steps.”

In many ways, it’s human factors, not technological ones, that are causing the long development timeline. “Machines follow rules. Humans are complicated,” Bao says. And as long as we keep up our unpredictable, rule-bending habits, the machines face an uphill climb to keep up.