Sensor Fusion Challenges In Cars

As more pieces of the autonomous vehicle puzzle come into view, the enormity of the challenge grows.


The automotive industry is zeroing in on sensor fusion as the best option for dealing with the complexity and reliability needed for increasingly autonomous vehicles, setting the stage for yet another shift in how data from multiple devices is managed and utilized inside a vehicle.

The move toward greater autonomy has proved significantly more complicated than anyone initially expected. There are demands for high reliability over long lifetimes with zero field failures. In addition, these vehicles have to be safe, secure, and fully aware of their surroundings under all weather and driving conditions. And they need to do all of this at an affordable price point.

That has put a spotlight on sensor fusion as the way forward, bringing together diverse and complementary sensing modalities.

“If we look at ADAS more closely and how that fits in the car, we see that many subsystems are used for many different functions,” said Pieter van der Wolf, principal R&D engineer at Synopsys. “Different technologies like radar, lidar and cameras are all used in a single car with, for example, long-range radar for cruise control, short-range radar to detect other traffic, as well as lidar and a camera-based subsystem for all kinds of other functions. And since there are so many subsystems in the car, it’s very important that the component cost is low.”

Fig. 1: Sensor fusion. Source: Synopsys

Each sensing modality is based on different physical principles and designed with different goals in mind. They all have a distinct purpose. Some of those purposes overlap, but they all need to function separately, as well as together.

“Radars are used for detecting, locating, and tracking objects at considerable distances,” said Joseph Notaro, vice president of worldwide automotive strategy and business development at ON Semiconductor. “Lidar uses light to measure ranges (variable distances) to generate a 3D image of the area surrounding it. And vision sensors capture photons coming through the lens to generate an ‘image’ that can help not only detect, but recognize objects, traffic signs, pedestrians. Real-time operation is essential for AV/ADAS systems, so one of the major challenges is the ‘synchronization’ of data captured by each sensor to extract accurate and relevant information. The techniques used by each sensor to capture information are different, so a deep understanding of how each modality operates is a must to effectively ‘fuse’ these diverse datasets.”
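
To make the synchronization point Notaro raises more concrete, below is a minimal sketch of aligning radar, lidar, and camera samples to a common fusion time before combining them. The data structures, the shared clock assumption, and the 50 ms skew budget are illustrative assumptions, not any vendor's implementation.

```python
from bisect import bisect_left
from dataclasses import dataclass

@dataclass
class Measurement:
    timestamp_us: int   # capture time on a shared clock (e.g., PTP-synchronized)
    payload: object     # detection list, point cloud, or image frame

def nearest(measurements, t_us):
    """Return the measurement closest to t_us (assumes a non-empty,
    time-sorted list)."""
    times = [m.timestamp_us for m in measurements]
    i = bisect_left(times, t_us)
    candidates = measurements[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda m: abs(m.timestamp_us - t_us))

def align(radar, lidar, camera, t_us, max_skew_us=50_000):
    """Pick the sample from each modality nearest to the fusion time t_us,
    rejecting any set whose spread exceeds max_skew_us."""
    picks = [nearest(stream, t_us) for stream in (radar, lidar, camera)]
    skew = max(p.timestamp_us for p in picks) - min(p.timestamp_us for p in picks)
    return picks if skew <= max_skew_us else None
```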

The auto industry has undergone significant changes in defining the electrical/electronic (E/E) architecture. The initial idea was to put a supercomputer in the middle of a car, with dumb sensors on the side. That turned out to be unrealistic for multiple reasons.

“In this model, we’d transfer all the data and process the data in one place,” said Robert Schweiger, director, automotive solutions at Cadence. “People soon realized that the sensors are not quite there, and that the interfaces within the car, the network connections, do not provide the required bandwidth for raw sensor data transmission. But the biggest challenge for sensor fusion is that everybody is doing a proprietary solution. That’s why we have not even reached Level 3 autonomy in production. Everyone is pursuing a different solution.”

Toward object fusion
Change is happening everywhere with automotive architectures. “For some time, raw sensor fusion was a big deal,” said David Fritz, senior director of autonomous and ADAS SoCs at Mentor, a Siemens Business. “How do I bring in all this raw data and process it centrally so that I can compare and contrast and do this processing? The way the industry is going, it doesn’t matter if I talk to Sony and it doesn’t matter if I talk to an OEM or a Tier 1 or a Tier 2. They’re all headed in the same direction. There have been some breakthroughs in the last year or two that have allowed a brand new approach, which makes a lot more engineering sense.”

Until recently, the idea was that everything is computationally complex and AI is required for everything, which consumes a lot of power.

“The idea was there’s a lot of training so it makes sense to do that centrally with, say, a bunch of Nvidia GPUs or NPUs or something like that,” said Fritz. “What’s happened is a shift away from those high-performance, high-powered, high-cost solutions to something that is Arm-based, something that’s very small, very low power, very cost-effective, and getting away from these generic AI inferencing engines, to something that can be auto-generated and custom. Now it costs $1.50 to put all of that power right there at the sensor.”

There are two reasons why this is extremely important. “First, you have the opportunity to train those AI algorithms to handle situations that are particular to that sensor,” he explained. “For example, one sensor may handle fog differently than another, so a central computer that can handle all possibilities makes no sense. In the same way, you must account for the fact that over time the sensors could degrade. This can be done very differently for different camera sensors, for example, and you can train your algorithms to accommodate that. There are lots of things at the sensor level you want to handle at the sensor level. Second, instead of sending these terabytes of raw data over a big, heavy network in a car, which is sensitive to vibration and other things, why don’t we just send a short little 32-byte packet that says, ‘This is a mother pushing a baby in its carriage. Here’s where they are in the three dimensions, and that’s the direction they’re headed.’ Doesn’t that make a lot more sense? The central compute then processes what we call objects. This becomes object fusion instead of raw sensor fusion.”
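
As a rough illustration of the kind of compact descriptor Fritz describes, the sketch below packs an object's class, confidence, 3D position, velocity, and timestamp into 32 bytes. The field layout and class IDs are assumptions for illustration only; the 32-byte figure comes from the quote, not from any published format.

```python
import struct

# Illustrative 32-byte object descriptor (field layout is an assumption,
# not a published format): class ID, confidence, 3D position, 3D velocity,
# and a timestamp, packed little-endian with no padding.
OBJECT_FMT = "<HH3f3fI"                     # 2 + 2 + 12 + 12 + 4 = 32 bytes
assert struct.calcsize(OBJECT_FMT) == 32

def encode_object(class_id, confidence_pct, position_m, velocity_mps, t_ms):
    return struct.pack(OBJECT_FMT, class_id, confidence_pct,
                       *position_m, *velocity_mps, t_ms)

def decode_object(packet):
    class_id, conf, px, py, pz, vx, vy, vz, t_ms = struct.unpack(OBJECT_FMT, packet)
    return {"class": class_id, "confidence": conf,
            "position": (px, py, pz), "velocity": (vx, vy, vz), "t_ms": t_ms}

# Hypothetical example: class 7 (pedestrian with stroller), 10 m ahead,
# moving left at 1 m/s, 93% confidence.
pkt = encode_object(7, 93, (10.0, -1.5, 0.0), (-1.0, 0.0, 0.0), 123456)
```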

Still, some OEMs believe all the AI algorithms need to come from them. Some sensor providers feel they need to put more processing power into the sensors. “And then, on the Ethernet side, now that we have 1 Gbps in production, it may still not be fast enough to connect high-end radar sensors,” said Schweiger. “So we need to go to 2.5 to 5, or even 10 Gbps. In parallel, there is the newly released MIPI A-PHY specification, whose first incarnation provides 16 Gbps. There are now a lot of different things going on, and people need to figure out how they can leverage those things and by when they will become available.”
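
A back-of-envelope calculation shows why 1 Gbps links run out of headroom for raw streams. The resolutions and bit depths below are assumed, illustrative values, not vendor specifications.

```python
def raw_stream_gbps(width, height, fps, bits_per_pixel):
    """Uncompressed pixel stream rate in Gbps (ignores protocol overhead)."""
    return width * height * fps * bits_per_pixel / 1e9

# Assumed, illustrative sensor configurations -- not vendor specifications.
print(raw_stream_gbps(1920, 1080, 30, 12))   # ~0.75 Gbps: already near a 1 Gbps link
print(raw_stream_gbps(3840, 2160, 30, 12))   # ~3.0 Gbps: needs multi-gig Ethernet or A-PHY-class links
```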

Another trend is the movement toward very advanced process technologies because the complexity of the SoC that is sitting in the central compute unit is enormous. “We see huge chips in terms of the silicon area. We see huge chips in terms of processing power. We see all kinds of interfaces that you can imagine on those chips. And we see customers moving down toward 7nm, 5nm,” Schweiger said.

Sensor fusion issues
One of the big problems in sensor fusion is competing software architectures. That includes everything from AUTOSAR Adaptive to open-source software like Autoware, proprietary solutions from Tesla, Waymo, and Daimler, and commercial platforms from Nvidia and Intel. And that’s just on the architecture side.

There also are competing hardware platforms from companies like Nvidia, Intel, Visteon, and Aptiv, among others. Each of these companies is building its own platform and trying to win customers, so there is no standard at present. Nvidia is seen as being in the lead because it has been in the market for quite some time, evolving its platform along with the CUDA software environment. But while it is considered a great prototyping system, it may be too expensive for production. For Daimler, which is promising an S-Class next year that will achieve Level 3 autonomy in production, the cost of the autonomous driving platform can be amortized across an expensive luxury car. The same will not be true for more modestly priced vehicles.

Add to these challenges proprietary AI chips, hardware accelerators, and interface standards, as well as high power consumption and sensor robustness, and autonomous vehicle system development starts to look even more daunting.

Data issues
Another key ingredient in radar and lidar digital signal processing chips is the interconnect, which routes data back and forth between different processing elements within chips. Here too, there are no standards currently, even though they are needed.

“You could say it’s the Wild West, but you could also say there’s tons of innovation happening,” said Kurt Shuler, vice president of marketing at Arteris IP. “That’s true whether it’s on the sensor chips or whether it’s on the ADAS brain chips. Eventually you want to be able to explain things in symbolic terms, and have an intermediate layer such that once you get this data, the data as it’s transmitted is in some kind of lingua franca that both sides can understand even though they’re from two separate companies. What I don’t know is how much processing it will take to move something from more of a raw data format into something useful. Eventually, there has to be a data format.”
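
As one illustration of what such a lingua franca could look like, the sketch below defines a small, versioned, self-describing object message that chips from different vendors could both parse. The schema, field names, and version tag are hypothetical; no such standard exists today, which is exactly Shuler's point.

```python
import json

# Hypothetical interchange schema -- illustrative only, not a standard.
SCHEMA_VERSION = "object-v0.1"

def make_message(objects):
    """Wrap detected objects in a self-describing, versioned envelope so a
    receiver from another vendor can validate before trusting the payload."""
    return json.dumps({
        "schema": SCHEMA_VERSION,
        "objects": [
            {"class": o["class"], "position_m": o["position_m"],
             "velocity_mps": o["velocity_mps"], "confidence": o["confidence"]}
            for o in objects
        ],
    })

def parse_message(raw):
    msg = json.loads(raw)
    if msg.get("schema") != SCHEMA_VERSION:
        raise ValueError(f"unsupported schema: {msg.get('schema')}")
    return msg["objects"]
```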

For now, the overriding issue with data is quantity and the fact that it’s being generated from more sources. “What we expect to happen in the future is data will be coming in faster, and with higher resolution,” said Steven Woo, fellow and distinguished inventor at Rambus. “We’re still in the early days, and there are a number of sensors with today’s resolutions. But a lot of people are talking about if they need to make better decisions, finer resolution cameras are needed, as well as lidars and things like that. And we need to be getting that data more quickly.”

Further, in order to make good decisions, there’s a window of time over which the data needs to be analyzed. “In a simple example, there might be video with a camera, just looking over what’s in front of a car,” said Woo. “It’s possible things are going behind other objects, and being occluded for a fraction of a second. So having a larger window of data helps determine those things. So the issue is really how much data we need to be keeping around to make a good decision. And in order to make a good decision, data basically needs to be in memory, and you need to be able to stream through it quickly in order to analyze it and figure out what’s going on.”
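
A minimal sketch of the kind of time window Woo describes: a buffer that retains only the most recent slice of frames so the decision logic can stream back over them. The 500 ms window and the data layout are assumptions for illustration, not a specific product's design.

```python
from collections import deque

class FrameWindow:
    """Keep only the most recent window_ms of frames in memory so decision
    logic can stream back over them (e.g., occlusion checks, tracking)."""
    def __init__(self, window_ms=500):
        self.window_ms = window_ms
        self.frames = deque()            # (timestamp_ms, frame) pairs, oldest first

    def push(self, timestamp_ms, frame):
        self.frames.append((timestamp_ms, frame))
        cutoff = timestamp_ms - self.window_ms
        while self.frames and self.frames[0][0] < cutoff:
            self.frames.popleft()        # evict frames older than the window

    def replay(self):
        """Iterate oldest-to-newest over the retained window."""
        return iter(self.frames)
```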

Functional issues
Within the context of all of this complexity, sensors need to function properly. But a sensor is only as good as its performance in ideal conditions.

“This means it is as good as it gets on a clear summer day,” said Larry Williams, distinguished engineer at Ansys. “However, even in the ideal conditions there is the challenge to combine the response of radar, lidar, camera, maybe also inertial navigation, and some sort of accelerometer in a vehicle. We also look at the degradation of those sensors in real-world environments. Let’s say a radar sees multiple vehicles around you. You’re trying to determine if it is a bicycle or a person on a bicycle. The system compares that, or includes and fuses the information from the camera, and makes a decision based on whether it makes sense. But in various scenarios, what happens? For example, we have all experienced driving home in the late afternoon, traveling westbound with the sun right in your eyes. The sun is also in the eyes of the camera. The camera is degraded by such a situation, the radar not so much. So what’s nice about that is having multiple modalities of sensing. One can enhance the other when it’s degraded. But in other situations, the degradation can be even more, especially in the rain. In the rain, the entire environment is diminished even more so. There’s raindrop accumulation on the lens of the camera, which is going to distort the image. There’s not just the precipitation from the sky, but there’s another vehicle that just drove by that blows extra raindrops onto the camera. One of the things we’ve been exploring is what happens in a real-world environment, not in the ideal environment. We can simulate the ideal environment. But as sensors degrade, how does that degrade the performance of the system? It’s hard enough to do it with ideal sensors, but then it’s even worse under real-world conditions.”
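
One simple way to express the idea that a healthy modality can compensate for a degraded one is a confidence-weighted blend, sketched below. This is a deliberately simplified illustration, not Ansys's method or a production fusion filter; the confidence values and degradation scores are assumed inputs.

```python
def fuse_estimates(estimates):
    """Weighted average of per-sensor range estimates, where each sensor's
    weight is scaled down by its current degradation (0 = clean, 1 = blind).
    A toy model for illustration, not a production filter."""
    num = den = 0.0
    for value, base_confidence, degradation in estimates:
        w = base_confidence * (1.0 - degradation)
        num += w * value
        den += w
    return num / den if den > 0 else None

# Camera badly degraded by low sun, radar barely affected:
range_m = fuse_estimates([
    (41.0, 0.9, 0.8),   # camera: nominally confident, heavily degraded
    (43.5, 0.7, 0.1),   # radar: less precise, but mostly unaffected
])
```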

This has an impact on sensor fusion.

“We have a heavy reliance on artificial intelligence to say how we train an AI engine in these vehicles to understand their environment, which we can do,” Williams noted. “But from a simulation perspective, we look at how to go about doing this training. Typically, this is presented to an AI engine, the brain inside the vehicle. It is presented with lots of different cases, and it works to identify or classify what the objects are in the scene. This is done repeatedly, which eventually trains the neural network that’s inside this machine learning approach inside the AI engine to understand those things. How are you going to possibly train it? You need to do thousands of cases in a testing environment with emulation. The CEO of Toyota said, very famously, that 8.8 billion road miles would need to be traveled before you could make these vehicles safe enough. It’s not possible to do 8.8 billion miles. Waymo and Tesla and others put together all their data, and the last report said they only had around 6 million miles documented. You’ve got to do 1,000 times that to get even close. It’s important to understand the power of simulation and analysis for this kind of engineering task and software development.”

It’s less about the miles driven and more about the scenarios that can be captured, which includes a whole class of scenarios that will never be driven.

“Let’s say I’m testing a road scene and I’ve got a fuel truck on one side, and a school bus on the other side, and we hit a patch of ice,” said Shawn Carpenter, senior product manager for high frequency at Ansys. “I’m never going to set that up to drive it. But in simulation, I can smash as many bits as I want to. I can examine life-threatening or property-damaging scenarios that I would never come close to ever getting any kind of an okay to simulate, and I could really examine an active safety system simulation in a way that I could never do on the road.”

Security issues
Security is another aspect of the increasingly autonomous vehicle, and it’s one that is getting a lot of attention because the drive train must connect to the Internet for over-the-air updates throughout its lifetime. Combining security and safety makes this all the more pressing to get right.

“Just safety by itself or security by itself leaves a big open hole,” said Mentor’s Fritz. “Once we understand reliability and safety, the biggest change in security these days is that finally people are understanding that security is not a software problem. True security is a hardware problem. We have to be able to show there is no physical path between the important information and the outside world, yet the software running inside of that does require access to that information in a secure way. Most hackers get in through stack overflows, or one mechanism or another. Those are easy to shut off. But that doesn’t prevent them from finding something that nobody considered, and if there’s physically no link between the outside world and that proprietary information, they’re never going to get to it. What is needed here has to happen in the SoC itself, which needs to guarantee isolation between the outside world and any of the passwords or keys or anything like that in an IP security section.”

This is part of the reason why everything is being consolidated onto SoCs, but that consolidation is causing problems of its own. “How can a customer that is used to building a system with 100 ECUs based all on microcontrollers make the transition to consolidated systems?” asked Cadence’s Schweiger. “An SoC provides a lot of opportunities to consolidate systems and make them smaller, more power-efficient, and higher performance, so all of these options are available. But if you have never designed an SoC, it means you must build up teams that really understand SoC design or architecture. Of course, you can work with service companies in building the chip, but somebody needs to define what is needed, and there must be a certain level of understanding. That’s why there are all kinds of different approaches. There are vehicle manufacturers hiring chip designers to do their own chips. There are also other OEMs that may be more reluctant in this domain. But at the end of the day, if you’re really serious about autonomous driving, most likely you need to go into chip design.”

They also need to understand how security can be built into those chips. “We’ve increasingly seen that people have understood the importance of security, and that security is one of those things where you can’t design your system, then try to retrofit it, and hope to have a really secure environment. And so as people now think about the architecture of their chips, it really has become a primary design requirement, where not only do you think about what your processor core has to do and how you have to feed it with data, and how you want to move that data back and forth, but also the integrated view of security.”

But even with the best efforts, there always will be threats to automotive security.

“Sometimes we say we don’t mean to simulate these real-world things that could happen because we just don’t think that people have the skill to do it,” said Alric Althoff, senior hardware security engineer at Tortuga Logic. “And if someone does have the skill, that puts them in more of a defense, DoD-type mindset. What I’ve seen from the people who are building and training models to do vehicle detection, free space detection, boundaries detection, is that they’re not talking about obvious things like losing a sensor, or losing a modality, and how the model responds. If you give the model control, and you don’t understand how the model thinks about the world, you can’t really ask those questions. Or maybe we haven’t developed the technique to interrogate the model about its decision. How do we know how it’s going to respond? Those kinds of things are really important.”

This becomes particularly complicated with sensor fusion. “In sensor fusion, with fidelity improving as we involve more modalities, there’s also the idea of sensor independence,” said Althoff. “How can we make sensors that are fused also be completely independent? If their interdependence is so locked in and you lose one, suddenly the system starts behaving really erratically in the vehicle, which is not good. We really have to be able to accommodate both fusion and independence, and decide where the critical boundary is, because in the rain and the fog, a visual sensor is going to lose acuity. The lidar might not. Does the lidar know that now it’s the only vision sensor? Does the system know? And how does it respond? Does it signal the lidar and tell it to go into ultra high fidelity mode or something like that? Perhaps something like a spare tire: it can’t run like that all the time.”
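
A sketch of the kind of monitoring Althoff is asking about: track per-modality health, detect when one drops out, and ask the survivors to compensate. The health scores, the threshold, and the "high fidelity" request are hypothetical, meant only to illustrate the question, not a real sensor API.

```python
class ModalityMonitor:
    """Track per-modality health and trigger a fallback when one drops out.
    The 'high fidelity' request and the health threshold are hypothetical --
    they illustrate the question raised above, not a real sensor interface."""
    def __init__(self, threshold=0.3):
        self.threshold = threshold
        self.health = {"camera": 1.0, "lidar": 1.0, "radar": 1.0}

    def report(self, modality, health_score):
        self.health[modality] = health_score
        return self._fallback_plan()

    def _fallback_plan(self):
        lost = [m for m, h in self.health.items() if h < self.threshold]
        alive = [m for m, h in self.health.items() if h >= self.threshold]
        if not lost:
            return None
        # e.g., camera lost in heavy rain: ask the surviving modalities to
        # compensate, and flag that the system is now running with reduced
        # redundancy (the "spare tire" condition).
        return {"degraded": lost, "request_high_fidelity": alive,
                "redundancy_lost": len(alive) < 2}
```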

That’s the physical side of it. Inside the car, the digital hardware needs to be protected and tamper-resistant. “Plugging into the CAN bus of the vehicle and just flashing the firmware is ridiculously easy, so there’s also the aspect of what if someone swaps out a sensor,” he noted. “Are the sensors authenticating themselves with the system? What are the demands of the system on the sensor to say, ‘What is your integrity right now?’ Does the camera have a broken lens? It might work just fine, but in that critical moment, you lose something really important. So there’s a sort of interplay between the physical and digital worlds, and we have to get over working in different silos. We all need to get in the same room and talk to one another.”
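
As a sketch of sensor authentication plus an integrity self-report, the example below uses a simple HMAC challenge-response between the vehicle system and a sensor. The key provisioning, message fields, and self-test string are assumptions for illustration, not an automotive standard.

```python
import hmac, hashlib, os

# Minimal challenge-response sketch for the "is this really my sensor?"
# question. Key provisioning and the integrity fields are assumptions
# for illustration, not an automotive standard.
def challenge():
    return os.urandom(16)

def sensor_response(shared_key, nonce, integrity_report):
    # The sensor signs the ECU's nonce plus its own self-test result.
    msg = nonce + integrity_report.encode()
    return hmac.new(shared_key, msg, hashlib.sha256).digest(), integrity_report

def verify(shared_key, nonce, tag, integrity_report):
    expected = hmac.new(shared_key, nonce + integrity_report.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

key = os.urandom(32)                      # stand-in for a key provisioned at manufacturing
nonce = challenge()
tag, report = sensor_response(key, nonce, "lens_ok=1;focus_ok=1")
assert verify(key, nonce, tag, report)
```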

Modular design doesn’t help matters. “In electronics design, we’ve classically designed for modularity, and we design for a separation of concerns, such that ‘this piece does not influence this other piece, except through this narrow interface,’” he said. “But we’re entering a world where holistic verification and the interrelationship between things is becoming a real focal point. We really need to move beyond our mindset of modularity only, and have a dual view where we go back and forth between the classical method and the newer techniques, maybe even augmented by artificial intelligence to help us find things that we’re blind to, and this includes security. An adversarially robust model is needed in the training process of a neural network or decision function. We need to anticipate the replacement of that, and we need to anticipate that people are going to try to overwrite those. Even the home hobbyist is going to try to override the neural networks in their vehicle with new neural networks that enable them to get higher performance out of their car.”
