Mind Tricks: How Digital Nudging By In-Car AI Will Shape Where Your Self-Driving Car Takes You

This article is more than 4 years old.

When you use a ridesharing service such as Uber or Lyft, you usually provide your desired destination when you request a ride. Once your ride arrives, you jump into the car and are whisked away, presumably heading to your designated endpoint.

You can choose to alter your destination location during the driving journey, perhaps doing so because you realized that you had inadvertently specified the wrong address and opt to mid-course correct it.

As a tourist, you might even discuss your destination with the driver, and the driver might advise you against going to that specific location. The driver might know local aspects that you are unaware of. Perhaps it’s a store that won’t open until later in the day, or maybe a different theme park would be far less crowded than the one you picked.

The driver might be offering local wisdom as a genuine means of helping you and seeking to make your journey a pleasant one.

Of course, the driver might also have other incentives to alter your mind about where to go.

In some places, the driver might be getting a commission by guiding tourists to a certain store or restaurant. Under the guise of merely trying to steer you toward something better than what you had originally indicated as your destination, there might be a kickback or other benefit for the driver to sway you.

Believe it or not, this kind of swaying is commonplace when you go online to make a purchase; it’s called digital nudging.

Suppose you use a website to figure out which toaster oven to buy. While at the website, the system displays two models, one that is pricey and looks ugly, and another that is lower priced and looks slick. You click on the lower-priced one and merrily proceed to purchase it.

It could be that you just fell victim to the classic decoy technique of digital nudging.

In the decoy technique, the system purposely puts a lousy choice next to the choice that it really wants you to pick. In your mind, you fall for the notion that there are only two choices and that one is obviously worse than the other. You might originally have assumed that you’d look at, say, ten different models to find the best one, but the system has cleverly short-circuited that idea and instead gotten you to select narrowly within a forced dichotomy of a good choice and a bad choice.
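The decoy technique described above can be sketched in a few lines of code. This is purely illustrative: the product names, prices, and "appeal" scores are hypothetical numbers invented for this sketch, not anything a real shopping site publishes.

```python
# A minimal sketch of the decoy technique. All product names and
# numbers below are hypothetical, invented purely for illustration.

def pick_decoy(target, catalog):
    """Pick a product that is worse than the target on every attribute
    (higher price, lower appeal), so the target looks like the obvious
    choice when only the two are shown side by side."""
    dominated = [p for p in catalog
                 if p is not target
                 and p["price"] > target["price"]
                 and p["appeal"] < target["appeal"]]
    if not dominated:
        return None
    # Present the most clearly dominated option as the decoy.
    return max(dominated, key=lambda p: p["price"] - p["appeal"])

catalog = [
    {"name": "Toaster A", "price": 49, "appeal": 8},  # the model the site wants to sell
    {"name": "Toaster B", "price": 89, "appeal": 3},  # pricey and ugly: a natural decoy
    {"name": "Toaster C", "price": 39, "appeal": 7},
]

target = catalog[0]
decoy = pick_decoy(target, catalog)
# Only these two are shown to the shopper, hiding the rest of the catalog.
print([target["name"], decoy["name"]])
```

The key move is that the shopper never sees the full catalog, only the curated pair, which makes the target feel like the self-evident winner.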

Let’s combine the digital nudging phenomenon with the matter of choosing your destination when using a ridesharing service.

You get into a ridesharing car and have indicated you want to go to a downtown area bar. The driver chats with you and explains that the bar you’ve selected is a dud and instead there’s a “happening” bar on the other side of town that would be a lot more exciting for you to visit. Thanking the driver profusely, you change the destination to the presumed better bar.

It could be that the driver is right and the bar you’ve been swayed toward is the better choice. Or, it could be that the driver has other incentives to get you to go to that bar, maybe the owner of the bar gives those recommending drivers free drinks, or perhaps the bar is a further driving distance away and therefore the driver will get paid more for the higher mileage driving journey.

Here’s an interesting question: With true self-driving cars, will you always be taken to whatever destination you’ve indicated, or might the AI system attempt to digitally nudge you to go to a different endpoint?

Most people assume that the AI system that’s driving the self-driving car will strictly do whatever you’ve specified and be completely obedient.

Yes, there are some automakers and tech firms that are right now focusing on having the rider indicate a destination and that’s all that the passenger needs to do. Come heck or high water, once you get into the vehicle, it’s going to do whatever it can to drive to that endpoint.

It’s often crudely done right now, such that some of those driverless car tryouts won’t let you change your desired destination once you get into the self-driving car and get underway.

In fact, I know some AI developers who say that if a person originally indicates destination X, the driverless car should go to destination X, with no variations allowed. When it is pointed out that the person might change their mind midway, these dogmatic AI developers insist that people shouldn’t be so stupid about where they want to go, and that riders will be stuck going where they originally stipulated. Tough luck otherwise.

I’ve repeatedly exhorted that there’s a lot more involved in taking a driving journey than just specifying the desired destination upfront. In the real world, riders frequently decide during a driving journey to make short stops or swing through an interim waypoint (“I’m hungry,” a passenger might say to a driver, “please use that nearby drive-thru burger place to get me a combo meal”).

There is going to be a great deal of exasperation and frustration with self-driving cars if the public is treated to such a simplistic approach to driving, one in which you can only go from point A to point B without any malleability in your journey mid-course.

If the AI developers eventually make their driving journey components savvier, not only will the AI driving system be able to adjust to your driving destination changes, it could take this pliability one step further and try to nudge you.

Do you want an AI system offering nudges to sway your mind about where you are going?

Some think it’s a wonderful idea, others are worried that it opens the proverbial Pandora’s box.

Let’s unpack the matter.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true self-driving cars.

True self-driving cars are those in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless cars are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as semi-autonomous, and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, they aren’t particularly going to alter the dynamics of choosing your destination. There is essentially no difference between using a Level 2 or Level 3 car and a conventional car when it comes to interacting with the driver, so these levels don’t merit any notable changes in your destination decisions.

Presumably, if you at times discuss destination changes with the driver, you’ll continue to do so in the future.

It is worth pointing out that despite those dolts who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, you should not be misled into believing that you can take your attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the car, regardless of how much automation might be tossed into a Level 2 or Level 3.

True Self-Driving Cars And Destinations

For Level 4 and Level 5 driverless cars, one of the most frequently asked questions about these autonomous vehicles is whether they might opt to take you to a place that you didn’t specify, akin to a Frankenstein monster gone amok.

We ought to first agree that a human driver could certainly decide to take you to a different destination than the one you thought you were going to. In that sense, yes, a self-driving car could likewise opt to take you to a destination that’s not of your choosing.

If a human driver takes you astray, you might not know that they are heading the wrong way, depending upon whether you happen to know the area or are watching your smartphone’s GPS tracking. The same could be said about a driverless car.

Just as a human driver could go askew and take you someplace that you don’t want to go, an AI driving system could do the same; either one could even opt to kidnap you. These are all possibilities, and we already live day-to-day with the chance that a human driver might act in an untoward manner.

One argument about the awry human driver is that you could presumably at least try to stop the driver from going to the wrong destination. You might attempt to discuss the matter with the driver or escalate by trying to grab the driver and stop the car.

For self-driving cars, you could interact with the AI system via its Natural Language Processing (NLP) capability, akin to interacting with an Alexa or Siri, and attempt to find out why you aren’t going where you specified. The NLP might be too simplistic to discuss such matters, or the AI system might stubbornly seem unable to grasp your distress about the destination.

As such, many of the automakers and tech firms are putting in place a remote human operator capability akin to OnStar. If you are concerned that the AI driving system isn’t doing what it is supposed to do, you could invoke the in-car OnStar-like remote connection and discuss the issue with a human operator.

In theory, the human operator might be able to take over the driving task, though I’ve repeatedly forewarned that allowing remote operation of a driverless car is a bad idea. More likely would be that the remote operator could relay new instructions to the AI on-board system, getting it to realize there is a different destination intended, and then let the AI itself drive the car to the desired destination (rather than the human operator doing so via remote controls).

Of course, one downside of the AI driving system is that you cannot grab it by the shirt collar and attempt to physically override what it is doing. Though you can obviously try to manhandle a human driver (or make that person-handle), the odds of something adverse occurring would seem quite high. While scuffling with you, the driver might lose control of the car, and the result could be that you and the driver end up in a deadly car crash.

I’m not saying that a car crash is necessarily worse than if the driver is taking you to an untoward destination, and only pointing out that physically having an altercation with a human driver amid the driving act has its own innate risks.

In a driverless car, there’s not much you could physically do to overpower the AI. Some pundits believe there should be a government-mandated “kill switch” included in all driverless cars. This is a topic I’ll be covering in a future column, since it notably has both pluses and minuses as a strategy to cope with these matters.

You could try to find where the computer processors are hidden inside the self-driving car and somehow rip them out or otherwise bash them into oblivion (though keep in mind that those on-board electronics are generally going to be well-protected, and it won’t be easy to get access to them while the car is in motion).

Destroying the AI system, though, has a similar downside to roughing up a human driver. For a truly self-driving car, once the computers are out of commission, the in-motion car can become an unguided missile or might come to a halt in an undesirable manner.

You might be wondering: Why would the AI system be trying to take you to a destination that’s not of your choosing anyway?

There are several reasons, some benign, others that are worrisome.

One benign reason for a destination that’s not where you wanted to go could be due to the self-driving car being set up as a kind of shuttle or bus-like service. Perhaps the driverless car you got into was programmed to drive around town and stop at six pre-specified places. Or, maybe it is being used in a retirement community and only goes to the main townhouse and to several on-property bus-like stops.

Perhaps you hastily got into the self-driving car and assumed that you could tell it where you wanted to go. If indeed the driverless car is on a pre-determined mission, the odds are that you are now going along for a ride.
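The fixed-mission behavior described above can be sketched as a simple shuttle loop that services only its pre-programmed stops and declines ad-hoc requests. The stop names and the two methods here are hypothetical inventions for illustration, not any vendor's actual interface.

```python
# A toy sketch of a shuttle-mode driverless car: it serves only its
# pre-specified stops and ignores ad-hoc destination requests.
# Stop names are hypothetical.

SHUTTLE_STOPS = ["Main Townhouse", "Clubhouse", "Pool", "North Gate"]

class ShuttleCar:
    def __init__(self, stops):
        self.stops = list(stops)
        self.next_stop = 0

    def request_destination(self, requested):
        """A rider cannot redirect the vehicle: off-route requests are declined."""
        if requested in self.stops:
            return f"Next stop with service to {requested}."
        return "Sorry, this vehicle runs a fixed route; you're along for the ride."

    def advance(self):
        """Drive to the next pre-specified stop on the loop."""
        stop = self.stops[self.next_stop]
        self.next_stop = (self.next_stop + 1) % len(self.stops)
        return stop

car = ShuttleCar(SHUTTLE_STOPS)
print(car.request_destination("Downtown Bar"))  # declined: not on the route
print(car.advance())                            # drives to "Main Townhouse"
```

The point of the sketch is that once the mission is pre-determined, the rider's stated destination simply isn't an input the system consults.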

There is the potential for nefarious reasons that a destination might be off-putting.

Suppose a hacker managed to electronically send a remote command to the driverless car, instructing the AI to go to a destination that differs from what you specified. This possibility showcases the double-edged sword of allowing a remote operator to tinker with the AI system. In some instances, it might be a bona fide remote operator authorized to do so, though this also opens the door to a bad actor trying to do the same.

Consider The Nudges

So far, the discussion has been about the AI driving the self-driving car to a destination that you didn’t choose.

Time to revisit the earlier points about digital nudges.

Suppose the AI opts to sway you into choosing a destination that you didn’t originally have in mind, which might be a good nudge or a sour nudge.

There is the possibility that the AI could be genuinely aiming to help you out, and perhaps the AI developers realized there are going to be occasions when riders choose an inappropriate location. For example, you specify an abandoned and boarded-up hotel, not realizing that the hotel went out of business, and thus the AI perhaps informs you that you would be wise to select a different hotel.

You then do a quick online search and find a viable hotel, relaying the address to the AI system of the driverless car, or maybe just verbally telling the AI to take you to the new destination.

Notice that you are the one specifying the destination.

Another variant could be that the AI offers an alternative hotel, after first letting you know that the originally proffered hotel is closed-up. You then agree to the new hotel, and the AI proceeds to the newly agreed destination.

Once again, you are essentially specifying the destination, in this case, based on the AI’s recommendation.

These examples then are unlike the earlier circumstance of the AI choosing a destination that you were not involved in selecting.

The nudging by the AI might be for other less altruistic reasons.

Imagine that the owner of the self-driving car that you are using on a ridesharing journey has made a backroom deal with a large chain grocery store. When you get into the driverless car and specify a different grocery store that’s near the large chain’s location, the AI might coyly let you know that the large chain store is having a half-off sale today.

Upon being told about the half-off deal, you quickly tell the AI to switch to the large chain store as the desired destination.

Maybe the half-off deal is better for you, or maybe not.

Meanwhile, the owner of the driverless car has perhaps just pocketed a little extra dough for having swayed a rider to the large chain store.
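The grocery-store scenario boils down to a small piece of decision logic. The sketch below is hypothetical in every particular (the sponsorship table, store names, and function names are invented), but it captures the essential shape: a sponsored alternative gets pitched, and the rider still makes the final call.

```python
# A sketch of a sponsored destination nudge. The sponsorship table and
# store names are hypothetical; note the rider still makes the final call.

SPONSORS = {
    # destination category -> (sponsored alternative, pitch shown to the rider)
    "grocery": ("MegaMart", "MegaMart is having a half-off sale today."),
}

def maybe_nudge(requested_destination, category):
    """Return (suggested destination, pitch). If a sponsor exists for this
    category, offer it as an alternative; otherwise keep the request."""
    if category in SPONSORS:
        alt, pitch = SPONSORS[category]
        return alt, f"{pitch} Switch from {requested_destination} to {alt}?"
    return requested_destination, None

def choose(requested, category, rider_accepts_nudge):
    """The rider's acceptance, not the AI, decides the final destination."""
    suggestion, pitch = maybe_nudge(requested, category)
    if pitch and rider_accepts_nudge:
        return suggestion  # rider was swayed; the owner may pocket a fee
    return requested       # no applicable nudge, or the rider declined

print(choose("Corner Grocery", "grocery", rider_accepts_nudge=True))   # MegaMart
print(choose("Corner Grocery", "grocery", rider_accepts_nudge=False))  # Corner Grocery
```

Notice that the rider technically consents at every step, which is precisely what makes a nudge a nudge rather than an outright override.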

Conclusion

Many assume that driverless cars will be a neutral form of transport and merely take people from point A to point B.

That’s a naive perspective.

Sure, right now, the focus is entirely on being able to get a self-driving car to properly navigate the roads and be safe in doing so. Once we get past the initial aspects of having driverless cars that work appropriately, there is no question that the driving efforts will further be monetized.

Fleet owners of self-driving cars are going to seek ways to wring more dollars out of their expensive driverless cars. There are in-car advertising possibilities, along with the driverless car acting as a roving billboard.

Another facet of making money would be to allow for the digital nudging of passengers, getting those riders to be mind-tricked into going to where there’s more money to be made by the owner.

Don’t assume that I am saying that this kind of driverless-car-induced greed is inherently bad or wrong, since it could be that the outcome is better for the rider too.

My point is that we need to be awakened to the notion that where a self-driving car will take you is not a foregone conclusion and we’ll all need to be on our toes about going along for a ride.

Keep your eyes on that AI.
