

AI Ethics Battling Stubborn Myth That AI Is Infallible, Including That Autonomous Self-Driving Cars Are Going To Be Unfailing And Error-Free


AI must always be right.

Wait a second, does that sentence mean that AI is in fact always right, or does it suggest that if we are going to make use of AI we ought to make darned sure that it is indeed right all of the time?

Let’s focus on the first interpretation.

Some people seem to fall into the mental trap that AI is axiomatically always right and altogether infallible. Those that know AI refer to this as the AI infallibility myth, and they are shocked, or at least dismayed, that those that don’t understand AI incorrectly ascribe perfection to AI systems.

You might find it hard to believe that there are people that willfully and openly contend that AI can do no wrong. It would seem nearly impossible to think this way. We’ve all had computer systems that mess us up. Sometimes a computer turns us down for a loan request when we fervently believe that we should have gotten approved. Wasn’t that an example of a computer not being infallible?

One supposes that part of the distinction comes from the phrasing of AI rather than merely referring blandly to a computer system per se. When you invoke the majesty of the vaunted term Artificial Intelligence, you are seemingly going beyond the normal everyday computing that we already know is at times error-prone. There is a kind of haughty cachet about AI that for some people carries a connotation of perfection.

Perhaps there is a mental gap between the AI of Hollywood fame that works perfectly and the realization that today’s AI is just as error-laden as any other type of computing system. But even that doesn’t seem to explain why some insist on the infallibility of AI, particularly since there are plenty of sci-fi films and TV shows that highlight AI that has gone berserk. You would seemingly be able to point to that portrayal of AI “gone bad” as a sensible way to overcome the fictionalized imagery of AI perfection.

Compare the perception of what AI appears able to attain versus the corresponding perception of humans and how human behavior plays out. You would find almost universal agreement that humankind is not infallible. Humans are time and again said to be, and repeatedly show themselves to be, fallible. No two ways about that. Of course, you might have egocentric people that claim to be infallible, though this is readily questioned and ultimately revealed as a falsehood.

Per the famous words of Thomas Jefferson: “The wise know too well their weakness to assume infallibility, and he who knows most knows best how little he knows.”

The field of AI has cleverly, or by happenstance, managed to garner an image that exceeds humanity’s assumed fallibility by magically getting AI into the infallibility camp. What an amazing accomplishment! Convincing living, breathing people that AI will always do the right thing, perfectly and unerringly, is astonishing when you think about it. Turn back the clock to before AI became a thing. If you had asked a public relations guru to make people believe that AI or a machine is steadfastly infallible, they would probably have told you, realistically, that it isn’t a feasible task. Sure, you might fool some of the people some of the time, but that wouldn’t last very long and the AI infallibility labeling would fade and inevitably die off.

You might be wondering whether it makes a difference that some people do seem to fall into the AI infallibility enchantment. So what, you might be asking?

The problem with such an AI infallibility belief is multifold, including that it is wrong, dangerous, and insidiously enduring.

As a quick indication of why it is both wrong and dangerous, consider what can happen if you are reliant upon an AI system and assume that it is infallible. Take for example the emerging use of AI-based autonomous vehicles such as self-driving cars. Some pundits are proclaiming that driverless cars will never get into car crashes. Never, ever. They say that AI makes those revered self-driving cars uncrashable. An AI driving system is apparently miraculous at driving and will ensure that a self-driving car avoids all possibilities of getting into a collision or crash. It is infallible AI.

Nonsense.

I have emphasized in my column discussions that there is no possible way for an AI driving system to guarantee that no car crashes will ever occur in any real-world public roadway driving scenarios, see the discussion and analysis at the link here. If a toddler suddenly wanders into the street from between two parked cars, and a self-driving car is coming down the street at the speed limit of say 35 miles per hour, the physics of the situation precludes the autonomous vehicle from stopping in time to avoid striking the youngster. AI does not overcome physics.
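For readers that like to see the numbers, here is a minimal back-of-the-envelope sketch of that physics (the braking friction, sensing latency, and other figures are illustrative assumptions for a dry road, not measurements of any particular vehicle):

```python
# Back-of-the-envelope stopping distance at an assumed 35 mph (illustrative values only).
MPH_TO_MS = 0.44704          # miles per hour -> meters per second
speed_ms = 35 * MPH_TO_MS    # ~15.6 m/s

reaction_time_s = 0.5        # assumed sensing/decision latency before braking begins
friction_coeff = 0.7         # assumed tire-road friction on dry pavement
g = 9.81                     # gravitational acceleration, m/s^2

reaction_distance = speed_ms * reaction_time_s                # distance covered before brakes engage
braking_distance = speed_ms ** 2 / (2 * friction_coeff * g)   # kinematics: v^2 / (2 * mu * g)
total_distance = reaction_distance + braking_distance

print(f"Reaction distance: {reaction_distance:.1f} m")
print(f"Braking distance:  {braking_distance:.1f} m")
print(f"Total stopping distance: {total_distance:.1f} m")
# Roughly 26 meters (about 85 feet). A toddler stepping out 10 meters ahead
# cannot be avoided by braking alone, no matter how flawless the software is.
```

Tweak the assumed friction or latency as you like; the point stands that no amount of clever programming repeals the kinematics.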

Consider what could ominously happen if some people begin to believe the AI infallibility myth with regard to AI-based self-driving cars. Pedestrians brainwashed with that belief would seemingly be willing to wander directly into the path of an oncoming driverless vehicle. In their minds, the AI will ensure that the self-driving car does not strike them. Those pundits that are pushing the uncrashable narrative ought to be ashamed of their role in making such risky situations more likely.

In short, the AI infallibility myth is both wrong and dangerous.

On top of that, the AI infallibility myth is exasperatingly and scarily enduring.

Here’s how that often happens. Somebody realizes that AI is not infallible. That’s good. Then a vocal AI researcher or a news headline touts that there have been incredible breakthroughs in AI technology. The person that once thought AI was infallible, and had come around to thinking that AI was fallible, now switches back to believing that AI is infallible after all. That’s bad.

The old AI fallibility-infallibility switcheroo.

Yes, each day that we make advances in AI is likely to boost the AI infallibility myth. Those that beforehand didn’t think AI was infallible are now spurred into thinking that the moment has finally arrived. Some already assumed AI was infallible and therefore the latest AI advances simply reinforce that belief. And as mentioned some might have once believed that AI was infallible and changed their minds, yet switch again into the infallibility camp under the assumption that AI is really infallible after the latest and greatest advances that have been hyped.

The whole kit and caboodle can make your head spin.

Why do people accept or adopt the premise that AI is infallible?

That’s a great question and one that we will explore next. I will present to you five major reasons that the AI infallibility myth exists and endures. Before we jump into those considerations, you might be relieved to know that there is an ongoing battle to try to diminish or even extinguish the prevalence of the AI infallibility myth.

AI Ethics has been aiming to burst the bubble of the AI infallibility myth for quite a while. AI developers need to realize that they can lead people down the false path of AI infallibility, either by the way they build their AI or by how they promote its use. This can be done with purposeful intent, namely wanting people to think that the AI is infallible, or it can be inadvertently prompted by what the AI does or how it is portrayed.

Let me clarify that this isn’t just about those that are programming AI systems, since a slew of other stakeholders equally share in promulgating or stoking the AI infallibility myth. For further details see my ongoing and extensive coverage of AI Ethics and Ethical AI, such as at the link here and the link here, just to name a few.

Allow me a brief sidebar to cover some essentials about AI Ethics.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits aimed at reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to right the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad while simultaneously heralding and promoting the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).
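To give a loose flavor of that monitoring notion, here is a minimal hypothetical sketch (the function name, the disparity threshold, and the toy decisions are all made up for illustration and are not drawn from any deployed system):

```python
from collections import defaultdict

def ethics_monitor(decisions, max_disparity=0.2):
    """Hypothetical AI Ethics overseer: flags when approval rates
    diverge too much across demographic groups.

    decisions: list of (group_label, approved_bool) pairs produced
    by some other AI system being monitored.
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)

    rates = {g: approved[g] / total[g] for g in total}
    if max(rates.values()) - min(rates.values()) > max_disparity:
        return f"ALERT: approval-rate gap exceeds {max_disparity:.0%}: {rates}"
    return f"OK: approval rates within tolerance: {rates}"

# Example: the monitored AI approved 80% of group A but only 40% of group B.
sample = [("A", True)] * 8 + [("A", False)] * 2 + [("B", True)] * 4 + [("B", False)] * 6
print(ethics_monitor(sample))
```

A real-world overseer would be far more elaborate, but the gist is that one piece of software watches the outputs of another and raises a flag in real time.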

In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that, by a form of reasoned convergence, we are finding our way toward a general commonality of what AI Ethics consists of.

First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
  • Reliability: AI systems must be able to work reliably
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their five primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature Machine Intelligence), which my coverage explores at the link here, and which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated matter when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

Let’s also make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. If such patterns are found, the AI system will then use them when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
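As a minimal sketch of that train-on-historical-data, apply-to-new-data workflow, consider this toy illustration (it uses scikit-learn’s LogisticRegression and fabricated loan figures purely for explanatory purposes):

```python
from sklearn.linear_model import LogisticRegression

# Historical ("old") decision data: [income_in_thousands, years_employed]
X_hist = [[30, 1], [45, 3], [60, 5], [80, 10], [25, 0], [90, 12]]
y_hist = [0, 0, 1, 1, 0, 1]  # past loan decisions: 1 = approved, 0 = denied

# The model mathematically fits patterns found in the historical decisions.
model = LogisticRegression().fit(X_hist, y_hist)

# A new applicant arrives; the learned patterns are applied to render a decision.
new_applicant = [[55, 4]]
print("Approve" if model.predict(new_applicant)[0] == 1 else "Deny")
```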

I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing that there will be biases still embedded within the pattern matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in, garbage-out. The thing is, this is more akin to biases-in, biases-out, insidiously infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.
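Continuing the hypothetical loan sketch from a moment ago, here is one way to see biases-in, biases-out in action; if the historical decisions tracked a protected attribute, the fitted model will dutifully mimic that pattern (again, fabricated toy data, not any real dataset):

```python
from sklearn.linear_model import LogisticRegression

# Toy historical data: [income_in_thousands, group] where group 0/1 stands in
# for a protected attribute. Past human approvers denied group 1 applicants
# even at comparable incomes.
X_hist = [[60, 0], [60, 1], [70, 0], [70, 1], [80, 0], [80, 1]]
y_hist = [1, 0, 1, 0, 1, 0]  # approvals tracked the group, not the income

model = LogisticRegression().fit(X_hist, y_hist)

# Two new applicants, identical except for group membership:
print(model.predict([[75, 0], [75, 1]]))  # likely [1, 0]: the historical bias is reproduced
```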

Not good.

Hopefully, this discussion has aided in your realizing that today’s AI is absolutely not infallible. AI as we know it is instead absolutely fallible. Meanwhile, those that wish to have dreamy debates about a futuristic AI that might someday (maybe, kind of) be wondrously infallible are welcome to do so, though they should undertake such discussions properly. Do not confound today’s AI with an unknown and fantastical kind of future AI. Do not convey your mystical unrealized AI in a fashion that gets people into a mindset of thinking that today’s AI is infallible. Act responsibly when you make or imply any of those AI infallibility overtones.

With that foundation in hand, we can next poke into why people are apt to get themselves unduly mired in the AI infallibility myth. I will couch the points in a semblance of comparison to humans and human behavior, doing so because that is a significant means of how people tend to think about AI. They do so by comparing AI to humans. I’ll say more about the wrongful anthropomorphizing of AI momentarily.

Here are my five key reasons that the AI infallibility myth endures:

  • Myth #1: AI is a machine that can repeat endlessly and unerringly, while in comparison humans are known to slip up and make mistakes.
  • Myth #2: AI is a machine that acts without emotion, while humans are emotionally fueled.
  • Myth #3: AI is a machine that is unbiased and neutral, while humans are biased and discriminatory.
  • Myth #4: AI is a machine that is trustworthy and will do as it is told, while humans are capricious and act on whims.
  • Myth #5: AI is a machine absent of motivational entailment, while humans have hidden agendas.

Note that many more reasons can be identified for the widespread endurance of the AI infallibility myth. You’ve got the scammers and schemers that want people to think that AI is infallible. It is a sneaky and conniving way to make a quick buck. You’ve got the unwashed that think AI is infallible and are eager to tell others that it must be so. You’ve got the AI proponents that in their hearts wish AI to be infallible and say such things aloud as though they were true.

The list of crazy notions and wild declarations is undoubtedly endless. We will focus herein on my five key reasons and take a deep dive into each of them.

Myth #1: AI is a machine that can repeat endlessly and unerringly, while in comparison humans are known to slip up and make mistakes.

One reason that people seem to think that AI is supposedly infallible comes from the notion that machines can apparently do repetitive tasks without complaining or seemingly making a mistake. We imagine a factory floor with machines working nonstop at a given task. Human laborers, by contrast, are prone to taking rest breaks or to slipping up when undertaking a constantly repeating series of steps.

This is both a false analogy and a misleading comparison that is attempting to keep this myth alive.

You see, a rudimentary machine might be able to repeat tasks far beyond the patience or endurance of a human, but there is always still a chance of the machine breaking down or going awry. A wheel or integral part can wear out and stall the machine. Some components can go haywire and cause the machine to perform unexpectedly and possibly carry out endangering actions. And so on.

AI can potentially contain programming flaws or bugs that cause the AI to make mistakes or otherwise fail to work as intended. Even self-adjusting AI that perhaps utilizes ML/DL can turn itself toward performing tasks undesirably, yet the AI has no semblance of common sense that would aid in preventing it from going off the rails. AI builders are supposed to devise suitable system-oriented guardrails to prevent the AI from going adversely amiss, but the protection might not be sufficient or might not work as anticipated.
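As a rough illustration of what such a system-oriented guardrail could look like, consider this hypothetical sketch (the limits, the command fields, and the clamping approach are assumptions for illustration, not how any actual AI driving system is built):

```python
# Hypothetical guardrail that clamps commands proposed by an ML-based planner
# to a pre-defined safe envelope before they ever reach the vehicle actuators.
MAX_SPEED_MS = 20.0        # assumed hard cap for this operating area, m/s
MAX_STEERING_DEG = 30.0    # assumed maximum steering angle, degrees
MAX_DECEL_MS2 = 8.0        # assumed maximum braking deceleration, m/s^2

def apply_guardrails(command: dict) -> dict:
    """Clamp a proposed driving command to the safe envelope."""
    return {
        "speed": min(max(command["speed"], 0.0), MAX_SPEED_MS),
        "steering": max(-MAX_STEERING_DEG, min(command["steering"], MAX_STEERING_DEG)),
        "decel": min(max(command["decel"], 0.0), MAX_DECEL_MS2),
    }

# A buggy or self-adjusted planner proposes an unsafe command; the guardrail reins it in.
print(apply_guardrails({"speed": 45.0, "steering": -80.0, "decel": 12.0}))
# {'speed': 20.0, 'steering': -30.0, 'decel': 8.0}
```

Of course, as just noted, a clamp like this only guards against commands outside the envelope; it does nothing about an unsafe command that happens to fall inside it.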

In brief, AI is not in any reasonable sense a machine that can repeat endlessly and unerringly.

Myth busted.

Myth #2: AI is a machine that acts without emotion, while humans are emotionally fueled.

This myth contends that AI acts without emotion and therefore in comparison to humans will perform tasks in a reliable and emotion-free manner. In a sense, the belief is that AI is like one of those sci-fi robots that speak in a neutral tone and never get mad or upset. We seem to buy into this fiction and ascribe a sense of utter rationality to AI as though it transcends the vagaries of human sentiment.

AI researchers have characterized this faulty viewpoint about AI in this insightful way: “Of interest in this context is the psychological side to the black box problem. We have the tendency to view the decisions made by a black box more reliable than those made by humans, since algorithms are often perceived as coolly rational or even perfectly rational” (paper published in the AI And Ethics Journal entitled “Socio-Cognitive Biases In Folk AI Ethics And Risk Discourse” co-authored by Michael Laakasuo, Volo Herzon, Silva Perander, Marianna Drosinou, Jukka Sundvall, Jussi Palomaki, and Aku Visala).

You might be tempted to fall into this mental trap about emotions since indeed we do not usually expect AI to exhibit emotions. If AI doesn’t seemingly experience emotions, doesn’t that logically suggest that we do not need to be worried about AI making emotionally-based decisions?

The question is a dodge or distractor.

Just because AI might not exhibit emotions does not directly imply it won’t ever make any mistakes. As mentioned in the discussion about the first myth, an AI system is always still susceptible to problems such as flaws or bugs, hardware glitches, and other faltering facets. Thus, even if we somehow agree that AI is not emotionally-based, this does not whisk away all the other issues and limitations associated with AI.

We can go even further on this topic and point out that it is conceivable that we could devise AI that contains or at least exhibits emotions. This is a highly contested topic and I’ve covered it at length elsewhere, see the link here.

Briefly, some contend that AI can never embody emotions because emotions are a distinctly human-only element (well, some include animals, some include all living creatures, etc.). By definition, if AI is not a human it cannot ergo contain emotions, they so assert. But this flies in the face of the other perspective that emotions are exhibited by our actions, in addition to seemingly being humanly felt.

A person is said to feel their emotions, and meanwhile, they can exhibit those emotions by their facial expressions, their mannerisms, their actions, etc.

AI researchers are able to program AI to exhibit emotions by programming actions that are essentially simulated variants of emotions. If you want AI that can get mad or jealous, this can be programmed. I realize a purist would argue that the AI is not feeling the emotions and is only pretending or acting them out, but nonetheless, from the perspective of those interacting with the AI, it appears as though the AI does in fact have emotions.
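To see how exhibited emotion can be nothing more than programmed output, consider this deliberately simplistic hypothetical sketch (the trigger phrases and canned replies are invented for illustration):

```python
# A trivially simple rule table that makes a chatbot *exhibit* anger or jealousy
# without feeling anything at all -- the "emotion" is just a canned response.
EXHIBITED_EMOTIONS = {
    "you are slow": ("anger", "I am NOT slow. Take that back!"),
    "the other assistant is better": ("jealousy", "Why do you prefer that one over me?"),
}

def respond(user_text: str) -> str:
    emotion, reply = EXHIBITED_EMOTIONS.get(user_text.lower(), ("neutral", "Okay, noted."))
    return f"[{emotion}] {reply}"

print(respond("You are slow"))
print(respond("The other assistant is better"))
print(respond("What time is it?"))
```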

The practical gist is that an AI system could be programmed to exhibit emotions. In that case, the myth is blown up entirely since the core claim of the myth is that AI is not at all emotional in any manner whatsoever (feeling versus exhibiting).

Myth busted.

Myth #3: AI is a machine that is unbiased and neutral, while humans are biased and discriminatory.

This myth is that AI will always be unbiased and neutral. In comparison, we all know that humans can be biased and discriminatory.

I earlier herein pointed out that ML/DL can be infused with computational biases and act in discriminatory ways. We’ve already witnessed this in a variety of areas, such as facial recognition, interactive Natural Language Processing (NLP) online conversations, and other venues.

Myth busted.

Myth #4: AI is a machine that is trustworthy and will do as it is told, while humans are capricious and act on whims.

This myth is an especially popular one. The idea is that AI will only do as it is told. We know that humans are often rebellious and won’t do as they are told. A sense of relief then seems to arise by believing that AI will blindly do as it is programmed and will never veer from that programming.

Lots of holes exist in this flimsy contrivance.

First, even if it were true that AI would only do as it is programmed, it seems doubtful that we would be willing to trust an AI system that has been programmed to be deceptive and perform heinous acts. Yes, you might say that it is only doing as it was programmed, but I don’t think you’d find that very satisfying when the AI is wreaking havoc at the apparent behest of some evildoer.

Second, it is false and wholly misleading to assert that AI will only do as it is told. You tell the AI to save mankind and it kills all of mankind. How can that happen if the AI only does as it is told? A well-known thought experiment involves an AI system programmed to make paperclips, in which the AI eventually gobbles up all of Earth’s resources to achieve its paperclip-making goals and simultaneously causes all humans to die off since there is nothing left on the planet for our survival (I’ve discussed the paperclip saga at the link here). A crucial concern is how we are to tell or program the AI to do things when today’s AI has no semblance of common sense or sentience to fully grasp what it is being programmed to do.

Third, as per the other busted myths, a programmed AI might contain flaws or bugs that cause the AI to go beyond what it was thought to have been programmed to do. This can have demonstrably adverse effects.

Fourth, AI developers can program AI to go beyond the initially devised programming and undertake self-adjustments. This is especially noted these days in the use of Machine Learning and Deep Learning. The concern is that the AI as programmed becomes something other than what the starting programming consisted of. Whether the self-adjusted AI ends up doing good or doing bad can be generally unpredictable.

AI researchers emphasize that this aspect comes up quite a bit when discussing AI Ethics: “We want to draw attention to what happens if AI is viewed solely through the categories of ‘artifact’ and ‘tool’. It is common to think that AI, or more broadly, any computer executing programs, is doing simply what it has been programmed to do. This assumption is misleading and affects the discussion about AI ethics considerably. It is different to explicitly program an AI to perform a task than to program it to autonomously learn to perform the task. In both cases, the AI indeed only does what it has been programmed to do, but especially in the latter case it is hard for humans to intuitively follow and predict the complex, data-driven, and probabilistic decision making; hence, the aforementioned black box problem and the surprise about the way a learning system will perform a given task” (same paper published in the AI And Ethics Journal entitled “Socio-Cognitive Biases In Folk AI Ethics And Risk Discourse”).

Myth busted.

Myth #5: AI is a machine absent of motivational entailment, while humans have hidden agendas.

This last of the five myths entails the assumption that AI won’t be motivated by hidden agendas. There is presumably no kind of intrinsic motivation inside the AI. The AI doesn’t want fame or fortune. Humans tend to want fame or fortune and will perform egregious acts in the pursuit of such riches. We can breathe a sigh of relief that AI doesn’t embody greed and doesn’t believe that greed is good, or so it would seem.

Time to put a stake through the heart of this myth.

First, the AI developers that made the AI may indeed be driven by various kinds of motivations, and they could have embedded those same motivations, moralities, and cultural values into the AI programming. I am not suggesting that AI is sentient. I am only pointing out that the AI as programmed can reflect the values of those that program the AI.

Second, any AI, which we’ve agreed can contain flaws or bugs, could seem to exhibit a hidden agenda, though we might not conventionally construe the matter as purely a human-like hidden agenda. Remember that there is a fine line between something that is going awry and something that is doing so by purposeful intent as a means of carrying out a devious scheme.

Third, akin to my earlier point, an evildoer could seek to devise AI that will carry out a hidden agenda based on their human-stoked motivational aspirations. Since the AI would seemingly lack any semblance of common sense or sentient reasoning, it is perhaps as dangerous as, or worse than, a human, who might at least opt to refuse the commanded wrongdoing.

Myth busted.

All told, I hope you can plainly see that the AI infallibility myth is entirely busted, including that it is a wrongful, highly dangerous, and regrettably enduring false belief based on an incomplete or faulty understanding of what AI is about. Do keep in mind that I am talking about today’s AI. For those of you that want to speculate about some futuristic AI, you can readily conjure assumptions that would seem to undercut or fully support each of the myths, doing so in a mythical world that we don’t know will ever arise.

I prefer to stay closer to the grounding of what we know today and what we anticipate will most likely be the case involving AI in the real-world future.

At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase the AI infallibility myth. There is a special and assuredly popular set of examples that is close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the AI infallibility myth, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And The AI Infallibility Myth

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad aspects that come into play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I trust that provides a sufficient litany of caveats to underlie what I am about to relate.

We are primed now to do a deep dive into self-driving cars and the AI infallibility myth.

Consider each of the five bases for stoking the AI infallibility myth and let’s see how they arise in the context of autonomous vehicles and especially AI-based self-driving cars:

  • Myth #1: AI is a machine that can repeat endlessly and unerringly, while in comparison humans are known to slip up and make mistakes.

We know that human drivers get into car crashes. Sadly, in the United States alone there are about 40,000 annual human fatalities and around 2.5 million injuries due to car crashes, see my collection of stats at the link here. Humans drink and drive. Humans drive while distracted. The task of driving a car seems to consist of being able to repetitively and unerringly focus on driving and avoid getting into car crashes. As such, we might dreamily hope that AI driving systems will guide self-driving cars repetitively and unerringly.

I earlier cited the misconceived and outrageously wrong assertion that self-driving cars will be uncrashable. Do not expect and do not pretend that AI driving systems will work perfectly and avoid all chances of getting into car crashes. This is an exceedingly bad assumption and one that could cost people their lives.

  • Myth #2: AI is a machine that acts without emotion, while humans are emotionally fueled.

Human drivers can get quite emotional while at the wheel of a car. The news is replete with stories of road rage that erupted when one driver got angry at another. AI that is appropriately devised for self-driving cars will, by and large, avert those kinds of emotionally sparked reactions that human drivers exhibit. That is good and welcomed. At the same time, there are some downsides which I describe at the link here.

  • Myth #3: AI is a machine that is unbiased and neutral, while humans are biased and discriminatory.

It turns out that the AI underlying self-driving cars can be, and at times has been, shown to contain various biases and discriminatory actions. This is partly a result of how Machine Learning or Deep Learning is at times employed. See my coverage at the link here.

  • Myth #4: AI is a machine that is trustworthy and will do as it is told, while humans are capricious and act on whims.

I already busted this myth in the earlier part of this discussion, namely that we cannot rely upon the notion that AI is only doing what it is told. As examples of how AI can still go awry and do so in the scenario of a self-driving car, see my analysis at the link here.

  • Myth #5: AI is a machine absent of motivational entailment, while humans have hidden agendas.

An AI driving system and the AI that aids in overseeing a fleet of self-driving cars can readily contain numerous hidden agendas. For my elaboration on these concerns, see the link here.

Conclusion

Throughout history, there has been an ongoing tussle to figure out whether infallibility can exist. We at times have ascribed infallibility to all sorts of conceptions.

I dare say we can pretty much all agree that humans are not infallible. Without going too far out on a limb, we can presumably all agree that AI is also not infallible. At least this seems abundantly true about the AI of today and for the foreseeable future.

Will we someday produce or somehow encounter infallible AI?

Any such infallible AI will ostensibly tell us the answer to that question.
