
The Role Of Human Judgment As A Presumed Integral Ingredient For Achieving True AI


Is the embodiment of human judgment a required ingredient in achieving true AI?

It is a seemingly simple question to proffer, though any mindful answer is likely to be notably lengthy.

Here’s why.

Slightly restating the question: for AI to become a vaunted version of AI, which let’s say we might all collegially agree is demarcated as the equivalent of human-like intelligence, this weighty question asks whether there needs to be some means of encompassing or including what we variously describe or denote as “human judgment” for AI to be true AI.

If you say that yes, of course, the only true AI is the type of AI that showcases its own variant of human judgment, you are then putting forth a challenge and a quest to figure out what human judgment entails and how to somehow get that thing or capability into AI systems.

Indeed, please be aware that some assert that human judgment is the missing secret sauce that is the Holy Grail toward arriving at true AI.

For those of you keeping score, as a side note to that assertion, there are debates about whether this is the only such secret sauce, or whether there are myriad other as-yet-undiscovered secret sauces, all of which are equally to be considered necessary and sufficient conditions for landing on true AI.

I’ll save that saga for another day.

Back to the matter at hand.

If you say that human judgment is not a necessary facet for true AI, doing so implies that we don’t particularly have to be concerned about understanding the role and nature of human judgment, at least with respect to trying to forge true AI.

That would presumably be a relief, of sorts, since AI efforts to date have been rather stymied on exactly what human judgment consists of, along with head-scratching as to how to codify whatever it is that human judgment might be.

Some in AI would argue that human judgment is going to arise anyway within AI systems as a consequence of some form of “intelligence explosion” that might occur, and there’s no need to fret about how to code it or otherwise craft it by human hands.

Essentially, some believe that if you make a large enough Artificial Neural Network (ANN), oftentimes today referred to as Machine Learning or Deep Learning, there is going to be an emergence of true AI by the mere act of tossing together enough artificial neurons.
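For concreteness, here is a minimal sketch (in Python, with weight values invented purely for illustration) of the kind of simple computational unit that such networks stack together in vast numbers; note how rudimentary each unit is:

```python
import math

def artificial_neuron(inputs, weights, bias):
    # An artificial "neuron" is just a weighted sum of its inputs
    # passed through a nonlinear activation (here, a sigmoid).
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# A tiny two-layer "network": two hidden neurons feed one output neuron.
# All weight and bias values below are arbitrary, for illustration only.
inputs = [0.5, 0.9]
hidden = [
    artificial_neuron(inputs, [0.4, -0.2], 0.1),
    artificial_neuron(inputs, [0.7, 0.3], -0.3),
]
output = artificial_neuron(hidden, [1.2, -0.8], 0.05)
print(output)
```

The intelligence-explosion conjecture is essentially that enough of these humble units, wired together at massive scale, would spontaneously give rise to something far greater than the parts.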

One supposes that this emergence is akin to an atomic explosion: if you seed a process and get it underway, there will be a chain reaction that becomes somewhat self-sustaining and grows iteratively.

In the case of a large-scale (well, really, really, massively large-scale) computer-based neural network, such proponents presuppose that there would be an emergence of intelligence that is human-like in all respects, and perhaps it would even exceed humans, becoming super-intelligent (potentially arriving at the so-called singularity).

A few quick points to ground this discussion.

The human brain has an estimated 86 billion neurons and perhaps a quadrillion synapses (for more on such estimates, see this link here).

There is not yet any ANN that approaches that volume.

Furthermore, ANNs are a far cry from being the same as the complex functions and aspects of biological neurons.

Therefore, if you are using today’s ANNs as your cornerstone for the intelligence-explosion hypothesis, I would argue that you are trying to align apples with oranges.
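A quick back-of-envelope comparison makes the scale gap vivid; the ANN figure below is a hypothetical stand-in, since actual model sizes vary widely and change quickly:

```python
# Rough scale comparison (all figures are estimates; see caveats above).
human_neurons = 86e9   # ~86 billion neurons (the estimate cited earlier)
human_synapses = 1e15  # ~1 quadrillion synapses (rough estimate)

# Hypothetical size for a large present-day ANN; real models vary widely.
ann_parameters = 1e11  # assume ~100 billion learned parameters

print(f"Synapse-to-parameter ratio: {human_synapses / ann_parameters:,.0f} to 1")
# Even granting the (dubious) analogy of ANN parameters to synapses,
# the brain is roughly four orders of magnitude larger, and each biological
# neuron is far more complex than a weighted-sum unit.
```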

Anyway, let’s return to the assumption that we do need to understand human judgment and that one way or another it is bound to be significant for crafting true AI.

I recently attended a talk by Dr. Brian Cantwell Smith, Professor of AI and the Human at the University of Toronto (he was an invited speaker at The Stanford Institute for Human-Centered AI, or HAI, an outstanding program). Having already read his fascinating and at times controversial book entitled “The Promise Of Artificial Intelligence,” I can attest that many of his remarks are directly pertinent to the question of what human judgment is and what it might consist of.

I’ll weave his comments into my analysis of this vital topic that both challenges and haunts those in AI.

Eras Of AI

A quick historical recap will be handy for understanding where AI stands today.

Pundits tend to say that we are today in the AI Spring, invoking a seasonal metaphor, and that AI is flourishing as one might anticipate when Spring flowers sprout forth.

The AI Spring has been dominated by the advent of Machine Learning and Deep Learning, a data-based approach that uses pattern-detecting algorithms and relies pretty much on having lots of data for training purposes.
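To illustrate what relying on data for training means in practice, here is a toy sketch (in Python) of the classic perceptron update rule, one of the simplest data-based pattern detectors; the data points and learning rate are invented for illustration:

```python
# Toy flavor of data-based learning: separate two labeled patterns.
# The training data and learning rate below are made up for illustration.
data = [([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.3, 0.2], 0), ([0.8, 0.9], 1)]
weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):  # several passes over the training data
    for x, label in data:
        activation = sum(w * xi for w, xi in zip(weights, x)) + bias
        prediction = 1 if activation > 0 else 0
        error = label - prediction  # perceptron rule: nudge toward the label
        weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
        bias += learning_rate * error

print(weights, bias)  # the learned weights now encode the detected pattern
```

No rules were hand-written here; the pattern was extracted from the data, which is the hallmark (and the data appetite) of this approach.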

Prior to the AI Spring, there was a slowdown or shutdown of the AI enthusiasm that had flourished in, say, the 1980s and 1990s, and that subsequent downbeat period has become labeled the AI Winter.

Those earlier heydays of AI, prior to the AI Winter, were dominated by the use of expert systems, also called knowledge-based systems or symbolic systems, which consisted of explicitly articulating the “rules” of thinking that people seem to employ and programming computers accordingly.
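To give a flavor of that expert-systems style, here is a minimal sketch (in Python) of hand-authored if-then rules applied via forward chaining; the rules and facts are invented solely for illustration:

```python
# Each rule pairs a set of required facts with a conclusion to assert.
# These rules are made-up examples of hand-coded expert knowledge.
RULES = [
    ({"temperature": "high", "coolant": "low"}, "risk_of_overheating"),
    ({"risk_of_overheating": True}, "recommend_shutdown"),
]

def infer(facts):
    # Forward chaining: keep firing rules whose conditions hold,
    # adding conclusions to the fact base, until nothing changes.
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            satisfied = all(facts.get(k) == v for k, v in conditions.items())
            if satisfied and not facts.get(conclusion):
                facts[conclusion] = True
                changed = True
    return facts

print(infer({"temperature": "high", "coolant": "low"}))
# -> includes both risk_of_overheating and recommend_shutdown
```

Everything such a system “knows” had to be explicitly articulated by a human, which is both the strength and the brittleness of the symbolic approach.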

So, the initial era was about symbolic AI systems, which I’ll refer to as the 1st Era of AI, while this second era of today is characterized by what some depict as sub-symbolic or data-based AI systems (labeled as 2nd Era).

Here’s a twist that as you’ll see in a moment ties to the debate about human judgment as an ingredient in AI.

The big question that I get asked repeatedly when speaking at conferences is whether or not this AI Spring, the second era of AI, will ultimately get us to true AI.

True AI would be AI that is essentially indistinguishable from human intelligence (refer to the famous Turing Test as an exemplar of that notion, see my analysis here).

Some believe that indeed the 2nd Era will inevitably find a path to true AI, whether by hook or by crook, while others are skeptical that the existing 2nd Era has the right stuff to get us there.

I tend to agree with Smith’s indication that we are not likely to get to true AI via the prevailing 2nd Era approaches, and thus there is going to be some as-yet-undefined 3rd Era that will be needed.

Additionally, I would suggest that we are probably looking at a further series of eras, perhaps a 4th Era and a 5th Era, before we’ll achieve true AI, though I don’t want anyone to feel discouraged about their AI research efforts today, so I’ll be upbeat and urge you to keep your eye on (and hopes for) 3rd Era achievements.

Smith handily lumps together the 1st and 2nd eras and opts to refer to the powerful duo as providing us with an ability to build AI systems that are “reckoning” systems (powerful, indeed, but lacking in embodying “judgment”).

I’ve previously exhorted that I like the idea of classifying the AI of these first two eras with a particular moniker, allowing us to be clear that the AI to date is not true AI and (as I’ve suggested) seems unlikely to become true AI, and thus the word “reckoning” seems handy. Be aware that other names are certainly viable too; for example, some use AI-alpha for the duo and AI-beta for what comes next (hey, a rose by any other name).

What will get us to true AI?

Smith argues that we can refer to the next step as the advent of judgment, and therefore the 3rd Era will be the era of AI systems that exercise judgment.

Voila, I’ve brought us back to the matter of human judgment.

There are some important points about these various eras (I’m in concurrence with Smith on this):

· We ought not to inadvertently denigrate what can be done with 1st Era and 2nd Era capabilities, and should realize that the 2nd Era is going to spawn quite a lot of very impressive AI-lite systems, while presumably on our way to 3rd Era possibilities (the true AI or AI-heavy instances).

· There is a danger afoot that we might assume that “judgment” exists in reckoning systems when in fact it does not, and unknowingly allow 2nd Era AI systems to get us into hot water (we need to keep our eye out for that slippery slope).

· And we might undermine our quest for the 3rd Era entirely, drifting ever so incrementally away from judgment-encompassing AI, settling for “reckoning” systems instead, and failing to strive vigorously to get to the next and most pronounced step (keep your eye on the prize, I say).

Those call-to-arms points are also why I’m such a strong proponent of AI ethics (see my discussion at the link here), especially as it relates to concerns involving those that overstate what AI can do today.

If you want another AI Winter, it will surely be sparked by those that over-promise AI, with society ultimately figuring out that it was a ruse, namely that they were getting AI-lite when they thought they were getting true AI.

The backlash toward AI efforts could be substantive.

Let’s not let things slide in that direction, which would be to the detriment of all.

Judging Where Judgment Is Needed

As a reminder, the question earlier posed is whether or not there is a need to have the equivalent of human judgment embodied within AI for the AI to be considered true AI.

There is a question begged within the question: just what is this thing or capability that is called human judgment?

And to that question, I wish you good luck trying to answer it.

Some of you are perhaps shocked to think that there isn’t a tightly delineated formalization that spells out what human judgment consists of.

Sorry, you won’t find it in any of your calculus books, nor economics books, nor psychology books, nor cognitive science books, etc.

Oh, sure, you’ll find a slew of attempts at vaguely trying to pin down the nature and aspects of human judgment, but I assure you that it is not something so specified and so tangible that you could sit down at your keyboard and punch out a computer program to do it.

That’s also why some are hoping that the osmosis method might get us there.

By osmosis, I’m referring to the belief that without our being able to articulate what human judgment consists of and how it arises, we’ll have that intelligence explosion that will bring it to us, based on building blocks that we set in place.

In theory, even if we can’t say how it took shape, at least we can recognize its existence once it happens, as shown via the actions and outcomes of an AI system that presumably has it.

I wouldn’t hold my breath for that possibility.

In the meantime, there have been lots of forays into trying to pin the tail on the donkey of what human judgment is.

In this case, Smith lays out seven criteria that could be used either to recognize that we’ve landed on human judgment when it shows itself or to try to devise AI systems that might embody human judgment. Those criteria consist of: (1) Orientation, (2) Appearance vs. Reality, (3) Stakes, (4) Legibility, (5) Actuality, Possibility, Impossibility, (6) Commitment, and (7) Self.

In a future posting, I’ll be covering the myriad of theories and perspectives on what judgment entails.

Let’s shift gears for a moment.

Whenever discussing abstract topics about AI, it can be helpful to consider how the abstract is applicable to the applied.

One of the best ways to explore where AI is headed involves picking a meaty use case of AI and then using it as a foil to figure out what we understand about AI.

My favorite foil is the advent of AI-based true self-driving cars, as avid readers realize.

Let’s unpack the self-driving car puzzle and see.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to AI-based true self-driving cars.

True self-driving cars are ones in which the AI drives the car entirely on its own, and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
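For readers who like to see the taxonomy spelled out, here is a small sketch (in Python) of the levels as used in this discussion; the comments paraphrase the SAE J3016 levels of driving automation:

```python
from enum import IntEnum

class DrivingAutomationLevel(IntEnum):
    # Paraphrase of the SAE J3016 levels of driving automation.
    NO_AUTOMATION = 0           # human driver does everything
    DRIVER_ASSISTANCE = 1       # a single assisting feature (ADAS)
    PARTIAL_AUTOMATION = 2      # combined ADAS; human must stay attentive
    CONDITIONAL_AUTOMATION = 3  # system drives at times; human on standby
    HIGH_AUTOMATION = 4         # AI drives fully within a bounded domain
    FULL_AUTOMATION = 5         # AI drives fully, anywhere, all conditions

def is_true_self_driving(level: DrivingAutomationLevel) -> bool:
    # Per the usage herein: only Levels 4 and 5 count as true self-driving.
    return level >= DrivingAutomationLevel.HIGH_AUTOMATION

print(is_true_self_driving(DrivingAutomationLevel.PARTIAL_AUTOMATION))  # False
print(is_true_self_driving(DrivingAutomationLevel.HIGH_AUTOMATION))     # True
```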

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And That Judgment Thing

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

Let’s then tie in the overarching theme herein about the role of judgment.

Here’s the mind-bending question: Will AI-based true self-driving cars require an embodiment of human judgment?

If you assert that true self-driving cars will only work if they have true AI, and if you further assert that true AI must have an embodiment of human judgment, you are ergo making the claim that we won’t have such driverless cars until or unless we manage to instill human judgment into AI.

That’s a tall order.

I can tell you this: we are going to be waiting a long, long time for driverless cars if that’s the bar or threshold that must be reached.

Of course, it once again comes back to the meaning of “human judgment” and what you believe it constitutes.

Some believe that a driverless car does not need Artificial General Intelligence (AGI), which is a somewhat orthogonal way to refer to true AI, though even that delineation is debatable as to whether AGI is the same as true AI (for example, some would suggest that AGI is solely focused on common-sense reasoning, yet this implies that AGI doesn’t need to encompass other elements of human intelligence).

Others claim that AGI (however defined) is needed for a self-driving car to operate.

Is the driving task narrow enough that some form of narrow AI is sufficient, or is the driving task of such life-or-death consequence that the only appropriate deployment of self-driving cars would be one in which the AI can fully act as a human driver, thus presumably needing true AI?

Some argue that the act of driving a car is not like having to devise a sonata or finding a cure for cancer.

You don’t seemingly need a lot of the inherent capabilities of humans and human intelligence to presumably drive a car.

Yes, they concede, driving a car is serious and should not be done lightly, but at the same time, how far along the spectrum toward true AI we need to be for an AI system to properly drive a car is a fundamental question to be addressed.

Likewise, keep in mind that there’s the act of driving and then there’s the act of driving safely.

Some would say that you could train a monkey to drive a car (see this analysis here), though it would likely ram into everything and everyone, and therefore it wouldn’t be achieving the spirit of what we mean by driving (namely that driving safely is part-and-parcel of the act of driving).

Smith makes a point that fits into this dialogue quite well: “That said, the issue we face as a society is one of configuring traffic in such a way as to ensure that vehicles piloted by (perhaps exceptionally acute) reckoning systems can be maximally safe, overall. It may be that in some situations, long distance highways, for example, we can restrict the driving context sufficiently, as we currently do for aircraft, so that reliable reckoning confers adequate safety.”

Thus, we don’t necessarily need to throw the baby out with the bathwater, and we can use 1st Era and 2nd Era capabilities (recall, referred to collectively as “reckoning” capabilities) for the purposes of putting self-driving cars on our roadways, albeit with crucial caveats about how we do so.
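One way to picture the “restricted context” idea in Smith’s remark is as an explicit gate on where a reckoning-level system may operate; this is a hypothetical sketch, with invented names and thresholds:

```python
# Hypothetical operational-domain gate for a reckoning-level system.
# Road types and the weather scale below are invented for illustration.
APPROVED_ROAD_TYPES = {"limited_access_highway"}
MAX_WEATHER_SEVERITY = 2  # say, 0 = clear skies ... 5 = blizzard

def within_operational_domain(road_type: str, weather_severity: int) -> bool:
    # Allow autonomous operation only inside the narrowly defined context
    # where pure "reckoning" is deemed adequately safe.
    return (road_type in APPROVED_ROAD_TYPES
            and weather_severity <= MAX_WEATHER_SEVERITY)

print(within_operational_domain("limited_access_highway", 1))  # True
print(within_operational_domain("urban_street", 1))            # False
```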

Further, Smith suggests: “Then, if and as we are able to develop systems that approach anything like judgment, the contexts in which they could be safely deployed will proportionally increase.”

This touches upon an ongoing and acrimonious debate about whether self-driving cars need to (for now) be confined to certain places for driving, such as the open highways, or perhaps operate only in certain zones that allow driverless cars but keep at bay human-driven vehicles.

These are matters not yet resolved and will inevitably and inextricably draw in all stakeholders, including the automakers, self-driving tech firms, ride-sharing entities, various regulators, the media, researchers, and others as we gauge the efficacy and readiness of driverless cars.

I’ll give the last word herein to Smith, providing us a reminder of what ought to be at the forefront of the thinking about AI:

“We should not delegate to reckoning systems, nor trust them with, tasks that require full-fledged judgment, nor should we inadvertently use or rely on systems that, on the one hand, would need to have judgment in order to function properly or reliably, but that, on the other hand, utterly lack any such intellectual capacity.”

Yep, that’s a pretty good rule-of-thumb.
