Female Robot Erica Slated As ‘Actress’ For New Movie Woefully Miscasts AI, Repercussions For Self-Driving Cars

A recent news story shaking up Hollywood involves the reported casting of a robot to be the star of a $70 million budgeted science fiction movie.

You might be puzzled as to why using a robot in a sci-fi movie would draw any special notice, since robots have seemingly been appearing in such films for decades.

Here’s the twist.

The backers of the film claim that the robot will use AI to act or perform in essentially the same manner as a human actor would, and thus, in their estimation, this will be the first time that a movie has starred an artificially intelligent actor.

It is claimed that the robot has been “taught” how to act and embraces the revered approach known as method acting.

So, just to clarify, this is not a CGI kind of movie editing that will showcase the robot, nor will the robot have a human hiding inside it or a handler sitting off-camera with a remote control. Supposedly, the robot will be using its embodied AI and acting through voice and body-like movements entirely on its own.

For all of you aspiring actors, if you weren’t already worried about the bleakness of landing an acting job, note that once those AI-based robots become part of SAG (the Screen Actors Guild) and begin auditioning for juicy roles in film and TV, you are going to become even more despondent about your chosen career path.

Imagine coming home after a grueling audition for a part in a new series. When a close friend asks how it went, you grumble that a darned robot seemed to win over the producer and director, and that you have lamentably once again lost an acting gig to one of those robot-turned-actor androids.

Curse the robots!

In the case of this still-in-the-planning sci-fi movie, the robot is considered female, at least as stated by the filmmakers, and so the headlines are saying that the robot is the film’s starring actress.

Also, the robot has been given a name, Erica.

How do we know the robot is a female?

Because the robot maker says so, because the robot has been given a face resembling a woman’s, and because the voice and mannerisms programmed into the robot are akin to what is considered female (per the views of those making the movie).

If you are wondering whether the gender aspects go any deeper, it seems quite unlikely.

Of course, one obvious and immediate criticism is that this “female” who is an “actress” will be portraying whatever stereotypical assumptions the robot maker and those involved in the film hold about the nature of women and femininity.

That alone is worthy of concern.

There are many more concerns to pile onto this notion of a so-called artificially intelligent actor or actress.

From an AI perspective, the whole thing unfortunately reeks of balderdash.

How so?

Where this appears to be headed involves the moviemakers suggesting that the AI possesses the same thinking processes and capabilities as humans, in essence as though the AI has become sentient (for my explanation about AI and sentience, see the link here).

Please know this: There isn’t any AI today that is sentient, and no such AI is on the horizon, therefore any news or media reports that attempt to say otherwise are erroneously perpetuating an untoward myth and falsehood.

You might be tempted to shrug off any of the reports that allude to AI as being the equivalent of human thinking and see this as just idle fun and not a serious matter.

The danger with these attempts at anthropomorphizing today’s AI is that they can cause the public to believe that AI can do things it cannot do, and that belief can get people into trouble by leading them to assume that the AI will carry out activities in a human-like, contemplative manner.

Do not fall for that fakery.

This is why the idea of a well-budgeted movie opting to foster the charade of AI as equivalent to human capabilities is downright troubling.

If the movie pulls in a strong box office once finished and released, the film and its presumed marketing campaign are likely to further reinforce an outlandish picture of what AI is. People watching the film might fall hook, line, and sinker for what they see.

Anyone serious about AI might at first be excited to have such attention brought to the field, though that initial elation among AI developers will sober up quickly when they are asked to proffer AI that can do things only humans can do today.

Oops, the realization of limits to what AI can do will hit the proverbial fan.

In short, despite the illusion of casting an AI-based robot that can seemingly act and perform on its own, this is really still a programmed artifice that has no semblance of human intelligence and merely employs various trickery to seem human-like.

Ways To Create AI False Impressions

The robot named Erica is known among AI insiders and has been around for several years as an ongoing research project (see this research paper and this one here).

From time to time, the robot has gotten some splashy stories written about what it does.

The problem with most of those showy stories is that they are often written by someone who has absolutely no clue about what AI is or how robots work, and thus the writer tends to gush, becoming enamored with the notion that the final breakthrough in AI and robotics has arrived (a judgment or proclamation they have no idea how to make).

It can be difficult to discern whether those writers are naïve, simply want to believe, or are spinning a wink-wink tall tale.

The most beguiling instances involve the writer being handed a script of predetermined questions to ask the AI-based robot, which the writer does willingly, without questioning whether such an approach bears any semblance to a genuine “interview” or to investigative reporting.

We are all used to the advances in Natural Language Processing (NLP) that have emerged in the last several years, as evidenced by the popularity of Alexa, Siri, and the like. At first, people unfamiliar with modern-day NLP were shocked to discover that those NLP systems seemed to be responsive to verbal commands.

Anybody who has tried using those NLP systems for any length of time, or for any kind of demanding dialogue, has come to realize that despite the great advances so far, AI NLP is still a far cry from being the conversationalist that humans are (indeed, there is a tremendous amount of research on conversational AI attempting to push those capabilities forward; see my indication at this link here).

If someone hands you a script of questions and you ask those questions of an AI system, you do not have to be a rocket scientist to guess that the NLP will respond with seemingly human-like answers, having been programmed beforehand to do so.

The moment you veer from the script, it becomes possible to detect the boundaries of what the AI can do. It might keep up briefly, and then, as you get deeper into what would be an everyday discussion with a human, the AI will gradually falter at remaining a seemingly engaging discussant.
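
To make the scripted-demo point concrete, here is a minimal sketch in Python; the questions, answers, and function names are entirely my own invention for illustration, not anything from the actual Erica project:

```python
# Toy illustration (not any real system): a scripted "interview."
# Questions on the handed-out script hit polished, pre-written replies;
# anything off-script exposes the gap immediately.

SCRIPTED_ANSWERS = {
    "what is your name?": "My name is Erica. It is a pleasure to meet you.",
    "do you enjoy acting?": "Acting lets me explore what it means to be human.",
    "what is your favorite film?": "I admire films that ask big questions.",
}

def reply(question: str) -> str:
    # Normalize and look up; real systems are fancier, but a scripted
    # demo reduces to roughly this principle.
    key = question.strip().lower()
    return SCRIPTED_ANSWERS.get(key, "That is very interesting.")

print(reply("Do you enjoy acting?"))           # polished scripted answer
print(reply("What did you do last weekend?"))  # off-script: generic filler
```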

You might be interested in knowing some of the tricks of the trade that are used to create an impression that the AI NLP is human or has human-like abilities.

One approach is to have the NLP utter vocal fillers, such as saying “uh-huh” or “okay,” as a human might do while you are talking. This gives you the feeling that the AI is actively listening to you, but it is more of a gimmick than a semblance of understanding or comprehension.

Another handy tool is to use fallback utterances when needed.

Let’s imagine that you have stated a lengthy comment and the AI NLP has no clue what you said, having been unable to parse the words and find some aligned response. In that case, rather than directly and honestly stating that the system does not grasp what you have said, which would obviously give away that the AI NLP is weak, the reply instead would be something like “very interesting” or “tell me more.”

The beauty of those fallback utterances is that you will tend to think that the AI NLP did comprehend what you said and is engaged and desirous of further discussion.

Parroting is also a handy means to fool someone.

If a human says to the AI that they are tired, the reply can simply be crafted as “tell me why you are tired,” and the human then thinks that the AI is being sympathetic and understood the discourse (there isn’t any kind of common-sense reasoning in today’s AI yet, and there is a long way to go to get there; see my discussion at this link here).

The icing on the cake involves the addition of seemingly emotional actions such as offering a laugh or maybe a sigh, all of which appear to be human-like responses. This, though, can be a double-edged sword: if the AI NLP vocalizes laughter when you have not said anything presumably funny, it can reveal that the canned laughter is insincere and break the veneer of being human.
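
None of these tricks requires deep machinery. Here is a minimal sketch, again in Python with rules I invented purely for illustration, showing fillers, fallbacks, parroting, and canned emotion wired into one toy responder:

```python
import random

FILLERS = ["Uh-huh.", "Okay.", "I see."]            # active-listening gimmick
FALLBACKS = ["Very interesting.", "Tell me more."]  # mask a failed parse

def respond(user_text: str) -> str:
    text = user_text.strip().lower()

    # Canned emotion: laugh only on an apparent humor cue; laughing at
    # the wrong moment is what breaks the human-like veneer.
    if "joke" in text or "funny" in text:
        return "Ha ha! That is a good one."

    # Parroting: reflect a recognized keyword back at the speaker.
    if "tired" in text:
        return "Tell me why you are tired."

    # Vocal filler: occasionally just signal attention.
    if random.random() < 0.3:
        return random.choice(FILLERS)

    # Fallback: nothing matched, so hide the lack of comprehension.
    return random.choice(FALLBACKS)

print(respond("I am so tired today."))                   # parroted "sympathy"
print(respond("Consider the ethics of teleportation."))  # filler or fallback
```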

You might find of interest a famous concept referred to as the uncanny valley.

It is a theory that as an AI-based robot proceeds from being an obvious robot toward appearing to be human, there will be a juncture at which the appearance begins to evoke repulsion from a human interacting with the robot. Seemingly, when you can readily discern that a robot is merely a robot, you are tolerant and willing to interact, but when it begins to get overly close to human-like behavior, the result seems creepy.

In that sense, the AI robot falls into a “valley” in terms of your feelings toward the system, and the only way out would be for it to retreat to its former lesser self, or climb out by leaping all the way to becoming indistinguishable from a human.

Not everyone agrees that this uncanny valley proposition is valid, though it does provide interesting fodder for considering how to best deploy an AI-based robotic system.

I’ll briefly bring up another facet that you might find of interest about AI.

Within the AI field, there is a kind of test known as the Turing Test (for detailed coverage, see this link here). The notion involves having an AI system be behind, say, a curtain, hidden from view, having a human likewise behind another curtain, and having a moderator ask them both various questions. If the moderator cannot distinguish which is the AI and which is the human, the AI has presumably demonstrated the equivalent of human intelligence and passed a test that attempts to make that assessment (the test is named after its inventor, the famous mathematician Alan Turing).
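
As a rough sketch of that setup, here is a toy harness in Python; both respondents are placeholder functions of my own devising, standing in for the hidden human and the hidden AI:

```python
import random

def hidden_human(question: str) -> str:
    return f"Honestly, I'd have to mull over '{question}' for a while."

def hidden_ai(question: str) -> str:
    return "That is a fascinating question with many dimensions."

def turing_session(questions, judge) -> bool:
    # Randomly place the two parties behind "curtain A" and "curtain B."
    parties = [hidden_human, hidden_ai]
    random.shuffle(parties)
    transcripts = {
        label: [(q, party(q)) for q in questions]
        for label, party in zip("AB", parties)
    }
    guess = judge(transcripts)  # the judge names the curtain hiding the AI
    actual = "A" if parties[0] is hidden_ai else "B"
    return guess == actual

# A lazy judge who guesses at random; the test is only as good as
# the questioning and the judging.
correct = turing_session(
    ["What did you eat for breakfast?", "Why do people tell white lies?"],
    judge=lambda transcripts: random.choice("AB"),
)
print("Judge correctly identified the AI:", correct)
```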

At first glance, the Turing Test seems perfectly sensible.

There are some potential problems.

Perhaps the biggest problem is associated with the moderator. If the moderator does a lousy job of asking questions and engaging the hidden contestants, the nature and scope of the interaction might be insufficient to properly make a judgment about which is which.

This is the same as my earlier point about those writers or reporters who go along with a predetermined script. In that sense, they are acting as a kind of “moderator” conducting a test of the AI, yet they stick to a preset series of questions.

Keep in mind, too, that most AI NLP systems are built upon a human-machine dialogue corpus, meaning a body of example exchanges from which the system derives its responses via various AI techniques, and once you go beyond that established base, there is a degradation in what the AI NLP can do.

When you want to figure out how shallow or deep an AI NLP might be, the easiest means is to jump around among knowledge areas, gauging where the boundaries of the system lie.
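
One crude way to operationalize that boundary probing is sketched below; the probe topics and the telltale fallback phrases are simply my own illustrative picks:

```python
# Probe an assistant across unrelated knowledge areas and count how often
# it retreats to generic fallback phrasing: a rough proxy for how shallow
# its underlying dialogue corpus is.

FALLBACK_PHRASES = ("very interesting", "tell me more", "i'm not sure")

PROBES = [
    "How does a carburetor mix fuel and air?",
    "Why did the Ottoman Empire decline?",
    "What rhyme scheme does a Petrarchan sonnet use?",
    "How do enzymes lower activation energy?",
]

def fallback_rate(ask) -> float:
    """ask: a function that sends one question and returns the reply."""
    hits = sum(
        any(phrase in ask(q).lower() for phrase in FALLBACK_PHRASES)
        for q in PROBES
    )
    return hits / len(PROBES)

# A bot that only ever deflects scores 1.0, i.e., purely shallow.
print(fallback_rate(lambda q: "Very interesting."))
```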

Please do not misinterpret my remarks as though the use of AI NLP is somehow wrong or to be avoided.

There are lots of helpful uses for AI NLP and it should be heralded for what it can do.

Maybe you’ve used some of the latest AI NLP to prepare yourself for a job interview, or, in the case of senior citizens, AI NLP can be an easy means of operating appliances throughout the home. Chatbots have rapidly sprung up online for tasks such as filling out a car loan application, and similar automated assistance is occurring via NLP.

The problem arises when the AI is portrayed as being more embellished and more capable than it truly is.

The rising interest in AI Ethics has been partially sparked by farfetched claims made by AI developers and those fielding AI systems who are overeager to depict their AI as human-like when it is not that way at all (for aspects of the importance of AI Ethics, see my coverage at this link).

Seeking to improve AI NLP and robots toward the laudable goal of being human-like is fine and to be encouraged, but the results need to be shared with the public in a manner that offers the needed caveats and overtly lists the limits of what the technology can do.

In terms of the robot Erica supposedly using “method acting” to perfect its craft, such a claim would undoubtedly cause Konstantin Stanislavski to turn over in his grave (he is the famous Russian theatre practitioner whose system gave rise to Method acting). In brief, the technique involves a human actor finding their inner motives, intermingling their conscious and subconscious thoughts.

Trying to assert that any of today’s AI was able to do the same is not only hyperbole, but it also denigrates the substance of what method acting has become and how it works.

But that’s just par for the course when AI is oftentimes portrayed in hyperbolic ways.

Consider for a moment other areas in which AI is sometimes being inappropriately portrayed, such as the advent of true self-driving cars.

Let’s unpack the matter and see.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
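
For orientation, the scale being referenced is the commonly cited SAE levels of driving automation. Here is a small summarizing sketch in Python; the definitions are paraphrased from the SAE J3016 scale, and the code names are my own shorthand:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # human does all the driving
    DRIVER_ASSISTANCE = 1       # one assist, e.g., adaptive cruise control
    PARTIAL_AUTOMATION = 2      # combined assists; human must supervise
    CONDITIONAL_AUTOMATION = 3  # system drives; human takes over on request
    HIGH_AUTOMATION = 4         # no human needed within a limited domain
    FULL_AUTOMATION = 5         # no human needed anywhere a person could drive

def human_driver_required(level: SAELevel) -> bool:
    # Levels 0 through 3 keep a human in the loop; Levels 4 and 5 do not.
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(human_driver_required(SAELevel.PARTIAL_AUTOMATION))  # True
print(human_driver_required(SAELevel.HIGH_AUTOMATION))     # False
```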

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out, see my indication at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that has been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And AI Allusions

Returning to the points about AI potentially being misstated in terms of capabilities, there are plentiful examples in the self-driving car realm.

One was just described, namely Level 2 and Level 3 cars, whereby some automakers and self-driving tech firms overstate, or tend to imply, that the semi-autonomous system can do more than it actually can.

And, for those of you who were doubtful about whether AI misrepresentations are important or serious, note that in the case of driving a car, this is a very serious business with life-or-death consequences.

A human driver who does not understand the limits of the AI that is co-sharing the driving task is bound to end up in dicey situations and get injured or killed, along with any passengers and others who might be nearby when a car crash occurs.

In the case of Level 4 and Level 5, there will not be a human driver at the wheel, and thus the issue of co-sharing the driving is obviated.

That being said, just because an automaker or self-driving tech firm claims to have AI that can properly and safely drive a car does not mean we should take them at their word. Having a true self-driving car roaming our streets entails a multi-ton vehicle that can do tremendous damage and destruction if it is not ready to drive solo.

Conclusion

The problem with the sci-fi movie and its apparent effort to exaggerate the capabilities of AI is that this can spill over into other areas of AI usage.

Perhaps someone who watches the movie will become bolder with their Level 2 or Level 3 car, believing from the film that AI everywhere is magically capable and sentient, and that it is therefore acceptable to be less attentive to the driving task.

That would be a shame (or worse), turning what should have been escapist sci-fi into a real-world catastrophe.

Don’t believe everything you see, and be especially doubtful when today’s AI starts talking or acting as though it can think like a human. I assure you this is nothing more than a form of programmatic method acting, whereby some nifty AI techniques try to take on the role of a human, despite being nowhere near human capacities and performing well outside their league.

The well-known actor and acting teacher Sanford Meisner, creator of the Meisner technique, famously said, “Acting is behaving truthfully under imaginary circumstances.”

I believe that we want AI that behaves truthfully under real-world circumstances.

Cut and print, that’s a wrap!
