AI Ethics And That Viral Story Of The Chess Playing Robot That Broke The Finger Of A Seven-Year-Old During A Heated Chess Match Proffers Spellbinding Autonomous Systems Lessons

You almost assuredly know that the world gets intrigued by those eye-catching, eyebrow-raising, man-bites-dog types of stories. Well, you can now add chess-playing-robot-breaks-finger-of-child as a notably stirring tale of a similar kind, one that provokes intense curiosity and fascination in us all.

Here’s the scoop.

The news and social media seem to be abuzz about a recent globally rousing “incident” in which a seven-year-old boy got his finger busted by a chess-playing robotic arm that gripped his finger erroneously (we assume). The boy is said to be okay overall, and the fracture was medically treated and is healing.

A video is online that showcases briefly what occurred.

I am going to walk you through the details. My aim is to then bring up some AI Ethics insights and lessons learned that can be gleaned from the matter. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.

First, some unpacking about what took place.

In context, the robotic arm was supposed to grasp various chess pieces on a chessboard, doing so one at a time and as commensurate with playing a normal chess game. The arm would swivel into a given position and a gripper at the end of the arm would then take hold of a chess piece. The gripper then lifts the chess piece and places it elsewhere on the board or possibly off the board, depending upon the chess-playing circumstances. To go ahead and place the chess piece down onto the board or a nearby surface, the gripper releases the piece accordingly.

All is well and good overall.

In this specific occurrence, here is what seems to be shown on the video depicting the finger-harming moment. First, the robotic arm grips a chess piece and moves it over to a small container, depositing the piece into a petite bucket intended to collect pieces that are no longer supposed to be on the chessboard. Meanwhile, the child seated at the chessboard, directly across from the robotic arm (you would say they are opponents in this chess match), is moving his arm and hand across the chessboard on his side of the board.

Two things happen next, nearly simultaneously. The boy appears to reach for a chess piece. The robotic arm appears to reach for perhaps the exact same piece. The robotic arm grasps the finger of the child, which we might assume is happening erroneously in that the gripper was presumably supposed to grab hold of the chess piece instead.

For reasons not yet explained, the gripper holds tightly onto the finger and won’t seem to let go. Almost a split second later, several adults standing adjacent to the brewing situation are quick to reach into the now scary setting and try to extract the boy’s finger from the gripper. This seems to be arduous to do, and thus we might assume that the gripper is just not giving up on the gripping action (the adults rushing into the scene tend to block the view of the camera, so the video does not make visually evident precisely what happened).

They get the boy free of the gripper and move him away from the table.

One aspect worth noting in the video is that the first nearby adult to try to aid the boy is seemingly operating an iPad-like device and perhaps had some capability to electronically control the robotic arm. I say this because this particular adult stops trying to physically aid the boy, allowing another nearby adult to do so, and returns to the screen of the device. This suggests that the person was hopeful that they could disengage the gripper via an electronic command (else, one must ask, why would the adult not continue trying to directly, hands-on, rescue the boy while his finger was being tightly gripped and harmed).

The video is somewhat convoluted due to the chaos underway, such that we cannot immediately discern whether the gripper was electronically commanded to release and did so, or whether the electronic command was not heeded and the boy, via adult assistance, had to wrench his finger from the gripper. I would also point out that we do not know for sure that the iPad-like device was being used to signal the gripper at all, nor, even if the device had that capability, whether it could be invoked quickly enough compared to the tugging effort to extract the boy’s finger.

Per reporting about the newsworthy tussle, this took place at the Moscow Open last week. A representative of the Russian Chess Federation was quoted as indicating that the boy should have waited for the robotic arm to complete its move. The boy is alleged to have violated prescribed safety rules. Another representative said that the boy rushed on his move, failing to give sufficient time for the robotic arm to take its turn, and ergo the robotic arm grabbed the boy’s finger.

It was reported that the boy was apparently not traumatized by the actions. Indeed, it is said that the boy continued in the tournament the next day and was able to finish his chess matches, though he apparently was not able to record his own chess moves and relied upon volunteers to do so (it is customary during chess matches that each player writes down their moves, doing so to keep a personal record and, to some extent, perhaps also to mentally aid the player as they note what moves they’ve played).

Representatives of the chess match seemed to insist that this was extremely rare, or that they had never witnessed such a happening in some 15 years of using such robotic arm chess-playing systems. Reportedly, the parents were going to contact the public prosecutor’s office. You might find of interest that the chess-playing robotic arm seemed to be in use throughout the rest of the chess tournament. The implication was that the robotic arm was deemed safe by the tournament organizers, as long as the players such as the children were dutifully cautious and performed their actions in accordance with the prescribed rules.

The seconds-long event has gone viral as a news story.

Some media influencers went with headlines that robots are on the verge of taking over. Headline-seeking bloggers fretted anxiously that this is an ominous foretelling of the future of robots and how they will summarily crush humanity. Reporting by others was more matter-of-fact. Given that the child seems to have come out of this okay, a bit of humor has also been infused into the coverage of the matter. For example, an outspoken media pundit in the US proffered a short rhyme or ditty, stating basically that if you say yes to playing robotic chess, a languid linger might cost you a finger.

Now that we are all on the same table as to what took place (a bit of a pun, sorry), let’s do a deep dive into what we can discern about Artificial Intelligence (AI), autonomous systems, and AI Ethics from this chess-playing lesson.

We can learn from the school of hard knocks, one might say.

Is It Safe?

Those of you that are movie buffs might remember the famous line in Marathon Man wherein the question of whether it is safe to proceed keeps getting asked. Over and again, some of the main characters in the now-classic film wonder whether it is safe.

Is this robotic arm safe?

The tournament officials seemed to think it was safe, or shall we say safe enough.

The viewpoint appeared to be that since the robotic arm had seemingly rarely or maybe never before done this, and since the child was allegedly at fault, things were relatively safe when playing amidst the robotic arm. As long as the child or shall we say the “end-user” of the system obeyed certain rules, the odds supposedly are that no injury will result.

Relying on the end-user to avoid prodding or prompting the robotic arm into causing harm is a bit of an eye-rolling premise. Take, for example, usage by children. Children are children, naturally so. Whether a child will strictly and always abide by some adult-established rules while within the grasp of the robotic arm seems a nervy proposition. Even if only adults were utilizing the robotic arm, this still seems shaky, since a mere inadvertent veering physical action by the adult could produce adverse consequences.

We also need to realize that AI that might initially be envisioned as being used by adults will possibly, later on, be used by children. Thus, assumptions that adults are going to do responsible things when using AI are doubly mistaken. First, the adult might not do so. Second, a child might use the AI in lieu of an adult. For my recent analysis of how Alexa dangerously misled a young girl into almost putting a penny into an electrical socket, see the link here.

Some ardent AI developers especially like to play this kind of engineering you-lose game, as it were.

They design a system whereby the end-user has to be gingerly mindful, else the AI will go astray. We might be willing to accept this premise if the consequences are mundane. For example, if you are filling in an online ordering form and mistakenly choose purple socks instead of your desired yellow socks, the outcome might be mildly disturbing in that you get the “wrong” socks delivered to you. In contrast, moving your arm or hand across a chessboard and having a robotic arm grab your finger and nearly crush it, that’s a you-lose of a different caliber altogether.

The shoe ought to be on the other foot, so to speak; namely, the onus ought to be on the robotic arm.

The robotic arm should be crafted in such a fashion that it is unable to perform these kinds of human-harming acts. Extensive testing should be required. Furthermore, better still, a mathematically provable and verifiable system should be used; see my coverage of AI safety at the link here.

We can also strenuously question why the robotic arm and the human are within direct reach of each other. In most well-devised manufacturing facilities that use robotic arms, an active robotic arm is never within actual reach of a human. There are usually barriers that separate the two. If there is an absolute need to have the human and the robotic arm within reach of each other, and no other recourse can be devised, all sorts of secondary safety precautions are warranted.

In this chess-playing scenario, one such approach might be that the robotic arm is programmed to never reach anywhere within the scope of the chessboard unless there is no human appendage present. Thus, the robotic arm and the boy’s arm or hand would presumably not be able to end up in contention with each other. At any given point in time, either there is a human appendage roving over the chessboard, and the robotic arm is parked or positioned away from the board, or the robotic arm is hovering over the chessboard and the human is not doing so.

This obviously raises the question of what to do if the human opts to nonetheless put their arm or hand somewhere atop the chessboard when the robotic arm has already committed to doing the same. In that case, the robotic arm ought to be programmed to go into a mode of either immediate stoppage or some akin risk-reducing positioning. You might want to have the robotic arm swivel away from the board, but that too could produce potential collisions with the human, thus various options need to be considered as part of the AI programming.
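
To make this mutual-exclusion idea a bit more tangible, here is a minimal sketch in Python of what such an interlock might look like. To be clear, the states, sensor inputs, and control policy below are my own illustrative assumptions and are not drawn from any actual chess-robot codebase.

```python
# Hypothetical safety interlock sketch: the robotic arm only enters the board
# area when no human appendage is detected, and halts if one appears mid-move.
# All sensor and state names here are illustrative assumptions.

from enum import Enum, auto

class ArmState(Enum):
    PARKED = auto()   # arm is away from the board
    MOVING = auto()   # arm is over the board, executing a move
    HALTED = auto()   # emergency stop / risk-reducing posture

def step(arm_state: ArmState, human_over_board: bool, move_pending: bool) -> ArmState:
    """One control-loop tick of the mutual-exclusion policy."""
    if arm_state == ArmState.PARKED:
        # Only start a move when the board is clear of human hands.
        return ArmState.MOVING if (move_pending and not human_over_board) else ArmState.PARKED
    if arm_state == ArmState.MOVING:
        # If a hand appears while the arm is committed, stop immediately rather
        # than continuing or swinging away (which could also cause a collision).
        return ArmState.HALTED if human_over_board else ArmState.MOVING
    # HALTED: stay halted until the board is clear again, then re-park.
    return ArmState.PARKED if not human_over_board else ArmState.HALTED

print(step(ArmState.MOVING, human_over_board=True, move_pending=False))  # ArmState.HALTED
```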

I’ve extensively covered autonomous systems such as autonomous vehicles and especially self-driving cars; see my coverage at the link here. One of the most challenging elements for any AI driving system is how to enter into a Minimum Risk Condition (MRC), such that the autonomous system tries to find a means to curtail its actions without causing any added potential harm. If a self-driving car gets into trouble while in the middle of the road and underway, should the AI try to drive the vehicle to the side of the road and park there?

You might assume that this is the best course of action. Suppose though that there is no side of the road per se or that the side is at the edge of a sheer cliff. Or that other cars or pedestrians are at the side of the road. Maybe the AI should just come to a halt in the middle of the roadway. Well, that too can be problematic. Other human-driven cars coming along might ram into that halted self-driving car.
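
Here is a simplified, purely illustrative sketch of how an AI driving system might weigh such fallback maneuvers when seeking a Minimum Risk Condition. The candidate maneuvers and the risk numbers are made-up placeholders, not anyone’s actual implementation.

```python
# Illustrative (not production) sketch of picking a Minimum Risk Condition:
# each candidate maneuver gets a rough, hypothetical harm estimate given the
# current surroundings, and the lowest-risk option is chosen.

def choose_mrc(shoulder_exists: bool, shoulder_occupied: bool,
               trailing_traffic_density: float) -> str:
    candidates = {}
    if shoulder_exists and not shoulder_occupied:
        candidates["pull_to_shoulder"] = 0.1          # assumed low residual risk
    # Stopping in the lane risks being rear-ended; scale with trailing traffic.
    candidates["stop_in_lane"] = 0.3 + 0.6 * trailing_traffic_density
    # Continuing at reduced speed may beat stopping dead in dense traffic.
    candidates["slow_and_continue"] = 0.4
    return min(candidates, key=candidates.get)

# e.g. no usable shoulder and heavy trailing traffic -> slow_and_continue
print(choose_mrc(shoulder_exists=False, shoulder_occupied=False,
                 trailing_traffic_density=0.8))
```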

Another facet that sometimes shocks those that aren’t familiar with AI real-time systems is the notion of safeness on a relative basis.

Allow me to briefly elaborate.

A self-driving car is coming up to a busy intersection. The self-driving car has a green light at the intersection and legally is able to proceed unabated. A human-driven car is nearing the intersection from a crossroad. This car is facing a red light. But the human driver is not slowing down.

What should the AI driving system do?

Legally, the AI can keep going and not bring the self-driving car to a stop since the green light is up ahead. On the other hand, and I’m sure that you’ve experienced this too, it might be sensible to slow down in anticipation that the other car is going to violate the red light and enter illegally into the intersection. The problem, though, is that if the AI slows down the self-driving car, other human-driven cars behind it might get irked and smack into the self-driving car.

This is the proverbial being between a rock and a hard place.

For our purposes herein, the question arises as to what is the safest thing to do. Notice that there is not necessarily a precise or clear-cut answer to that question. Safety is sometimes a relative factor. Also, safety typically encompasses probabilities and uncertainties. What is the chance that the human driver is going to zip into the intersection and violate the red light? What are the chances that if the AI slows down the self-driving car that a human-driven car behind will ram into the autonomous vehicle? How are we to reconcile these differing calculations?
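
A back-of-the-envelope sketch makes the point. The probabilities and harm weights below are invented solely to show that “safest” ends up being a comparison of expected harms rather than a single fixed answer.

```python
# Purely illustrative numbers for the intersection dilemma described above.

p_red_light_runner = 0.05   # assumed chance the cross-traffic car runs the red light
p_rear_end_if_slow = 0.02   # assumed chance a trailing car hits us if we brake
harm_side_impact = 100.0    # relative severity of a side-impact collision
harm_rear_end = 10.0        # relative severity of being rear-ended

expected_harm_proceed = p_red_light_runner * harm_side_impact   # 5.0
expected_harm_slow = p_rear_end_if_slow * harm_rear_end         # 0.2

print("proceed:", expected_harm_proceed, "slow down:", expected_harm_slow)
# With these (assumed) numbers, slowing down has the lower expected harm,
# but different estimates of the probabilities can flip the answer.
```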

Going back to the chess-playing robotic arm, you need to consider whether the robotic arm is “safe enough” to justify its use. A chess tournament does not need to use a robotic arm to move the pieces. A human could do the movement of the pieces, based on an AI chess-playing system that merely displayed or spoke aloud the moves to be made.

You would seem hard-pressed to justify the robotic arm for a chess tournament in this setting, other than that it is a novelty and perhaps generates greater interest in the tournament. What though is the tradeoff between the added novelty versus the safety of the robotic arm?

In the case of AI-based self-driving cars, the hope is that the use of AI driving systems will reduce significantly the number of car crashes that occur. AI driving systems do not drink and drive. They don’t get sleepy. In the United States alone, there are currently about 40,000 annual human fatalities due to car crashes, and perhaps 2.5 million injuries, see my analysis of the stats at the link here. The belief is that the advent of self-driving cars will save lives. In addition, other benefits are sought such as enabling mobility-for-all by radically lowering the cost of using automotive vehicles for transportation needs.

How safe do AI self-driving cars need to be for us to be willing to have them on our public roadways?

A lot of debate is heatedly occurring about that. As they say, it is complicated (see my discussion at the link here).

Anyway, it seems that this chess-playing robotic arm does not contain or embody contemporary safety precautions, and we can openly question whether the robotic arm as currently constituted is reasonably “safe enough” to be used in this context.

What’s With That Grip?

Imagine that you tell someone that you are going to grip their finger.

They cautiously extend their finger out for you to grip their delicate appendage. The person likely assumes that you won’t use excessive pressure. If you start to go overboard and squeeze hard on the finger, they will almost undoubtedly ask you to stop doing so. One supposes that if needed, the person will yell loudly, frantically attempt to pull back their finger, and possibly strike at you to force you to release their finger.

How does this compare to the chess-playing robotic arm incident?

The robotic arm and its robotic “hand” gripped the finger of the child. At that juncture, we would of course have wished that the gripper didn’t do any gripping at all. In other words, the gripper shouldn’t be gripping human fingers. Gripping chess pieces is okay. Gripping human fingers or any other human appendages is not okay.

So, the first thing that should have happened is that the robotic arm and the gripper should not have gripped the child’s finger. Period, end of story. Various sensors could have been included on the robotic arm (or perhaps mounted on some accompanying devices) to detect that the finger was not a chess piece. I am not saying that these are necessarily perfect precautions, but at least it would have been one layer of potential added precaution.
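
To illustrate what such a pre-grip check might look like, here is a tiny hypothetical sketch; the sensor fields and thresholds are assumptions for discussion only, not the specifications of any real gripper.

```python
# Hypothetical pre-grip check: before closing the gripper, compare what the
# sensors report against an expected chess-piece profile. Fields and
# thresholds are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class GripTarget:
    width_mm: float
    temperature_c: float
    matches_piece_shape: bool   # e.g. from an assumed camera-based classifier

def looks_like_chess_piece(t: GripTarget) -> bool:
    return (
        t.matches_piece_shape
        and t.width_mm <= 40.0       # assumed maximum piece diameter
        and t.temperature_c < 30.0   # a warm object suggests skin, not wood
    )

def maybe_grip(t: GripTarget) -> str:
    return "grip" if looks_like_chess_piece(t) else "abort and retract"

print(maybe_grip(GripTarget(width_mm=15.0, temperature_c=33.0,
                            matches_piece_shape=False)))  # abort and retract
```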

I assume that no such provision was included on this particular brand and model of this specific robotic arm. If it was there, it sure didn’t seem to function. That would be another hefty concern.

Let’s assume that for whatever reason, the robotic arm and the gripper do opt to grip the child’s finger. As I say, this shouldn’t have happened to start with, but we’ll go with the erroneous flow anyway.

The gripper should have been devised with a pressure sensing capability. This would potentially provide feedback that the thing being squeezed is not the same consistency as a chess piece. Upon a certain threshold, the AI running the gripper ought to then do an automatic release under the assumption that something other than a chess piece is being grasped. This could cover all manner of other objects or living elements that might improperly come under its grasping grippers.
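
A minimal sketch of the pressure-threshold release idea might look like the following, assuming the gripper exposes a force reading to its controller; the numbers are placeholders rather than real specifications.

```python
# Minimal sketch of a pressure-threshold auto-release. The force value and
# threshold are assumed placeholders, not actual gripper specifications.

MAX_EXPECTED_FORCE_N = 5.0   # assumed force needed to hold a wooden chess piece

def gripper_tick(measured_force_n: float, gripping: bool) -> str:
    """Called on every control cycle while the gripper is closing or closed."""
    if gripping and measured_force_n > MAX_EXPECTED_FORCE_N:
        # Whatever is being squeezed resists more than a chess piece should:
        # release immediately rather than continuing to close.
        return "release"
    return "hold"

print(gripper_tick(measured_force_n=12.0, gripping=True))  # release
```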

We can assume that no such safety feature was included. If it was included, it either malfunctioned or was not tuned appropriately for this kind of use case.

On a related tangent, modern robotic grippers usually also have a failsafe mechanism that can be pushed or pulled to cause an immediate release of the gripper. In this manner, the child could potentially have reached up and triggered a release, or one of the adults trying to extract the child could have done likewise. Perhaps this feature does not exist on this particular gripper. Another possibility is that the release mechanism exists, but no one there knew of it or knew how to invoke it. Lack of training. Lack of a sense of ownership. Lack of a manual for guidance. Lack of a design that makes this feature obvious and readily usable. Etc.

Moving on, I dare say that some AI developers might never have considered that a child or even an adult would put their finger in the same place that the gripper is going to grip. This might not have been on their list of considerations. Maybe the base assumption was that nothing other than a chess piece would ever become gripped. That solves any dilemma about what else to deal with. Namely, there aren’t any other concerns to be had.

Or perhaps the thought was that something other than a chess piece might inadvertently get gripped, but it would be some inanimate object and the consequences would be minor. For example, a mug is placed on the chessboard and the gripper grabs the mug. The mug might survive unscathed. The mug might get damaged or broken, but heck, that’s not a worry for the maker of the robotic arm and the gripper. Whoever foolishly put their mug in the way is the dolt in that scenario, the assumption goes.

Another possibility is that the AI developers did consider some possibilities that they rated as extreme or unusual cases. These are often referred to as edge cases. The idea is that these are circumstances that will rarely occur. The question then comes up as to whether it is worth the AI development effort to deal with them. If you are under high pressure to get the AI system out the door and into use, you might opt to place those edge cases on a future to-do list and figure that you will cross that bridge later on.

Some are worried that the same inclination is taking place with today’s self-driving cars that are being “rushed” out into public roadway use. Edge cases are perhaps being delayed for attention. That might work out, it might not. See my discussion at the link here.

One issue is that your notion of an edge case might be radically different from someone else’s. Is a dog that runs suddenly into the street an edge case for self-driving cars? I would believe that most of us would insist that this is decidedly not an edge case. It happens often and the potential for severe outcomes is high. What about a deer that runs out into the street? We probably are less likely to say that this is on par with the darting dog. What if a chicken runs out into the street?

The gist is that edge cases are squishy and fuzzy. Not everyone will necessarily agree as to what constitutes an edge case versus being part of the considered core of whatever the AI is supposed to be able to do.

Would you consider that gripping a finger is an edge case for this robotic arm?

It is possible that some AI developers might think so.

In their defense, suppose they are told that the robotic arm will never be used in a setting such as the kind used at the chess tournament. Suppose the AI developers were assured that no human would be allowed within reach of the robotic arm. If that seems like a crazy assertion, envision that we opt to use the robotic arm to make all moves during the chess tournament. A human chess player tells the robotic arm what moves the human wants to make, and the robotic arm does the rest. Meanwhile, we completely cordon off the chessboard and surround the robotic arm and the chessboard with a strong barrier.

I wanted to mention this because the usual simplistic mindset is that the AI developers must have been wrong whenever an AI system goes awry. We don’t know that for sure. It could be that what they devised was based on assumptions that seemed perfectly sensible, but that the use of the AI system went far beyond what they had been told it would be.

Look, there is lots of fingerpointing that can be done when AI goes astray. The AI developers might be at fault. The leaders or managers that led the AI system development might be at fault. The operators of the AI system might be at fault. And, yes, the end-users might also be at fault. Everyone can get a piece of that action.

I realize that you might be surprised that I am willing to also fault the end-users.

Here’s why. Suppose that the robotic arm and chessboard were behind a barrier, and then a person climbed over the barrier and got into where the robotic arm was. Though I acknowledge that this is yet another safety case for the AI, I am merely trying to point out that sometimes end-users might wildly go out of their way to put themselves in danger.

How far you need to go to deal with end-user overt acts of endangerment is both a legal and ethical question, as I’ve covered at the link here.

The bottom line about this gripper and the finger is that many stakeholders share the blame for the design and capability of the robotic arm and how it was being put into use. There are lots of heads to be held to account for this.

We can be relieved that the child’s finger was not irreparably crushed. But we don’t know if a future occasion might in fact (sadly, lamentably) lead to such an alarming and finger-losing result. Maybe so.

We also don’t know if there is a chance of a fatality, such as if the robotic arm, when swinging to move a piece off the chessboard, might strike someone in the head, especially a young child who impulsively leans in to look at the chessboard.

Are there safety precautions to prevent this?

Hopefully so.

Wake Up The AI

Some writers suggested that the AI intentionally harmed the child’s finger.

How so?

The contention is that the AI was “angry” at the child. The child had perhaps gone out of turn. Imagine that you were playing chess with someone and they didn’t let you take your turn. The other person rudely took two turns in a row. Outrageous! You would certainly be steamed, rightfully so.

Let’s make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

Be very careful of anthropomorphizing today’s AI.

All of that is to mindfully explain that the AI or robotic arm did not respond in a human-like way embodying anger toward the child. There isn’t any anger there, in the semblance of human anger as we know it.

To clarify, you could argue that we could program AI to reflect the characteristics of anger. For example, if you define anger as lashing out at someone for doing something that you perceive as wrong, this is something that we can program AI to do in a somewhat akin fashion (again, not sentient). Suppose the AI developers wrote code that would detect if the other player, assumed to be a human, skipped the rightful turn of the robotic arm. The coding could then instruct the robotic arm to reach out to the human, grab their finger, and give it a heck of a squeeze.

Is that anger?

I don’t think it is the type of anger that we customarily think of. Yes, the actions appear to be of an angry flavor. The intention of the AI though is programmatic and not of the human sentient variety. Thus, I would suggest that claiming the AI was “angry” is overstepping the line in terms of anthropomorphizing the AI.

Were this particular robotic arm and the gripper programmed to do something dastardly like this?

I shudder to think it so.

To some degree, that is why AI Ethics and Ethical AI are such crucial topics. The precepts of AI Ethics get us to remain vigilant. AI technologists can at times become preoccupied with technology, particularly the optimization of high-tech. They aren’t necessarily considering the larger societal ramifications. Having an AI Ethics mindset, applied integrally to AI development and fielding, is vital for producing appropriate AI, including (perhaps surprisingly or ironically) the assessment of how AI Ethics gets adopted by firms.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. In fact, they forewarn that if we do enact some of these AI laws, we will be undercutting the golden goose by clamping down on advances in AI that proffer immense societal advantages.

Predictability Versus Unpredictability Of AI

As mentioned earlier, apparently the tournament authorities claimed that the robotic arm and the gripper had rarely if ever done something like this (we will assume that the “this” encompasses not only finger squeezing but other adverse actions too). Supposedly nothing bad like this had occurred in some 15 years of usage of the robotic system.

Let’s assume that this is an accurate depiction of the track record of this particular robotic arm. We don’t know for sure that this is a true assertion, but we’ll for sake of discussion assume it is so.

Can you put your trust in an AI system that seemingly has a spotless track record?

I wouldn’t.

First, the rather obvious possibility is that some kind of software or systems update might have been deployed into the robotic arm. A software patch might have been posted that perhaps added some nifty new features. Or maybe a patch that dealt with deeply buried bugs that had been discovered, aiming to close them off before they appeared while in everyday use of the robotic arm and its gripper.

There might have largely been a spotless record for 14 years and say 11 months, but then a few weeks ago a new software patch was installed (we are pretending). Lo and behold, this patch did add new features. Meanwhile, it could have likewise introduced new problems. Perhaps the gripper code was impacted and no longer abided by various precautionary actions that it had done error-free for years upon years.

The Hippocratic oath is supposed to apply to AI developers, to first do no harm.

Unfortunately, this is not necessarily the case in actual practice.

There are tons of instances whereby a change made to an AI system was instrumental in causing the AI to subsequently do undesirable actions. Though such a patch might occasionally be purposefully made for that devilish outcome, most of the time the problem is the result of inadvertently or accidentally messing up some other part of the AI coding.

In the instance of self-driving cars, you might vaguely know that the use of OTA (Over-The-Air) electronic updating of the AI driving systems is something that many are praising as a crucial advantage for these autonomous vehicles. Rather than having to take your car into a repair shop when the software needs to be updated, you merely have an onboard networking component that rings up a central server and downloads the latest AI software updates. Voila, easy-peasy.
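
For flavor, here is a highly simplified, hypothetical sketch of an OTA update check. The manifest fields, the hash verification, and the naive version comparison are illustrative assumptions and not any automaker’s actual API.

```python
# Simplified, hypothetical OTA update flow: fetch a manifest, verify the
# payload, and only then install. Everything here is an assumed illustration.

import hashlib

def apply_update(current_version: str, manifest: dict, payload: bytes) -> str:
    # Naive lexicographic version comparison, fine for this illustration only.
    if manifest["version"] <= current_version:
        return "already up to date"
    # Verify the payload hash from the manifest before installing anything,
    # one small guard against corrupted or tampered downloads.
    if hashlib.sha256(payload).hexdigest() != manifest["sha256"]:
        return "rejected: hash mismatch"
    # A real system would also keep the prior image around for rollback.
    return f"installed {manifest['version']}"

manifest = {"version": "2.1.0",
            "sha256": hashlib.sha256(b"new firmware").hexdigest()}
print(apply_update("2.0.3", manifest, b"new firmware"))  # installed 2.1.0
```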

This comes with some potential downsides, sorry to say.

It could be that the OTA brings into the self-driving car some software errors or bugs that were not caught beforehand by the AI developers that otherwise created the latest updates. In that sense, the AI driving system might do things differently, and possibly wrongly, whereas before those same actions did not seem to arise. There is also the chilling possibility that a cyber hacker might use the OTA to do evil things to the autonomous vehicles, see my explanation at the link here.

For the chess-playing robotic arm, we do not immediately know what version was running at the time of the incident, nor whether it is the same version that had been running the entire time. Making a blind assumption that the robotic arm and the gripper are “good” because they have been that way for a long time is pretty much a fallacy that many people commonly fall into about AI systems in general.

I’ll hit you with another reason to doubt that an AI is going to be consistently the same all of the time.

It has to do with predictability and unpredictability.

Sit down for this disturbing remark: AI can at times be unpredictable.

We are used to believing that AI is supposed to be strictly logical and mathematically precise. As such, you might also expect that the AI will be fully predictable. We are supposed to know exactly what the AI will do. This pervasive belief in predictability is a myth. The size and complexity of modern-day AI is frequently a morass of code and data that defies being perfectly predictable. This is being seen in the Ethical AI uproars about some Machine Learning (ML) and Deep Learning (DL) uses of today; see my analysis at the link here.

Let’s also revisit the earlier chat about edge cases.

Imagine that an AI system is being used for a long time. This AI seems to be doing just fine. Out of the blue, an edge case arises that had not been previously encountered. The AI is perhaps not programmed for this edge case. The result is maybe shocking to us.

If there had never been a prior instance of a finger being in the exact same position as a chess piece, at the precise moment that the gripper was wanting to grasp that anticipated chess piece, we presumably would not have previously experienced the over-squeezing action that now took place. The edge case catches us off-guard.

Consider too that maybe this has indeed happened before, just less dramatically, in the sense that the person got their finger out of the way in the nick of time.

Let’s go with the notion that over the last 15 years, maybe there had been dozens or even hundreds of situations wherein a person had their finger nearly pinched by the gripper. The person managed to luckily withdraw their finger before it got fully caught by the gripper.

The odds are that few if any would report that they had nearly gotten stuck by the gripper. There was nothing to report per se since they extracted their finger in sufficient time to avoid any overt hardship or undue problem. They might also individually feel that it was their fault, so they decide not to tell anyone, lest they be embarrassed.

Or, if the rules were that you weren’t supposed to do that, you can well imagine that this is an even greater reason not to divulge that you did so. You don’t want to be banned or tossed out of a chess tournament as a result of having violated a rule about where your hands or fingers were. You want to win the tournament by playing chess, not get drummed out by some other seemingly arcane rule about that darned robotic arm.

There is a type of well-known human bias often referred to as survivorship bias. This frequent cognitive fallacy applies here. If the tournament authorities never hear about the various incidents of the gripper gripping unduly, the assumption is that all must have a clean bill of health. No worries. The absence of stated complaints or reports of adverse gripping implies that it isn’t happening. You would have to have the presence of mind to seek out whether this has been occurring and simply not being reported, an “extra” step that would probably not occur to most people.
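
A quick back-of-the-envelope calculation shows why a spotless complaint log can be so misleading; every number below is made up purely for illustration.

```python
# Tiny illustration of the survivorship-bias point: if near-misses are almost
# never reported, years of "no complaints" tells you very little. All numbers
# are assumed placeholders.

games_per_year = 2_000
years = 15
p_near_miss_per_game = 0.001     # assumed rate of a hand nearly being gripped
p_report_given_near_miss = 0.02  # assumed chance anyone actually reports it

expected_near_misses = games_per_year * years * p_near_miss_per_game      # ~30
expected_reports = expected_near_misses * p_report_given_near_miss        # ~0.6

print(round(expected_near_misses), round(expected_reports, 2))
# Dozens of near-misses, yet fewer than one expected report: a clean
# complaint log is weak evidence that nothing risky ever happened.
```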

Conclusion

Speaking of finding things that were not necessarily obvious to the naked eye, I hope that my close examination of this rousing case of the chess-playing robotic arm has revealed to you some under-the-hood revelations that have piqued your interest in AI, autonomous systems, and AI Ethics.

A typical list of AI Ethics principles contains these types of factors (see the link here for more of them):

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Etc.

I’ve lightly touched upon several of those Ethical AI precepts during my assessment of the finger-harming chess play. For those of you that have a smattering of interest in AI Ethics, you might want to mull over the robotic arm story in light of the above-listed AI Ethics principles and see if you can apply them further to this particularly newsy tale.

There is plenty more grist there for the AI Ethics mill in this happenstance.

A final closing statement for now.

One thing that didn’t seem to be announced is whether the seven-year-old won that chess game. Maybe the AI was behind and desperately sought a win by opting to distract and unarm the human opponent. If the game was called as a tie or considered uncounted as a result of the incident, this seems unfair.

Score one for the chess-playing kid that will always have the chess boasting story of how as a youth he got international press due to an AI that couldn’t compete and had to brazenly cheat its way out of losing.

That dishonest anger-fed charlatan rip-off artist chess-cheating AI.
