Error, error —

When robots screw up, how can they regain human trust?

New study shows tactics robots can use to redeem themselves in the workplace.

A toy robot with sparkler hands. Today, in robots-and-fireworks news.

Establishing human-robot harmony in the workplace isn't always easy. Beyond the common fear of automation taking human jobs, robots sometimes simply mess up. When this happens, reestablishing trust between robots and their human colleagues can be a tricky affair.

However, new research sheds some light on how automated workers can restore confidence. Broadly, the study suggests that humans have an easier time trusting a robot that makes a mistake if the machine looks somewhat human and offers some kind of explanation, according to Lionel Robert, an associate professor at the University of Michigan's School of Information.

When robots mess up

Even though robots are made of metal and plastic, Robert said we need to start thinking about our interactions with them in social terms, particularly if we want humans to trust and rely on their automated co-workers. "Humans mess up and are able to keep working together," he told Ars.

To study how robot workers can regain trust after similar errors, Robert and Connor Esterwood, a post-doctoral researcher at the U of M, recruited 164 online participants through Amazon Mechanical Turk. The researchers then set up a simulation, somewhat like a video game, in which participants worked with a robot loading boxes onto a conveyor belt. The robot picked up the boxes, while the humans reviewed the numbers on the boxes and made sure they were correct. Half of the participants worked with a human-like robot, while the others worked with a robotic arm.

The digital robots were programmed to periodically pick up the wrong boxes and then issue a statement designed to re-establish trust with their human colleagues. These statements were broken down into four categories: apology, denial, explanation, and promise.

After the experiments, the participants filled out a questionnaire about their experience. Part of the questionnaire was designed to measure how effective each trust-repair strategy was along three dimensions: ability (can the robot do the task?), integrity (will the robot do what it says it will?), and benevolence (does the robot have the human's interests at heart?).

Robert noted that previous research has examined human-robot trust, but this study differs in that it included the explanation strategy and distinguished between human-like and nonhuman-like robots. It also broke the concept of trust down into the three dimensions above. Once all the questionnaires were filled out, Robert and Esterwood compiled and analyzed the data.

Apologetic automatons

The team found that the ways robots attempted to regain trust affected the three dimensions differently, as did the form the robots took. Overall, respondents reported that it was easier to trust the eerily anthropomorphic robot after it messed up. This was particularly true when the human-like robot used the explanation option, which was especially effective at convincing humans of the robot's integrity. The human-like robot also had an easier time restoring benevolence when offering apologies, denials, and explanations.

Explaining an error might work better for human-like robots because it removes some of the ambiguity around how the robot operates. We don't even wholly understand how human beings work, Esterwood said. "A smart dishwasher, we think we understand. A human-like robot, we might not be totally able to sleuth out," he told Ars. As such, an explanation might be seen as more transparent when it comes from an agent that appears more complex than an arm on a box.

But the robot arm had an easier time restoring trust in some cases. For example, the faceless automaton was better at restoring integrity and benevolence with promises than its human-mimicking kin.

Going forward, Robert and Esterwood hope to expand this research. They had initially planned to run the study in person using virtual reality, but the pandemic made that impossible. They also hope to examine how different combinations of trust-repair strategies might work, an explanation paired with an apology, for instance.

Getting the most from your machines

According to Robert, robots will increasingly be deployed in the workforce, so workers will need to be able to trust them. Further, he noted, some workplaces might want to deploy robots that learn, a process that inevitably involves making errors.

If a worker can't trust a robotic colleague or feels uncomfortable around it, those feelings can create stress and harm the worker's well-being, leaving them less happy and less effective overall. At an extreme, that could mean constantly double-checking the robot's work or getting rid of it entirely. Another risk: "You just put them in the corner and ignore them," Esterwood said.

However, according to Kasper Hald, a post-doctoral researcher at Aalborg University's Department of Architecture, Design, and Media Technology, keeping human-robot trust at an appropriate level matters more than simply maximizing it; blind trust carries risks of its own.

Say, for example, you work in a meat-packing plant alongside a robot that assists with the more strenuous and repetitive tasks, the kind that can lead to musculoskeletal disorders later in life. The machine is powerful and works quickly. If you trust it too little, you might hesitate to use it. But if you trust it too much, you might get too close or stop paying attention to its position relative to your hands. That could put workers in a risky situation.

"Especially for robots in the workplace, it's as much about keeping an appropriate level of trust—not just maintaining trust, but maintaining it at an appropriate level," Hald told Ars.

DOI: Deep Blue, 2021. 10.7302/1675 (About DOIs).
