MIT Researchers Discover Whether We Feel Empathy For Robots


Research shows that people anthropomorphize robots (that is to say, they attribute human forms or personalities to them). Kate Darling of MIT, a rising star in robotics law and policy, and her colleagues Palash Nandy and Cynthia Breazeal conducted a research study whose preliminary results (specifically, the implications the experiment has for law and public policy) have been posted as a draft here. The experiment itself has not yet been published, but the policy paper is fascinating. The paper explains how the experimenters found that anthropomorphic framing can affect how we think about robots, can alter human behavior, and may even shape public policy.

The paper begins by noting:

We were ... interested in evaluating framing as an alternative mechanism to influence anthropomorphism. In our experiment, we observed people who were asked to strike a bug-like robot with a mallet under different conditions. Participants hesitated significantly more to strike the robot when it was introduced through anthropomorphic framing (such as a name or backstory). In order to help rule out hesitation for reasons other than anthropomorphism, we measured the participants’ psychological trait empathy and found a strong relationship between tendency for empathic concern and hesitation to strike robots with anthropomorphic framing.

The experimenters took bug-like robots (Hexbug Nanos) and asked 29 participants to observe the Nano and then strike it with a mallet. The experimenters timed the participants’ relative hesitation to strike the Hexbug and also recorded participants’ self-assessments of their hesitation to strike, reasons for hesitation, perception of the robot, and emotional affectedness.
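For the technically curious, here is a minimal sketch of the kind of analysis this setup implies: comparing hesitation times between framing conditions, and relating trait-empathy scores to hesitation. It assumes Python with SciPy, and every number and variable name below is a hypothetical placeholder rather than the study's data; the paper's actual statistical methods may differ.

# A minimal sketch, not the study's analysis. All data are hypothetical.
from scipy import stats

# Hypothetical hesitation times (in seconds), one per participant.
anthropomorphic = [4.2, 5.1, 3.8, 6.0, 4.7]      # name-and-backstory framing
non_anthropomorphic = [1.1, 2.0, 1.5, 0.9, 1.8]  # object framing

# Did participants in the anthropomorphic condition hesitate longer on average?
t_stat, p_value = stats.ttest_ind(anthropomorphic, non_anthropomorphic)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Hypothetical trait-empathy scores for the anthropomorphic-framing group,
# paired with their hesitation times above.
empathy_scores = [22, 27, 19, 30, 24]
r, p = stats.pearsonr(empathy_scores, anthropomorphic)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")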

The experimenters created two different framings through narrative. In the first narrative the Hexbug had a name and personified backstory (e.g. “This is Frank, he’s lived at the Lab for a few months now. He likes to play,” etc.). In the other narrative the Hexbug was described as a nonpersonified object, but with a backstory that lent itself to anthropomorphic projection (e.g. “This object has been at the Lab for a few months now. It gets around but doesn’t go too far. Last week, though, it got out of the building,” etc.).

The experimenters found that participants hesitated significantly longer to strike the Hexbug in the anthropomorphic framing conditions than in the non-anthropomorphic framing conditions. Many participants’ verbal and physical reactions in the experiment were indicative of empathy (asking, for example, “Will it hurt him?”, or muttering “it’s just a bug, it’s just a bug” under their breath as they visibly steeled themselves to strike “Frank”).

One question in the post-experiment survey asked participants to describe in their own words why they hesitated. Many participants used empathic terms to explain their hesitation, for example: “I had sympathy with him after reading his profile because I am also here in the Lab for a few months.”

It’s both interesting and important that the robot’s design in the study didn’t change; what changed was the framing. As the study notes in reviewing the literature, researchers have already shown that people anthropomorphize artificial entities, even when they know the projection is not real. Prior studies have shown that people treat computers and virtual characters as social actors, and robots tend to amplify this social-actor projection because of their embodiment and physical movement. Social robots are specifically designed to be embodied, but people will also anthropomorphize non-social robots.

What we’ve learned from the study is that social framing in media, law, or regulation may affect what we think about robots. In the words of the authors: “our results confirm that framing can have an impact on people’s reactions to robots. The findings also show a more pronounced effect of framing for participants with higher capacity for empathic concern. This suggests that anthropomorphic framing activates people’s empathy.”

As human-robot partnerships become more common, designers and users will need to decide whether a robot should be framed anthropomorphically. In some instances, Darling notes, anthropomorphism might be undesirable because it can impede efficient use of the technology. The paper uses the example of a mine-clearing military robot that a unit stopped using because it seemed “inhumane” to put the robot at risk in that way.

In other circumstances, though, it might make sense for the robot to take on human characteristics. Darling states, “[a]necdotes from workplace and household integration indicate there might be reason to encourage the anthropomorphic perception of certain robots.” A great anecdote from the paper is this:

The CEO and employees of a company that develops and deploys hospital robots to deliver medicine tell stories of hospital staff being friendlier towards robots that have been given human names. Even people’s tolerance for malfunction is allegedly higher with anthropomorphic framing (“Oh, Betsy made a mistake!” vs. “This stupid machine doesn’t work!”). The company has recently begun to ship their square-shaped, non-anthropomorphically designed hospital delivery robots with individual (human) names, attached to the robot like a license plate.

As robots become a more important part of our lives, Darling states, “To preserve the advantages and future potential of robots, as well as facilitate the adoption of beneficial robotic technology, we should consider distinguishing between those robots whose use is hindered by anthropomorphic framing, and those whose use is enhanced by it. Framing could be a helpful tool in effecting this difference.”

She continues:

it makes sense to distinguish between use cases where we want to encourage anthropomorphism, and cases in which we do not. Where anthropomorphic projection diminishes the main function of the robot, this can cause serious problems. For robots that are not inherently social in design, nor enhanced through social interaction, we should consider discouraging anthropomorphism using every tool at our disposal. Rather than viewing science fictional narratives and personification as harmless fun, those building or implementing robotic technology should be aware of framing effects. While the lifelike movement of robots also encourages projection, it may be a more difficult factor to adjust, because movement is often central to the functionality of the robot.

For example, companies like Boston Dynamics are building military robots that mimic animal-like movement and physiology, because animals have evolved into structures that happen to be incredibly efficient for mobility in our world. Even when military robots are made less animal-like, they still need to move around in some form or another. Focusing also on framing by objectifying robots in language (“it”) and encouraging names such as “MX model 96283” instead of “Spot” will probably not make anthropomorphism disappear completely, but it may have a helpful effect. There is the other case, however, where anthropomorphic projection enhances the acceptance and use of robots, as well as the case where it directly supports the main function of the robot (social robot technology). These cases should be separated from the above at every level, from design to deployment, and can even be separated at a regulatory and legal level.

There are so many interesting ideas for design, law, and policy embedded in this paper that I highly recommend reading the full piece, which is available for download here.
