Character & Context

The Science of Who We Are and How We Relate
Editors: Mark Leary, Shira Gabriel, Brett Pelham
Mar 02, 2018

How Do Robots and Humans Interact?

[Image: robot in foreground, blurred woman in background]

To what extent do people identify with or against robots? Can we take a robot's perspective? Do we see robots as moral beings?

Xuan Zhao, who studies perspective taking, empathy, and prosocial behaviors, launched the session by highlighting the theoretical and practical relevance of examining human-robot interaction.

Zhao argued that studying robots provides "a great opportunity for us to understand what makes us fundamentally human — and perhaps what isn't so unique about humans." Her research with Bertram Malle at Brown University examines the appeal of human resemblance.

Zhao said, “Previous research at the intersection of psychology, robotics, and design found that robots with more human-like appearance and movements are often evaluated as having more mind, intelligence, and positive human characteristics.

However, we still know fairly little about the deeper psychological mechanisms triggered by human-like appearance.” Thus, Zhao studied how likely people are to see the world from the vantage points of a variety of different agents: human agents, a highly human-like android robot, a highly human-like mannequin, two moderately human-like robots, a non-human-like mechanical robot, and a cat.

Over seven studies, Zhao found that the more an agent appears human-like, the more people consider and adopt its perceptual point of view. Even when participants were made to believe that the human-like robot was simply a mannequin, they still took its perspective, despite knowing it had no mind.

Thus, human-like appearance is a strong cue that directly triggers visual perspective taking. Visual perspective taking may then help people understand robots, predict their behavior, and interact with them in a shared space.

If we can take a robot’s perspective based on appearance, how do we categorize ourselves in a world of robots? Kurt Gray spoke on people’s perceptions of robot workers and how those perceptions affected individuals’ perceptions of other people.

Gray and his team, Joshua Jackson and Noah Costello, had participants rate their anxiety about the rise of robot workers, and then rate their anxiety about other human groups, such as people of a different race, gender, or religion.

Gray found that people were more likely to place themselves in the same category as humans who were different from them when they reported more anxiety about the rise of robot workers. He posited that humans were perhaps uniting against the superordinate threat of robot workers.

Can robots make moral decisions?

Bertram Malle spoke on “evaluating the morality of artificial agents’ decisions.” Malle wrote a hypothetical scenario in which a drone, an artificial intelligence, or a human pilot had to decide whether to launch or cancel a strike that was likely to prevent terrorists from killing civilians but also had an 80% chance of killing a civilian child. The agent was given permission by military command to launch the strike and then either launched or canceled it. After reading the scenario, participants evaluated this decision on a sliding scale of blame, from no blame at all to the most blame possible.

Participants were then asked, “Why do you feel the [agent] deserves this amount of blame?” Between 25% and 50% of participants denied that either machine, the AI or the drone, had moral agency and thus could not be blamed at all. Breaking the results down by agent and by decision, Malle said, “The human received significantly more blame for canceling than launching the strike, whereas artificial agents received equal amounts of blame.”

Malle hypothesized that humans, but not artificial agents, are seen as embedded in the command structure and therefore received less blame for launching (justified by commanders’ recommendation) and more blame for canceling (going against the recommendation).

Thus, the human pilot was judged within a chain of command. In a second study, when the justifications from the command structure were removed from the scenario, humans and artificial agents received similar amounts of blame.


Written By: Elisa Rapadas

Presentation: Merging Psychology and Robotics: Evidence for How Humans Perceive Robots, symposium held Friday, March 2, 2018.

Speakers: Dr. Xuan Zhao, University of Chicago Booth School, Dr. Melissa Ferguson, Cornell University, Dr. Kurt Gray, UNC Chapel Hill, Dr. Bertram Malle, Brown University


