Charlotte Foster
Technology

Beware the robot bearing gifts

In a future filled with robots, those that pretend to be your friend could be more manipulative than those that exert authority, suggests a new study published in Science Robotics.

As robots become more common in fields such as education, healthcare and security, it is essential to predict what the relationship between humans and robots will be.

Overview of authority HRI study conditions, setup, and robot behaviors. Credit: Autonomous Systems and Biomechatronics Lab, University of Toronto.

In the study, led by Shane Saunderson and Goldie Nejat of the University of Toronto, Canada, researchers programmed a robot called Pepper to influence humans completing attention and memory tasks, by acting either as a friend or an authority figure.

They found that people were more comfortable with, and more persuaded by, friendly Pepper.

Authoritative Pepper was described by participants as “inhuman,” “creepy,” and giving off an “uncanny valley vibe”.

“As it stands, the public has little available education or general awareness of the persuasive potential of social robots, and yet institutions such as banks or restaurants can use them in financially charged situations, without any oversight and only minimal direction from the field,” writes James Young, a computer scientist from the University of Manitoba, Canada, in a related Focus.

“Although the clumsy and error-prone social robots of today seem a far cry from this dystopian portrayal, Saunderson and Nejat demonstrate how easily a social robot can leverage rudimentary knowledge of human psychology to shape their persuasiveness.”


To test a robot’s powers of persuasion, Pepper assumed two personas: a friend who gave rewards, and an authority figure who dealt out punishment.

A group of participants were each given $10 and told that the amount of money could increase or decrease, depending on their performance in set memory tasks.

Friendly Pepper gave money for correct responses, and authoritative Pepper docked $10 for incorrect responses.

The participants then completed tasks in the Test of Everyday Attention toolkit, a cognition test based on real-life scenarios.

After the participant made an initial guess, Pepper offered an alternative suggestion – this was always the right answer. The participant could then choose to follow Pepper’s advice or stick with their original answer.

The results showed that people were more willing to switch to friendly Pepper’s suggestions than those of authoritative Pepper.

Image credit: Shutterstock

This article was originally published on cosmosmagazine.com and was written by Deborah Devis.

Tags:
Technology, artificial intelligence, robots, manipulation