The First Steps Towards Robots and Humans Working in Harmony

By Wesley Fenlon

A scientific study applies a psychological principle to human-robot interaction.

Getting robots and humans to work well together is no easy task, and it's partly a matter of trust. How can we build that trust? Giving robots faces and personalities could help. Giving them chainsaws, on the other hand, probably won't foster a new era of human-robot cooperation. A recent study suggests human psychology can help us out, or at least help robots work alongside us more efficiently. A reporter for New Scientist took part in an experiment with Abbie, an industrial robot arm, to see if cross-training, in which the two participants switch roles to better understand each other, would lead to better performance.

The goal of the study was to see whether Abbie and its human counterpart would settle on a shared plan or procedure by becoming familiar with both sides of the activity. And to make it tough, the scientists programmed Abbie to do the exact opposite of what the human wanted.

Photo credit: Interactive Robots at MIT

In the test, New Scientist's Celeste Biever had to insert three screws into a tabletop, and Abbie had to tighten them down. Biever wanted Abbie to tighten each screw immediately after she placed it, rather than waiting for all three screws to be in place before tightening them. The scientists programmed Abbie to do the opposite, so during cross-training, when the two switched roles, Biever tightened down each screw right after its placement.

The cross-training was, supposedly, successful. "Abbie's machine-learning software analysed my behaviour as I carried out her role, which gave her a glimpse of my expectations of her," Biever wrote. "Because I performed it differently to the way she had been programmed, she should have modified her behaviour to meet my expectations...We start the task Abbie's way but end up completing it in a hybrid fashion – she starts to do things my way once I've positioned the second screw."
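The article doesn't describe the study's actual learning algorithm, but the behavior Biever reports can be illustrated with a minimal sketch: a robot starts with a programmed task ordering, records demonstrations of its own role performed by the human during cross-training, and switches to the human's ordering once the evidence favors it. All names here (`CrossTrainingRobot`, the "batch" and "interleaved" styles) are hypothetical, invented for illustration.

```python
# Hypothetical sketch of cross-training adaptation -- NOT the study's
# actual algorithm. The robot starts with its programmed style ("batch":
# tighten all screws only after all are placed) and shifts to the human's
# demonstrated style ("interleaved": tighten each screw right after
# placement) once demonstrations clearly favor it.

from collections import Counter

class CrossTrainingRobot:
    def __init__(self, programmed_style="batch"):
        self.style = programmed_style        # current task ordering
        self.observations = Counter()        # styles seen during cross-training

    def observe_human_demo(self, demonstrated_style):
        """Record one cross-training demonstration of the robot's role."""
        self.observations[demonstrated_style] += 1

    def choose_style(self):
        """Adopt the human's style once it dominates the observations."""
        if not self.observations:
            return self.style
        preferred, count = self.observations.most_common(1)[0]
        # Switch only when a majority of demonstrations favor one style.
        if count > sum(self.observations.values()) / 2:
            self.style = preferred
        return self.style

robot = CrossTrainingRobot()
robot.observe_human_demo("interleaved")
robot.observe_human_demo("interleaved")
print(robot.choose_style())  # prints "interleaved"
```

Like Abbie in the experiment, this toy robot would begin the task its own way and only start "doing things my way" after enough of the human's demonstrations have accumulated.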

It's not the most convincing experiment: the procedure is so simple that it seems more practical to just program the robot to behave a certain way, and the adaptation is all one-sided. "I know I have adapted as a result of the rehearsal but has Abbie?" Biever wrote, yet the burden seems to fall entirely on the robot to do the adapting. Outside of strictly programmed scenarios, far more sophisticated machine learning will be required before robots can truly learn and adapt to how we do things.

Still, it's a start, and New Scientist points out that governments and scientific organizations are working to create standards for safe human-robot interaction in shared workspaces. The goal is to have safe shared workspaces in 2014. And maybe a decade after that, our machines will be learning how to work better with us.