You're in a building. The building is on fire. Luckily, an emergency robot is there to show you the way out – but it appears to be malfunctioning… or at least behaving strangely. Do you put your trust in the robot to help direct you to the exit, or try to find your own way out of the burning building?

In the situation described above – the actual setting for a first-of-its-kind experiment designed to test human trust in robots during emergencies – participants largely placed their faith in the machine to help get them to safety, despite a prior demonstration that it might not be working properly.

"People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault," said research engineer Alan Wagner from the Georgia Institute of Technology (Georgia Tech). "In our studies, test subjects followed the robot's directions even to the point where it might have put them in danger had this been a real emergency."

The experiment is part of a long-term study examining the nature of human trust in robots. As we come to rely more and more on artificially intelligent machines for things like transport and labour, the question of how much we actually trust robots becomes increasingly important.

(Image: Rob Felt/Georgia Tech)

But the finding that people will almost blindly follow the instructions of a potentially malfunctioning machine in an emergency shows we're only just beginning to understand how trust works in human-robot interactions.

"We wanted to ask the question about whether people would be willing to trust these rescue robots," said Wagner. "A more important question now might be to ask how to prevent them from trusting these robots too much."

(Image: Rob Felt/Georgia Tech)

According to the researchers, it's possible the robot became an authority figure in the eyes of the participants, making them less likely to question its guidance. Interestingly, in the team's earlier simulation-based testing – which didn't involve an acted-out 'real life' emergency – participants showed they didn't trust a robot that had previously made mistakes.

The findings of the research, presented at the 2016 ACM/IEEE International Conference on Human-Robot Interaction in New Zealand, show we've still got a lot to learn when it comes to trusting robots. Obviously we need to be able to place some faith in machines, given how heavily we rely on them in everyday life, but we should never stop thinking for ourselves, especially when personal danger is involved.

"These are just the type of human-robot experiments that we as roboticists should be investigating," said one of the researchers, Ayanna Howard. "We need to ensure that our robots, when placed in situations that evoke trust, are also designed to mitigate that trust when trust is detrimental to the human."