Teaching robots and humans to trust each other

Blog
5 July 2022
Professor Helen Hastie FRSE, Professor of Computer Science, Heriot-Watt University

How do you know if a dog trusts you? Dog owners will tell you it’s in the eyes. It’s said that dogs have a sixth sense, an ability to read their human companions, which seems to be instinctive. Can we teach robots to do the same?

Creating robots with this level of emotional intelligence and emulating the complex interactions that humans have with one another is very challenging. While robots are already deployed in the workplace—usually performing repetitive tasks with human oversight, like assembling cars on production lines—we hesitate when considering them for more complex roles. 

To build trust, robots need to understand what the human needs, their intent, and their emotional state. A lot of information can be gleaned from what the human says, how they say it, and through body language. Humans pick up cues from one another and adapt them to different people, tasks, and situations. For robots, this is obviously very difficult. 

Our research, conducted in collaboration with teams at Imperial College London and the University of Manchester, aims to teach robotic and autonomous systems to recognise when, and why, they have lost the trust of their human counterparts.

Building trust—a highly subjective concept—brings multiple challenges. Our £3 million research project, funded by UK Research and Innovation as part of their Trustworthy Autonomous Systems (TAS) programme, brings together expertise in robotics, cognitive science, and psychology to tackle this conundrum.

Through a range of experiments, we aim to explore how to establish, maintain, and repair trust between humans and robots. For example, one of the fundamental tests we’re conducting as part of our research is maze navigation. If a robot unintentionally gives a human the wrong advice on which way to turn, how does the robot gauge whether it has lost the human’s trust, and how does it then rebuild it? To answer this, we need to develop a cognitive model that helps robots better understand human behaviour. Once this model, and the appropriate level of trust, is established, robots can make a huge contribution to society well beyond the factory floor.
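To make the idea concrete, here is a minimal sketch of how a robot might keep a running estimate of a human’s trust and decide when to attempt repair. This is not the project’s actual cognitive model: the class, method names, and numbers below are all illustrative assumptions, using a simple Beta-distribution update where advice the human follows counts as evidence of trust and advice they ignore as evidence of lost trust.

```python
# Illustrative sketch only; not the TAS project's model.
# The robot tracks a Beta(a, b) estimate of the probability
# that the human will follow its navigation advice.

class TrustEstimator:
    def __init__(self, prior_followed: float = 1.0, prior_ignored: float = 1.0):
        # Beta prior: pseudo-counts of advice followed vs. ignored
        self.followed = prior_followed
        self.ignored = prior_ignored

    def observe(self, advice_followed: bool) -> None:
        """Update the estimate after each piece of advice."""
        if advice_followed:
            self.followed += 1
        else:
            self.ignored += 1

    @property
    def trust(self) -> float:
        """Posterior mean: estimated probability the human follows advice."""
        return self.followed / (self.followed + self.ignored)

    def should_repair(self, threshold: float = 0.5) -> bool:
        """If estimated trust falls below the threshold, trigger a repair
        strategy, e.g. acknowledging the wrong turn and explaining it."""
        return self.trust < threshold


estimator = TrustEstimator()
# Hypothetical run: the human stops following advice after a wrong turn
for followed in [True, True, False, False, False]:
    estimator.observe(followed)
print(f"estimated trust: {estimator.trust:.2f}")    # 0.43
print(f"attempt repair:  {estimator.should_repair()}")  # True
```

In practice, the behavioural signals mentioned above, such as what the human says, how they say it, and their body language, would feed into a far richer model than this single follow-or-ignore count.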

In our new ‘Assisted Living Lab’ at the National Robotarium, we will test robots and AI systems to see how they can help people with assisted living needs to live independently for longer, and explore how robots can support and complement carers.

For industry, the ORCA Hub is developing autonomous systems that could revolutionise the way renewable energy assets are inspected and maintained. These assets are often hazardous and difficult to reach, so we need methods that reassure operators that robots are competent in such conditions, especially when working underwater.

This research is essential as we design and build robotic and autonomous systems. Only by embedding the appropriate level of trust will we ensure greater acceptance, usability and, ultimately, adoption of robotics in our daily lives.


Professor Helen Hastie FRSE is a Professor of Computer Science at Heriot-Watt University and a Fellow of the Royal Society of Edinburgh.

This article originally appeared in The Scotsman on 5 July 2022.

The RSE’s blog series offers personal views on a variety of issues. These views are not those of the RSE and are intended to offer different perspectives on a range of current issues.