In human-robot interactions, humans and robots communicate with each other using speech, gestures, facial expressions, and body postures. As in human-human interactions, wherever there is a lot of communication and interaction, errors are bound to occur. Robots that are programmed to interact with humans therefore need to be able to recognize when an error has occurred, and they should also know whether the error was caused by themselves or by the human. Only then will robots be able to resolve errors.

How can robots recognize whether an error has occurred? Error recognition is an extremely difficult task because errors are unexpected and, thus, hard to detect. One way for a robot to recognize that an error has occurred is to observe the behavior of the human. For that, the robot needs to know how humans behave in the event of an error. To investigate this, researchers from the Human-Robot Interaction Group at the Center for Human-Computer Interaction analyzed over 200 videos of user studies. The videos showed situations in which humans interact with a robot that makes an error. The researchers found the following interesting results:

  • In the analyzed human-robot interactions, two types of errors occur. The first type consists of social norm violations, in which the robot behaves in a socially inappropriate way. The second type consists of technical failures, i.e., obvious technical shortcomings of the robot.
  • In roughly 50% of the error situations, humans show significantly increased verbal and non-verbal signals.
  • During technical failures, humans react faster to the situation but show significantly fewer verbal and non-verbal signals.
  • In the case of errors, humans move their heads more often, for example, to look to other humans for help or to look for a solution to the task at hand.
  • Humans talk more during error situations, but only if the situation was caused by a social norm violation by the robot rather than an obvious technical failure.
  • Humans show more non-verbal social signals in the event of errors, but only when another human is present.

More information about this work can be found in the cited papers below.

The knowledge from this analysis could be used to train automatic error detection modules for robots using machine learning. In a second step, researchers from the Human-Robot Interaction Group conducted a human-robot interaction user study to collect additional videos of humans who experience error situations while interacting with a robot. They recorded the videos with a Microsoft Kinect camera, which allowed them to extract the body postures of the recorded study participants. This information can then be used to train an automatic error recognition module. The results of this study show that the accuracy of recognizing errors is quite high when the module is trained with data from individual humans.
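To make the idea concrete, here is a minimal sketch of how such an error recognition module might be trained as a supervised classifier on body-posture features. The feature layout (25 skeleton joints with 3 coordinates each, matching the Kinect skeleton), the random placeholder data, the binary labels, and the choice of a random forest classifier are all illustrative assumptions, not the researchers' actual pipeline.

```python
# Sketch: training a per-participant error-recognition classifier on
# body-posture features. Data and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Hypothetical data: one row per video frame, 25 skeleton joints x 3
# coordinates, flattened into a 75-dimensional posture vector for a
# single participant (placeholder values instead of real Kinect output).
n_frames, n_features = 1000, 25 * 3
X = rng.normal(size=(n_frames, n_features))   # posture features per frame
y = rng.integers(0, 2, size=n_frames)         # 1 = error situation, 0 = normal

# Train on one participant's data and hold out a portion for evaluation,
# mirroring the idea of a module trained with data from individual humans.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print(f"Accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

With real posture data, a model trained this way on a single participant captures that person's individual reactions to errors, which is consistent with the reported finding that accuracy is high for person-specific training data.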

Research by