A matter of consequences
Understanding the effects of robot errors on people’s trust in HRI
In reviewing the literature on acceptance and trust in human-robot interaction (HRI), we found a number of
open questions that need to be addressed in order to establish effective collaborations between humans and robots in real-world
applications. In particular, we identified four principal open areas that should be investigated to create guidelines for the
successful deployment of robots in the wild: (1) the robot’s abilities and limitations, in particular
when it makes errors with consequences of different severity; (2) individual differences; (3) the dynamics of human-robot trust;
and (4) the interaction between humans and robots over time. In this paper, we present two closely related studies, one with a
virtual robot with human-like abilities and one with a Care-O-bot 4 robot. In the first study, we created an immersive narrative
using an interactive storyboard to collect responses from 154 participants. In the second study, 6 participants had repeated
interactions with a physical robot over three weeks. We summarise and discuss the findings of our investigations into the effects of
robots’ errors on people’s trust in robots, with a view to designing mechanisms that allow robots to recover from a breach of trust. In
particular, we observed that robots’ errors had a greater impact on people’s trust in the robot when the errors were made at the
beginning of the interaction and had severe consequences.
Our results also provide insights into how the effects of these errors vary according to individuals’ personalities,
expectations, and previous experiences.
Article outline
- 1. Introduction
- 2. Background & related work
- 2.1 Robots’ errors in HRI
- 2.2 Antecedents of trust
- 2.3 Trust in long-term human-robot interactions
- 3. Methodology
- 3.1 Study 1: Interactive storyboard
- 3.1.1 The robot
- 3.1.2 Motion picture generation
- 3.1.3 Experimental design
- 3.1.4 Participants
- 3.2 Study 2: A repeated-interactions study
- 3.3 The tasks
- 4. Study 1: Trust and robot’s errors
- 4.1 Participants’ trust in the robot Jace in relation to the robot errors
- 4.2 Analysis of the explanations participants gave for decision-making in the emergency scenario
- 5. Study 1: Antecedents of trust and robot errors
- 5.1 Effects of people’s personality on trust
- 5.2 Effects of people’s past experiences on trust
- 5.3 Effects of perception of robots
- 5.3.1 Perception of a robot as a companion
- 5.3.2 Expectation of a robot’s capabilities
- 5.3.3 Perception of a robot’s role
- 5.4 Effects of perception of the robot Jace in relation to the magnitude of consequences of the errors
- 5.4.1 Perceived companionship
- 5.4.2 Perceived reliability and faith in the ability of the robot
- 5.4.3 Perception of the robot’s role
- 6. Study 2: Evolution of trust and erroneous robot behaviours
- 6.1 Trust in Care-O-bot 4 in relation to the robot’s errors
- 6.2 Antecedents of trust
- 6.3 Perception of Care-O-bot 4
- 6.4 Evaluation of robot errors
- 7. Limitations
- 8. Conclusion & future work
- 8.1 Original contributions to knowledge
- 8.2 Future work
- Notes
- References