
Enhancing Humans Trust in Robots through Explanations

    View/Open
    Javaid Misbah_Final Thesis_redacted.pdf (15.58Mb)
    Author(s)
    Javaid, Misbah
    Primary Supervisor
    Estivill-Castro, Vladimir
    Other Supervisors
    Hexel, Rene
    Year published
    2021-01-18
    Abstract
Robots have moved away from manufacturing environments and are now deployed as social robots in human environments such as hotels, shops, and hospitals, and as office coworkers. These robots complement human capabilities and skills with their own robotic skills. With advances in their technological capabilities, the roles of such sophisticated robots are evolving from obedient deterministic machines to companions or teammates. Meanwhile, the role of humans is also changing, from operators to team members. Robots are therefore expected to collaborate with humans and contribute productively as teammates. We expect robots to develop social intelligence, to behave smartly, and to assist us in performing complex tasks. Still, robots lack the features that would permit them to be considered full-fledged teammates by their human counterparts. Inadequate human trust has been identified as a pre-eminent factor behind the unacceptability of robots as trustworthy teammates. Trust is essential for realizing the full potential of human-robot teamwork: it directly affects a human's willingness to accept robot-produced information and suggestions, and hence the future use of robots also depends on it. If humans do not trust robots, they may not utilize robotic features to their full potential. Research is ongoing into establishing and validating efficient and successful approaches to a broad spectrum of Human-Robot Interaction issues. Empirical evaluations in the field of Human-Computer Interaction have already examined humans' trust in technical systems, mostly on issues such as reliability and accuracy of performance. We hypothesize that to integrate robots successfully into human environments, robots must make their decision-making transparent to the humans in the mixed human-robot team.
We argue that the trust humans place in their robotic companions is influenced by the humans achieving some understanding of the robot's decision-making process. We propose to achieve higher levels of trust in robots by making the robots produce explanations in human-understandable terms. Our thesis is that explanations from robots shall express how a decision is made and why that decision is selected as the best among all alternatives. By augmenting robots with explanation capabilities, we help humans comprehend the behaviour of robots and support the establishment of successful and trustworthy human-robot interaction. Artificial intelligence researchers, within the area of expert systems, have also provided sufficient motivation to consider the contribution of explanations to building human trust and to the acceptability of these systems. Moreover, systems that provide explanations after a failure have received more tolerant responses from humans. Providing explanations for decisions is believed to be one of the most important capabilities of robots. However, to the best of our knowledge, there is still a gap in the current human-robot interaction literature: there is very little experimental verification showing that explanations facilitate, and measurably affect, humans' trust in and acceptance of robots. Previous research [1] used a different method to increase transparency, by having a simulated robot provide explanations of its actions; the explanations did not improve the team's performance, and trust was identified as an influential factor only under conditions of high reliability. To better comprehend the emerging topic of trust, we adopted a human-in-the-loop approach, providing clear explanations with emphasis on the transparency and justification of the robot's decisions.
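The two-part structure the abstract proposes — an explanation covering both *how* a decision was made and *why* it beats the alternatives — can be sketched in a few lines. This is an illustrative toy, not the thesis implementation: the scoring function, the candidate actions, and the explanation templates below are all hypothetical.

```python
# Illustrative sketch (not the thesis implementation): a robot scores
# candidate actions and produces a two-part explanation covering *how*
# the decision was made and *why* it beats the alternatives.

def explain_choice(actions, score):
    """Pick the best-scoring action and explain the choice."""
    ranked = sorted(actions, key=score, reverse=True)
    best, runner_up = ranked[0], ranked[1]
    how = (f"I evaluated {len(actions)} possible moves and ranked them "
           f"by expected value.")
    why = (f"I chose '{best}' (value {score(best)}) because it scores "
           f"higher than the next-best option '{runner_up}' "
           f"(value {score(runner_up)}).")
    return best, f"{how} {why}"

# Hypothetical usage: domino-like moves scored by pip count covered.
values = {"play 6-6": 12, "play 4-2": 6, "draw": 0}
choice, explanation = explain_choice(list(values), values.get)
print(choice)  # play 6-6
print(explanation)
```

Contrasting the chosen action with the runner-up is what distinguishes a *why* explanation from a bare report of the decision, which is the distinction the thesis statement draws.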
We report on two user studies investigating the effect of a robot's explanations, delivered in different modalities (text and audio), on humans' level of trust during physical human-robot interactions. In user study 1, the setting is an interactive game-playing environment (the partial-information game Domino), in which the robot partners with a human to form a team. Since the game involves two adversarial teams, the robot plays two roles: the already mentioned partner of a human in one team, but also an adversary facing the second team of two humans. Explanations from the partner robot not only provide insight into the robot's decision-making process, but also help improve the humans' learning of the task. We evaluated the human participants' implicit trust in the robot through multi-modal scrutiny, i.e., recording participants' facial expressions and affective states during the game-play sessions. We also used questionnaires to measure participants' explicit trust and their perception of the robot's attributes. Our results show that the human participants considered the robot with explanation ability a trustworthy teammate. In user study 2, human participants performed a decision-making task in collaboration with a real robot. For the proposed method, we focused our inquiry on humans' conformity with, and acceptance of, the robot's answers as a new objective measure of the human-robot trust relationship. We found that human participants trusted and conformed more to the robot's decisions (communicated with explanations) than to their own decisions. Subjective measures using questionnaires likewise reported an increase in participants' trust towards the robot. Through our experimental investigations, we conclude that explanations can be used generally as an effective communication modality for robots to earn human trust in social environments.
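User study 2 treats conformity with the robot's answers as an objective trust measure. A minimal sketch of such a measure — assumed for illustration, not the study's actual analysis — is the fraction of disagreement trials on which the participant switched to the robot's answer:

```python
# Minimal sketch (assumed, not the study's actual analysis): quantifying
# trust objectively as the rate at which a participant conforms to the
# robot's suggestion when it differs from their own initial answer.

def conformity_rate(trials):
    """trials: list of (own_answer, robot_answer, final_answer) tuples.
    Returns the fraction of disagreement trials on which the participant
    switched to the robot's answer."""
    disagreements = [t for t in trials if t[0] != t[1]]
    if not disagreements:
        return 0.0
    switched = sum(1 for own, robot, final in disagreements if final == robot)
    return switched / len(disagreements)

# Hypothetical data: the participant initially disagrees with the robot
# on three trials and switches to the robot's answer on two of them.
trials = [("A", "A", "A"), ("A", "B", "B"), ("C", "D", "C"), ("E", "F", "F")]
print(round(conformity_rate(trials), 2))  # 0.67
```

Restricting the denominator to disagreement trials matters: trials where human and robot already agree carry no information about whose judgment the participant defers to.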
    Thesis Type
    Thesis (PhD Doctorate)
    Degree Program
    Doctor of Philosophy (PhD)
    School
    School of Info & Comm Tech
    DOI
    https://doi.org/10.25904/1912/4071
    Copyright Statement
    The author owns the copyright in this thesis, unless stated otherwise.
    Subject
    Robots
    Humans
    Interactions
    Trust
    Publication URI
    http://hdl.handle.net/10072/401638
    Collection
    • Theses - Higher Degree by Research
