Project Advisor(s) (Students Only)
Dr. Forrest Stonedahl, Dr. Ian Harrington
Presentation Type (All Applicants)
Oral Presentation
Disciplines (All Applicants)
Applied Behavior Analysis | Artificial Intelligence and Robotics | Computer Sciences | Graphics and Human Computer Interfaces | Information Literacy | Other Computer Sciences | Psychology
Description, Abstract, or Artist's Statement
As artificial intelligence and robotics advance into an ever-wider range of functions and situations, it is important to examine how these new technologies will be used. One key factor in how a new resource is used is how much it is trusted. This experiment examined people's trust in a robotic assistant while completing a task, how mistakes affected that trust, and whether the levels of trust shown toward a robot assistant differed significantly from those shown toward a human assistant. Participants watched a computer simulation of the three-cup shell game, in which the assistant offered advice and the participant could choose to follow, ignore, or act against it. The hypothesis was that participants would place higher trust in the robotic assistant than in the human, but that mistakes would have a larger impact on that trust. The study found that while overall trust in the robotic assistant did not differ significantly from trust in the human assistant, mistakes did have a significantly larger impact on short-term trust in the robotic assistant than in the human one.
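The abstract does not specify how the shell-game task or the trust measures were implemented, so the following is only a minimal, hypothetical sketch of such a setup: each trial hides a ball under one of three cups, the assistant's advice is correct with some assumed probability, and a simple (assumed, not the study's actual) trust score drops sharply after a mistake and recovers slowly after correct advice. All function names, parameters, and dynamics here are illustrative assumptions.

```python
import random


def run_trial(rng, n_cups=3, advice_accuracy=0.8):
    """One hypothetical shell-game trial: a ball hides under one of
    n_cups and the assistant points to a cup, correctly with
    probability advice_accuracy. Returns True if the advice was correct."""
    ball = rng.randrange(n_cups)
    if rng.random() < advice_accuracy:
        advised = ball
    else:
        # Assistant makes a mistake: point to any other cup.
        advised = rng.choice([c for c in range(n_cups) if c != ball])
    return advised == ball


def simulate_trust(n_trials=100, advice_accuracy=0.8,
                   penalty=0.3, recovery=0.05, seed=0):
    """Track an assumed trust score in [0, 1]: each mistake lowers it
    by `penalty`, each correct piece of advice restores `recovery`.
    These dynamics are illustrative, not taken from the study."""
    rng = random.Random(seed)
    trust = 1.0
    history = []
    for _ in range(n_trials):
        if run_trial(rng, advice_accuracy=advice_accuracy):
            trust = min(1.0, trust + recovery)
        else:
            trust = max(0.0, trust - penalty)
        history.append(trust)
    return history
```

Under these assumptions, comparing the depth and duration of trust dips after a mistake for two assistant conditions (e.g. different `penalty`/`recovery` values for "robot" vs. "human") would mirror the short-term trust comparison the abstract describes.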
Augustana Digital Commons Citation
Thomson, Abigail L. "Investigating Trust and Trust Recovery in Human-Robot Interactions" (2017). Celebration of Learning.
https://digitalcommons.augustana.edu/celebrationoflearning/2017/presentations/6
Creative Commons License
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
Included in
Applied Behavior Analysis Commons, Artificial Intelligence and Robotics Commons, Graphics and Human Computer Interfaces Commons, Information Literacy Commons, Other Computer Sciences Commons
Investigating Trust and Trust Recovery in Human-Robot Interactions
Comments
Honors Capstone