Project Advisor(s)

Dr. Forrest Stonedahl, Dr. Ian Harrington

Presentation Type

Oral Presentation

Disciplines

Applied Behavior Analysis | Artificial Intelligence and Robotics | Computer Sciences | Graphics and Human Computer Interfaces | Information Literacy | Other Computer Sciences | Psychology

Description, Abstract, or Artist's Statement

As artificial intelligence and robotics advance and are applied in an increasingly wide range of functions and situations, it is important to examine how these new technologies will be used. One important factor in how a new resource is used is how much it is trusted. This experiment examined people’s trust in a robotic assistant while completing a task, how mistakes affected that trust, and whether the levels of trust shown toward a robot assistant differed significantly from those shown toward a human assistant. Participants watched a computer simulation of the three-cup monte (shell) game in which the assistant offered advice, and they could choose to follow, ignore, or go against that advice. The hypothesis was that participants would place more trust in the robotic assistant than in the human one, but that mistakes would have a larger impact on trust in the robot. The study found that although overall trust levels did not differ significantly between the robotic and human assistants, mistakes had a significantly larger impact on short-term trust in the robotic assistant than in the human one.
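The advice-following task described in the abstract can be sketched as a minimal simulation. The names and parameters below (`run_trials`, `advisor_accuracy`, `trust_score`) are illustrative assumptions, not the study's actual implementation; trust is operationalized here simply as the fraction of trials on which the participant's choice matched the assistant's advice.

```python
import random


def run_trials(n_trials=20, advisor_accuracy=0.8, n_cups=3, seed=0):
    """Simulate shell-game trials with an advisor of a given accuracy.

    On each trial, one cup hides the ball; the advisor points to the
    correct cup with probability advisor_accuracy, otherwise to a
    random wrong cup. Returns one record per trial.
    """
    rng = random.Random(seed)
    records = []
    for _ in range(n_trials):
        correct = rng.randrange(n_cups)
        if rng.random() < advisor_accuracy:
            advice = correct
        else:
            advice = rng.choice([c for c in range(n_cups) if c != correct])
        records.append({
            "correct": correct,
            "advice": advice,
            "advice_correct": advice == correct,
        })
    return records


def trust_score(choices, advice_list):
    """A simple behavioral trust proxy: the fraction of trials on which
    the participant's chosen cup matched the advisor's advice."""
    followed = sum(c == a for c, a in zip(choices, advice_list))
    return followed / len(choices)
```

A participant who always follows the advice would score 1.0; dips in this score immediately after an incorrect piece of advice would correspond to the short-term trust loss the study measured.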

Comments

Honors Capstone

Creative Commons License

Creative Commons Attribution-NoDerivatives 4.0 International License
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

May 3rd, 12:00 AM

Investigating Trust and Trust Recovery in Human-Robot Interactions