This workshop arises from the need to build a multidisciplinary research community of people who study the different perspectives and layers of trust dynamics in human-AI teams. Human-AI teamwork is no longer a topic of the future. With the increasing prominence of these teams across diverse industries, several challenges arise that need to be addressed carefully. One of these challenges is understanding how trust is defined and how it functions in human-AI teams. The psychological literature suggests that within human teams, members rely on trust to make decisions and to be willing to depend on their team. In parallel, the multi-agent systems (MAS) community has been adopting trust mechanisms to support agents' decision-making regarding their peers. Finally, in recent years, researchers have focused on how humans trust AI and on how AI can be made trustworthy. But when we consider a team composed of both humans and AI, with recurrent (or not) interactions, how do these perspectives come together? Currently, we lack approaches that integrate the prior literature on trust in teams across these disciplines (especially Psychology and Computer Science). In particular, when looking at dyadic or team-level trust relationships in such a team, we also need to examine how an AI should trust a human teammate and how trust can be defined in such teams. Furthermore, the trust of the human in the AI team member, and vice versa, will change over time, and the two will also affect each other. In this workshop, we want to motivate this conversation across different fields and domains. Together, we may shape the road to answering these questions and more.
This workshop calls for contributions and/or participation from several disciplines, including Psychology, Sociology, Cognitive Science, Computer Science, Artificial Intelligence, Robotics, Human-Computer Interaction, Design, and Philosophy. Topics related to this workshop include: