International Workshop on Multidisciplinary Perspectives on Human-AI Team Trust

26th June 2023

This workshop is part of the HHAI 2023 conference

Design Offices Macherei: Weihenstephaner Str. 12, 81673 Munich, Germany


About

MULTITTRUST is a workshop of the HHAI 2023 conference. It arises from the need to create a multidisciplinary research community of people who study the different perspectives and layers of trust dynamics in human-AI teams.

Human-AI teamwork is no longer a topic of the future. With the increasing prominence of these teams in diverse industries, several challenges arise that need to be addressed carefully. One of these challenges is understanding how trust is defined and how it functions in human-AI teams. Psychological literature suggests that within human teams, members rely on trust to make decisions and to be willing to rely on their team. In parallel, the multi-agent systems (MAS) community has been adopting trust mechanisms to support agents' decision-making about their peers. Finally, in the last couple of years, researchers have been focusing on how humans trust AI and how AI can be made trustworthy.

But when we think of a team composed of both humans and AI, with recurrent (or not) interactions, how do these strands come together? We are currently missing approaches that integrate prior literature on trust in teams across these different disciplines (especially Psychology and Computer Science). In particular, when looking at dyadic or team-level trust relationships in such a team, we also need to consider how an AI should trust a human teammate and how trust can be defined at the team level. Furthermore, the trust of the human in the AI team member and vice versa will change over time, and the two will affect each other.

In this workshop, we want to motivate the conversation across the different fields and domains. Together, we may shape the road to answering these questions and more.

Topics

This workshop calls for contribution and/or participation from several disciplines, including Psychology, Sociology, Cognitive Science, Computer Science, Artificial Intelligence, Robotics, Human-Computer Interaction, Design and Philosophy. Topics related to this workshop include:

  • Measures of (team) trust in human-AI teams
  • The role of (human) trust and trustworthiness in human-AI teams
  • Trust dynamics in human-AI teams
  • Hybrid techniques (knowledge driven + data driven) to assess trust and trustworthiness in human-AI teams
  • Machine learning techniques to detect trust and trustworthiness in human-AI teams and teammates
  • Evaluation methods for trust and trustworthiness models in human-AI teams
  • Experimental settings for trust dynamics in human-AI teams
  • Design of systems that take into account trust dynamics in human-AI teams

Call for contributions

You are kindly invited to participate in our workshop with or without a submission. However, as the idea of the workshop is to build a community, we strongly encourage you to submit a short paper presenting your current, past or future work on one of the topics mentioned above. Contributions can be work in progress, extended abstracts, summaries of previous research, or breakthrough ideas that would enrich the discussions during the workshop. Submissions will be anonymised, peer-reviewed and selected based on relevance and quality of writing. If we receive more submissions than we can accommodate, we will also give preference to those that contribute to a varied programme.

Upon acceptance, at least one author should attend the workshop in person and be ready to give a short presentation (a lightning talk of approx. 7 minutes). If your submission is not accepted, you are still welcome to join the workshop. To attend, participants must be registered for the HHAI conference.

All submissions should adhere to the IOS formatting guidelines. Papers should be written in English and be 2-4 pages long (excluding references).

Please anonymise your submission.

You can submit through EasyChair.

Important Dates

  • Submission deadline (extended): 5th April 2023 AoE
  • Notification deadline: 28th April 2023
  • Camera Ready: 31st May 2023
  • Workshop: 26th June 2023

Keynote Speakers

Lionel P. Robert

University of Michigan

The Problematic Problems of Human Trust in Robots:
Is Trusting a Robot More like a Teammate or a Tool and should we really care?

As robotics advances and permeates various aspects of our social and work lives, the question of how humans view and ultimately trust robots has become increasingly pertinent. Do humans view them as mere machines, automated tools designed to serve their needs, or do they embrace a more empathetic approach, viewing and trusting them as actual teammates (i.e. as they would humans)? On the one hand, proponents of robots as possible teammates argue that computers are social actors (CASA) and that humans mindlessly interact with computers in much the same way they do with humans. This view is often used to justify employing human-to-human theories and their corresponding measures to understand human-robot interactions. On the other hand, advocates of mechanization contend that humans do not view robots as humans but rather as automated tools. This view discourages the use of human-to-human theories and measures for understanding human-robot interactions, advocating instead for human-to-automation theories and measures of constructs like trust. In this thought-provoking presentation, I will explore the arguments supporting both perspectives and consider the potential consequences of each approach. Ultimately, the presentation aims to provide a balanced understanding of the complexities involved and to encourage a nuanced dialogue on the subject.

Myrthe L. Tielman

Delft University of Technology

Let's talk about trust

Trust is a hot topic. It is very important to humans, it is important to teams, and it is important for AI. Many people are looking into trust, and as human-AI team researchers it seems like something we should care a lot about. But what do we actually mean when we talk about trust? There are many different perspectives and definitions. Should we care about that, or try to come to an agreement? In this talk, I argue that when it comes to words, meaning is more important than agreement. And meaning is crucial: by looking at the different meanings of trust, we might also gain new perspectives on how to achieve it.

Organizers

Carolina Centeio Jorge

Delft University of Technology

Anna-Sophie Ulfert-Blank

Eindhoven University of Technology

Programme Committee

  • Filipa Correia, ITI-LARSYS, PT
  • Cristiano Castelfranchi, ISTC-CNR, IT
  • Alessandro Sapienza, ISTC-CNR, IT
  • Michelle Zhao, Carnegie Mellon University, US
  • Rino Falcone, ISTC-CNR, IT
  • Catholijn Jonker, Delft University of Technology, NL
  • Siddharth Mehrotra, Delft University of Technology, NL
  • Beau Schelble, Clemson University, US
  • Filippo Cantucci, ISTC-CNR, IT
  • Mengyao Li, University of Wisconsin-Madison, US
  • X. Jessie Yang, University of Michigan, US
  • Connor Esterwood, University of Michigan, US
  • Samuele Vinanzi, Sheffield Hallam University, UK
  • Alan R. Wagner, Penn State University, US
  • Ewart de Visser, USAFA, US
  • Glenda Hannibal, Ulm University, DE
  • Hebert Azevedo-Sá, Military Institute of Engineering, BR
  • Eleni Georganta, University of Amsterdam, NL
  • Ruben Verhagen, Delft University of Technology, NL

Programme

  • 9h00 Networking activity
  • 9h30 Lightning talks 1 + discussion: Perception of AI teammate's trustworthiness
      - The Trustworthiness Assessment Model – A Micro and Macro Level Perspective (Nadine Schlicker and Markus Langer)
      - AI-Enabled Decision Support Systems: Tool or Teammate? (Myke C. Cohen and Michelle Mancenido)
  • 10h00 Break
  • 10h30 Lightning talks 2 + discussion: Improving AI teammate's trustworthiness
      - Communicating AI intentions to boost Human AI cooperation (Bruno Berberian, Marin Le Guillou and Marine Pagliari)
      - The Effects of Social Intelligence on Trust in Human-AI Teams (Morgan Bailey, Benjamin Gancz and Frank Pollick)
  • 11h00 Keynote 1: The Problematic Problems of Human Trust in Robots: Is Trusting a Robot More like a Teammate or a Tool and should we really care? (Lionel P. Robert)
  • 12h00 Lunch
  • 13h30 Keynote 2: Let's talk about trust (Myrthe L. Tielman)
  • 14h30 Lightning talks 3 + discussion: Calibrating Human-AI trust in teams
      - Investigating Human-Robot Overtrust During Crises (Colin Holbrook, Daniel Holman, Alan Wagner, Tyler Marghetis, Gale Lucas, Brett Sheeran, Vidullan Surendran, Jared Armagost, Savanna Spazak, Kevin Andor and Yinxuan Yin)
      - Mutually Adaptive Trust Calibration in Human-AI Teams (Ewart de Visser, Ali Momen, James Walliser, Spencer Kohn, Tyler Shaw and Chad Tossell)
  • 15h00 Break
  • 15h30 Lightning talks 4 + discussion: Decision-making in Human-AI teams
      - Causing Intended Effects in Collaborative Decision-Making (André Meyer-Vitali and Wico Mulder)
      - Artificial Trust for Decision-Making in Human-AI Teamwork: Steps and Challenges (Carolina Centeio Jorge, Catholijn M. Jonker and Myrthe L. Tielman)
  • 16h00 Lightning talks 5 + discussion: Human-AI team trust
      - Trust Dispersion and Effective Human-AI Team Collaboration: The Role of Psychological Safety (Tilman Nols, Anna-Sophie Ulfert-Blank and Avi Parush)
      - Piecing Together the Puzzle: Understanding Trust in Human-AI Teams (Anna-Sophie Ulfert-Blank, Eleni Georganta, Myrthe L. Tielman and Tal Oron-Gilad)
  • 16h30 Reflections
  • 17h00 End

Registration

Registration is open and can be done through the HHAI conference 2023 website.

Contact

For any questions related to the workshop, please contact Carolina Jorge:

C dot Jorge at tudelft dot nl (a hyperlink is also available in the Organizers section)