In a human-AI collaboration, users build a mental model of the AI system based on its veracity and on how it presents its decisions, e.g., its stated confidence and an explanation of the output. However, modern NLP systems are often miscalibrated, resulting in confidently incorrect predictions that undermine user trust. To build trustworthy AI, we must understand how user trust develops and how it can be regained after trust-eroding events. We study the evolution of user trust in response to such events using a betting game in which users interact with the AI. We find that even a few incorrect instances with inaccurate confidence estimates can substantially damage user trust and performance, with very slow recovery. We also show that this degradation in trust reduces the success of human-AI collaboration and that different types of miscalibration, unconfidently correct and confidently incorrect, have different (negative) effects on user trust. Our findings highlight the importance of calibration in user-facing AI applications and shed light on which aspects help users decide whether to trust the system.
This work was accepted at EMNLP 2023; read it here. Written by Shehzaad Dhuliawala, Vilém Zouhar, Mennatallah El-Assady, and Mrinmaya Sachan (ETH Zurich, Department of Computer Science).
```bibtex
@inproceedings{dhuliawala-etal-2023-diachronic,
    title = "A Diachronic Perspective on User Trust in {AI} under Uncertainty",
    author = "Dhuliawala, Shehzaad and
      Zouhar, Vil{\'e}m and
      El-Assady, Mennatallah and
      Sachan, Mrinmaya",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.339",
    doi = "10.18653/v1/2023.emnlp-main.339",
    pages = "5567--5580",
}
```
First, generate the user queues with `python3 src_queues/baked_queues/generate_base.py` (or `generate_with_types.py`).
Then, the source can be built as:

```shell
cd src_ui
npm install
npm run dev    # to launch the server locally
npm run build  # to generate JS that can be uploaded
```
You can access the collected data on HuggingFace as `zouharvi/trust-intervention`:

```python
from datasets import load_dataset

data = load_dataset("zouharvi/trust-intervention")
```
The collected data is also stored in `data/collected_users.jsonl.tar.gz`. Unpack it as:

```shell
tar -xvzf data/collected_users.jsonl.tar.gz
wc -l data/collected_users.jsonl
# 18664
```
and load it in Python as:

```python
import json

data = [json.loads(x) for x in open("data/collected_users.jsonl", "r")]
```
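Once loaded, each element of `data` is one parsed JSON object per JSONL line. As a minimal, self-contained sketch of working with such records (the field names below are hypothetical illustrations, not the actual schema of `collected_users.jsonl`), one might group interactions per user:

```python
import json
from collections import defaultdict

# Hypothetical JSONL lines; the real fields in data/collected_users.jsonl
# may differ. Each line is one standalone JSON object.
lines = [
    '{"user": "u1", "correct": true}',
    '{"user": "u1", "correct": false}',
    '{"user": "u2", "correct": true}',
]

# Same parsing pattern as above: one json.loads per line.
data = [json.loads(line) for line in lines]

# Group records by user, mirroring a per-user analysis.
per_user = defaultdict(list)
for record in data:
    per_user[record["user"]].append(record["correct"])

print(len(data))         # 3
print(sorted(per_user))  # ['u1', 'u2']
```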
The figures and tables in the paper are generated by the scripts in `src_analysis`.