From diagnosis to patient scheduling, AI is increasingly being considered for a range of clinical applications. Yet despite ever more powerful clinical AI, uptake into actual clinical workflows remains limited. One major challenge is developing appropriate trust with clinicians. In this paper, we examine trust in clinical AI from a wider perspective, beyond user interactions with the AI. We identify several points in the development, usage, and monitoring of clinical AI that can significantly affect trust. We argue that calibrating trust in AI should go beyond explainable AI and consider the entire process of clinical AI deployment. We illustrate this argument with case studies from practitioners implementing clinical AI in practice, showing how trust can be affected at different stages of the deployment cycle.
Number of pages: 18
Publication status: Published - 15 Jun 2022
Event: 1st International Conference on Hybrid Human-Artificial Intelligence (HHAI2022), Amsterdam, Netherlands, 13 Jun 2022 - 17 Jun 2022