Explainability and Trust in AI-Driven Public Health Decision Making: Barriers to Adoption Among Policymakers and Frontline Health Officials

Authors

  • Luca O'Neill, University of Oxford

Keywords

Explainable AI, public health decision making, XAI adoption, algorithmic trust

Abstract

Artificial intelligence systems for infectious disease surveillance and outbreak response have demonstrated substantial technical capability, yet their adoption among public health policymakers and frontline health officials remains limited and uneven. This paper examines the barriers to adoption from a human and organizational perspective, drawing on human-computer interaction research, public health governance literature, and case evidence from deployed AI surveillance systems. We identify six primary adoption barriers: opacity, automation bias, accountability ambiguity, regulatory vacuum, cultural resistance, and alert fatigue. We propose a structured taxonomy of explainable AI techniques matched to the needs of these two distinct actor groups, and present a set of governance and design interventions grounded in published evidence. The analysis demonstrates that technical explainability is necessary but insufficient: sustained adoption requires simultaneous investment in governance standards, training, and institutional accountability frameworks.

Published

2025-12-24