The Trust Paradox: Analyzing Auditor Reliance on Hallucinating Generative AI Models in Internal Control Testing
Abstract
Auditing is entering an era in which generative artificial intelligence (AI) models increasingly assist with tasks such as internal control testing. This paper examines the “trust paradox” auditors face when relying on AI systems that can hallucinate, that is, produce plausible yet fabricated information. We combine qualitative and quantitative methods to investigate how auditors use and trust generative AI in evaluating internal controls. Interviews with audit professionals reveal both enthusiasm about AI’s efficiency and deep concern over its reliability. In an experimental simulation, we find that while an AI model can efficiently analyze vast control data and identify issues, it also generates false outputs (hallucinations) that could mislead auditors. A survey of practitioners further shows a cautious approach: most auditors are willing to use AI suggestions only with verification, balancing the benefits of automation against the risk of error. Our analysis highlights that over-reliance on AI without skepticism can undermine audit quality, yet under-utilizing AI forfeits potential improvements. We discuss strategies to resolve this paradox, including maintaining professional skepticism, implementing AI output validation controls, and enhancing model transparency. The study contributes actionable insights for audit firms and standard-setters on integrating generative AI into internal control testing in a responsible, trust-balanced manner.
How to Cite This Article
Aduragbemi Joshua Olaseinde, Bolanle Busirat Azeez (2022). The Trust Paradox: Analyzing Auditor Reliance on Hallucinating Generative AI Models in Internal Control Testing. International Journal of Artificial Intelligence Engineering and Transformation (IJAIEAT), 3(1), 38-49. DOI: https://doi.org/10.54660/IJAIET.2022.3.1.38-49