Understanding the Interplay Between Trust, Reliability, and Human Factors in the Age of Generative AI

Research output: Contribution to journal › Article › peer-review

Abstract

In the swiftly evolving landscape of Generative AI, particularly Large Language Models (LLMs), these tools show promising utility across diverse applications. While they promise heightened accuracy, efficiency, and productivity, the potential for misinformation and "hallucinations" underscores the need for cautious implementation. Despite their proficiency in meeting user-specific demands, LLMs lack comprehensive problem-solving intelligence and struggle with input uncertainty, leading to inaccuracies. This paper critically examines the nuanced challenges surrounding Generative AI, delving into trust issues, system reliability, and the impact of human factors on objective judgments. As we navigate the complex terrain of Generative AI, the paper advocates a discerning approach, emphasising the necessity of verification and validation processes to ensure the accuracy and reliability of generated outputs. The exploration illuminates the multifaceted dimensions of trust in technology, providing insights into how human factors shape our ability to make objective assessments of the reliability and accuracy of artefacts produced by Generative AI. This contribution to the academic discourse fosters a comprehensive understanding of the intricate dynamics inherent in the responsible utilisation of Generative AI technologies.
Original language: English
Journal: International Journal of Simulation: Systems, Science & Technology
Early online date: 5 May 2024
Publication status: Published - 5 May 2024