TY - JOUR
T1 - Understanding the Interplay Between Trust, Reliability, and Human Factors in the Age of Generative AI
AU - Thorne, Simon
PY - 2024/5/5
AB - In the swiftly evolving landscape of Generative AI, particularly Large Language Models (LLMs), these tools show promising utility across diverse applications. While they promise heightened accuracy, efficiency, and productivity, the potential for misinformation and "hallucinations" underscores the need for cautious implementation. Despite proficiency in meeting user-specific demands, LLMs lack comprehensive problem-solving intelligence and struggle with input uncertainty, leading to inaccuracies. This paper critically examines the nuanced challenges surrounding Generative AI, delving into trust issues, system reliability, and the impact of human factors on objective judgments. As we navigate the complex terrain of Generative AI, this paper advocates for a discerning approach, emphasizing the necessity of verification and validation processes to ensure the accuracy and reliability of generated outputs. The exploration serves to illuminate the multifaceted dimensions of trust in technology, providing insights into how human factors shape our ability to make objective assessments of the reliability and accuracy of artefacts produced by Generative AI. This contribution to the academic discourse fosters a comprehensive understanding of the intricate dynamics inherent in the responsible utilisation of Generative AI technologies.
DO - 10.5013/ijssst.a.25.01.10
M3 - Article
JO - International Journal of Simulation: Systems, Science & Technology
JF - International Journal of Simulation: Systems, Science & Technology
ER -