Understanding and Evaluating Trust in Generative AI and Large Language Models for Spreadsheets

Research output: Contribution to conference › Paper › peer-review


Abstract

Generative AI and Large Language Models (LLMs) hold promise for automating spreadsheet formula creation. However, owing to hallucinations, bias and variable user skill, outputs obtained from generative AI cannot automatically be considered accurate or trustworthy. To address these challenges, a trustworthiness framework is proposed, based on evaluating the transparency and dependability of the formula. The transparency of the formula is explored through explainability (understanding the formula's reasoning) and visibility (inspecting the underlying algorithms). The dependability of the formula is evaluated through reliability (consistency and accuracy) and ethical considerations (bias and fairness). The paper also examines the drivers of these metrics in the form of hallucinations, training-data bias and poorly constructed prompts. Finally, examples of mistrust in technology are considered and their consequences explored.
Original language: English
Publication status: Published - 4 Jul 2024
Event: Proceedings of the EuSpRIG 2024 Conference "Spreadsheet Productivity & Risks" (ISBN: 978-1-905404-59-9)
Duration: 4 Jul 2024 to 5 Jul 2024


Keywords

  • Generative AI
  • Spreadsheets
  • Spreadsheet risks
  • Software engineering
