Understanding and Evaluating Trust in Generative AI and Large Language Models for Spreadsheets

Research output: Contribution to conference › Paper › Peer-reviewed


Abstract

Generative AI and Large Language Models (LLMs) hold promise for automating spreadsheet formula creation. However, due to hallucinations, bias and variable user skill, outputs obtained from generative AI cannot automatically be considered accurate or trustworthy. To address these challenges, a trustworthiness framework is proposed that evaluates the transparency and dependability of the generated formula. Transparency is explored through explainability (understanding the formula's reasoning) and visibility (inspecting the underlying algorithms). Dependability is evaluated through reliability (consistency and accuracy) and ethical considerations (bias and fairness). The paper also examines the drivers of these metrics, namely hallucinations, training-data bias and poorly constructed prompts. Finally, examples of mistrust in technology are considered and their consequences explored.
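To illustrate the reliability dimension described above, the following is a minimal sketch of how consistency and accuracy might be measured for an LLM-generated spreadsheet formula. It assumes a hypothetical `generate_formula` function standing in for a call to any generative AI service; no specific API or vendor is implied, and the thresholds and metrics are illustrative rather than part of the published framework.

```python
# Illustrative sketch: repeat the same prompt several times and measure how
# consistent and accurate the generated formulas are. `generate_formula` is a
# placeholder for a real generative AI call (an assumption, not an actual API).

from collections import Counter


def generate_formula(prompt: str) -> str:
    """Placeholder for an LLM call returning a spreadsheet formula string."""
    # In practice this would send `prompt` to a generative AI service.
    return "=SUM(A1:A10)"


def reliability_check(prompt: str, expected: str, trials: int = 5) -> dict:
    """Run the same prompt repeatedly and report consistency and accuracy."""
    outputs = [generate_formula(prompt) for _ in range(trials)]
    counts = Counter(outputs)
    _, modal_freq = counts.most_common(1)[0]
    return {
        # Fraction of runs agreeing with the most common answer.
        "consistency": modal_freq / trials,
        # Fraction of runs matching a known-good reference formula.
        "accuracy": outputs.count(expected) / trials,
        "outputs": dict(counts),
    }


if __name__ == "__main__":
    result = reliability_check(
        prompt="Write a formula that totals cells A1 to A10.",
        expected="=SUM(A1:A10)",
    )
    print(result)
```

A low consistency score under this kind of check would flag a formula as needing human review before it is trusted, which is in keeping with the paper's argument that generative AI outputs cannot automatically be considered accurate.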
Original language: English
Status: Published - 4 Jul 2024
Event: Proceedings of the EuSpRIG 2024 Conference "Spreadsheet Productivity & Risks", ISBN: 978-1-905404-59-9
Duration: 4 Jul 2024 – 5 Jul 2024

Conference

Conference: Proceedings of the EuSpRIG 2024 Conference "Spreadsheet Productivity & Risks", ISBN: 978-1-905404-59-9
Period: 4/07/24 – 5/07/24
