Abstract
Generative AI and Large Language Models (LLMs) hold promise for automating spreadsheet formula creation. However, due to hallucinations, bias and variable user skill, outputs obtained from generative AI cannot automatically be considered accurate or trustworthy. To address these challenges, a trustworthiness framework is proposed, based on evaluating the transparency and dependability of the formula. The transparency of the formula is explored through explainability (understanding the formula's reasoning) and visibility (inspecting the underlying algorithms). The dependability of the formula is evaluated through reliability (consistency and accuracy) and ethical considerations (bias and fairness). The paper also examines the drivers of these metrics: hallucinations, training-data bias and poorly constructed prompts. Finally, examples of mistrust in technology are considered and their consequences explored.
Original language | English |
---|---|
Publication status | Published - 4 Jul 2024 |
Event | Proceedings of the EuSpRIG 2024 Conference "Spreadsheet Productivity & Risks" ISBN: 978-1-905404-59-9 - Duration: 4 Jul 2024 → 5 Jul 2024 |
Conference
Conference | Proceedings of the EuSpRIG 2024 Conference "Spreadsheet Productivity & Risks" ISBN: 978-1-905404-59-9 |
---|---|
Period | 4/07/24 → 5/07/24 |
Keywords
- Generative AI
- Spreadsheets
- Spreadsheet risks
- Software engineering