Abstract
The emergence of large language models (LLMs), exemplified by ChatGPT, has transformed how we interact with artificial intelligence (AI). However, the widespread integration of AI technologies has raised serious concerns about user privacy protection. Trust and trustworthiness are fundamental to establishing a robust relationship between users and AI, building user confidence, and ensuring the responsible and ethical use of AI while safeguarding user privacy. Within this context, the authors review previous research to examine the current global landscape of public trust in ChatGPT and its evolution over time. The chapter then investigates the trustworthiness of AI by examining the risks and threats posed by ChatGPT from technical, legal, and ethical perspectives. The authors also explore the factors that influence user trust in AI and propose strategies for enhancing the trustworthiness of AI systems in safeguarding user privacy, thereby helping people become more aware and discerning about what they should and should not trust online with regard to AI. This chapter offers insights to a broad audience, including industry professionals, policymakers, educators, and the general public, all working toward a balance between user trust and AI trustworthiness. The goal is to enable the public to harness the benefits of advanced technology while safeguarding their privacy in the ChatGPT era.
Original language | English
---|---
Title of host publication | Data Protection
Subtitle of host publication | The Wake of AI and Machine Learning
Publisher | Springer Nature
Pages | 103-127
Number of pages | 25
ISBN (Electronic) | 9783031764738
ISBN (Print) | 9783031764721
DOIs | 
Publication status | Published - 1 Jan 2025
Keywords
- AI
- ChatGPT
- Data protection
- Information security
- Privacy protection
- Trust and trustworthiness