With the advancement of technology and the internet, we seem to have more and more to worry about in terms of privacy risks. It is not uncommon to hear people say, “Be careful what you put out on the internet” or “Nothing can ever be truly deleted online,” and frankly, there is far more truth in that than we often recognize.
The more I use AI and learn about how companies and AI tools, especially LLMs, handle our data, the more unsettled I am by just how much these tools know about us, whether we share that information knowingly or not. Many concerns about AI use already exist, from worries in academic circles about eroding foundational learning skills and reducing creativity, to concerns in healthcare about misinformation and the spread of harm. But I think another basic yet equally important concern we often forget is how much AI knows about us, and what that information is being used for now or will be used for in the future.
In this week’s reading (the HAI article on privacy in an AI era), the author discusses how AI systems are so data-hungry and opaque that we have even less control over what information about us is collected, what it is used for, and how we might correct or remove it (Miller, 2024).
So yes, I do have concerns about AI companies’ approach to user data, both now and going forward.
Source: Miller, K. (2024, March 18). Privacy in an AI era: How do we protect our personal information? Stanford HAI. https://hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information