How safe is our data? (Post 5)

With the advancement of technology and the internet, we seem to have more and more to worry about in terms of privacy risks. It is not uncommon to hear people say, “Be careful what you put out on the internet” or “Nothing can ever be truly deleted online,” and frankly, there is far more truth in that than we often recognize.

The more I use AI and learn about how companies and AI tools, especially LLMs, handle our data, the more scared I am of just how much these tools know about us, whether we realize it or not. Several concerns about AI use already exist, from worries in academic circles about eroding basic learning skills and reducing creativity, to concerns in healthcare about misinformation and the spread of harm. But another basic yet equally important concern we often forget is how much AI knows about us and what that information is being used for now, or will be used for in the future.

In this week’s reading (the Stanford HAI article on privacy in an AI era), the author notes that “AI systems are so data-hungry and intransparent that we have even less control over what information about us is collected, what it is used for, and how we might correct or remove such personal information” (Miller, 2024).

So I do in fact have concerns about AI companies’ approach to user data both for now and the future. 

Source: Miller, K. (2024, March 18). Privacy in an AI era: How do we protect our personal information? Stanford HAI. https://hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information

Charlie Scoggin – Post 5 Academic AI

Question – What do you think of AI companies’ approach to user data – do you have any concerns, now or for the future?

The usage of AI and the popularity surrounding the topic of artificial intelligence itself is truly mind-boggling. To think that a robot can be so powerful while influencing decisions and answers is scary, and we must draw the line somewhere. For brainstorming purposes, think of it this way: if we have access to the internet, the internet has access to us. “The difference is the scale: AI systems are so data-hungry and intransparent that we have even less control over what information about us is collected, what it is used for, and how we might correct or remove such personal information,” states Stanford University’s AI privacy article. With that said, I think it is absolutely necessary for concerns to be raised. Security and privacy are two big problems when using AI, which is why it is so important to take precautionary steps while engaging. It’s concerning to think that AI has access to so much of our personal information; even if some of it is inaccurate, these systems still know real details about us. Countries like Serbia and Austria are in the process of establishing artificial intelligence laws that protect people and their personal data, but it seems that some nations aren’t taking notice of the severity. So in the future, this is something to keep an eye on, as it is a bit concerning.

Works Cited: King, J., & Meinhardt, C. (2024, March 18). Privacy in an AI era: How do we protect our personal information? Stanford Institute for Human-Centered Artificial Intelligence. https://hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information

Post 5: Academic AI

This week’s readings tackled an issue that we all have been facing in the academic world. Dinsmore and Fryer (2026) brought forth many issues that arise when students bypass the struggle of learning new content by asking LLMs for summaries. It was surprising to me that summarizing is actually an important part of the learning process and of integrating new information into our current schemas. During the NotebookLM portion of class, when asked to summarize our sources, NotebookLM referenced this “illusion of mastery,” or the belief that because one has read an LLM summary of the content, one understands it. According to Dinsmore and Fryer, this is not the case, since the reader never went through the process of comprehending the material.

LLM companies harvesting data is concerning and an issue that needs to be brought to the forefront. It is worrisome how much effort it takes to remove your data from their systems. I believe that the more we mention these issues, the more pressure we put on these large companies to change their privacy policies. After these readings and experiments, I am making more of a conscious effort to remove data I do not want to share with the world from ALL the LLMs I have used.

Dinsmore, Dan L., and Luke K. Fryer. “What Does Current GenAI Actually Mean for Student Learning?” Learning and Individual Differences, vol. 125, Jan. 2026, p. 102834, https://doi.org/10.1016/j.lindif.2025.102834.

Post 5 Academic AI – Matthew Kaley

To me, AI use in academia often feels like a shortcut people take because they do not want to put in the effort to do it the long way. Dinsmore & Fryer (2026) remind us that LLMs are essentially “very powerful prediction tools” that are passed off as actual intelligence. They point out that for inexperienced LLM users, outsourcing tasks such as summarization could hinder their own skill development. You can’t skip the basics and expect to be an expert in your field using AI as a shortcut.

Data collection is another troubling aspect of LLMs. Most AI tools use your personal chats to train their models by default, and opting out is difficult because the option is hidden in settings. This is concerning to me because many people seem to have grown accustomed to talking to LLMs as if they have intelligence, or even using them as a therapist or a method of coping with their traumas. This data feels highly personal, and I think using it to train LLMs could have some negative implications. Overall, I think AI can be a useful tool, but copy/pasting whatever jargon the LLM outputs for you is not the most effective way to utilize LLMs.

Source: Dinsmore, Dan L., and Luke K. Fryer. “What Does Current GenAI Actually Mean for Student Learning?” Learning and Individual Differences, vol. 125, Jan. 2026, p. 102834, https://doi.org/10.1016/j.lindif.2025.102834.

Post 5 – Academic AI

In any context, generating opinions, thoughts, and knowledge with AI and claiming them as our own is where AI usage becomes too much. It is extremely important for individuals to have the ability to gain their own knowledge and to generate their own thoughts and opinions. These skills make people unique, and if everyone develops them using AI, everyone will end up the same. In this way, AI can be extremely detrimental. Additionally, students gaining knowledge from AI can lead to less knowledge being gained in schools. “The danger we face now is the indiscriminate use of genAI to scaffold students’ learning by those who do not fundamentally understand how humans learn” (Dinsmore & Fryer, 2026).

AI companies storing data is also concerning. If people are not careful, AI can have access to extremely private, personal information. That said, people should understand this and be careful with what they share with AI. The big grey area with AI is creativity. In my opinion, using AI to create “original” work is not creative. However, many would argue that building upon ideas and samples from AI is a creative skill. In their syllabuses, professors should let students know about these dangers. I do not believe AI should be able to store personal information about the user; this should be added to the policies that AI companies must abide by. AI can do a lot of what humans do. For example, AI can create emails and messages to customers in a business. To stand out, employers should make messages more personable, as people would still much rather talk to a human than a robot.

Post 5 – Academic AI

Would you rather have a robot or a person tell you whether you should get a job? For me, it’s a simple answer: I’d rather have a person. Stanford University’s article states, “…there have been instances where the AI used to help with selecting candidates has been biased. For example, Amazon famously built its own AI hiring screening tool only to discover that it was biased against female hires.” Companies should move away from using AI to evaluate whether a person is suitable for a job. It is already hard for people to get jobs, and adding unfairness on top of that just isn’t right. It wouldn’t be hard for companies to go through applications themselves and choose the person they want instead of the one an AI prefers. I would hope companies see the harm in these practices and move away from them.

Privacy in an AI era: How do we protect our personal information? | Stanford HAI. (n.d.). https://hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information