Post 5: Academic AI

This week’s readings tackled an issue that we all have been facing in the academic world. Dinsmore and Fryer (2026) raised many issues that arise when students bypass the struggle of learning new content by asking LLMs for summaries. It was surprising to me that summarizing is actually an important part of the learning process and of assimilating new information into our existing schemas. During the NotebookLM portion of class, when asked to summarize our sources, NotebookLM referenced this “illusion of mastery,” or the belief that because one has read an LLM summary of the content, one understands it. According to Dinsmore and Fryer, this is not the case, as one never went through the process of comprehending the material.

LLM companies harvesting data is concerning and an issue that needs to be brought to the forefront more. It is worrisome that the burden falls on users to make the effort to remove their data from these systems. I believe that the more we raise these issues, the more pressure we put on these large companies to change their privacy policies. After these readings and experiments, I am making more of a conscious effort to remove data I do not want to share with the world from ALL the LLMs I have used.

Dinsmore, Dan L., and Luke K. Fryer. “What Does Current GenAI Actually Mean for Student Learning?” Learning and Individual Differences, vol. 125, Jan. 2026, p. 102834, https://doi.org/10.1016/j.lindif.2025.102834.

Post 5 Academic AI – Matthew Kaley

To me, AI use in academia often feels like a shortcut people take because they do not want to put in the effort to do it the long way. Dinsmore & Fryer (2026) remind us that LLMs are essentially “very powerful prediction tools” that are passed off as actual intelligence. They point out that for inexperienced LLM users, outsourcing tasks such as summarization could hinder their own skill development. You can’t skip the basics and expect to be an expert in your field using AI as a shortcut.

Data collection is another troubling aspect of LLMs. Most AI tools use your personal chats to train their models by default, and opting out is difficult because the option is hidden in settings. This is concerning to me because many people seem to have grown accustomed to talking to LLMs as if they have intelligence, or even using them as a therapist or a method of coping with their traumas. This data feels highly personal, and I think using it to train LLMs could have some negative implications. Overall, I think AI can be a useful tool, but copy/pasting whatever jargon the LLM outputs for you is not the most effective way to use it.

Source: Dinsmore, Dan L., and Luke K. Fryer. “What Does Current GenAI Actually Mean for Student Learning?” Learning and Individual Differences, vol. 125, Jan. 2026, p. 102834, https://doi.org/10.1016/j.lindif.2025.102834.

Post 5- Academic AI

In any context, generating opinions, thoughts, and knowledge with AI and claiming them as our own is where AI usage becomes too much. It is extremely important for individuals to have the ability to gain their own knowledge and generate their own thoughts and opinions. These skills make people unique, and if everyone develops them using AI, everyone will be the same. In this way, AI can be extremely detrimental. Additionally, students gaining knowledge from AI can lead to less knowledge being gained in schools. “The danger we face now is the indiscriminate use of genAI to scaffold students’ learning by those who do not fundamentally understand how humans learn” (Dinsmore & Fryer).

AI companies storing data is also concerning. If people are not careful, AI can have access to extremely private, personal information. People should understand this and be careful about what they share with AI. The big grey area with AI is creativity. In my opinion, using AI to create original work is not creative. However, many would argue that building upon ideas and samples from AI is a creative skill. In their syllabi, professors should let students know about these dangers. I do not believe AI should be able to store personal information about the user; this should be added to the policies that AI companies must abide by. AI can do a lot of what humans do. For example, AI can create emails and messages to customers in a business. To stand out, employers should make messages more personable, as people would still much rather talk to a human than a robot.

Post 5 – Academic AI

Would you rather have a robot or a person decide whether you should have a job? For me, it’s a simple answer: I’d rather have a person. A Stanford HAI article states, “…there have been instances where the AI used to help with selecting candidates has been biased. For example, Amazon famously built its own AI hiring screening tool only to discover that it was biased against female hires.” Companies should move away from using AI to evaluate whether a person is suitable for a job. It is already hard for people to get jobs, and adding unfairness on top of that just isn’t right. It wouldn’t be hard for companies to go through applications themselves and choose the candidates they want instead of the ones an AI prefers. I would hope companies see the harm in these practices and move away from them.

Privacy in an AI era: How do we protect our personal information? | Stanford HAI. (n.d.). https://hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information

Post 4: Creative AI

Creativity, according to Arriagada, is the ability to come up with ideas or make things that are original, unexpected, and meaningful. From this perspective, something is considered creative if people experience it that way, even if it wasn’t made with intention or awareness.

One of the prompts we gave ChatGPT was to write a poem as if it were Edgar Allan Poe.

This was the response:

Beneath the moon’s unblinking eye,
Where whispers coil and spirits sigh,
I trod the halls of shadowed thought
And mourned the peace I never caught.

A tapping—soft—upon my mind,
As though some grief I’d left behind
Returned to claim its wretched due
In midnight’s cold, unyielding hue.

No dawn shall break this sable air,
No solace find me waiting there—
For in that gloom I came to see
The darkest haunt… resides in me.

What makes this poem interesting and creative is how it uses a dark, spooky mood similar to Edgar Allan Poe, with images like shadows, whispers, and midnight that create a mysterious feeling. It also stands out because it is not just about something scary happening but shows that the real fear comes from inside the speaker, which gives the poem a deeper meaning. This connects to the idea in The Artifact isn’t the art: Rethinking creativity in the age of AI by Ashish Bhatia that creativity is not just about the final product but about the effect it has on the audience. Even if something like this poem could be imitated by AI, it still counts as creative because it produces emotion, atmosphere, and meaning for the reader.

I was also surprised that AI can create a poem in a similar style to Edgar Allan Poe because I expected that kind of emotional and atmospheric writing to be something only humans could do well. The way it was able to copy the dark tone, imagery, and rhythm made me realize how advanced AI has become at mimicking creative styles. It challenged my idea that creativity always has to come from human experience, since the poem still feels meaningful even though it was generated by AI.

These experiments shifted my thinking about creativity because they showed me that creativity is not only a human skill, but something that can also be produced by AI. I used to think creative work had to come from personal experience and emotion, but seeing AI generate a poem in a style similar to Edgar Allan Poe made me realize that similar results can be created without those human experiences. It also made me question what really makes something creative, since the AI poem still felt meaningful and well crafted even though it was not written by a person.

In conclusion, the AI did demonstrate what I consider to be creative because it was able to produce a poem with strong mood, imagery, and emotional impact. Even though it was not written by a human, it still created a meaningful experience for the reader and successfully captured a style similar to Edgar Allan Poe. This made me rethink my understanding of creativity and recognize that creative outcomes can come from both humans and AI if they are new, engaging, and affect the audience in a powerful way.

Post 4: Creative AI

Creativity is how people express themselves through art, knowledge, science, and all kinds of work and hobbies.

I experimented with Google Gemini by asking it to create a poem about the stars in the style of William Shakespeare, and this is the result I got:

“When I behold the night’s deep velvet gown,
Broidered with sparks of gold and ancient light,
The weary world puts off its heavy crown,
To watch the silent pageant of the night.
Each burning orb, a sentry in the sky,
Doth mock the fleeting breath of mortal men;
Though kings and empires in the dust shall lie,
These silver fires shall surely rise again.

Think not their distance makes their counsel cold,
For every beam is steeped in mystery;
They write in fire what time hath not yet told,
And chart the tides of every restless sea.
So let me gaze, till dawn doth break the spell,
And find in stars the truths no tongue can tell.”

I noticed some things:

  1. It was able to generate a poem in a way that Shakespeare could have written.
  2. However, reading the poem felt uninspiring. There is no spark or inspiration that makes me want to connect with this generated poem.

Sure, you can generate something quick and easy, but for some reason it’s not very inspiring when it’s not coming from a human’s perspective on life. Art is meant to be cherished in some way: something that tells our tale from the heart, mind, and spirit, showing how we see life from our own view and sharing that wisdom with others to connect and inspire.