Matthew Kaley – Post 6: What’s Next?

People have raised concerns that AI will steal your job or write your homework for you. One concern I don’t see enough people talking about is that we are outsourcing our thinking to LLMs, and that has wide-ranging impacts. You might be thinking, “Why should I think about how to phrase an email when Claude can hit all the key points in seconds?” Some people have even started using ChatGPT as a replacement for Google, which I think shows a lack of understanding of how LLMs work. For many people, the problem-solving struggles that forced our brains to figure things out are being smoothed away by AI.

This matters because human relationships run on the ability to reason independently and to change your mind based on your own thinking. LLMs don’t even have the capacity to “think” for themselves. The moments when we don’t understand aren’t meant to be optimized away by LLMs. If everyone delegated their thinking to models trained on yesterday’s consensus, we wouldn’t form the unique opinions that make humans special. So, I say, write that boring email, and do it in your own voice.

Matthew Kaley – Post 5: Academic AI

To me, AI use in academia often feels like a shortcut people take because they do not want to put in the effort to do things the long way. Dinsmore and Fryer (2026) remind us that LLMs are essentially “very powerful prediction tools” that get passed off as actual intelligence. They point out that for inexperienced LLM users, outsourcing tasks such as summarization can hinder their own skill development. You can’t skip the basics and expect AI shortcuts to make you an expert in your field.

Data collection is another troubling aspect of LLMs. Most AI tools use your personal chats to train their models by default, and opting out is difficult because the option is buried in settings. This concerns me because many people seem to have grown accustomed to talking to LLMs as if they were intelligent, even using them as a therapist or a way of coping with trauma. That data is highly personal, and I think using it to train LLMs could have negative implications. Overall, I think AI can be a useful tool, but copying and pasting whatever the LLM outputs for you is not the most effective way to use it.

Source: Dinsmore, Dan L., and Luke K. Fryer. “What Does Current GenAI Actually Mean for Student Learning?” Learning and Individual Differences, vol. 125, Jan. 2026, p. 102834, https://doi.org/10.1016/j.lindif.2025.102834.

Matthew Kaley – Post 4: Creative AI

This week, we explored creative writing with AI. Defining creativity was a challenge in some of the readings we looked at. For these assignments, I would define creativity as personal creativity, the kind that shapes an individual’s perception or experience. AI-generated poems like the one below lack the boundary-breaking elements that transform entire domains, what the Freethink piece calls capital-C creativity (FreeThink.com).

In this assignment, I used Claude to write a poem in the style of Shel Silverstein. My initial prompt was basic and produced a weak result. Then I asked what other possibilities for a poem in Silverstein’s style would look like. Claude suggested ideas such as objects coming to life and kid logic versus adult logic. One idea I found interesting was about someone who wished for a million dollars but instead got a million one-dollar bills dumped on their head. The visual was funny, so I asked for a poem about that. After a lot of fine-tuning, I produced this poem.

Careful What You Wish

I wished upon a shooting star
For one million dollars, please!
The star said, “Sure!” and dumped them all
From forty thousand feet with ease.

The first one hit me on the nose,
The second clipped my ear,
By bill five hundred I was gone,
By thousand — disappeared.

They found me Tuesday, mostly fine,
Beneath a paper hill.
The doctor said, “You’re very rich.”
The nurse said, “And very still.”

The bank said, “Sir, we’ll need to count
Each bill before we store it.” I said,
“Of course!” and settled in.
(I should have just ignored it.)

One, two, three, four, five, six, seven —
The banker sneezed. I lost it.
One, two, three, four, five, six —
I yawned. My eyelids crossed it.

I woke up sometime Thursday noon
And counted forty-seven.
I hit eight hundred thousand once —
The banker sneezed.
One. Two.

The poem is not great, but through effective prompting I was able to produce a decently interesting result that has some personal creative value for me. I think creative writing with AI is an interesting novelty, but it would struggle to produce boundary-breaking work that transforms entire domains. Overall, I was more impressed than I expected by the creative writing capabilities of LLMs, but I remain skeptical.

Source: https://www.freethink.com/opinion/studio-ghibli-chatgpt-creativity

Matthew Kaley – Post 3: Prompting LLMs

One strategy that I have found helpful when prompting LLMs is refinement. Often, LLMs do not quite hit the mark on the first try, and refinement brings the output closer to what you are actually looking for. During Tuesday’s activity, I had Claude create a week-long workout plan for a Division III college swimmer, focused on the 200 Fly and the 200 IM, the two events I swim. I found that the main sets were far more physically demanding than a typical practice. When I brought these concerns to Claude, it asked questions about how the schedule should be restructured. The revised schedule (picture below) was better but still included sets that looked correct on the surface yet would be exhausting and unlikely to lead to improvement if actually implemented. More refinement could help Claude develop a workout plan I could actually use, a strategy recommended in Tuesday’s reading from UT, which describes refinement as turning a prompt session into an extended “dialogue” with your chatbot.

Prompt 1: Give me a swim workout

Prompt 2: I am a college-level swimmer at the Division III level. Create another workout centered on training for the 200 IM.

Prompt 3: Create a training block for a week. My main events are 200 Fly, 400 IM, and 200 IM

Prompt 4: I have concerns that doing that many 200 flys and 400 IMs in a row will be highly taxing on my body

Claude then asked me this (in a pop-out menu with responses I could click):
Q: How would you like to address the load on Saturday?
A: Keep Saturday but reduce volume/intensity
Q: Which event feels most taxing on your body right now?
A: 400 IM

Matthew Kaley – Post 2: AI Ethics

For this class activity, I explored Notebook LM and its capabilities in summarizing the implications of AI in facial recognition. I asked Notebook LM to scan the web for surface-level sources relevant to this topic. There was a “deep search” option as well, but even the surface-level search returned a large number of relevant, peer-reviewed sources. Some examined the implications for law enforcement, while others examined the implications for health care.

One of the most troubling findings Notebook LM laid out for me was the high rate at which darker-skinned women are misidentified: light-skinned men had an error rate of 0.8%, compared to 34.7% for darker-skinned women. Scraping social media for training data is another concern. Just because an image is publicly viewable does not mean its biometric data should be a free resource for corporations to train their models.

I think Notebook LM can be used to synthesize sources. I would ask Notebook LM to find its own sources only sparingly, since I am concerned about the credibility of the sources it found for me. After finding sources yourself, Notebook LM could be a good tool for summarizing what those sources have in common, though it is definitely not a substitute for actually reading them if you need to write a literature review or something similar.

Group members who explored ChatGPT found it was very confident in all its responses, even though its ideas were more general than Notebook LM’s. One thing I found interesting about Notebook LM is that it often backs up its passages with citations pulled directly from the sources, so you can go back and read the author’s words to make sure it isn’t hallucinating. I think Notebook LM has uses in writing papers; it could help you see connections between sources that are hiding in plain sight.

Source: Understanding Facial Recognition Algorithms | RecFaces

Source: https://notebooklm.google.com/notebook/e8482b60-de6d-4b3c-bd07-23e7e34e3b71