What stood out to me here was how emotionally grounded it felt. Instead of jumping straight into a dramatic plot, the AI focused on a quiet, almost poetic moment. That made it feel more human than I expected.

For me, creativity isn’t just producing something new—it’s producing something that feels meaningful or intentional. Based on our readings, I’d connect this to the idea that creativity involves both novelty and value. Something can be original, but if it doesn’t resonate or feel purposeful, it doesn’t fully count. At the same time, I’m not convinced creativity requires consciousness or lived experience, even though some philosophers argue that intention is essential to it.

So, did the AI demonstrate creativity here? I’d say partially, but not in the same way humans do. The passage is clearly novel in the sense that it wasn’t copied from anywhere, and it has emotional value—it creates a vivid feeling. But I don’t think the AI “understood” what it was doing. It was recombining patterns it learned from human writing. In class, we talked about functionalist arguments—if something produces outputs indistinguishable from creative work, maybe that’s enough to call it creative. This example kind of supports that idea, because if I didn’t know an AI wrote it, I might assume a human did.

At the same time, there were limits. As the story continued, it slipped into predictable territory—loneliness, overload, eventual isolation. It felt like it ran out of originality once it moved past the opening mood. That made me think the AI is strongest at style and imitation, but weaker at sustaining deeper, intentional development.

This experiment definitely shifted how I think about creativity and AI. Before, I saw AI writing as mostly generic and mechanical. Now I think it can produce moments of real beauty or insight—but those moments are inconsistent and not driven by genuine understanding. It made me realize that creativity, at least for humans, is not just about producing something that sounds good, but about having a perspective behind it.

Margaret Boden, “Creativity and Artificial Intelligence” (1998)

Matthew Kaley Post 4: Creative AI

This week, we explored creative writing with AI. Defining creativity was a challenge running through several of the readings we looked at. For these assignments, I would define creativity as personal creativity, the kind that shapes an individual’s perception or experience. The poems we generated lack the boundary-breaking elements of capital-C creativity, which transforms entire domains (FreeThink.com).

In this assignment, I used Claude to write a poem in the style of Shel Silverstein. My initial prompt was basic and produced a weak result. Then, I asked what other possibilities for a poem in Shel Silverstein’s style would look like. Claude suggested ideas characteristic of Silverstein, such as objects coming to life or kid logic vs. adult logic. One idea I found interesting was about someone who wished for a million dollars, but instead got a million one-dollar bills dumped on their head. The visual was funny, so I asked for a poem about that. After a lot of fine-tuning, I produced this poem.

Careful What You Wish

I wished upon a shooting star

For one million dollars, please!

The star said, “Sure!” and dumped them all

From forty thousand feet with ease.

The first one hit me on the nose,

The second clipped my ear,

By bill five hundred I was gone,

By thousand — disappeared.

They found me Tuesday, mostly fine,

Beneath a paper hill.

The doctor said, “You’re very rich.”

The nurse said, “And very still.”

The bank said, “Sir, we’ll need to count

Each bill before we store it.” I said,

“Of course!” and settled in.

(I should have just ignored it.)

One, two, three, four, five, six, seven —

The banker sneezed. I lost it.

One, two, three, four, five, six —

I yawned. My eyelids crossed it.

I woke up sometime Thursday noon

And counted forty-seven.

I hit eight hundred thousand once —

The banker sneezed.

One. Two.

The poem is not great, but through effective prompting I was able to produce a decently interesting result that has some personal creative value for me. I think creative writing with AI is an interesting novelty, but it would struggle to produce boundary-breaking work that transforms entire domains. Overall, I was more impressed by the creative writing capabilities of LLMs than I expected to be, but I remain skeptical.

Source: https://www.freethink.com/opinion/studio-ghibli-chatgpt-creativity?utm_source=tldrai

Post 4: Creative AI

My definition of creativity is the ability to create a unique, original product from scratch. This week in class, my group, group 5, decided to have LLMs create a poem in the style of Yeat, the rapper. ChatGPT’s response was not unique and not even created from scratch; many of the rhymes ChatGPT produced are rhymes I’ve heard many times before. When prompted to “Create a short poem in the style of yaet the rapper,” ChatGPT provided the following poem.

“Yeah
I step in the night with the lights all glowin
Money keep callin my phone it keep showin
Diamonds they dance and they freeze in the moment
I told em I got it I meant it I own it

Pull up in silence the engine be hummin
Thought I was lost but I turned into somethin
Stack it up tall like it came out the oven
Countin it twice cause I came from the nothin

Yeah
They tryna ride on the wave that I made
I been up late turnin dark into day
Voices in my head but they fade with the bass
I keep it movin no time to just wait

I been on top and I never look down
Came from the dirt now I walk with a crown
Life like a loop it keep goin around
I just keep floatin I don’t make a sound”

While I cannot argue that this poem is awful, and I cannot argue that it is not in the style of Yeat, I can argue that it is as unoriginal as it gets. “They tryna ride on the wave” is a lyric that Yeat has used in a real song before. This is completely unoriginal, and therefore I would not say ChatGPT used any form of creativity in its output to our group’s prompt. This week’s experiments shifted my thinking about AI’s responses to writing prompts. In this assignment, ChatGPT took lyrics from other songs and used them as its own. Because this would be plagiarism for a human, I argue that ChatGPT should not be able, or allowed, to do it either. Yeat is king, and I find it preposterous that ChatGPT would take Yeat’s creative work and claim it as its own.

Post 3 – Prompt

The prompt that I used was “What is the best fundamental way to catch a football?” This prompt would be most useful for people who play football, specifically those who play the wide receiver position. They might ask this question if they are struggling to catch a football or if they are just becoming a wide receiver and need a little advice. The answer that Gemini gave was clear and fairly simple:

The “best” way depends on where the ball is, but for most catches, the fundamental gold standard is the Diamond Technique. Whether you’re a pro or playing in the backyard, it’s all about creating a “pocket” for the ball to settle into.

Here is the breakdown of the mechanics:


1. Hand Positioning (The Diamond)

If the ball is at chest height or higher, you want to form a diamond shape with your hands.

  • The Shape: Bring your hands together so your index fingers and thumbs are nearly touching.
  • The Target: Look at the ball through that diamond window as it arrives.
  • Pinkies Down: If the ball is below your waist, flip your hands so your pinkies are touching and your palms are facing up (the “basket” catch).

2. “Soft Hands” (The Shock Absorbers)

The biggest mistake beginners make is having “stone hands”—keeping arms and wrists rigid.

  • Snatch and Give: Don’t just let the ball hit your palms. As the ball makes contact, your arms should slightly retract toward your body.
  • The Sound: A good catch should be relatively quiet. If it makes a loud slap against your hands, you aren’t absorbing the force properly.

3. Eyes on the Tip

Don’t just look at “the ball.” Focus on the front tip of the football.

  • Track the point of the ball all the way into the diamond of your hands.
  • The “Tuck”: Do not look away to see where defenders are until you have physically felt the ball hit your hands and begun the tuck.

4. The Three-Point Tuck

As soon as the ball is secured, move it to the “tuck” position to prevent fumbles:

  1. The Palm: Covering the front tip of the ball.
  2. The Forearm: Pressed firmly against the side of the ball.
  3. The Bicep/Ribs: Squeezing the ball against your body.

Common Pitfalls to Avoid

Mistake | Why it’s bad | The Fix
Body Catching | Using your chest to stop the ball. | It causes the ball to bounce off your pads; catch with hands extended.
Fighting the Ball | Reaching out too aggressively. | Let the ball come to you and “cradle” it in.
Early Turning | Looking upfield before the catch. | “Look the ball in”—stare at it until it’s tucked.

As a football player who has played for many years, this is what I would tell someone who is just starting out as well. The only issue with this answer is that it is a little lengthy and includes some parts that were not asked about, like how to avoid fumbling the ball after it has been caught.

This shows that LLMs are good at generating responses and can typically give you the answer you are looking for. The drawback is that the response is lengthy or often adds more than you are looking for or wanting. This connects directly to our discussion on March 31st about the drawbacks of AI.

Google Gemini. https://gemini.google.com/app/9042412a074caa57

Post 3: Prompting LLMs

Prompting LLMs is very interesting, and in my experience, you never know what you will get unless you are 100% specific about what you are looking for. This brings me to the most helpful prompting strategy I have learned: “outlining exactly what you want and describing your ideal output” (OpenAI Academy, 2025).

Using ChatGPT, I tested how different the outputs would be if I was specific versus not specific in my prompts. The first (non-specific) prompt was “Explain vaccine hesitancy.” This was the output it gave me:

Vaccine hesitancy is when people are unsure about, delay, or refuse vaccines even when they are available. It’s not just “anti-vaccine” behavior, it’s more of a spectrum ranging from mild uncertainty to strong refusal.

At its core, vaccine hesitancy is shaped by three big factors:

1. Confidence (trust)
This is about how much people trust:

  • the safety and effectiveness of vaccines
  • healthcare providers
  • institutions like governments and pharmaceutical companies

If trust is low, hesitancy increases. For example, misinformation about side effects or historical medical mistreatment can reduce confidence.

2. Complacency (perceived risk)
Some people don’t see diseases as a real threat anymore. Because vaccines have been so successful, illnesses like measles or polio feel “rare,” so people think:

  • “I probably won’t get sick anyway”
  • “It’s not that serious”

This lowers motivation to get vaccinated.

3. Convenience (access and barriers)
Even people who are willing may delay vaccines because of:

  • cost
  • transportation issues
  • time constraints
  • confusing healthcare systems

So hesitancy isn’t always about beliefs, it can also be about logistics.


Other important influences:

  • Misinformation (especially on social media)
  • Cultural or religious beliefs
  • Personal experiences with healthcare
  • Peer and family influence
  • Political climate and trust in authority

Why it matters:
Vaccine hesitancy can reduce vaccination rates, which weakens herd immunity and increases the risk of disease outbreaks.

This was surprisingly detailed and included several examples to aid understanding. It was also easy to follow, and overall the output had a good amount of depth. This first output would be very useful for people seeking a general, accessible understanding of vaccine hesitancy, or for someone looking for a quick definition they can use. It even goes beyond the basics, which is helpful because the examples it gives allow for understanding beyond the definition alone.

I then asked the more specific prompt: “Act as a public health educator. Explain vaccine hesitancy to college students in simple terms, include one real-world example, and briefly mention why it matters for community health.” This was the output it then gave me:

Think of vaccine hesitancy as being unsure about getting a vaccine, not necessarily being fully against it. It’s when people delay or question vaccines even when they’re available.

In simple terms, it usually comes down to three things:

  • Trust: Do I believe this vaccine is safe and that the system behind it is honest?
  • Risk perception: Do I think the disease is actually a threat to me?
  • Access: Is it easy for me to get the vaccine?

A lot of college students fall into the middle, not strongly anti-vaccine, but influenced by things like social media, friends, or confusing information.

Real world example:
During the COVID-19 pandemic, many young adults delayed getting vaccinated. Some felt they were low risk, others were unsure because of things they saw online about side effects. Even when vaccines were free and available on campuses, hesitation still slowed uptake.

Why it matters for community health:
Vaccines don’t just protect you, they protect the people around you. When fewer people get vaccinated, diseases can spread more easily, especially to those who can’t be vaccinated or are more vulnerable. This can lead to outbreaks that affect entire communities, including college campuses.

This response was highly specific and tailored to college students, just as I prompted. Like a public health educator, it explained things simply and in a “teaching manner.” The example was also centered on college students and communal living on campus, and overall the language was much simpler and easier to understand than in the first, general prompt. This output is useful for college students because it contextualizes vaccine hesitancy within their experience. Compared to the first prompt, a college student might find this more relevant and specific to their situation.

All of this connects to OpenAI’s prompting guide, which discussed the importance of providing ChatGPT with background and the context you would like the output framed within. I was surprised by how much detail the first prompt produced, but just as I expected, when I framed the second prompt, I got an output specific to the demographic (college students) I was interested in.

Source: Admin. (2025, August 6). Prompting. OpenAI Academy. https://academy.openai.com/public/clubs/work-users-ynjqu/resources/prompting

Post 3

The prompt I used was “make me a workout plan.” I kept it super simple and straight to the point. In class, we discussed how a vague prompt usually leads to a generic answer. I gave the prompt to ChatGPT, and I got a very general workout plan. It was obvious the program didn’t understand what I was asking; it was simply producing something that sounded like a generic workout plan.

This prompt would be helpful if a person simply wants a quick starting point, like a beginner who does not know where to begin with fitness. However, for a person like me, it is not particularly helpful without more information. For instance, I would want to supply specifics about goals, scheduling, and perhaps equipment availability. This is particularly relevant to our previous class discussion about how the usefulness of an LLM depends on how specific a prompt is. For instance, when I improved the prompt by adding more information, such as “5-day gym plan focused on muscle gain with dumbbells and machines,” the response became more relevant.

ChatGPT provided a structure for a weekly routine (e.g., chest day, leg day, etc.), but this only works effectively once you change and refine the prompt. This is really the key to understanding why the model is not thinking but simply reacting to patterns based on common requests for information. That’s also why it can sometimes make recommendations that are off-base or overly general.

This all ties together with the readings we had in class. The OpenAI guide emphasizes that good prompts should clearly define the task, provide context, and specify the desired output, which I didn’t do at first, leading to a weaker response. Similarly, the Claude prompting guide explains that you should treat the model like a “new employee” who has no prior context and needs clear instructions. That idea really clicked for me here: my original prompt didn’t give the model anything to work with, so it gave me the most average answer possible.

The output ChatGPT gave me: 

Schedule 

  • Day 1 – Workout  
  • Day 2 – Rest  
  • Day 3 – Workout  
  • Day 4 – Rest  
  • Day 5 – Workout  
  • Weekend – Rest  

Workout A 

  • Squats – 3 × 8  
  • Push-ups (or bench press) – 3 × 8  
  • Lat Pulldown (or assisted pull-ups) – 3 × 8  
  • Plank – 3 × 20–30 sec  

Workout B 

  • Deadlifts – 3 × 5  
  • Dumbbell Shoulder Press – 3 × 8  
  • Dumbbell Rows – 3 × 8  
  • Crunches – 3 × 12 
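As a side note, the “define the task, provide context, specify the output” structure from the OpenAI guide can be sketched as a small prompt-building helper. This is purely my own illustration of the idea, not anything from the guide or an official API; the helper name and fields are hypothetical.

```python
def build_prompt(task: str, context: str, output_spec: str) -> str:
    """Assemble a structured prompt from the three elements the guide names:
    the task, the background context, and the desired output format."""
    return (
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Desired output: {output_spec}"
    )

# Vague version: just the task, with nothing for the model to work with.
vague = build_prompt("Make me a workout plan", "", "")

# Improved version, mirroring the refinement described in the post above.
specific = build_prompt(
    "Make me a workout plan",
    "5 days per week in a gym; goal is muscle gain; dumbbells and machines available",
    "a day-by-day split listing exercises, sets, and reps",
)

print(specific)
```

Pasting the `specific` string into ChatGPT should, per the guide’s reasoning, pull the model away from the “most average answer possible” and toward the tailored routine described above.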

Citation 

OpenAI. Prompting Guide. OpenAI Academy.