Post 4: Creative AI

Before this week, I assumed generative AI would feel like either a shortcut or a threat to originality. What surprised me instead was how useful it can be without replacing the creative process. When I asked it to write a poem in the style of Edgar Allan Poe, the result was convincing on the surface, but it did not feel fully alive. That gap made me realize AI is better suited as a tool for expanding perspective than as a substitute for creativity.

To me, creativity is the ability to transform personal experience, emotion, and interpretation into something meaningful and original. It is not just about making something new. It is about making something that reflects a point of view. This builds on Mihaly Csikszentmihalyi’s idea that creativity involves meaningful contribution, but I think perspective is just as important as contribution. Without a perspective, creative work feels interchangeable.

Here is a short excerpt from an AI-generated poem I received:

“A tapping rose — a hollow sound, like knuckles made of bone,
As though some soul beneath the ground sought solace in my home.
I dared not breathe; my heart became a prisoner in my chest,
For in that knock I heard my name, in sorrowful unrest.”

What made this interesting is how well it imitates Poe’s style. The dark imagery, rhythm, and tone are all there. But it feels more like a blend of familiar patterns than a unique expression. This connects to the article “The artifact isn’t the art: Rethinking creativity in the age of AI,” which argues that AI can generate convincing outputs without engaging in the deeper human processes, like struggle, ambiguity, and lived experience, that shape real creativity. In that sense, the poem is stylistically creative, but not deeply original.

Because of this, I would use AI to augment my creativity by helping me explore different perspectives rather than replacing my own voice. For example, if I were writing about a theme like grief, I could ask AI to generate interpretations of that theme from different angles, such as historical, psychological, or even from different fictional voices. This would expose me to ideas I might not have considered, almost like having multiple brainstorming partners. Then, instead of copying those ideas, I would filter and reshape them through my own experiences and intentions.

The benefit of this approach is that it pushes me beyond my default way of thinking. It helps me see blind spots, challenge my assumptions, and experiment with styles or viewpoints I would not normally try. However, the risk is that I might start leaning too much on AI-generated perspectives and lose confidence in my own instincts. There is also the danger of blending too many external ideas and ending up with something that lacks a clear, personal voice.

This week definitely shifted how I think about AI and creativity. I no longer see AI as something that replaces creativity, but as something that can expand it, especially by offering new perspectives. At the same time, it made me more aware that creativity ultimately depends on human input. Our experiences, interpretations, and willingness to refine ideas are what give work meaning. AI can suggest possibilities, but the meaning behind the work still has to come from me.

Post 6: What’s Next?

One issue related to generative AI that I think does not get enough attention is its impact on the environment. A lot of people use AI tools every day without realizing how much energy and water are needed to keep them running. Even though it feels like you are just typing into a simple chat box, there are large data centers behind it that require constant electricity and cooling to operate.

A good example of this is how large AI models are trained. They process huge amounts of data using powerful computers that can run for weeks at a time. These data centers use a lot of electricity and also need water for cooling so the systems do not overheat. In some areas, this can put pressure on local resources, especially during times when energy or water demand is already high.
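The scale involved here can be made concrete with a rough back-of-envelope estimate. Every number below is an assumption chosen purely for illustration, not a measurement of any real model or data center:

```python
# Back-of-envelope estimate of training energy use.
# All values are illustrative assumptions, not real figures.
num_gpus = 1000          # assumed number of accelerators
watts_per_gpu = 700      # assumed power draw per accelerator, in watts
training_days = 30       # "run for weeks at a time"
pue = 1.5                # assumed overhead factor for cooling and facilities

hours = training_days * 24
# Convert watts to kilowatts, multiply by hours, then apply the overhead.
energy_kwh = num_gpus * watts_per_gpu / 1000 * hours * pue
print(f"Estimated training energy: {energy_kwh:,.0f} kWh")
```

Even with these made-up inputs, the result lands in the hundreds of thousands of kilowatt-hours, which helps explain why a single training run can put real pressure on local electricity and cooling resources.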

This matters because AI is becoming more common in everyday life, whether it is for school, work, or basic online searches. If we do not pay attention to the environmental cost, the convenience of AI could come with a serious downside. One solution is to improve how efficient AI systems are and to use cleaner energy sources for data centers. People can also be more mindful about when and how often they use AI tools so that they are not relying on them unnecessarily.

Academic AI

A simple way to think about “too much” AI use is whether students are still doing the thinking.

Dinsmore and Fryer warn that “some of those calling for or directly introducing genAI into formal education fail to fully understand… how humans learn in any given domain of knowledge” (Dinsmore & Fryer, 2026). This suggests the risk is using AI in ways that replace the mental effort needed for learning.

So AI use is “too much” when it does the key thinking for students, like planning answers, explaining ideas, or solving problems, and students just accept the result. That may improve work in the short term, but it reduces learning.

AI use is more acceptable when it supports learning instead. For example, it can give feedback, examples, or help students improve their own ideas, as long as they still make decisions and explain their thinking.

In both classrooms and professional life, the boundary is the same: AI should help people think better, not think for them.

Post 4: Creative AI

Creativity, according to Arriagada, is the ability to come up with ideas or make things that are original, unexpected, and meaningful. From this perspective, something is considered creative if people experience it that way, even if it wasn’t made with intention or awareness.

One of the prompts we gave ChatGPT was to write a poem as if it were Edgar Allan Poe.

This was the response:

Beneath the moon’s unblinking eye,
Where whispers coil and spirits sigh,
I trod the halls of shadowed thought
And mourned the peace I never caught.

A tapping—soft—upon my mind,
As though some grief I’d left behind
Returned to claim its wretched due
In midnight’s cold, unyielding hue.

No dawn shall break this sable air,
No solace find me waiting there—
For in that gloom I came to see
The darkest haunt… resides in me.

What makes this poem interesting and creative is how it uses a dark, spooky mood similar to Edgar Allan Poe, with images like shadows, whispers, and midnight that create a mysterious feeling. It also stands out because it is not just about something scary happening but shows that the real fear comes from inside the speaker, which gives the poem a deeper meaning. This connects to the idea in “The artifact isn’t the art: Rethinking creativity in the age of AI” by Ashish Bhatia that creativity is not just about the final product but about the effect it has on the audience. Even if something like this poem could be imitated by AI, it still counts as creative because it produces emotion, atmosphere, and meaning for the reader.

I was also surprised that AI can create a poem in a similar style to Edgar Allan Poe because I expected that kind of emotional and atmospheric writing to be something only humans could do well. The way it was able to copy the dark tone, imagery, and rhythm made me realize how advanced AI has become at mimicking creative styles. It challenged my idea that creativity always has to come from human experience, since the poem still feels meaningful even though it was generated by AI.

These experiments shifted my thinking about creativity because they showed me that creativity is not only a human skill, but something that can also be produced by AI. I used to think creative work had to come from personal experience and emotion, but seeing AI generate a poem in a style similar to Edgar Allan Poe made me realize that similar results can be created without those human experiences. It also made me question what really makes something creative, since the AI poem still felt meaningful and well crafted even though it was not written by a person.

In conclusion, the AI did demonstrate what I consider to be creative because it was able to produce a poem with strong mood, imagery, and emotional impact. Even though it was not written by a human, it still created a meaningful experience for the reader and successfully captured a style similar to Edgar Allan Poe. This made me rethink my understanding of creativity and recognize that creative outcomes can come from both humans and AI if they are new, engaging, and affect the audience in a powerful way.

Prompting LLMs

The LLM prompt I used: Explain how gravity works

The most helpful prompting strategy I’ve learned is using a persona prompt. In class, we have talked about how the way you ask a question changes the model’s answer, and I saw that myself when I asked Copilot to explain gravity. My first prompt just said, “Explain how gravity works,” and the answer was broad. It mentioned Newton and Einstein but did not go into much detail. When I used a persona prompt and asked Copilot to explain it as an astrophysicist, the response became much clearer and more detailed, including examples and explanations that actually made sense. This prompt is helpful for students, writers, or anyone who needs a clear, in-depth explanation. By contrast, when I tried an audience prompt explaining gravity in football terms, the answer was less helpful because it simplified things too much.

Using a persona prompt with Copilot showed me how important prompt literacy is, which connects to what I learned from the UT Aspire article. The article talks about how asking clear, thoughtful questions helps students think more deeply and get better answers. I saw this in action when I asked Copilot to explain gravity as an astrophysicist—the answer was detailed and easy to understand. But a broad prompt or one focused on an audience gave weaker results. This shows that knowing how to ask questions carefully can guide thinking and produce the best results, just like UT Aspire explains.
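The three variants I tried (plain, persona, and audience) can be sketched as a small prompt-building helper. This is just an illustration of the framing strategy; the function name and phrasing templates are my own assumptions, not part of any tool’s API:

```python
def build_prompt(question, persona=None, audience=None):
    """Frame a question with an optional persona or audience.

    A persona prompt asks the model to answer *as* an expert;
    an audience prompt asks it to answer *for* a given reader.
    """
    if persona:
        return f"You are {persona}. {question}"
    if audience:
        return f"{question} Explain it for {audience}."
    return question

# The three framings described above:
plain = build_prompt("Explain how gravity works.")
expert = build_prompt("Explain how gravity works.",
                      persona="an astrophysicist")
fans = build_prompt("Explain how gravity works.",
                    audience="a football fan, using football terms")
```

Sending each string to the same model makes it easy to compare how much the framing, rather than the question itself, shapes the depth of the answer.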

Bias in AI

I researched the ethical issue of bias in artificial intelligence. One interesting thing I learned is that AI systems can repeat social biases that already exist in society if the data they are trained on is biased. For example, if an AI system learns from data that reflects stereotypes or unfair patterns, it may continue those patterns in its decisions. This can affect things like hiring, facial recognition, or loan approvals, which could lead to unfair treatment of certain groups.

To explore this more, I asked two different AI tools the same question: “What social biases can occur?” I asked both ChatGPT and NotebookLM so I could compare their responses. Both tools gave similar answers and explained that social biases in AI can include gender bias, racial bias, and economic bias. These types of biases can happen when some groups are underrepresented or when historical data already reflects inequality.

However, I noticed a difference between the two responses. ChatGPT gave a clear and simple explanation, while NotebookLM gave a more detailed answer. NotebookLM included more examples and explanations because it used the sources that I uploaded into it. Since I was able to control the sources it used, the answer felt more connected to the research I was doing.

Overall, this showed me that even when you ask the same question, different AI tools can give slightly different answers. The level of detail often depends on the sources the AI has access to, which can make a big difference in how much information you get.
