AI and Creativity

This week’s dive into AI creativity and the readings on it were interesting. One particular prompt regarding its capacity to create poems caught my attention. I was stunned by how the LLM balanced traditional poetic structure with unexpected twists. This challenged my assumptions about creativity.

To me, creativity has always been a blend of discipline and spontaneity, order and chaos. To see a machine form something so linguistically complex that it captures this version of creativity was truly mind-boggling. Here’s the AI’s response:

“In the quiet library of midnight,
the clock whispered forgotten secrets
while shadows danced on silver pages,
each step a quiet rebellion against the dawn.”

The interplay between structure and surprise stood out to me. There is an element of a traditional poetic framework coupled with an unexpected energy. This mixture of predictability with delightful unpredictability is what I consider the essence of all creativity. This resonates with the philosophical perspectives we discussed, which suggest that creativity involves recontextualizing the familiar to reveal deeper insights (Loi et al., 2020).

The paper we read also raised important ethical questions. As AI systems become more integrated into creative fields, we need to consider the moral costs of relying on these systems. For instance, when mainstream AI stifles the kind of open-ended exploration that fuels genuine creativity, there’s a risk that we might lose something irreplaceable about human ingenuity. In my view, while the AI-generated poem may not represent a radical breakthrough in originality, it does prompt us to think about how we value and attribute creativity, particularly when the creator is not human.

Week 3: Prompting LLMs

In this week’s research, I found learning about the inputs and outputs of prompting LLMs to be very intriguing. Specifically, the Persona Pattern (Prompt Improvement) made me interested in learning more about the process of large language model prompt engineering and how giving artificial intelligence more characterization can help produce more in-depth and accurate responses.

To utilize the Persona Pattern prompt, I wanted to experiment specifically with the LLM ChatGPT. I decided to use ChatGPT because it can be customized to impersonate specific personas. For this experiment I decided to customize it with a few general personality traits, one of which was “talking like a member of Gen-Z.”

To help it along, and for the sake of the experiment, I wanted to take on a little bit of the persona myself to engage more with it. I thought that experimenting with a typical young-adult romantic topic would be interesting and funny, so I asked it if I should “like totally dump my bf?” I received an almost comical and slightly ridiculously Gen-Z-slang-saturated response, with some good advice and a little bit of humor. It seemed really engaged, and I found it interesting that every time I sent a prompt, it would ask something like “need any more like totally fire advice?” and start suggesting even more things I could ask it to do.
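For anyone who wants to try the same Persona Pattern outside the ChatGPT interface, here is a minimal sketch using the OpenAI Python client; the model name and the exact persona wording are placeholder assumptions for illustration, not the settings I actually used.

```python
# Minimal sketch of the Persona Pattern via the OpenAI Python client.
# The model name and persona wording are illustrative assumptions,
# not the exact customization I used in the ChatGPT interface.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = (
    "Act as a supportive friend who talks like a member of Gen-Z: "
    "use casual slang and keep it light, but still give genuinely useful advice."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": persona},  # the persona lives in the system message
        {"role": "user", "content": "Should I, like, totally dump my bf?"},
    ],
)

print(response.choices[0].message.content)
```

Putting the persona in the system message is, as far as I understand, roughly what ChatGPT’s custom personality settings do behind the scenes.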

Week 2: AI Ethics

Shen Article:

During our research lab and this week’s readings and media articles, I found a lot of interesting takes and facts on ethics in the world of artificial intelligence and large language models.

I thought it was incredibly interesting to learn about how different nations and cultures use large language models.

This lab specifically prompted my idea for my final project, as it made me question the ethics behind large language models and artificial intelligence with regard to the exploitation of other cultures and how it directly benefits first-world consumers such as ourselves.

It raises the question: at what cost do we gain our learning, creativity, and convenience through large language models and artificial intelligence, when those who carry these technologies on their backs are considerably underpaid? Additionally, it is important to note that access to these technologies clearly differs around the globe.

In our group discussion, we specifically referenced Chinese culture and how access to these technologies is seen as beneficial by much of the nation due to China’s industrial culture. The country prides itself on becoming a technological superpower and exceeding standards of technological safety and security.

Here in the United States, from a privileged, first-world standpoint, many citizens see LLMs and AI as dangerous and threatening to creative integrity, jobs and industry, education, and more. This differs from the majority Chinese perspective, which sees benefits in surveillance control, facial recognition, genetic profiling, and other forms of government-oriented safety measures for the country.

It is also important to note the disadvantages discussed in the article, such as social control: the “Big Brother is always watching” effect is unforgiving toward certain artistic and political freedoms of citizens.

I also found a 2019 article from the Journal of International and Public Affairs that discusses benefits AI has brought to third-world countries, such as structured learning methods and easy money-transfer models.

LLM prompting

I used the flip interaction pattern for the first time. I had heard of it before in a previous digital studies class, so I knew the concept, but I didn’t know how it would work out. My prompting could be improved; however, I tried to give as many details as possible to the LLM. I used Google’s Gemini, which can generate both answers and pictures. I needed the picture function because I wanted to design a kitchen with AI and see the end result.

My starting prompt was:
“you’re a kitchen designer, I want to remodel my kitchen which is 4 meters long, 3 meters wide and 2.5 meter tall. on the south side there is a big arch which covers the whole wall. Right next to it on the east side there is a doorway to the outside and exactly opposite there is a door to a staircase. on the eastside there are two windows. I want you to ask me questions about the design of the kitchen and if you have enough information I want you to describe and then generate an image based on my answers”
I then quickly corrected the AI, because it asked me a bunch of questions at the same time and I wanted it to ask one question at a time.
The prompt could definitely use some work. “A prompt can influence subsequent interactions with—and output generated from—an LLM by providing specific rules and guidelines for an LLM conversation with a set of initial rules” (White et al., 2023). So by tweaking the initial prompt to be more specific about what I expect from the LLM, I could get a better result.


This type of interaction with an LLM is useful in some cases. As with other uses of AI, I think it is a great starting point. If you have no clue how you want to redesign the kitchen, AI can give you some things to think about. However, when you go further into the process, the AI becomes unreliable and starts to create things that are not correct. As you can see in the starting prompt, the arch is not on the same wall as the windows, but in the AI-generated picture it appears on the same wall.
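If I wanted to script this flipped interaction instead of using the web interface, it boils down to a chat loop where the model asks the questions and I type the answers. Below is a minimal sketch using the google-generativeai Python package; the API key, model name, and wording are assumptions for illustration, and the image-generation step from my experiment is left out.

```python
# Minimal sketch of the flipped interaction pattern with Gemini (text only;
# the image-generation step from my experiment is omitted).
# The API key, model name, and wording are illustrative assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model choice
chat = model.start_chat()

starting_prompt = (
    "You are a kitchen designer. I want to remodel my kitchen, which is "
    "4 meters long, 3 meters wide and 2.5 meters tall. Ask me questions about "
    "the design, one question at a time. When you have enough information, "
    "describe the finished kitchen."
)

reply = chat.send_message(starting_prompt)
print(reply.text)

# Keep answering the model's questions until I decide to stop.
while True:
    answer = input("> ")
    if answer.lower() in {"done", "quit"}:
        break
    reply = chat.send_message(answer)
    print(reply.text)
```

Baking the “one question at a time” rule into the starting prompt is exactly the correction I had to make by hand in the web interface.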

A Flip Interaction and Persona Pattern Experiment with ChatGPT and Copilot

Following the flip interaction method, I gave both ChatGPT and Microsoft Copilot the same task using the exact same prompt and details. My prompt was:

“Please act as a travel agent, helping me plan for a summer vacation in Europe. Before you begin, ask me any questions that will help you do a better job of planning an itinerary for my travels.”

And the details I provided were:

“2-week trip in early or mid-June, Budget: $2000, Departure: New York City, No visa needed, Interested in destinations like Paris, Amsterdam, Vienna, Berlin, etc., Prefer fewer countries with more time in each, Priorities: budget travel, sightseeing, cultural and food experiences, Interests: cities, nature, beaches, mountains, historical sites, Solo traveler, Cheapest accommodations, Carry-on only, No food or activity restrictions”
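Copilot’s chat does not expose a simple public API that I know of, so the sketch below only illustrates the comparison workflow itself: the same flipped-interaction prompt and trip details sent to two models, with the answers printed side by side. Two OpenAI models stand in here purely for illustration; the model names are assumptions.

```python
# Minimal sketch of the comparison workflow: send the identical prompt to two
# models, let each ask its questions, answer with the same details, and print
# both itineraries. The model names are placeholder stand-ins; Copilot itself
# is not called because I don't know of a simple public API for it.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Please act as a travel agent, helping me plan for a summer vacation in "
    "Europe. Before you begin, ask me any questions that will help you do a "
    "better job of planning an itinerary for my travels."
)
details = (
    "2-week trip in early or mid-June, Budget: $2000, Departure: New York City, "
    "No visa needed, Prefer fewer countries with more time in each, "
    "Solo traveler, Cheapest accommodations, Carry-on only."
)

for model_name in ["gpt-4o", "gpt-4o-mini"]:  # placeholder stand-ins
    messages = [{"role": "user", "content": prompt}]
    first = client.chat.completions.create(model=model_name, messages=messages)
    questions = first.choices[0].message.content  # the model's clarifying questions

    messages += [
        {"role": "assistant", "content": questions},
        {"role": "user", "content": details},
    ]
    second = client.chat.completions.create(model=model_name, messages=messages)

    print(f"--- {model_name} ---")
    print(second.choices[0].message.content)
```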

Both language models gave me similar responses. They each provided detailed itineraries with working links, which was impressive. However, I found ChatGPT’s answer to be more detailed, well-structured, and overall more polished. Copilot also did a good job, but its suggestions were a bit less comprehensive.

I also experimented with the persona pattern and noticed that both models responded in similar ways. However, when I asked them, “If you were me, would you be happy with this itinerary?” ChatGPT’s answer felt much more human, thoughtful, and convincing. Copilot’s response, on the other hand, was more generic and lacked a personal touch.

Finally, I asked both of them, “Can you give me links that can help me prompt you better?” ChatGPT understood my question clearly and gave me very accurate and useful resources. Copilot, surprisingly, didn’t seem to grasp the question and gave me an irrelevant response.

Overall, this experiment taught me a lot about the power of prompting. It’s clear that the way we communicate with AI plays a big role in how helpful it can be. Knowing how to ask the right questions, and which AI tool to use, can make all the difference when trying to get the best results. One important insight I gained from reading White et al.’s work on prompt patterns was: “A prompt is a set of instructions provided to an LLM that programs the LLM by customizing it and/or enhancing or refining its capabilities” (White et al., 2023, p. 1). I think prompting is more than a skill. It’s about clearly defining problems and using the right approach to solve them.


Prompting insights and experiments

This week’s readings, videos, and discussions have improved my understanding of prompting in the context of large language models. I learned that several “prompt patterns” exist that can be applied to improve an LLM’s output quality (see Section III of White et al., 2023).

In the past, I didn’t really think about how to formulate my prompts. I just went by my “gut feeling,” and that worked pretty well. This is probably partly because, as mentioned by Mollick (2023), the term prompt engineering, as it is often defined, is overly complex and probably more appropriate in the context of computer programming. However, as LLMs keep advancing, their use has become more interactive, allowing for more human-like conversations, for example by means of iterative prompting (see, e.g., the Mollicks’ 2023 video on prompting we watched for our class on April 2), rather than relying on very specific input to produce useful output. Nevertheless, I think knowing about certain prompt patterns is useful for reaching a high-quality output more quickly.

For the prompting experiment, I tried out the flipped interaction pattern combined with the persona pattern. I used the following prompt to compare the outputs from ChatGPT vs. Perplexity:

“Help me plan my California trip. Imagine you are a travel agent. Ask me as many questions as you need to provide me with a 1-week holiday tailored to my interests.”

I think asking an LLM to take on the persona of a travel agent can be useful for anyone planning a trip and feeling a little overwhelmed by the task – especially when they don’t know much yet about the destination they want to travel to. I am quite familiar with ChatGPT but had never used Perplexity before. I was surprised by how good the questions and responses were in both LLMs. The questions I was asked overlapped a lot, and both LLMs suggested similar itineraries. ChatGPT asked me 20 questions, while Perplexity asked 21. Overall, I think ChatGPT has a more user-friendly design – it does a really good job of using emojis or bold-facing important parts, for example. Perplexity seemed a little more “professional” and straightforward in its tone. I also liked that Perplexity included a section called “Other considerations” that let me add anything it might have forgotten to ask.

Sources: