LLM prompting

I used the flip interaction pattern for the first time. I had heard of it before in a previous digital studies class, so I knew the concept, but I didn’t know how it would work out in practice. My prompting could still be improved; however, I tried to give the LLM as much detail as possible. I used Google’s Gemini, which can generate both answers and pictures. I needed the picture function because I wanted to design a kitchen with AI and see the end result.

My starting prompt was:
“you’re a kitchen designer, I want to remodel my kitchen which is 4 meters long, 3 meters wide and 2.5 meter tall. on the south side there is a big arch which covers the whole wall. Right next to it on the east side there is a doorway to the outside and exactly opposite there is a door to a staircase. on the eastside there are two windows. I want you to ask me questions about the design of the kitchen and if you have enough information I want you to describe and then generate an image based on my answers”
I then quickly corrected the AI, because it asked me a bunch of questions at the same time and I wanted it to ask one question at a time.
The prompt could definitely use some work. As White et al. (2023) note, “A prompt can influence subsequent interactions with—and output generated from—an LLM by providing specific rules and guidelines for an LLM conversation with a set of initial rules.” So by tweaking the initial prompt to be more specific about what I expect from the LLM, I could get better results.
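For anyone who wants to try the same pattern outside the chat window, here is a rough sketch of the flipped-interaction setup using Google’s google-generativeai Python library. I ran everything in the Gemini web interface, so the library setup, model name, and key handling below are assumptions for illustration, not what I actually used:

```python
# Sketch of the flipped-interaction kitchen prompt via Google's
# google-generativeai library (assumed setup; I used the web UI).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# The opening prompt programs the rest of the conversation, including
# the "one question at a time" rule I had to add in my correction.
opening_prompt = (
    "You're a kitchen designer. I want to remodel my kitchen, which is "
    "4 meters long, 3 meters wide and 2.5 meters tall. On the south side "
    "there is a big arch which covers the whole wall. Right next to it, "
    "on the east side, there is a doorway to the outside, and exactly "
    "opposite there is a door to a staircase. On the east side there are "
    "two windows. Ask me about the design of the kitchen, ONE question "
    "at a time. Once you have enough information, describe the kitchen "
    "and then generate an image based on my answers."
)

model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption
chat = model.start_chat()

reply = chat.send_message(opening_prompt)
print(reply.text)  # the model's first design question

# Each of my answers becomes the next turn; the initial rules keep
# applying for the whole chat.
reply = chat.send_message("A modern style with light wood cabinets.")
print(reply.text)
```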


This type of interaction with an LLM is useful in some cases. As with other AI applications, I think it is a great starting point. If you have no clue how you want to redesign the kitchen, AI can give you some things to think about. However, the further you go into the process, the more unreliable the AI becomes, and it starts to create things that are not correct. As you can see in the starting prompt, the arch is not on the same wall as the windows; in the AI-generated picture, however, they appear on the same wall.

A Flip Interaction and Persona Pattern Experiment with ChatGPT and Copilot

Following the flip interaction method, I gave both ChatGPT and Microsoft Copilot the same task using the exact same prompt and details. My prompt was:

“Please act as a travel agent, helping me plan for a summer vacation in Europe. Before you begin, ask me any questions that will help you do a better job of planning an itinerary for my travels.”

And the details I provided were:

“2-week trip in early or mid-June, Budget: $2000, Departure: New York City, No visa needed, Interested in destinations like Paris, Amsterdam, Vienna, Berlin, etc., Prefer fewer countries with more time in each, Priorities: budget travel, sightseeing, cultural and food experiences, Interests: cities, nature, beaches, mountains, historical sites, Solo traveler, Cheapest accommodations, Carry-on only, No food or activity restrictions”
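If you wanted to run this same head-to-head comparison through code instead of two browser tabs, a small harness like the sketch below could do it. To be clear about the assumptions: Copilot is not reachable through this API, so two different OpenAI models stand in for the ChatGPT-vs.-Copilot pairing, and the model names themselves are guesses:

```python
# Sketch: send the identical flipped-interaction prompt to two models
# and compare their outputs. The OpenAI Python client calls are real;
# using two OpenAI models as stand-ins for ChatGPT vs. Copilot is an
# assumption, since Copilot has no drop-in API here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Please act as a travel agent, helping me plan for a summer vacation "
    "in Europe. Before you begin, ask me any questions that will help you "
    "do a better job of planning an itinerary for my travels."
)

DETAILS = (
    "2-week trip in early or mid-June. Budget: $2000. Departure: New York "
    "City. No visa needed. Interested in Paris, Amsterdam, Vienna, Berlin. "
    "Prefer fewer countries with more time in each. Priorities: budget "
    "travel, sightseeing, cultural and food experiences. Solo traveler, "
    "cheapest accommodations, carry-on only, no restrictions."
)

def run(model_name: str) -> str:
    """Run the two-turn flipped interaction against one model."""
    messages = [{"role": "user", "content": PROMPT}]
    first = client.chat.completions.create(model=model_name, messages=messages)
    # Feed the model's questions back in, then answer them all at once.
    messages.append({"role": "assistant",
                     "content": first.choices[0].message.content})
    messages.append({"role": "user", "content": DETAILS})
    second = client.chat.completions.create(model=model_name, messages=messages)
    return second.choices[0].message.content

for name in ("gpt-4o", "gpt-4o-mini"):  # model names are assumptions
    print(f"--- {name} ---\n{run(name)[:500]}\n")
```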

Both language models gave me similar responses. They each provided detailed itineraries with working links, which was impressive. However, I found ChatGPT’s answer to be more detailed, well-structured, and overall more polished. Copilot also did a good job, but its suggestions were a bit less comprehensive.

I also experimented with the persona pattern and noticed that both models responded in similar ways. However, when I asked them, “If you were me, would you be happy with this itinerary?” ChatGPT’s answer felt much more human, thoughtful, and convincing. Copilot’s response, on the other hand, was more generic and lacked a personal touch.

Finally, I asked both of them, “Can you give me links that can help me prompt you better?” ChatGPT understood my question clearly and gave me very accurate and useful resources. Copilot, surprisingly, didn’t seem to grasp the question and gave me an irrelevant response.

Overall, this experiment taught me a lot about the power of prompting. It’s clear that the way we communicate with AI plays a big role in how helpful it can be. Knowing how to ask the right questions and which AI tool to use can make all the difference when trying to get the best results. One important insight I gained from reading White et al.’s work on prompt patterns was: “A prompt is a set of instructions provided to an LLM that programs the LLM by customizing it and/or enhancing or refining its capabilities” (White et al., 1). I think prompting is more than a skill. It’s about clearly defining problems and using the right approach to solve them.


Prompting insights and experiments

This week’s reading, videos, and discussions have improved my understanding of prompting in the context of large language models. I learned that several “prompt patterns” exist that can be applied to improve an LLM’s output quality (see Section III of White et al., 2023).

In the past, I didn’t really think about how to formulate my prompts. I just went by my “gut feeling,” and that worked pretty well. This is probably partly because, as Mollick (2023) mentions, the term prompt engineering, as it is often defined, is overly complex and probably more appropriate in the context of computer programming. However, as LLMs keep advancing, their use has become more interactive, allowing for more human-like conversations, for example by means of iterative prompting (see, e.g., the Mollicks’ 2023 video on prompting we watched for our class on April 2), rather than relying on very specific input to produce useful output. Nevertheless, I think knowing about certain prompt patterns is useful for reaching a high-quality output more quickly.

For the prompting experiment, I tried out the flipped interaction pattern combined with the persona pattern. I used the following prompt to compare the outputs from ChatGPT vs. Perplexity:

“Help me plan my California trip. Imagine you are a travel agent. Ask me as many questions as you need to provide me with a 1-week holiday tailored to my interests.”

I think asking an LLM to take on the persona of a travel agent can be useful for anyone planning a trip and feeling a little overwhelmed by the task – especially when they don’t know much yet about the destination they want to travel to. I am quite familiar with ChatGPT but had never used Perplexity before. I was surprised by how good the questions and responses were from both LLMs. The questions I was asked overlapped a lot, and both LLMs suggested similar itineraries. ChatGPT asked me 20 questions, while Perplexity asked 21. Overall, I think ChatGPT has a more user-friendly design – it does a really good job of using emojis or bold-facing important parts, for example. Perplexity seemed a little more “professional” and straightforward in its tone. I also liked that Perplexity included a section called “Other considerations” to allow me to add anything it might have forgotten to ask.

Sources:

White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., & Schmidt, D. C. (2023). A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. arXiv:2302.11382.

Prompting ChatGPT

I used ChatGPT to help make a plan to train for a 5k in three months. I gave it a pretty specific task with details about me: “Im planning to run a 5k for the first time in 3 months Im 6’0′ 215 lbs and in decent physical shape, but have never done long distance running. Give me a plan that can help me finish the 5k in decent time.” Its first response was very well put together and made sense, but it was very broad, so I then asked it, “how many times a week should I do each thing.” It gave a much more specific plan that I liked. I then asked it, “is there anything that can be improved in training plan”. It gave me improvements but did not implement them into the plan until it was explicitly asked to afterward.

The images above are the final draft of the 12-week 5k training plan.

I found that detail and specificity are always best when prompting. I learned that if you ask the AI whether it finished the previously asked prompt, it will often tell you no the first time and add the things that were missed.
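That loop of asking for improvements and then explicitly telling the model to apply them can be written out as code. The sketch below uses the OpenAI Python client; the model name and the exact follow-up wording are assumptions that mirror the turns described above:

```python
# Sketch of the iterative refinement loop described above. The key point
# is that each follow-up is appended to the same message history, so the
# model refines its earlier plan instead of starting over.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # model name is an assumption

def ask(history, question):
    """Append a user turn, get the reply, and keep it in the history."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model=MODEL, messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

history = []
plan = ask(history, "I'm planning to run a 5k for the first time in 3 "
                    "months. I'm 6'0\", 215 lbs, in decent shape, but new "
                    "to distance running. Give me a training plan.")
plan = ask(history, "How many times a week should I do each thing?")
critique = ask(history, "Is there anything that can be improved in the plan?")
# As noted above, the model lists improvements but doesn't apply them
# until explicitly told to:
final_plan = ask(history, "Apply those improvements and rewrite the full plan.")
print(final_plan)
```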

Prompting help: https://help.openai.com/en/articles/10032626-prompt-engineering-best-practices-for-chatgpt

Useful prompting in AI

Today in class, I gave ChatGPT a prompt to help me brainstorm a research question. I used the flipped interaction pattern, which improved the response. ChatGPT asked me multiple questions, which helped me narrow down my research question. It also gave me links to sources, and I was pretty surprised that those sources actually existed.

For the research topic, I made ChatGPT create a presentation; however, the speaker notes had more information than the slides. I believe that additional information on the slides would have been better.

I realized that the responses ChatGPT provides depend on how well we ask our questions. It gives more accurate answers if the question is clear and well-structured. However, many of ChatGPT’s answers are either too long or contain too little information.

Gabrielle’s LLM Prompting Blog


Instead of just asking a broad question, I learned that framing my prompt based on who I am makes the response way more helpful. Whether I was stuck at my internship or needed skincare tips for my oily, acne-prone skin, telling ChatGPT exactly what I needed (like being an 11th grader or a beauty enthusiast) helped me get answers that actually made sense. This method is super useful for anyone who wants personalized responses.