A Flip Interaction and Persona Pattern Experiment with ChatGPT and Copilot

Following the flip interaction method, I gave both ChatGPT and Microsoft Copilot the same task, with the exact same prompt and details. My prompt was:

“Please act as a travel agent, helping me plan for a summer vacation in Europe. Before you begin, ask me any questions that will help you do a better job of planning an itinerary for my travels.”

And the details I provided were:

  • 2-week trip in early or mid-June
  • Budget: $2,000
  • Departure: New York City
  • No visa needed
  • Interested in destinations like Paris, Amsterdam, Vienna, Berlin, etc.
  • Prefer fewer countries with more time in each
  • Priorities: budget travel, sightseeing, cultural and food experiences
  • Interests: cities, nature, beaches, mountains, historical sites
  • Solo traveler
  • Cheapest accommodations
  • Carry-on only
  • No food or activity restrictions
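For anyone who wants to try the same flip interaction setup programmatically rather than in the chat window, here is a minimal sketch assuming the OpenAI Python client (openai >= 1.0); the model name and the way the trip details are folded into the second turn are illustrative choices on my part, not part of the original experiment.

```python
# Minimal sketch of the flip interaction pattern, assuming the OpenAI
# Python client (openai >= 1.0). The model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

flip_prompt = (
    "Please act as a travel agent, helping me plan for a summer vacation "
    "in Europe. Before you begin, ask me any questions that will help you "
    "do a better job of planning an itinerary for my travels."
)

messages = [{"role": "user", "content": flip_prompt}]

# Turn 1: the model is expected to reply with clarifying questions
# instead of an itinerary (the "flip" in flip interaction).
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
questions = reply.choices[0].message.content
print(questions)

# Turn 2: answer those questions with the trip details, then let the
# model produce the itinerary.
details = (
    "2-week trip in early or mid-June, budget $2,000, departing from New "
    "York City, no visa needed, interested in Paris, Amsterdam, Vienna, and "
    "Berlin, prefer fewer countries with more time in each, solo traveler, "
    "cheapest accommodations, carry-on only, no food or activity restrictions."
)
messages += [
    {"role": "assistant", "content": questions},
    {"role": "user", "content": details},
]
itinerary = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(itinerary.choices[0].message.content)
```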

Both language models gave me similar responses. They each provided detailed itineraries with working links, which was impressive. However, I found ChatGPT’s answer to be more detailed, well-structured, and overall more polished. Copilot also did a good job, but its suggestions were a bit less comprehensive.

I also experimented with the persona pattern and noticed that both models responded in similar ways. However, when I asked them, “If you were me, would you be happy with this itinerary?” ChatGPT’s answer felt much more human, thoughtful, and convincing. Copilot’s response, on the other hand, was more generic and lacked a personal touch.
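The persona-style follow-up can be appended to the same conversation as one more user turn; again, this is only a sketch under the same assumptions as the snippet above.

```python
# Continuing the conversation from the previous sketch: add the model's
# itinerary to the history, then ask the persona-style follow-up question.
messages += [
    {"role": "assistant", "content": itinerary.choices[0].message.content},
    {"role": "user", "content": "If you were me, would you be happy with this itinerary?"},
]
reflection = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reflection.choices[0].message.content)
```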

Finally, I asked both of them, “Can you give me links that can help me prompt you better?” ChatGPT understood my question clearly and gave me very accurate and useful resources. Copilot, surprisingly, didn’t seem to grasp the question and gave me an irrelevant response.

Overall, this experiment taught me a lot about the power of prompting. It’s clear that the way we communicate with AI plays a big role in how helpful it can be. Knowing how to ask the right questions and which AI tool to use can make all the difference when trying to get the best results. One important insight came from reading White et al.’s work on prompt patterns: “A prompt is a set of instructions provided to an LLM that programs the LLM by customizing it and/or enhancing or refining its capabilities” (White et al., 1). I think prompting is more than a skill; it’s about clearly defining problems and using the right approach to solve them.


Week 2: AI Ethics - Blog Post

I think the most surprising thing I learned during the last AI lab was how much AI has advanced within such a short span of time. Large language models (LLMs) in particular have progressed very rapidly. I think OpenAI ignited interest at a mass level, and now we see different companies developing their own AI models. From Elon Musk’s xAI to Google’s Gemini, Perplexity, and many more, AI has become a competitive field.

A couple of years ago, when I used to read stories and predictions about AI, I wondered how it would actually work. But now, I feel it’s no longer just a concept or a futuristic fantasy; it’s reality. When I was a kid, I imagined a car or a house that I could control just by thinking, similar to Iron Man’s J.A.R.V.I.S. Today, we see automated cars powered by AI, running without human intervention.

Recently, I was astounded by the discovery of stem cell treatments that could significantly boost human lifespan, an innovation achieved with the help of AI. A few years ago, Elon Musk’s Neuralink brain-chip technology sounded like a rumor, but now we see a chip interfacing with the human brain, helping visually impaired or physically disabled individuals live more independent lives. It’s beyond imagination.

In the lab, I also discovered Perplexity and how it helps find research papers. One thing that really changed my perspective was seeing how AI has evolved from giving useless answers to providing more accurate, factual, and evidence-based responses. AI is evolving and will continue to transform our lives and the way we think.

However, I am concerned about the ethical implications. AI can be biased based on its training data and its trainers. We saw examples in class, such as China’s DeepSeek refusing to answer politically sensitive questions about China. AI will likely be used as a tool for social control and as a weapon by governments or powerful corporations. I believe those who lead AI advancements will shape our future. All I want is for AI to be used ethically and responsibly.

References

  • Neuralink’s Brain Implant Helps Paralyzed Man Control Computer – The Guardian
  • China’s DeepSeek AI Censorship Concerns – The Guardian
  • Advancements in AI Models and Competition – New York Post
  • Breakthrough in AI-Discovered Stem Cell Therapy for Longevity – MIT Technology Review
  • Ethical Concerns in AI Bias and Control – Wired

Introductions and the Role of AI

Hello Everyone,

I know it’s a bit late, but I’d like to introduce myself. My name is Nehal Rafin (He/Him), and I’m a freshman originally from Bangladesh. Currently, I’m undecided and exploring my field of interest. As a person, I like to understand and explore different topics, from quantum physics to religion, which makes me a learner across a broad and diverse range of subjects. I have many hobbies, but something I discovered after coming to college was my love for cooking, especially Bangladeshi cuisine.

I was initially interested in this class because I feel AI is our future, and I want to contribute to this field. Fun fact: I used to play a game called “Rise of Nations,” published by Microsoft. It’s a strategy game that shows how civilization has evolved. Although I played it 10 years ago, it already anticipated an era of AI. It’s very interesting to see a prediction from a game becoming reality. I think AI is going to change our lives and our world. We have already started to see the influence of AI in our lives, but I want this era to be a testament to a moral and ethical world where all humans are treated with the dignity they deserve.