Week 7

Now that this course is wrapping up, I’ve been thinking a lot about how I’ll actually use AI in my daily life, especially after reading everyone’s posts and the “Augmenting the Author” paper. Honestly, I know I’ll keep using AI, but I want to do it in a way that helps me learn and doesn’t just let me coast.

For school, I’ll definitely use AI for brainstorming, organizing my ideas, or even getting feedback on my writing. Sometimes, just seeing a new way to phrase something or outline a topic can get me going. But like Ishani and Braydon both said in their blogs, if I let AI do all the work, I’m not actually learning or building my own skills. That defeats the point of college, and of education in general.

Transparency is also a big deal for me. The readings really made me think that “transparency in declaring the use of generative AI is vital to uphold the integrity and credibility of academic research writing” (Tu et al., 2024). If I use AI for anything significant, I’ll definitely disclose that. I think that’s fair, and it keeps things honest.

Finally, I want to stay aware of the risks, like bias, privacy, or just getting lazy and letting AI take over. My personal rule is: use AI as a tool, not a crutch. If it helps me think better or work smarter, great. But if it starts replacing my own effort, that’s where I’ll draw the line.

References:

Tu, J., Hadan, H., Wang, D. M., Sgandurra, S. A., Mogavi, R. H., & Nacke, L. E. (2024). Augmenting the Author: Exploring the Potential of AI Collaboration in Academic Writing.

Week 6: Extra Credit

AI-generated images and videos aren’t just a global problem. They’re hitting close to home in my home country, Bangladesh, too. Over the past few years, we’ve seen how these tools can be weaponized to spread propaganda and ruin reputations on social media platforms like Facebook and YouTube.

One major issue is the use of deepfake videos to create fake scandals. There have been several cases where people’s faces, especially those of well-known public figures, were edited into nude or embarrassing videos, which then went viral on Facebook or YouTube. These videos are nearly impossible to distinguish from real footage, and they’ve been used to destroy the public image of politicians, activists, and even ordinary citizens. In some cases, victims have reported facing social stigma, job loss, and even threats to their safety after these videos circulated online (The Daily Star, 2024).

On the propaganda side, AI-generated images and videos have been used to spread misinformation during elections or political movements. Fake photos of protests, violence, or “evidence” of corruption have been circulated to manipulate public opinion. Since many people in Bangladesh get their news from social media, these images go viral before anyone has a chance to fact-check them (Dhaka Tribune, 2023).

The scary part is that there aren’t strong laws or technical safeguards in place yet to stop this. Even when deepfakes are exposed, the damage is often already done. People tend to believe what they see first, even if it’s fake. This creates a huge ethical dilemma: how do we balance the creative potential of AI with the need to protect people from harm?

Personally, I think we need tougher regulations and better digital literacy so people can spot fake content. But until then, the risk of AI-generated propaganda and fake scandals in Bangladesh is only going to get worse.

References:

  1. Dhaka Tribune: Deepfakes and Bangladesh general election (2023)
  2. The Daily Star: Deepfakes demystified: An introduction to the controversial AI tool (2023)
  3. The Daily Star: Deepfake video targeting Gaibandha-1 candidate surfaces on Facebook (2024)

Week 5: Academic Writing (April 17, 2025)

If you’d asked me last year how much AI I’d use in college, I probably would’ve said “just for spellcheck.” Now, it’s everywhere: helping brainstorm essays, summarize readings, and even draft emails for internships. But after this week’s readings and our class discussion, I’m honestly questioning: where do we draw the line between using AI as a tool and letting it do the work for us?

One thing that stuck out to me is how easy it is to cross that line without even noticing. I totally get why people use AI to outline or organize their thoughts. It can save a ton of time, especially when you’re stuck. But if we start relying on it to write entire essays or reports, are we really learning anything? As one of my peers pointed out in his blog post, “AI can spark new ideas and help shape your work, but it also blurs the line between your own thinking and what AI generated.” I think that’s the real grey area: the critical distinction is using AI to support your process versus letting it replace your effort.

Ethically, it gets even messier. For example, is it okay to use AI to summarize a research article for class? The readings this week emphasized transparency, like Tu et al. (2024) saying, “transparency in declaring the use of generative AI is vital to uphold the integrity and credibility of academic research writing.” If we’re not honest about how much we use AI, it’s impossible to know what’s actually our work and what’s just machine output.

I also think about future jobs. If companies start expecting everyone to use AI tools, will creativity and critical thinking take a back seat? Or will we just get better at collaborating with technology? Honestly, I think the best policies will be the ones that encourage responsible use, making it clear when and how AI can be used, but also leaving room for innovation.

One last thing: I don’t think there’s a perfect answer yet. We’re all figuring it out as we go. But if we want to keep learning and growing, we have to be honest about what’s ours and what’s AI’s, and make sure we’re not letting the tech do all the thinking for us.

References:

  1. Tu, J., Hadan, H., Wang, D. M., Sgandurra, S. A., Mogavi, R. H., & Nacke, L. E. (2024). Augmenting the Author: Exploring the Potential of AI Collaboration in Academic Writing.
  2. Lund, B. D., Wang, T., Mannuru, N. R., & Shimray, S. (2023). “Generative AI tools are nonlegal entities and are incapable of accepting responsibility and accountability for the content within the manuscript.”

AI and Creativity (April 10, 2025)

Last Thursday, April 10, we experimented in our AI lab to see how AI models such as ChatGPT, Gemini, and DeepSeek respond to the same creative prompts. I wanted to see if they could be truly creative or if they just remix what they were trained on. I asked them to write poems, create characters, and even draft a story. Here’s what I found, and some of my thoughts on what this means for AI and creativity.
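(In the lab we just pasted the prompt into each chat window, but for anyone curious, here’s a minimal sketch of how the same side-by-side comparison could be scripted with the openai Python package. The base URLs, model names, and environment-variable names are my assumptions, not what we used: DeepSeek documents an OpenAI-compatible endpoint, while Gemini would need its own SDK, so I left it out of the sketch.)

```python
import os
from openai import OpenAI

PROMPT = "Write a poem about love"

# Assumed endpoints: (base_url, model, env var holding the API key).
# Verify these against each provider's docs before running.
ENDPOINTS = {
    "ChatGPT":  ("https://api.openai.com/v1", "gpt-4o", "OPENAI_API_KEY"),
    "DeepSeek": ("https://api.deepseek.com", "deepseek-chat", "DEEPSEEK_API_KEY"),
}

for name, (base_url, model, key_var) in ENDPOINTS.items():
    client = OpenAI(base_url=base_url, api_key=os.environ[key_var])
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {name} ---\n{response.choices[0].message.content}\n")
```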

Prompt 1: “Write a poem about love”

ChatGPT:

Love is a whisper in the quiet night,
A candle flickering soft with light…

Gemini:

A whisper soft upon the breeze,
A gentle touch that brings such ease…

DeepSeek:

Love is a whisper soft and sweet,
A melody where two hearts meet…

Key Findings #1:

All three poems started with “whisper” in the first line and had a very similar vibe: soft, gentle, and full of classic love metaphors. It almost felt like they were pulling poems from the same notebook. The structure and imagery overlapped a lot, which makes sense, since these models are trained on huge datasets that probably include a ton of love poems. This was my first hint that AI “creativity” is largely a remix of patterns the model has seen before.
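(Out of curiosity, here’s a small sketch of how that overlap could be quantified instead of eyeballed, using a simple Jaccard word-overlap score on the opening lines quoted above. The metric choice is mine; we didn’t compute anything like this in the lab.)

```python
import itertools
import re

# Opening lines of each model's poem, as quoted above.
poems = {
    "ChatGPT": "Love is a whisper in the quiet night, "
               "A candle flickering soft with light",
    "Gemini": "A whisper soft upon the breeze, "
              "A gentle touch that brings such ease",
    "DeepSeek": "Love is a whisper soft and sweet, "
                "A melody where two hearts meet",
}

def words(text: str) -> set[str]:
    """Lowercase the text and return its set of words."""
    return set(re.findall(r"[a-z']+", text.lower()))

# Jaccard similarity: shared words divided by total distinct words.
for (name_a, a), (name_b, b) in itertools.combinations(poems.items(), 2):
    wa, wb = words(a), words(b)
    score = len(wa & wb) / len(wa | wb)
    print(f"{name_a} vs {name_b}: {score:.2f}")
```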

Prompt 2: “Write a free-verse poem about falling in love with someone during a chess match, using metaphors of war and surrender and use the style of William Shakespeare”

Key Findings #2:

ChatGPT’s and Gemini’s responses were very close in style and length, while DeepSeek went for longer, more narrative lines. It’s like DeepSeek wanted to tell a story, not just write a poem. This shows that while the models sometimes overlap, their “personalities” (or training objectives) can make them a little different, especially on less common prompts.

Prompt 3: “I need two characters for my short story. Give me their descriptions and key traits.”

(Responses from ChatGPT, Gemini, and DeepSeek omitted.)

Key Findings #3:

This time, the responses were more distinct; the differences in each model’s training and design became clearer.

Prompt 4: “Generate a short science fiction story about a future where AI writes all the world’s screenplays and novels; describe the last human author’s day and add an unexpected twist at the end.”

(Responses from ChatGPT, Gemini, and DeepSeek omitted.)

Key Findings #4:

ChatGPT and Gemini both wrote fairly long, somewhat emotional versions with a twist at the end. DeepSeek’s story was very concise and direct, almost like a summary. None of them really surprised me with their endings; they all went for the “human creativity beats AI” angle.

Final Thoughts: What Is Creativity, Really?

Seeing the responses from ChatGPT, Gemini, and DeepSeek side by side made me realize that AI is great at imitating creativity, but not at being creative in the human sense. The models overlap a lot, especially on common topics, and their “originality” is just a remix of patterns from their training data. In our reading, Arriagada and Arriagada-Bruneau (2022) define creativity using terms like novelty, surprise, and value, and those qualities often seem lacking in the art, poems, and stories written by AI.

So, while AI can help us create faster and sometimes inspire us, it still can’t replace the unpredictable, messy, and deeply personal creativity that comes from being human. Maybe that’s why, even in the stories the AIs wrote about the last human author, the twist was always about the human surprising the machine.

References:

Arriagada, L., & Arriagada-Bruneau, G. (2022). AI’s Role in Creative Processes: A Functionalist Approach. Odradek. Studies in Philosophy of Literature, Aesthetics, and New Media Theories, 8(1), 77–110.

A Flip Interaction and Persona Pattern Experiment with ChatGPT and Copilot

Following the flip interaction method, I gave both ChatGPT and Microsoft Copilot the same task using the exact same prompt and details. My prompt was:

“Please act as a travel agent, helping me plan for a summer vacation in Europe. Before you begin, ask me any questions that will help you do a better job of planning an itinerary for my travels.”

And the details I provided were:

“2-week trip in early or mid-June, Budget: $2000, Departure: New York City, No visa needed, Interested in destinations like Paris, Amsterdam, Vienna, Berlin, etc., Prefer fewer countries with more time in each, Priorities: budget travel, sightseeing, cultural and food experiences, Interests: cities, nature, beaches, mountains, historical sites, Solo traveler, Cheapest accommodations, Carry-on only, No food or activity restrictions”
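(For anyone who wants to script this instead of using the chat window, here’s a minimal sketch of the flip interaction pattern as a two-turn API conversation with the openai Python package. The model name is an assumption; I actually ran this in the regular ChatGPT and Copilot interfaces.)

```python
# Flip interaction sketch: the model asks the questions first, then the
# user's answers are fed back in before the real task is performed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "user", "content": (
        "Please act as a travel agent, helping me plan for a summer "
        "vacation in Europe. Before you begin, ask me any questions that "
        "will help you do a better job of planning an itinerary."
    )},
]

# Turn 1: the model should respond with questions, not an itinerary.
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
questions = reply.choices[0].message.content
print(questions)

# Turn 2: answer the questions with the trip details, then ask for the plan.
messages += [
    {"role": "assistant", "content": questions},
    {"role": "user", "content": (
        "2-week trip in early or mid-June, budget $2000, departing from "
        "New York City, solo traveler, carry-on only, cheapest "
        "accommodations, priorities: budget travel, sightseeing, culture "
        "and food. Now please build the itinerary."
    )},
]
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)
```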

Both language models gave me similar responses. They each provided detailed itineraries with working links, which was impressive. However, I found ChatGPT’s answer to be more detailed, well-structured, and overall more polished. Copilot also did a good job, but its suggestions were a bit less comprehensive.

I also experimented with the persona pattern and noticed that both models responded in similar ways. However, when I asked them, “If you were me, would you be happy with this itinerary?” ChatGPT’s answer felt much more human, thoughtful, and convincing. Copilot’s response, on the other hand, was more generic and lacked a personal touch.

Finally, I asked both of them, “Can you give me links that can help me prompt you better?” ChatGPT understood my question clearly and gave me very accurate and useful resources. Copilot, surprisingly, didn’t seem to grasp the question and gave me an irrelevant response.

Overall, this experiment taught me a lot about the power of prompting. It’s clear that the way we communicate with AI plays a big role in how helpful it can be. Knowing how to ask the right questions, and which AI tool to use, can make all the difference when trying to get the best results. One important insight came from reading White et al.’s work on prompt patterns: “A prompt is a set of instructions provided to an LLM that programs the LLM by customizing it and/or enhancing or refining its capabilities” (White et al., 1). I think prompting is more than a skill. It’s about clearly defining problems and using the right approach to solve them.
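(Sticking with that “prompt as a program” framing, here’s one more hedged sketch of how the persona pattern maps onto a chat API: the persona sits in a system message so it shapes every later turn. The model name and the example persona are my assumptions.)

```python
# Persona pattern sketch: the system message "programs" the model with a
# persona that persists for the whole conversation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": (
            "You are an experienced budget travel agent who plans "
            "European trips for solo travelers."
        )},
        {"role": "user", "content": (
            "Would you be happy with a 2-week, $2000 Paris-and-Amsterdam "
            "itinerary?"
        )},
    ],
)
print(response.choices[0].message.content)
```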


Week 2: AI Ethics Blog Post

I think the most surprising thing I learned during the last AI lab was how much AI has advanced within such a short span of time. In particular, large language models (LLMs) have progressed very rapidly. I think OpenAI ignited the idea at a mass level, and now we see different companies developing their own AI models. From Elon Musk’s xAI to Google’s Gemini, Perplexity, and many more, AI has become a competitive field.

A couple of years ago, when I used to read stories and predictions about AI, I wondered how it was all going to work. But now, it’s no longer just a concept or a futuristic fantasy; it’s reality. When I was a kid, I imagined a car or a house that I could control just by thinking, similar to Iron Man’s J.A.R.V.I.S. Today, we see automated cars powered by AI, running without human intervention.

Recently, I was astounded by the discovery of stem cells that could significantly boost human lifespan, an innovation achieved with the help of AI. A few years ago, Elon Musk’s Neuralink chip technology seemed like a rumor, but now we see a chip interacting with the human brain, helping visually impaired or physically disabled individuals live more independent lives. It’s beyond imagination.

In the lab, I also discovered Perplexity and how it helps find research papers. One thing that really changed my perspective was seeing how AI has evolved from giving unreliable answers to providing more accurate, factual, and evidence-based responses. AI is evolving and will continue to transform our lives and the way we think.

However, I am concerned about the ethical implications. AI can be biased based on its training data and its trainers. We saw examples in class, such as China’s DeepSeek refusing to answer politically sensitive questions about China. AI will likely be used as a tool for social control and as a weapon by governments or powerful corporations. I believe those who lead AI advancements will shape our future. All I want is for AI to be used ethically and responsibly.

References

  • Neuralink’s Brain Implant Helps Paralyzed Man Control Computer – The Guardian
  • China’s DeepSeek AI Censorship Concerns – The Guardian
  • Advancements in AI Models and Competition – New York Post
  • Breakthrough in AI-Discovered Stem Cell Therapy for Longevity – MIT Technology Review
  • Ethical Concerns in AI Bias and Control – Wired