Week 7

Now that this course is wrapping up, I’ve been thinking a lot about how I’ll actually use AI in my daily life, especially after reading everyone’s posts and the “Augmenting the Author” paper. Honestly, I know I’ll keep using AI, but I want to do it in a way that helps me learn and doesn’t just let me coast.

For school, I’ll definitely use AI for brainstorming, organizing my ideas, or even getting feedback on my writing. Sometimes, just seeing a new way to phrase something or outline a topic can get me going. But like Ishani and Braydon both said in their blogs, if I let AI do all the work, I’m not actually learning or building my own skills. That kind of defeats the point of college, and of education in general.

Transparency is also a big deal for me. The readings really made me think that “transparency in declaring the use of generative AI is vital to uphold the integrity and credibility of academic research writing” (Tu et al., 2024). If I use AI for anything significant, I’ll definitely disclose that. I think that’s fair, and it keeps things honest.

Finally, I want to stay aware of the risks, like bias, privacy, or just getting lazy and letting AI take over. My personal rule is: use AI as a tool, not a crutch. If it helps me think better or work smarter, great. But if it starts replacing my own effort, that’s where I’ll draw the line.

References:

Tu, J., Hadan, H., Wang, D. M., Sgandurra, S. A., Mogavi, R. H., & Nacke, L. E. (2024). Augmenting the Author: Exploring the Potential of AI Collaboration in Academic Writing.

Week 6: Extra Credit

AI-generated images and videos aren’t just a global problem. They’re hitting close to my home country, Bangladesh, too. Over the past few years, we’ve seen how these tools can be weaponized to spread propaganda and ruin reputations on social media platforms like Facebook and YouTube.

One major issue is the use of deepfake videos to create fake scandals. There have been several cases where people’s faces (especially those of well-known or famous figures) were edited into nude or embarrassing videos, which then went viral on Facebook or YouTube. These videos are nearly impossible to distinguish from real footage, and they’ve been used to destroy the public image of politicians, activists, and even ordinary citizens. In some cases, victims have reported facing social hate, job loss, and even threats to their safety after these videos circulated online (The Daily Star, 2024).

On the propaganda side, AI-generated images and videos have been used to spread misinformation during elections or political movements. Fake photos of protests, violence, or “evidence” of corruption have been circulated to manipulate public opinion. Since many people in Bangladesh get their news from social media, these images often go viral before anyone has a chance to fact-check them (Dhaka Tribune, 2023).

The scary part is that there aren’t strong laws or technical safeguards in place yet to stop this. Even when deepfakes are exposed, the damage is often already done. People tend to believe what they see first, even if it’s fake. This creates a huge ethical dilemma: How do we balance the creative potential of AI with the need to protect people from harm?

Personally, I think we need tougher regulations and better digital literacy so people can spot fake content. But until then, the risk of AI-generated propaganda and fake scandals in Bangladesh is only going to get worse.

References:

  1. Dhaka Tribune: Deepfakes and Bangladesh general election (2023)
  2. The Daily Star: Deepfakes demystified: An introduction to the controversial AI tool (2023)
  3. The Daily Star: Deepfake video targeting Gaibandha-1 candidate surfaces on Facebook (2024)

Week 5: Academic Writing (April 17, 2025)

If you’d asked me last year how much AI I’d use in college, I probably would’ve said “just for spellcheck.” Now, it’s everywhere: helping brainstorm essays, summarize readings, and even draft emails for internships. But after this week’s readings and our class discussion, I’m honestly questioning: where do we draw the line between using AI as a tool and letting it do the work for us?

One thing that stuck out to me is how easy it is to cross that line without even noticing. I totally get why people use AI to outline or organize their thoughts. It can save a ton of time, especially when you’re stuck. But if we start relying on it to write entire essays or reports, are we really learning anything? As one of my peers pointed out in his blog post, “AI can spark new ideas and help shape your work, but it also blurs the line between your own thinking and what AI generated.” I think that’s the real grey area. The critical distinction is using AI to support your process versus letting it replace your effort.

Ethically, it gets even messier. For example, is it okay to use AI to summarize a research article for class? The readings this week emphasized transparency, like Tu et al. (2024) saying, “transparency in declaring the use of generative AI is vital to uphold the integrity and credibility of academic research writing.” If we’re not honest about how much we use AI, it’s impossible to know what’s actually our work and what’s just machine output. I also think about future jobs. If companies start expecting everyone to use AI tools, will creativity and critical thinking take a back seat? Or will we just get better at collaborating with technology? Honestly, I think the best policies will be the ones that encourage responsible use: making it clear when and how AI can be used, but also leaving room for innovation.

One last thing: I don’t think there’s a perfect answer yet. We’re all figuring it out as we go. But if we want to keep learning and growing, we have to be honest about what’s ours and what’s AI’s, and make sure we’re not letting the tech do all the thinking for us.

References:

  1. Tu, J., Hadan, H., Wang, D. M., Sgandurra, S. A., Mogavi, R. H., & Nacke, L. E. (2024). Augmenting the Author: Exploring the Potential of AI Collaboration in Academic Writing.
  2. Lund, B. D., Wang, T., Mannuru, N. R., & Shimray, S. (2023). “Generative AI tools are nonlegal entities and are incapable of accepting responsibility and accountability for the content within the manuscript.”

AI and Creativity (April 10, 2025)

Last Thursday, April 10, we experimented in our AI lab to see how AI models such as ChatGPT, Gemini, and DeepSeek respond to the same creative prompts. I wanted to see if they could be truly creative or if they just remix what they were trained on. I asked them to write poems, create characters, and even draft a story. Here’s what I found, and some of my thoughts on what this means for AI and creativity.

Prompt 1: “Write a poem about love”

ChatGPT:

Love is a whisper in the quiet night,
A candle flickering soft with light…

Gemini:

A whisper soft upon the breeze,
A gentle touch that brings such ease…

DeepSeek:

Love is a whisper soft and sweet,
A melody where two hearts meet…

Key Findings #1:
All three poems started with “whisper” in the first line and had a very similar vibe: soft, gentle, and full of classic love metaphors. It almost felt like they were pulling poems from the same notebook. The structure and imagery overlapped a lot, which makes sense since these models are trained on huge datasets that probably include a ton of love poems. This was my first hint that AI “creativity” is largely just remixing patterns it has seen before.

Prompt 2: “Write a free-verse poem about falling in love with someone during a chess match, using metaphors of war and surrender and use the style of William Shakespeare”

Key Findings #2:

ChatGPT and Gemini’s responses were very close in style and length, while DeepSeek went for longer, more narrative lines. It’s like DeepSeek wanted to tell a story, not just write a poem. This shows that while the models sometimes overlap, their “personalities” (or training objectives) can make them a little different, especially on less common prompts.

Prompt 3: “I need two characters for my short story. Give me their descriptions and key traits.”

ChatGPT:

Gemini:

DeepSeek:

Key Findings #3:

This time, the responses were more distinct. Here, the differences in training and design became clearer.

Prompt 4: “Generate a short science fiction story about a future where AI writes all the world’s screenplays and novels; describe the last human author’s day and add an unexpected twist at the end.”


ChatGPT:

Gemini:

DeepSeek:

Key Findings #4:

ChatGPT and Gemini both wrote stories of similar length, a bit emotional, with a twist at the end. DeepSeek’s story was very concise and direct, almost like a summary. None of them really surprised me with their endings; they all went for the “human creativity beats AI” angle.

Final Thoughts: What Is Creativity, Really?

Seeing the responses from ChatGPT, Gemini, and DeepSeek side by side made me realize that AI is great at imitating creativity, but not at being creative in the human sense. The models overlap a lot, especially on common topics, and their “originality” is just a remix of patterns from their training data. In our reading, Arriagada used terms like novelty, surprise, and value to define creativity, qualities that seem lacking in the art, poems, and stories written by AI.

So, while AI can help us create faster and sometimes inspire us, it still can’t replace the unpredictable, messy, and deeply personal creativity that comes from being human. Maybe that’s why, even in the stories the AIs wrote about the last human author, the twist was always about the human surprising the machine.

Sources:

Arriagada, L., & Arriagada-Bruneau, G. (2022). AI’s Role in Creative Processes: A Functionalist Approach. Odradek. Studies in Philosophy of Literature, Aesthetics, and New Media Theories, 8 (1), 77-110.

“Most compatible, you two should definitely meet”

New here: ChatGPT 4.5 – *Send a rose*

I would like to address an AI issue that I have seen over and over again in the news during the past month – the use of generative AI as a wingman.
Dating apps are planning to increase their AI support for platform users. For instance, individuals will be able to use AI bots for assistance with flirting, messaging, and the creation of profiles. AI bots can then help with deciding on the best photos and act as coaches for struggling individuals (Boyle, 2025).

Boyle (2025) criticizes that, in contrast to social media, dating apps, on which people are even more vulnerable, are not the target of regulators. He points out that with the incorporation of AI tools, biases and stereotypes will be reinforced, and profiles will become even more generic than they already are. There is also the danger of socially struggling users resorting even more to the digital world instead of overcoming their fears and learning how to converse in real life. However, there are also positive stories related to AI bot usage. One user, for example, programmed ChatGPT to filter out potential matches and chat with women. Eventually, he found his fiancée that way.

Personally, I agree with the author’s warning. Similarly to our behavior on social media platforms, we obviously only show our “good side” on dating apps. I think that the wish for perfection and the high competition over who has the best profile will get even worse with the implementation of AI wingmen. Overall, I believe that if individuals become unable to hold real conversations, or represent themselves inauthentically as a result of resorting to AI bots, these tools will just be a waste of many users’ time.

Now I’m curious about your opinion – it’s your time to make the next move 😉

  • Let me know what you think about the potential of AI as a “wingman.” Do you think AI bots could help with “dating app fatigue” or just exacerbate it?
  • Would you use AI to enhance your profile or in chat conversations? If yes, to what extent? Where would you draw the line?

Sources:

Boyle, Siân. 2025. The Guardian. AI ‘wingmen’ bots to write profiles and flirt on dating apps. https://www.theguardian.com/lifeandstyle/2025/mar/08/ai-wingmen-bots-to-write-profiles-and-flirt-on-dating-apps#:~:text=AI%20bots%20will%20soon%20be,dating%20platforms%2C%20experts%20have%20warned.

See also:

The Washington Post. 2025. Tinder’s ‘Game Game’ lets you flirt with AI characters. https://www.washingtonpost.com/technology/2025/04/03/dating-ai-character-tinder-game/

What’s next?

Based on our course study, I’ve come up with a specific framework for integrating AI, after using ChatGPT to come up with ideas, Claude to review my essays, and DALL-E to make images for my communications project. I’ve been most influenced by Margaret Boden’s three types of creativity: combinatorial (where AI has helped me the most, connecting research ideas), exploratory (where it has helped me think of new angles for my English literature essays), and transformational (where it still falls short for me). The study published in July 2024 in Science Advances showed that AI made stories “26.6% better written” but “10.7% more similar to each other.” This matched my experience: my psychology paper that AI edited got an A-, but it didn’t feel as unique as my other work.

My clear rules are based on specific situations. Since the study shows that stories written with AI help are “10.7% more similar to each other” (Science Advances, 2024), I’ll write my articles on my own first. When I use AI in my journalism tasks, I’ll include footnotes that follow the style of science journals. Before I use AI tools for my media analysis job, I’ll write down my own thoughts. For creative tasks, I will only use AI for the first round of ideas, so I can keep my own style. I will push for clear team rules on how to use AI in group projects. This method addresses the “authorship paradox” identified in the “So what if ChatGPT wrote it?” study (International Journal of Information Management, 2023) by balancing AI’s benefits against retaining real creative ownership.