Academic writing and AI

When I first asked ChatGPT to outline my lecture, I felt the thrill of amplified creativity, but that thrill faded after the first few tries. Balancing AI’s power means acknowledging its role without letting it subsume ours. As Tang et al. remind us, “transparency in declaring the use of generative AI is vital to uphold the integrity and credibility of academic research writing.”

In practice, the grey zones are everywhere: Is summarizing primary sources with AI good scaffolding or an academic shortcut? Can we allow AI-driven captioning in class for accessibility but forbid it in formal assessments? Policies must draw clear boundaries, though that is difficult given how little we know about what AI can be extended to do. Transparent guidelines, akin to IRB declarations, could require students and faculty to flag AI-assisted sections, letting evaluators focus on original insight. Alternatively, AI could be normalized as an assistive tool across all forms of assignments, with in-person, discussion-based evaluations prioritized instead.

Ultimately, defining “too much” is less about word count and more about intent. If AI amplifies human potential and honors ethical guardrails, it’s a tool; if it replaces responsibility, it’s overreach.

AI and Creativity

This week’s dive into AI creativity, and the readings that accompanied it, was interesting. One prompt in particular, about its capacity to create poems, caught my attention. I was stunned by how the LLM balanced traditional poetic structure with unexpected twists. It challenged my assumptions about creativity.

To me, creativity has always been a blend of discipline and spontaneity, order and chaos. To imagine a machine forming something this linguistically complex, something that captures this version of creativity, was truly mind-boggling. Here’s the AI’s response:

“In the quiet library of midnight,
the clock whispered forgotten secrets
while shadows danced on silver pages,
each step a quiet rebellion against the dawn.”

The interplay between structure and surprise stood out to me. There is a traditional poetic framework coupled with an unexpected energy. This mixture of predictability and delightful unpredictability is what I consider the essence of all creativity. It resonates with the philosophical perspectives we discussed, which suggest that creativity involves recontextualizing the familiar to reveal deeper insights (Loi et al., 2020).

The paper we read also raised important ethical questions. As AI systems become more integrated into creative fields, we need to consider the moral costs of relying on these systems. For instance, when mainstream AI stifles the kind of open-ended exploration that fuels genuine creativity, there’s a risk that we might lose something irreplaceable about human ingenuity. In my view, while the AI-generated poem may not represent a radical breakthrough in originality, it does prompt us to think about how we value and attribute creativity, particularly when the creator is not human.

Feeding my Procrastination: Prompting for an Exam

The prompt that I used for today’s class was: “Imagine I am the LLM and you are a college student. How would you request information about effective study methods and schedules for final exams?” The response it gave me was: “I’m a college student preparing for finals in [subject, e.g., biology, calculus, history]. Could you suggest:

A study schedule template (e.g., a 1-week or 2-week plan) with time blocks for review, practice, and breaks.

Active recall or spaced repetition techniques tailored to this subject.

Common pitfalls to avoid (like cramming or passive highlighting).”

Then I used this prompt to craft a study schedule for my Mathematical Computing class exam, feeding the syllabus into its knowledge base as well. I used DeepSeek as the AI tool for this assignment, which also has a chain-of-thought response feature. This prompt would be especially useful for students who are unfamiliar with the class syllabus and want a head start on its content before an upcoming exam. This is an example of the flipped interaction prompt pattern (White et al.) that we looked at last class; a minimal sketch of what it looks like in code is below.
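For anyone curious how this works programmatically, here is a minimal sketch of sending the flipped interaction prompt through DeepSeek’s OpenAI-compatible chat API. The `openai` package, the base URL, the model name `deepseek-chat`, and the placeholder API key are all assumptions on my part; adjust them to whatever your account actually exposes.

```python
# A minimal sketch of the flipped-interaction prompt, sent through
# DeepSeek's OpenAI-compatible chat API. Assumed details: the `openai`
# Python package, the https://api.deepseek.com base URL, and the
# model name "deepseek-chat".
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",  # hypothetical placeholder
    base_url="https://api.deepseek.com",
)

# The role reversal lives entirely in the prompt text: the model is
# asked to act as the student and produce the *request*, not the answer.
flipped_prompt = (
    "Imagine I am the LLM and you are a college student. "
    "How would you request information about effective study "
    "methods and schedules for final exams?"
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": flipped_prompt}],
)

# The reply is a well-formed study-help request, which can then be sent
# back as a normal prompt (optionally with the syllabus pasted in).
print(response.choices[0].message.content)
```

Notice that no special API feature is needed: the pattern is carried entirely by the prompt, and the model’s reply can be fed back verbatim, along with the syllabus, as an ordinary request.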

AI’s Blind Spot

When we ask our beloved Gen AI assistant a simple question about world history, it confidently spits out an answer. But what if that answer is slightly fabricated? Or, even better, entirely made up? Or what if the fabrication rests on preconceived biases baked into the data the AI was trained on, making certain perspectives seem factual? These were among the things we discussed in our research lab today, and they made me think a bit differently about what AI’s future might hold.

AI models are not trained to form opinions; they simply learn from massive datasets collected from sources such as books, articles and online resources. But in the real world, every medium for sharing information and knowledge comes with built-in biases, whether political, cultural or, in some cases, accidental. This creates the inherent risk of reinforcing propaganda rather than conveying neutral information. For instance, Chinese AI models might align more with state ideologies, while Western models reflect their own forms of bias. Neither reflects true ‘objectivity’.

So what is the scary part? The issue isn’t solely political. Biases in AI show up in hiring algorithms that favor certain demographics, facial recognition systems that struggle with non-white faces, and even medical AI that may underdiagnose illness in minority groups. So if AI is to be the future, we need to ask: whose future will it shape?

The answer might be even more complicated. The problem fundamentally arises from the architecture of LLMs and other cutting-edge AI systems, which prioritize producing fluent, well-formed text over verifying semantic meaning. If I could change one thing about AI’s future, it would be to make it maximally curious: designed to question its own training data, seek diverse perspectives and recognize gaps in its own knowledge. AI should be a tool for discovery, not just a mirror reflecting the biases of the past and repeating the same mistakes.

So can we make AI more self-aware, or are we stuck with its blind spots forever?


Sources

https://americanedgeproject.org/new-report-chinese-ai-censors-truth-spreads-propaganda-in-aggressive-push-for-global-dominance/

https://medium.com/@jangdaehan1/algorithmic-bias-and-ideological-fairness-in-ai-fbee03c739c7

https://www.scientificamerican.com/article/police-facial-recognition-technology-cant-tell-black-people-apart/

https://www.forbes.com/councils/forbestechcouncil/2023/09/25/ai-bias-in-recruitment-ethical-implications-and-transparency/

RM’s Introduction

Hey everyone! I’m RM, a senior Computer Science student at the College of Wooster (he/him/his). I’m all about music, travel, books, design, and anything that pushes the boundaries of technology. I’ve lived in four different countries, so travelling is a huge part of who I am; I love exploring new cultures and foods, and getting lost in new cities. I am taking this class to understand the ethical implications of using LLMs.

AI automates data processing through machine learning, allowing computers to analyze information on their own. This shifts the workload from humans to machines and changes how we approach questions: instead of just checking whether individual facts are correct, we can use AI to explore bigger ideas and patterns.

A picture of me with a talented group of individuals, leading the Hans H. Jenny Investment Fund at the College.