Week 5: Academic Writing (April 17, 2025)

If you’d asked me last year how much AI I’d use in college, I probably would’ve said “just for spellcheck.” Now it’s everywhere: helping me brainstorm essays, summarize readings, and even draft emails for internships. But after this week’s readings and our class discussion, I’m honestly questioning: where do we draw the line between using AI as a tool and letting it do the work for us?

One thing that stuck out to me is how easy it is to cross that line without even noticing. I totally get why people use AI to outline or organize their thoughts. It can save a ton of time, especially when you’re stuck. But if we start relying on it to write entire essays or reports, are we really learning anything? As one of my peers pointed out in his blog post, “AI can spark new ideas and help shape your work, but it also blurs the line between your own thinking and what AI generated.” I think that’s the real grey area: using AI to support your process versus letting it replace your effort.

Ethically, it gets even messier. For example, is it okay to use AI to summarize a research article for class? The readings this week emphasized transparency, like Tang et al. (2024) saying, “transparency in declaring the use of generative AI is vital to uphold the integrity and credibility of academic research writing.” If we’re not honest about how much we use AI, it’s impossible to know what’s actually our work and what’s just machine output.

I also think about future jobs. If companies start expecting everyone to use AI tools, will creativity and critical thinking take a back seat? Or will we just get better at collaborating with technology? Honestly, I think the best policies will be the ones that encourage responsible use: making it clear when and how AI can be used, but also leaving room for innovation.

One last thing: I don’t think there’s a perfect answer yet. We’re all figuring it out as we go. But if we want to keep learning and growing, we have to be honest about what’s ours and what’s AI’s, and make sure we’re not letting the tech do all the thinking for us.

References:

  1. Tang, J., Hadan, H., Wang, D. M., Sgandurra, S. A., Mogavi, R. H., & Nacke, L. E. (2024). Augmenting the Author: Exploring the Potential of AI Collaboration in Academic Writing.
  2. Lund, B. D., Wang, T., Mannuru, N. R., & Shimray, S. (2023). “Generative AI tools are nonlegal entities and are incapable of accepting responsibility and accountability for the content within the manuscript.”
