This week, we encountered many questions about the fairness of AI use in academic fields. Since I decided to further my studies in AI, I consider AI to be simply a tool for improvement. By "tool" I mean something similar to Google or Facebook. When Google first came out, the world was blown away: it became a significant moment in the history of the internet. Like Google, I think artificial intelligence is a revolution in how we access and process information. The key question is not whether we should use AI, but how we use it responsibly. To me, AI can enhance learning, boost creativity, and save time when used correctly. It is not a shortcut to avoid thinking, but a powerful aid to deepen our understanding.
When it comes to determining "how much is too much" in the use of AI, the answer is less about rigid limits than about the intention behind its use. I agree that AI is a great tool for supporting learning, research, and productivity, but it should not replace our critical thinking skills and creativity. Finding a balance in the use of AI is hard, but to me, AI is a supplement, not a substitute for everything.
Also, the ethical use of AI is a sensitive matter touching on originality, authorship, and fairness. Tang et al. (2023) argue that authorship in academic work should be attributed to humans, not AI, because AI cannot be held accountable for the integrity of the content. There are already a few guidelines on how to cite AI in academic work. However, Tang et al. argue that merely citing AI is not enough if transparency about its role is not made explicit.
References
Tang et al. (2023). Transparency in academic writing.
I agree that education and the intention behind use matter most, more than explicitly written-out restrictions.