Generative AI has quickly become one of the most transformative technologies of our time since its mainstream debut in 2022. By early 2023, ChatGPT had gained over 100 million users, leading to widespread integration of AI into our daily lives. Using AI is not black and white; there are many grey areas, especially around ethics and creativity, that we are still navigating.
Our readings this week raised an interesting point: “Generative AI tools are nonlegal entities and are incapable of accepting responsibility and accountability for the content within the manuscript. Additional ethical concerns beyond authorship issues include copyright implications arising from the use of third-party content, conflict of interest, and the broader concept of plagiarism which encompasses not only verbatim text copying but also the replication of ideas, methods, graphics, and other forms of intellectual output originating from others” (Lund et al., 2023). Lund and his co-authors point out that AI cannot be held responsible for what it creates. What happens if something goes wrong, and who would be held accountable? If AI writes something based on thousands of existing texts, who actually owns that work?

As a student, it’s easy to blur the line between getting support and cutting corners. AI is convenient because you can ask it to think critically for you, which defeats the purpose of learning. College is where students are supposed to develop their voice and their ability to communicate complex ideas.
For me, the key to developing my own perspective on academic and professional use of AI is transparency and balance. I think AI can be a great tool if we’re honest about how we use it and make a conscious effort not to let it replace our own thinking. Just like with calculators, it’s not about banning the technology but about learning how to use it responsibly and with intention.