Week 5: Academic Writing

AI has gradually become an indispensable tool in school and working life. However, it is difficult to determine exactly how much AI is too much. In my opinion, this largely depends on each person's intended use. For example, when using AI for learning purposes, I feel it is enough for students to see it as a supporting tool that helps them understand the lesson more easily. However, if you use AI just to write out answers and submit them to pass, that is already a distorted use of AI. The same probably holds in professional life: we should use AI to understand the questions we are asking, not simply to solve them. Therefore, I feel we cannot set a fixed limit on AI use; we should simply treat it as a support tool.

When we consider AI systems as support tools, it also means we should not depend on them too much. As Tang et al. mention in their article, anyone using AI can encounter artificial hallucination: the system produces a lot of information that sounds plausible but is not entirely accurate. We have discussed in our class sessions that we should always check the information carefully to avoid carrying AI's mistakes into our own work. Overall, my point in this post is that no one is stopping us from using AI, but we need the right awareness about it and about when it is proper to use.

AI in Academia

The advent of artificial intelligence has brought many conveniences to people, no matter what industry they are in. In academia, the benefits of AI include faster data processing and analysis, quicker discovery of relevant academic articles, and so on. At the same time, AI brings a series of disadvantages, such as a decline in students' independent thinking from long-term use, as well as plagiarism and patent problems.

In academia, originality is essential. The arrival of AI systems such as ChatGPT and DeepSeek has had an unprecedented impact on it. The definition of plagiarism proposed by Tang et al. (2024) covers not only direct word-for-word copying but also indirect plagiarism, such as copying ideas. Most text generated by AI does not contain "human" ideas; it is assembled by large language models (LLMs) trained on big data. Of course, there are policy countermeasures. Turnitin is a widely used plagiarism and AI-usage detection tool in academia: professors can submit students' papers to it to flag potential plagiarism. With the widespread use of AI, many universities have also issued guidelines teaching students how to write with AI correctly, such as declaring the use of AI and using it ethically for writing assistance (University of Kansas).

Sources:

Tang, A., Li, K-K., Kwok, K. O., Cao, L., Luong, S., & Tam, W. (2024). The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing. Journal of Nursing Scholarship, 56, 314–318. https://doi.org/10.1111/jnu.12938

Xuhang Education. The rise and challenges of artificial intelligence in academia [webpage in Chinese]. https://www.forwardpathway.com/135174

University of Kansas. Using AI ethically in writing assignments. https://cte.ku.edu/ethical-use-ai-writing-assignments

Week 5: Academic Writing

This week, we encountered many questions about the fairness of AI use in academic fields. As I have decided to further my studies in AI, I consider AI to be just a tool for improvement. By a tool, I mean something similar to Google or Facebook. When Google first came out, the world was blown away: it became a significant moment in the history of the internet. Like Google, I think artificial intelligence is a revolution in how we access and process information. The key is not whether we should use AI, but how we use it responsibly. To me, AI can enhance learning, boost creativity, and save time when used correctly. It is not a shortcut to avoid thinking, but a powerful aid to deepen our understanding.

When it comes to determining "how much is too much" in the use of AI, it is less about fixed limits than about the intention behind its use. I agree that AI is a great tool for supporting learning, research, and productivity, but it should not replace our critical thinking and creativity. Finding a balance in the use of AI is hard, but to me, AI is a supplement, not a substitute for everything.

Also, the ethical use of AI is a very sensitive issue, touching on originality, authorship, and fairness. In their article on transparency in academic writing, Tang et al. (2024) argue that authorship in academic work must be attributed to humans, not AI, because AI cannot be held accountable for the integrity of the content. Nowadays, there are already a few guidelines on how to cite AI in academic work. However, Tang et al. argue that merely citing AI is not enough if transparency about its role is not made explicit.

References

Tang, A., Li, K-K., Kwok, K. O., Cao, L., Luong, S., & Tam, W. (2024). The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing. Journal of Nursing Scholarship, 56, 314–318. https://doi.org/10.1111/jnu.12938

How much AI use is too much & the transparency issue

Thinking about my future job as a teacher, I believe AI becomes too much when, for example, AI-composed homework and papers are also corrected by AI. By that, I don't mean that AI can't be used as an assistive tool at all; it is about the extent to which one uses it. Personally, I think it becomes too much when you have it compose or correct whole assignments without reflecting on the topics and outputs yourself. Having AI tools compose whole papers is clearly too much because, as we have discussed in class, and as Tang et al. (2024) argue, AI tools are currently non-legal entities: they cannot be granted authorship and thus cannot take responsibility for their outputs. Additionally, most LLMs still do a poor job of referencing sources, therefore running the risk of plagiarism (Tam et al., 2023; Tang et al., 2024).

Overall, I feel clear guidelines, for example, on how to cite which kind of AI use, would be necessary to increase transparency. Nevertheless, as is the case with other tools, like Grammarly or even ghostwriters, guidelines would still not guarantee responsible and appropriate use. After all, it will still be up to the author to declare AI use honestly, while the reader has to form their own judgment of a text.

Here's an example from Austria, which highlights the lack of knowledge about, as well as guidelines for, AI use. In 2015, the "Vorwissenschaftliche Arbeit", a "pre-scientific paper" that all students graduating from high school were required to write during their last school year, was introduced. Because of the fast development of AI and teachers' inability to assess whether students' work was written by AI or by the students themselves, the Ministry of Education decided to remove this requirement for graduation. Many politicians have criticized this decision, stating that completely abandoning the pre-scientific paper should not be the solution. Rather, there should be a discussion of how AI can be incorporated appropriately to enhance student learning.

Sources:

Die Presse (Austrian newspaper). Vorwissenschaftliche Arbeit soll abgeschafft werden [The pre-scientific paper is to be abolished; in German]. https://www.diepresse.com/18532278/vorwissenschaftliche-arbeit-soll-abgeschafft-werden

Tam, W., Huynh, T., Tang, A., Luong, S., Khatri, Y., & Zhou, W. (2023). Nursing education in the age of artificial intelligence powered Chatbots (AI-Chatbots): Are we ready yet? Nurse Education Today, 129, 105917. https://doi.org/10.1016/j.nedt.2023.105917

Tang, A., Li, K-K., Kwok, K. O., Cao, L., Luong, S., & Tam, W. (2024). The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing. Journal of Nursing Scholarship, 56, 314–318. https://doi.org/10.1111/jnu.12938

Ethical Concerns with AI

Reading Tang et al. (2024) made me wonder: why not make it mandatory to inform the reader that AI was used in the production of a nursing journal article? Any use of AI in an academic setting has a chance of being wrong, and in a nursing setting you would want to be certain the information you are publishing is correct. If a publisher uses AI in a journal and does not inform the reader, the information could be trusted when it should not be. This leaves a lot of liability on the author if something were to go awry. Surely, a single journal article does not dictate anyone's life decisions, but the idea that papers or generated videos can lead people in the wrong direction is a concern. I will continue exploring this concept in my final project on generative AI content and deepfakes and how they can cause mass hysteria. I believe the solution is to disclose the use of all AI content to the viewer in every aspect.

Academic Writing Post – Derrick Jones Jr.

I believe that AI is here to stay, and with that it will be utilized in different fields. With the evolution and development of AI ethics and usage, fields that once hated the idea of using AI have started making strides in using it. In academic writing, especially in high-stakes fields, AI use needs to be recognized in citations. AI has been used in medical research, and it has been debated whether that use was ethical. I believe that writers who have started to utilize AI are pioneering a new wave of AI usage. In educational settings, it is hard to determine whether the use is ethical; students will find ways to tiptoe around their school's guidelines, so it is hard to say whether the use of AI at lower levels is okay. In higher-level fields, using AI, if cited, should be fine as long as the user knows the boundaries and ethics of their respective field.