How much AI use is too much & the transparency issue

Thinking about my future job as a teacher, I believe AI use becomes too much when, for example, AI-composed homework and papers are then also corrected by AI. By that, I don’t mean that AI can’t be used as an assistive tool at all; it is about the extent to which one uses it. Personally, I think it becomes too much when you have it compose or correct whole assignments without reflecting on the topics and outputs yourself. Having AI tools compose whole papers is clearly too much because, as we have discussed in class, and as Tang et al. (2024) argue, AI tools are, as of now, non-legal entities, meaning they cannot be granted authorship and, thus, cannot take responsibility for their outputs. Additionally, most LLMs still do a poor job of referencing sources, therefore running the risk of plagiarism (Tam et al., 2023; Tang et al., 2024).

Overall, I feel clear guidelines, for example on how to cite each kind of AI use, would be necessary to increase transparency. Nevertheless, as is the case with other tools, like Grammarly or even ghostwriters, guidelines would still not guarantee responsible and appropriate use. After all, it will still be up to the author to declare AI use honestly, while the reader has to form their own judgment of a text.

Here’s an example from Austria, which highlights the lack of knowledge of, as well as guidelines for, AI use. In 2015, the “Vorwissenschaftliche Arbeit,” a “pre-scientific paper” that all students graduating from high school were required to write during their last school year, was introduced. Because of the rapid development of AI and teachers’ inability to assess whether students’ work was written by AI or by the students themselves, the Ministry of Education decided to remove this requirement for graduation. Many politicians have criticized this decision, stating that completely abandoning the pre-scientific paper should not be the solution; rather, it should be discussed how AI can be incorporated appropriately to enhance student learning.

Sources:

Die Presse (Austrian newspaper). Vorwissenschaftliche Arbeit soll abgeschafft werden [The pre-scientific paper is to be abolished]. https://www.diepresse.com/18532278/vorwissenschaftliche-arbeit-soll-abgeschafft-werden

Tam, W., Huynh, T., Tang, A., Luong, S., Khatri, Y., & Zhou, W. (2023). Nursing education in the age of artificial intelligence powered Chatbots (AI-Chatbots): Are we ready yet? Nurse Education Today, 129, 105917. https://doi.org/10.1016/j.nedt.2023.105917

Tang, A., Li, K.-K., Kwok, K. O., Cao, L., Luong, S., & Tam, W. (2024). The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing. Journal of Nursing Scholarship, 56, 314–318. https://doi.org/10.1111/jnu.12938

Ethical Concerns with AI

Reading Tang et al. (2024) made me wonder: why not make it mandatory to inform readers that AI was used in the production of nursing journal articles? Any use of AI in an academic setting has the chance to be wrong, and in a nursing setting you would want to be certain the information you are publishing is correct. If a publisher decides to use AI in their journal and not inform the reader, the information could be trusted when it should not be. This leaves a lot of liability on the author if something were to go awry. Granted, a journal article does not dictate anyone’s life decisions, but the idea that papers or generated videos can lead people in the wrong direction is a concern. I will explore this concept further in my final project on generative AI content and deepfakes and how they can cause mass hysteria. I believe the solution is to disclose the use of all AI content to the viewer in every aspect.

Academic Writing Post – Derrick Jones Jr.

I believe that AI is here to stay, and with that, it will be utilized in different fields. With the evolution and development of AI ethics and usage, fields that once hated the idea of using AI have started making strides in using it. In academic writing, especially in highly specialized fields, AI use needs to be acknowledged in citations. In medical research, AI has been used, and it has been argued whether that use was ethical. I believe that writers who have started to utilize AI are pioneering a new wave of AI usage. In educational use, it is hard to determine whether the use is ethical; students will find a way to tiptoe around their school’s guidelines. So it is hard to determine if the use of AI at lower levels is okay. Higher-level fields using AI, if cited, should be fine if the user knows the boundaries and ethics of their respective fields.

Week 5: Academic Writing

As Artificial Intelligence becomes a larger and larger part of our lives socially and industrially, it is also becoming a huge component in academia. What does this mean for research, writing, and the general workings of academia? How is this further being implemented in our lives?

Artificial intelligence in the classroom has become a long-winded debate and a hurdle for many teachers and students. Generally, we see most of the ethical problems arising in the research and writing aspects of academic work, specifically in humanities courses, where papers are assigned in high volumes, or in social, earth, and biological sciences, where in-depth investigative research is required in large quantities. Not only has AI inhibited proper learning in classrooms, but it continues to allow students to hold themselves back as they use large language models as a crutch in these works. Of course, copying and pasting a 4-page assignment paper from ChatGPT, putting it through a humanizing AI model, turning it in, and not crediting it is 100% plagiarism and not moral writing. Instead, we should utilize tools such as NotebookLM in our research, which can help us find sources, make study guides, and prepare for exams with a podcast.

As a society, we have to learn how to utilize artificial intelligence both ethically and beneficially; it is here, getting bigger and smarter, and there is nothing we can do about that.

Academic Writing

When it comes to AI use in the classroom and in our professional lives, how do we determine how much is too much?
AI use becomes too much when it replaces our critical thinking. AI should be a tool, not a crutch.

Where are the grey areas in use ethically, creatively, etc?
In terms of ethical grey areas, there are a few. Where exactly is the line between AI help and plagiarism? Do the students who have access to better AI have an unfair advantage? Is the use of AI checkers ethical? Is it fair to have an algorithm grade creative, academic assignments? Is it ethical to use AI in hiring practices?

How do we communicate those issues in our policies or our guidelines adequately?
Communicating these issues in policies and guidelines requires at least an adequate understanding of the purpose and capabilities of the technology in question. Each of these questions relies on an understanding of what AI could do in each scenario.

Could AI as an accessibility tool affect how we approach our policies and guidelines?
Policies and guidelines should always be crafted with fairness in mind. This means that policies should recognize the nature of AI as a tool when used properly. AI can be used as a tool for accessibility, whether that means broader access to education or accommodations for those with disabilities. Policies can and should be shaped by that fact.

How will current and future jobs be impacted by these policies?
Current and future jobs will see some restructuring, and perhaps even growing pains, as AI increasingly finds its way into the workforce. It is my belief that once various stigmas around the use of AI are removed and the performance of AI in various professions improves further, we will begin to see these policies manifest. We will see a more efficient workforce and field of academia that can do a lot more than before AI. This starts with transparency in AI usage, as Tang et al. (2024) discuss; specifically, it starts with the “declaration of generative AI usage” (Tang et al., 2024, p. 317) in academic and professional settings.