Week 5: Academic Writing

As artificial intelligence becomes a larger and larger part of our lives, both socially and industrially, it is also becoming a major component of academia. What does this mean for research, writing, and the general workings of academic life? And how far will its influence continue to reach?

Artificial intelligence in the classroom has become a long-winded debate and a hurdle for many teachers and students. Most of the ethical problems arise in the research and writing aspects of academic work: specifically, in humanities courses where papers are assigned in high volume, or in the social, earth, and biological sciences where investigative, in-depth research is required. Not only has AI inhibited proper learning in classrooms, it also allows students to hold themselves back by leaning on large language models as a crutch. Copying and pasting a four-page assignment from ChatGPT, running it through a "humanizing" AI model, turning it in, and giving no credit is plagiarism, plain and simple. Instead, we should use models such as NotebookLM to support our research; they can help us find sources, make study guides, and even prepare for exams with a generated podcast.

As a society, we have to learn how to use artificial intelligence both ethically and beneficially. It is here, it is getting bigger and smarter, and there is nothing we can do about that.

Academic Writing

When it comes to AI use in the classroom and in our professional lives, how do we determine how much is too much?
AI use becomes too much when it replaces our critical thinking. AI should be a tool, not a crutch.

Where are the grey areas in use ethically, creatively, etc?
In terms of ethical grey areas, there are a few. Where exactly is the line between AI help and plagiarism? Do the students who have access to better AI have an unfair advantage? Is the use of AI checkers ethical? Is it fair to have an algorithm grade creative, academic assignments? Is it ethical to use AI in hiring practices?

How do we communicate those issues in our policies or our guidelines adequately?
Communicating these issues in policies and guidelines requires at least an adequate understanding of the purpose and capabilities of the technology in question. Each of these questions relies on an understanding of what AI could do in each scenario.

Could AI as an accessibility tool be impacted in how we approach our policies and guidelines?
Policies and guidelines should always be crafted with fairness in mind. This means that policies should recognize the nature of AI as a tool when it is used properly. AI can serve as an accessibility tool, whether that means broadening access to education or supporting people with disabilities. Policies can and should be shaped by that fact.

How will current and future jobs be impacted by these policies?
Current and future jobs will see some restructuring, and perhaps even growing pains, as AI increasingly finds its way into the workforce. I believe that once various stigmas around the use of AI are removed and AI's performance in various professions improves, we will begin to see these policies take shape. The result will be a more efficient workforce and field of academia that can do far more than before AI. This starts with transparency in AI usage, as Tang et al. discuss; specifically, it starts with the "declaration of generative AI usage" (Tang et al., 317) in academic and professional settings.

Academic Writing

Whether AI use has gone too far can be judged by the extent to which it replaces individual learning and creativity. We use AI in our professional and personal lives so much that, in some ways, we undermine our own creativity. AI can be useful, for instance, when brainstorming or outlining ideas, but if professionals or students depend on it to handle all of the thinking or writing, it becomes a crutch that impedes growth. In general, AI should complement human labor rather than take its place.

This week’s readings made me realize I should not rely on AI to do my work; I should only use it for brainstorming ideas. For example, we should not rely on AI to summarize a reading or a paper, as it can miss small details that we need to know.

Over time, carefully planned AI policies can help shape a workforce that is tech-savvy, morally anchored, and creatively empowered. In the future, AI might even start training humans for their job roles.

Academic Writing

This week’s readings made me think more seriously about how AI fits into both school and work. It’s easy to rely on it too much, especially when you’re stuck or just being lazy. For me, it’s okay to use AI to help organize thoughts or brainstorm ideas, but not to do all the work for you. Tang et al. (2023) stress the importance of transparency and of understanding that AI is a tool, not a collaborator, noting that “Generative AI tools are nonlegal entities and are incapable of accepting responsibility and accountability for the content.” That hits home because AI can’t replace you. If we’re not honest about how we use AI, it raises big questions about authorship and integrity.

The grey areas with AI are hard to ignore. It can spark new ideas and help shape your work, but it also blurs the line between your own thinking and what the AI generated. That’s why we need clear, supportive guidelines: not rules that punish, but ones that encourage honesty. AI has real potential in lots of fields, but that only works if we’re transparent about how we’re using it.

https://sigmapubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jnu.12938

Week 5: Academic Writing

This week’s readings got me thinking: How much is AI not just in our classrooms, but also in our future careers? It feels like we’re navigating a brand-new world, and the lines haven’t been drawn yet.

One area that stood out from the paper “Augmenting the Author” is the issue of transparency. The authors highlight the “depth of transparency in researchers’ access and utilization of AI” and the concerns this raises about the “reliability and credibility of AI-generated text.” This really resonates when we think about assignments or professional reports. How do we know if the work is truly our own or mostly AI-generated?

A key grey area lies in defining authentic authorship and intellectual contribution when AI is involved. For example, in one of my previous classes, an alumnus from Pfizer told us that they sometimes use AI to generate mock drug syntheses, which they then refine and test. I found this interesting in relation to this week’s discussion, but I didn’t get a chance to ask about it at the time; AI usage and how its contributions are credited can be a touchy subject.

As the researchers point out, there’s a concern that reviewers (or instructors) might end up “validating AI’s work” instead of the author’s. Moving forward, clear guidelines emphasizing transparency about AI use will be crucial. We need to foster a culture where acknowledging AI assistance isn’t seen as a weakness but as a step toward ethical and responsible innovation. 

Link: https://dspacemainprd01.lib.uwaterloo.ca/server/api/core/bitstreams/40009604-deb5-4250-a341-ceccb3c6561c/content