What’s Next - Post 6

This course has been very interesting and has definitely shaped my views on AI and how I intend to use it moving forward. When we talked about why it is important to cite AI, I realized that citing it not only ensures credibility and accountability but also gives credit where it’s due. It is easy to cite sources from the literature, but we should extend that same attitude to AI, since it is also, in some sense, a source of information.

Personally, I will continue to use AI for tasks like complementing my studying, generating quiz ideas and topics, comparing ideas, and getting feedback on various projects and work. Professionally, I plan on using AI to cross-check my work, give feedback, and find sources (which I will double-check for accuracy). I do not think I have a choice about whether or not to use AI, because many fields, especially scientific research, are embracing it. For example, many authors are using AI to find sources, improve readability, and ensure the reproducibility of their papers and experiments. If anything, not using AI, or not knowing how to navigate it, might put me at a disadvantage, and I like that we are learning those skills in this class and in other school activities.

Nevertheless, it is important to recognize that AI is not always accurate. For example, we explored the Alzheimer’s hypotheses and how AI tends to favor the older hypothesis simply because it appears so frequently in the literature. That being said, AI is still very useful in scientific research, as long as its output is verified and cited to ensure accuracy.

Overall, we should keep paying attention to and updating how we use AI and when it is or is not okay. Especially as students and pre-professionals, AI has various uses we can benefit from while staying conscious of its limitations. We should also rely on our own ideas and thoughts often, so that we do not over-outsource everything to AI. We can do this by creating specific policies on AI use whenever needed, forming groups or organizations to represent us and share our thoughts and sentiments on AI use, and so on.

What is the single most important GenAI-related issue you wish the general public knew more about?

The single most important GenAI issue I wish the general public knew about is that LLMs do not think for themselves. This is a common misconception that even I believed coming into this class. If you asked most people, they would say that LLMs think at extremely high speeds and come up with logical answers based on that thinking. However, we now know that an LLM is trained on an enormous amount of data and, based on that data, predicts the next word and produces a response. Knowing this, users need to have prior knowledge of what they are searching for, check other sources, and refine their prompts if they want better answers.

One specific example of people using GenAI incorrectly is using it in place of Google or other search engines. Quickly typing in a prompt, skimming the answer an LLM produces, and trusting it as law is harmful to one’s knowledge, but the general public does not understand that, because they do not know about the issue I described above.

In conclusion, LLMs are useful in the right context, but if you believe an LLM is a magic genie that produces all the right answers at rapid speed, you will become reliant on it, even though it can easily hallucinate based on its training data or reproduce biases you do not want. Not only can it be incorrect, it can also become a mental crutch that we, as creative beings, do not need.
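To make the "predicts the next word" idea concrete, here is a toy sketch. Real LLMs use neural networks over tokens, not word-frequency tables, and the training text and function names below are made up for illustration, but the core idea is the same: the model has no opinions of its own, it just continues text based on patterns in its training data.

```python
from collections import Counter, defaultdict

# Made-up "training data" for illustration only.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
)

# Count which word follows each word in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" -- the most frequent continuation
```

Notice that the prediction is purely statistical: "cat" wins only because it follows "the" most often in the training text, not because the program understands cats. Change the training data and the "answer" changes with it, which is exactly why LLM output needs to be checked against other sources.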

Academic AI

A simple way to think about “too much” AI use is whether students are still doing the thinking.

Dinsmore and Fryer warn that “some of those calling for or directly introducing genAI into formal education fail to fully understand… how humans learn in any given domain of knowledge” (Dinsmore & Fryer, 2026). This suggests the risk is using AI in ways that replace the mental effort needed for learning.

So AI use is “too much” when it does the key thinking for students, like planning answers, explaining ideas, or solving problems, and students just accept the result. That may improve work in the short term, but it reduces learning.

AI use is more acceptable when it supports learning instead. For example, it can give feedback, examples, or help students improve their own ideas, as long as they still make decisions and explain their thinking.

In both classrooms and professional life, the boundary is the same: AI should help people think better, not think for them.

Academic AI

I think it is too much when AI does the thinking for you. Like if AI is summarizing everything, explaining everything and basically doing your work for you, you are not actually learning. Dinsmore and Fryer say learning depends on building knowledge step by step and practicing those skills, and there are not really any shortcuts to that. So once AI replaces that process, it’s hurting more than helping.

My concern is that every time we use AI, we put in our ideas, questions, and sometimes personal information, which can be stored or used to improve the model. Even if that is helpful now, in the long term it can cause privacy issues, especially if students don’t know what is being saved.

The gray area is when AI is helping but isn’t fully taking over. Using AI to clean up your writing or to check whether an idea makes sense is fine. But using it to come up with the actual idea itself, or to do the whole assignment, is different. It really just comes down to whether the work is actually done by you or not.

Colleges shouldn’t ban it or anything, but they need to be clear about what is okay, most likely by putting it in the syllabus or course policies. It’s about making sure students know to use AI for brainstorming or for feedback on their work, but not for the whole assignment. That way people can still learn and use it the right way.

I think AI is going to be part of almost every job. So instead of relying on it, I would need to learn how to use it the right way while still having my own knowledge. I also think a good solution to control it would be to make students show their work on assignments.

Dinsmore, D. L., & Fryer, L. K. (2026). What does genAI mean for student learning? Learning and Individual Differences.


Post 5: AI Academic

When it comes to AI, one of the main concerns is how it could actually affect education. Like the computers introduced to schools in the 1980s, this is yet another evolutionary step for the education system, for better or for worse. There are many benefits when AI guides students to better their education; however, there are also disadvantages that can guide students down the wrong path through misuse of AI. Starting with the benefits that can improve students’ education:

Students all have different ways of learning, and it is not always easy for professors and teachers to tailor instruction to each of them. AI has the capacity to simplify instructions for students and sometimes create a checklist so they can easily understand the objectives of their assignments.

Another good example of using AI in education is source searching. When working on a school project, AI tools such as NotebookLM can assist in finding the sources you need based on the topic you request.

While there are several advantages to using AI as a guiding tool, there are some drawbacks to work on or keep in mind:

When searching for certain topics on tools like NotebookLM, keep in mind that AI will always be data-hungry. Sometimes AI can go off course from the request, for example: “I found some sources on dinosaurs along with dinosaurs’ dental hygiene.” Extra sources that do not relate to what students need to know at the time can affect students’ ability to separate fact from fiction.

Another example is the misuse of AI as a way to cheat on exams and homework assignments. While students can use AI to study and to help them along the way, some students intentionally misuse AI by looking up all the right answers to save time.

We can make AI an acceptable tool in the education system, but as in all cases, we must find the loose knots and make sure they are tightened. In other words, we must find solutions that make AI acceptable and open to students without misguidance or misuse.

Academic AI

As students in a generation where LLMs are widely accessible, I think most of us have already thought about or used AI for academic purposes, but is it a good thing or a bad thing for academics?

As a STEM major, the thought of people trying to become doctors using LLMs scares me; imagining a visit to a doctor who used AI for his education makes me very skeptical. LLMs in academics should be here to help us, not to do the work for us, and that is a difference a lot of people don’t see. There is a huge difference between trying to make AI do your whole assignment and asking AI to correct your grammar. I think AI can be a very helpful and useful tool to make our college experience easier, but is it really worth it? Paying so much money for an education for your future without actually gaining any knowledge, just passing for a degree? Those are questions students have to ask themselves on their own.

For me, it is really hard to distinguish AI’s work from a human’s work, especially when LLMs are still improving. I can see why teachers would have a problem with AI being used in their classes, and also why teachers would allow it, since LLMs will keep improving and there is not much we can do about it other than learn how to use them correctly and ethically, based on each teacher’s preferences.

Coming from this, I truly believe that AI can be an awesome tool to help you save time and support your education, for example by quizzing you or summarizing notes, but not a tool that should be overused. At the end of the day, it is your choice whether you are paying for knowledge or just to be able to say you have a degree.

https://moodle-2526.wooster.edu/pluginfile.php/82732/mod_resource/content/1/Dinsmore%20Fryer%20What%20does%20genAI%20mean%20for%20student%20learning.pdf