Prompt 5 – Academic AI

When it comes to AI use in the classroom, in your professional lives, how do we determine how much is too much?

Regarding the use of AI in the classroom, LLMs should serve as tools that help facilitate learning, such as helping generate ideas, helping find relevant sources (checked by you), and summarizing. What AI should not be doing is acting as your primary learning source or as a teacher for you when you are trying to learn. “Expertise provides the clearest use case for automating less complex tasks for the expert (such as summarizing) since the expert should possess the requisite knowledge to spot inconsistencies in a genAI summary” (Dinsmore & Fryer, 2026). It becomes too much when generative AI is used as a primary learning tool for kids who haven’t fully developed their learning and cognitive processes. Additionally, in the professional realm, some jobs require the use of generative AI for tasks such as video, image, and text generation. Using AI to help facilitate work is generally seen as good; employees just need to be aware of how to best use these tools. Too much is when job requirements are completed strictly through AI, treating it as the expert on learning and knowledge.

Reference: Dinsmore, D. L., & Fryer, L. K. (2026). What does current genAI actually mean for student learning? Learning and Individual Differences, 125, 102834. https://doi.org/10.1016/j.lindif.2025.102834

Post 5 – Academic AI (Marquan Felts-Lipsey)

AI is becoming a big part of school, and I think it can be helpful if it’s used the right way. It makes things easier, like getting ideas or understanding hard topics, but there has to be a limit. If students depend on AI too much, they might stop thinking for themselves and not really learn anything.

I also think AI companies should be more honest about how their tools work, because not everything they give is always correct. There are definitely gray areas too, like using AI for help versus just copying answers. That’s why schools and colleges should make clear rules so students know what’s okay.

At the same time, AI can be a good tool for students who struggle, especially with writing or learning new material. In the future, AI will probably affect jobs, so it’s important for us to learn how to use it the right way instead of relying on it.

As one idea says, “AI should be a tool to support learning, not replace the effort it takes to actually understand something.”


Post 5 Academic AI

Using AI for academics was never an issue before people realized that students could use it to write great essays. I still remember, during the last semester of my high school, a friend wrote an entire essay with AI without revising it himself, and got an A for his final grade.

In short, “too much” means you are not ready to handle the knowledge yet, or you are too lazy to write the whole thing yourself, or both, which is the most common way students use AI now. Once AI starts replacing the cognitive work that builds expertise, it is too much. Dinsmore and Fryer argue that human learning depends on three forces: knowledge, strategies, and interest. AI might harm our process of learning and our willingness to learn if we skip any of these three, because learning is a linear progression from acclimation to competence to expertise. If we use AI inappropriately, it becomes a shortcut that destroys the entire mental process of integrating your knowledge.

Source: Dinsmore, D. L., & Fryer, L. K. (2026). What does current genAI actually mean for student learning? Learning and Individual Differences, 125, 102834. https://doi.org/10.1016/j.lindif.2025.102834

Post 5 – Academic AI

When it comes to AI use in the classroom, I feel like “too much” would mean that all the work one is doing is simply typing in a prompt, and that’s that. It’s not that we shouldn’t use AI; it’s that it can be an aid to learning instead of doing all the work for someone. It’s situations like those that I think of when reading or hearing about cognitive decline due to constant use of AI. Dinsmore and Fryer write in their article, “By either ignoring the building blocks of learning or ignoring the needs of the learner, use of genAI has the potential to reverse years of educational improvement regarding the role of these basic building blocks in overall development” (p. 4). If AI is going to be used in the classroom, it should be used to expand, refine, or explore a topic further to the benefit of the student’s original work. At least that way, education is still happening.

Blog post 5

When it comes to AI use in the classroom, in your professional lives, how do we determine how much is too much?

The question above is something that I feel like everyone asks themselves when either writing an essay or completing an assignment. I think we can determine this by looking at the information the AI tool gave us and basing it off that. I know that nowadays we have AI detectors where people can see how much AI content is detected, but when it comes down to determining how much is too much, I think we can look at a few things. We can look at how long the assignment is, we can see how much information was given to us by AI, and we can also simply compare how much AI did versus how much I personally did.

A quote I found in this past week’s readings is, “To our second concern of whether genAI will “free us” to be creative in an academic domain in order to make advances in the sciences, arts, technology, and humanities, we need to examine whether simply providing more time for creative endeavors is likely to produce that result.” This explains that we can’t let AI take over the creative part of us, and we can’t let it take away our general knowledge of things either.



How safe is our data? (Post 5)

With the advancement of technology and the internet, we seem to have more and more to worry about in terms of privacy risks. It is not uncommon to hear people say, “Be careful what you put out on the internet” or “Nothing can ever be truly deleted online,” and frankly, there is far more truth in that than we often recognize.

The more I use AI and learn about how companies and AI tools, especially LLMs, handle our data, the more scared I am of just how much these tools know about us, knowingly or unknowingly. While several concerns about AI use exist, from concerns in academic circles about depleting the basic building blocks of learning and reducing creativity, to concerns in healthcare such as misinformation and the spread of harm, I think another basic yet equally important concern we often forget is how much AI knows about us and what that information is being used for now, or will be used for in the future.

In this week’s reading (the HAI article on privacy in an AI era), the author discusses how AI systems are so data-hungry and opaque that we have even less control over what information about us is collected, what it is used for, and how we might correct or remove such personal information (Miller, 2024).

So I do, in fact, have concerns about AI companies’ approach to user data, both now and in the future.

Source: Miller, K. (2024, March 18). Privacy in an AI Era: How Do We Protect Our Personal Information? Hai.stanford.edu; Stanford University. https://hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information