To me, AI use in academia often feels like a shortcut taken by people who do not want to put in the effort to do things the long way. Dinsmore and Fryer (2026) remind us that LLMs are essentially “very powerful prediction tools” passed off as actual intelligence, and they point out that for inexperienced users, outsourcing tasks such as summarization can hinder skill development. You can’t skip the basics and expect to become an expert in your field by using AI as a shortcut.
Data collection is another troubling aspect of LLMs. Most AI tools use your personal chats to train their models by default, and opting out is difficult because the option is buried in the settings. This concerns me because many people have grown accustomed to talking to LLMs as if they were intelligent, even using them as therapists or as a way of coping with trauma. That data is highly personal, and I think using it to train LLMs could have serious negative implications. Overall, AI can be a useful tool, but copying and pasting whatever jargon the LLM outputs is not an effective way to use it.
Source: Dinsmore, Dan L., and Luke K. Fryer. “What Does Current GenAI Actually Mean for Student Learning?” Learning and Individual Differences, vol. 125, Jan. 2026, p. 102834, https://doi.org/10.1016/j.lindif.2025.102834.