What is the single most important GenAI-related issue you wish the general public knew more about?

The single most important GenAI issue I wish the general public knew about is that LLMs do not think for themselves. This is a common misconception that even I believed coming into this class. If you were to ask a lot of people, they would say that LLMs think at extremely high speeds and come up with logical answers based on that thinking. However, we now know that an LLM is trained on a ton of data and, based on that data, predicts the next word and spits out a response. Knowing this, users need to understand that they should have prior knowledge of what they're searching, check other sources, and refine their prompts if they want better answers. One specific example of people using GenAI incorrectly is when they use it in place of Google or other search engines. Quickly typing in a prompt, skimming the answer an LLM produces, and trusting it as law is harmful to one's knowledge, but the general public does not understand that because they do not know the single most important GenAI-related issue I mentioned above. In conclusion, LLMs are useful in the right context, but when you believe one is a magic genie that produces all the right answers at rapid speed, you will become reliant on it, even though it can easily hallucinate based on its training data or reproduce biases you do not want. Not only can it be incorrect, but it can also serve as a mental crutch that we, as a society of creative beings, do not need.

How can we (as a college) best communicate those issues to students – for example, in institutional or course policies or guidelines?

First, you have to understand what these "issues" are. The main issue is that students are using AI to come up with ideas instead of drawing on their prior knowledge. However, we have to understand the motive behind a student's desire to use AI, which is to pass a class and get a good grade. When the curriculum is unengaging and the grading is overwhelmingly rigorous, students aren't enticed to learn and will likely use AI in the process. Dinsmore and Fryer indirectly address my point as they describe how "These trends illustrate the need to possess a wide range of cognitive and motivational attributes (e.g., knowledge, interest, engagement, strategies) to be able to produce positive learning outcomes, whether that be reading, problem solving, or many other activities" (2).

This describes how students have an innate desire to use their prior knowledge but learn best when they are engaged and interested. Having said this, it is evident that schools need more creative learning and more opinion-based questions where students can pick their own brains and come up with ideas they feel strongly about, which would heavily discourage the use of AI.

Dinsmore, Dan, and Luke K. Fryer. What Does Current GenAI Actually Mean for Student Learning?, 5 Mar. 2025, https://doi.org/10.31219/osf.io/f8z56_v1.

Post 4

In our attempt to replicate Edgar Allan Poe's style and creativity, I realized that ChatGPT did demonstrate creativity in the sense that it made its own unique story, but it was not effective in the way we asked it to be. A short example is:

“We glide ‘twixt sheets of moldered silk,

Where living lovers spurn our ilk,

Our touch—a frost they crave, then flee,

Yet still we croo our siren plea:”

In comparison to:

Once upon a midnight dreary, while I pondered, weak and weary,

Over many a quaint and curious volume of forgotten lore—

    While I nodded, nearly napping, suddenly there came a tapping,

As of some one gently rapping, rapping at my chamber door.

“’Tis some visitor,” I muttered, “tapping at my chamber door—

            Only this and nothing more.”

It was not able to be as creative as Edgar Allan Poe or match the level of wit he used in his writing. This coincides with our reading, where we explain how, although AI may spit out a multitude of answers, it will not be able to replicate the creativity of the original person who sat down and struggled through their thought process. As a creative individual, I would envision using GenAI to assist my creativity only when I am at a block and having trouble forming my own ideas or thoughts. The benefit of this is that it can get the ball rolling initially and allow me to build on it, but of course the disadvantage is that it can become a crutch and completely take away my will and desire to create. My definition of creativity is someone's ability to identify and use their own unique thoughts and interests and combine them into a certain thing or idea. Overall, our experiments this week made me think further about how I am using AI to brainstorm instead of allowing myself to grow by thinking for myself. Thinking for myself will allow me to become a more independent and whole individual and thinker.

Response 3/31

Louis, Jordan, Dame

At the beginning, the starter prompt seemed somewhat useful, but I knew the response would be vague. I asked ChatGPT to summarize Toy Story and then asked it to "summarize Shrek." Both summaries were 8-10 sentences, not nearly enough to fully capture the details of a film. The prompt pattern that made the biggest difference was giving it structure. I asked for a paragraph for each scene, and the result was much more detailed; it searched the web first, so it clearly pulled information from the internet. After this, we asked it to help us all run faster in a month, and it spit out a plan without knowing any details about us. I then used the reverse conversation method and asked it to ask us for details about our height, weight, diets, sleep patterns, and so on, and once we answered those, it gave us a more detailed, precise plan that would last a month, along with an end goal. Overall, I learned that AI is good for outlines and specific plans, but only with specific details. To get good results, the user must be specific with their prompts, and their results will continue to improve, especially with health goals. This ties back to the prompt literacy text we went over for the week of March 31, where we discussed the benefits and issues of AI.


Prompt 2

ChatGPT is strange unless you're direct. The strangest thing about ChatGPT's model was that my prompt had to be really specific in order for it to find academic sources. First I asked it for academic sources related to facial recognition in policing, environmental studies, or healthcare. It came up with journal articles and one YouTube video, but nothing from a book or database. After my third ask, I was able to filter down to five sources from a database and then put those into NotebookLM. NotebookLM summarized these sources together and claimed that "Facial recognition technology (FRT) in policing creates a conflict between surveillance efficiency and democratic accountability. Public support is often performative; anonymity reveals many citizens privately harbor reservations about biometric tracking. Empirical data shows FRT deployment correlates with increased racial disparities, specifically raising Black arrest rates while decreasing White rates. This stems from automation bias and pre-existing structural inequities. Global regulations remain fragmented; the US lacks the robust accountability frameworks found in the EU, necessitating urgent, transparent impact assessments to protect civil liberties." Through this lab, we learned that both AI tools have strengths, but Chat is not going to excel at what NotebookLM excels at, and vice versa. Chat struggled to find these sources and struggled even further to give me good summaries of the sources it provided, since that required multiple steps. My prediction about where AI is headed is continued reliance, because it comes up with sources instantly rather than requiring a trip to the library or a database. I learned that Chat states its answers very confidently, and if you do not check it for errors, you are using it incorrectly.