What is the single most important GenAI-related issue you wish the general public knew more about?

The single most important GenAI issue I wish the general public knew about is that LLMs do not think for themselves. This is a common misconception that even I believed coming into this class. Many people assume that an LLM thinks at extremely high speed and comes up with logical answers based on that thinking. In reality, it is trained on enormous amounts of data, and based on that data it predicts the next word and produces a response.

Knowing this, users need to have prior knowledge about what they are asking, check other sources, and refine their prompts if they want better answers. One specific example of people using GenAI incorrectly is using it in place of Google or other search engines. Quickly typing in a prompt, skimming the answer an LLM produces, and trusting it as fact is harmful to one's knowledge, but the general public does not understand that because they do not know the single most important GenAI-related issue I mentioned above.

In conclusion, LLMs are useful in the right context. But if you believe an LLM is a magic genie that produces all the right answers at rapid speed, you will become reliant on it, even though it can easily hallucinate based on its training data or reproduce biases you do not want. Not only can it be incorrect, but it can also serve as a mental crutch that we, as a society of creative beings, do not need.
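To make the "predicting the next word" idea concrete, here is a minimal toy sketch: a bigram model that predicts the next word purely from counts in a tiny training text. This is an illustration only; real LLMs use neural networks over vast datasets, and the training text here is made up. But the core idea is the same: no reasoning, just statistics about which word tends to follow the current one.

```python
from collections import defaultdict, Counter

# Made-up "training data" for illustration.
training_text = (
    "the model predicts the next word the model does not think "
    "the model repeats patterns from the training data"
).split()

# Count which words follow each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "model" -- the most frequent follower of "the"
```

The model here never "understands" the sentence; it only echoes the statistics of its training data, which is also why a model can confidently output something false if the statistics point that way.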

2 thoughts on “What is the single most important GenAI-related issue you wish the general public knew more about?”

  1. That’s true, although I couldn’t help but wonder: even though people are aware of what could happen with AI, many of them just don’t seem to bother with it or come up with solutions that they know could be an easy fix, like stepping back and thinking before acting.

  2. I absolutely agree with your point! Indeed, most people think that LLMs think for themselves and as such can be trusted 100%. However, we know that they use datasets and statistics to generate responses, and I certainly wish more people knew this.
    I personally have also been in a few scenarios where I used an LLM instead of Google, for instance, and just went with what the LLM said. I think it is also challenging now that whenever you search on Google, the AI overview appears at the top of the page, so people do not scroll down to see the actual search results, which defeats the whole purpose of not over-relying on LLMs.
