What People Get Wrong About AI: The Hallucination Problem

One of the most important things the general public doesn’t fully understand about generative AI is that it can sound confident while still being wrong. This issue—often called “hallucination”—happens when AI generates information that seems accurate but is actually misleading, outdated, or completely incorrect. The problem isn’t just errors—it’s that the errors are delivered in a way that makes them hard to notice.

A clear example comes from a sports prediction exercise I recently worked on. The AI generated a set of betting picks built around star players like Stephen Curry and Joel Embiid, using reasoning like “high scoring role” or “exceeds this line frequently.” At first glance, everything sounded logical. But when I compared the picks to actual results, none of them hit. One player didn’t even play that day, which made that prediction unusable. There were no completely fake stats, but the AI relied on vague trends instead of verified, up-to-date data. That’s what makes hallucinations tricky: they’re not always obvious lies, but often confident generalizations that don’t hold up in reality.

We should be paying attention to this because more people are starting to rely on AI for decisions—whether in school, work, or even money-related choices like betting or investing. If users assume AI is always accurate, they can make poor decisions based on incomplete or misleading information. My solution is simple: AI should be treated as a starting point, not a final answer. Users need to verify important information, and AI systems should be pushed to show sources, use real data, and clearly signal uncertainty. Understanding this limitation isn’t about rejecting AI—it’s about using it more responsibly and effectively.
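To make the “starting point, not a final answer” idea concrete, here is a minimal sketch of what a verification step could look like in Python. Everything in it is hypothetical: the pick format, the player names, and the `filter_verified_picks` helper are invented for illustration, and the lineup data would have to come from a real, up-to-date source.

```python
# Hypothetical sketch: gate AI-generated picks behind a check against
# verified data before acting on them. All names and data are made up.

def filter_verified_picks(ai_picks, active_players):
    """Keep only picks whose player is confirmed active today."""
    verified = []
    for pick in ai_picks:
        if pick["player"] in active_players:
            verified.append(pick)
        else:
            # The AI may have assumed availability that isn't real; flag it.
            print(f"Rejected: {pick['player']} is not in today's verified lineup.")
    return verified

# Example usage with made-up data:
ai_picks = [
    {"player": "Player A", "bet": "over 25.5 points"},
    {"player": "Player B", "bet": "over 8.5 assists"},
]
active_players = {"Player A"}  # would come from a trusted, current source

print(filter_verified_picks(ai_picks, active_players))
```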

Post 5: Academic AI (Marquan Felts-Lipsey)

AI is becoming a big part of school, and I think it can be helpful if it’s used the right way. It makes things easier, like getting ideas or understanding hard topics, but there has to be a limit. If students depend on AI too much, they might stop thinking for themselves and not really learn anything.

I also think AI companies should be more honest about how their tools work, because what those tools produce isn’t always correct. There are definitely gray areas too, like using AI for help versus just copying answers. That’s why schools and colleges should make clear rules so students know what’s okay.

At the same time, AI can be a good tool for students who struggle, especially with writing or learning new material. In the future, AI will probably affect jobs, so it’s important for us to learn how to use it the right way instead of relying on it.

As one idea puts it, “AI should be a tool to support learning, not replace the effort it takes to actually understand something.”


What stood out to me here was how emotionally grounded it felt. Instead of jumping straight into a dramatic plot, the AI focused on a quiet, almost poetic moment. That made it feel more human than I expected.

For me, creativity isn’t just producing something new; it’s producing something that feels meaningful or intentional. Based on our readings, I’d connect this to the idea that creativity involves both novelty and value. Something can be original, but if it doesn’t resonate or feel purposeful, it doesn’t fully count. At the same time, I’m not convinced creativity requires consciousness or lived experience, even though some philosophers argue that intention is what makes an act genuinely creative.

So, did the AI demonstrate creativity here? I’d say partially, but not in the same way humans do. The passage is clearly novel in the sense that it wasn’t copied from anywhere, and it has emotional value—it creates a vivid feeling. But I don’t think the AI “understood” what it was doing. It was recombining patterns it learned from human writing. In class, we talked about functionalist arguments—if something produces outputs indistinguishable from creative work, maybe that’s enough to call it creative. This example kind of supports that idea, because if I didn’t know an AI wrote it, I might assume a human did.

At the same time, there were limits. As the story continued, it slipped into predictable territory—loneliness, overload, eventual isolation. It felt like it ran out of originality once it moved past the opening mood. That made me think the AI is strongest at style and imitation, but weaker at sustaining deeper, intentional development.

This experiment definitely shifted how I think about creativity and AI. Before, I saw AI writing as mostly generic and mechanical. Now I think it can produce moments of real beauty or insight—but those moments are inconsistent and not driven by genuine understanding. It made me realize that creativity, at least for humans, is not just about producing something that sounds good, but about having a perspective behind it.

Margaret Boden, “Creativity and Artificial Intelligence,” Artificial Intelligence 103 (1998): 347–356.

LLM

The most helpful prompting strategy I learned is being specific and clear about what I want the AI to do. Instead of asking something basic like “summarize this,” I learned to give more detail, like asking for a summary in my own words, with examples, or in a certain tone. When I do that, the response is way better and actually sounds like something I could say.

One prompt I used was: “Explain this concept in my own words, like I’m talking to someone outside of class.” This helped me understand the topic more because the response was simple and easier to relate to. It didn’t sound too complicated or like a textbook, which made it easier for me to remember.

This connects to what we discussed in class about how prompts shape the response. The better and more detailed your prompt is, the better the AI understands what you want. It’s kind of like giving directions: if you’re vague, you won’t get the result you want, but if you’re specific, you get something way more useful.
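As a rough illustration, here is a minimal sketch of the vague-versus-specific comparison using the OpenAI Python client. The model name and the placeholder passage are assumptions; any chat-style API would show the same effect.

```python
# Minimal sketch of vague vs. specific prompting with the OpenAI
# Python client. The model name is an assumption; swap in whatever
# chat model you have access to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

passage = "..."  # the text you want explained

vague_prompt = f"Summarize this: {passage}"
specific_prompt = (
    "Explain this in plain, conversational language, as if I'm "
    "talking to someone outside of class. Include one everyday "
    f"example:\n\n{passage}"
)

for prompt in (vague_prompt, specific_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```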

One thing that really stood out to me was learning how facial recognition systems don’t always work the same for everyone. Some studies showed that these systems can make more mistakes when identifying people with darker skin tones. That made me realize that technology isn’t always neutral: it reflects the data it’s trained on.

This made me think about how important it is for developers to test AI systems on diverse groups of people. AI is going to keep growing and being used in things like security, jobs, and schools, so it needs to be fair. In the future, I think AI will become even more common, but people will also push harder for rules and accountability to make sure it’s used responsibly.
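One concrete way developers can run that kind of test is to break a system’s error rate out by group instead of reporting a single overall number. Here is a minimal sketch with entirely made-up results, just to show the bookkeeping; real audits use large, carefully collected datasets.

```python
# Minimal sketch: compare a classifier's error rate across groups.
# The data here is entirely made up for illustration.
from collections import defaultdict

# Each record: (group label, whether the prediction was correct)
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} over {totals[group]} samples")

# A large gap between groups (33% vs. 67% in this toy data) is the kind
# of disparity the facial recognition studies reported.
```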