One of the most important things the general public doesn’t fully understand about generative AI is that it can sound confident while still being wrong. This issue, often called “hallucination,” happens when AI generates information that seems accurate but is actually misleading, outdated, or completely incorrect. The problem isn’t just that errors occur; it’s that they’re delivered so fluently that they’re hard to notice.
A clear example comes from a sports prediction exercise I recently worked on. The AI generated a set of betting picks built around star players like Stephen Curry and Joel Embiid, with reasoning like “high scoring role” or “exceeds this line frequently.” At first glance, everything sounded logical. But when I compared the picks to actual results, none of them hit. One player didn’t even play that day, which made that prediction unusable. There were no completely fake stats, but the AI relied on vague trends instead of verified, up-to-date data. That’s what makes hallucinations tricky: they aren’t always obvious lies, but often confident generalizations that don’t hold up in reality.
We should pay attention to this because more people are relying on AI for decisions, whether in school, at work, or in money-related choices like betting or investing. If users assume AI is always accurate, they can make poor decisions based on incomplete or misleading information. My solution is simple: treat AI as a starting point, not a final answer. Users need to verify important information, and AI systems should be pushed to cite their sources, use real, up-to-date data, and clearly signal uncertainty. Understanding this limitation isn’t about rejecting AI; it’s about using it more responsibly and effectively.
I agree, and with information everywhere these days, it’s more important than ever to think critically about what you read.