Post 6

AI is super, super useful. It is probably a bigger invention than Google. It's fast, easy to use, even easy to read: it summarizes everything for you. It's almost like an addiction. Once you start using it, it's hard to stop. That's not a bad thing in itself; I personally use AI every day, even for my own study process (ethically). But at the same time, problems occur. The single most important generative AI issue the public needs to understand is that AIs can "invent" facts, citations, people, or events and present them to you mixed in with real information. The issue is not that AI makes mistakes, but that it makes mistakes confidently, often blended with actual sources, which makes them difficult to distinguish.

Once my mom brought me information generated by AI claiming that she doesn't need a passport to get on a plane, and that with abcde they will still allow you to board. That is absolutely not possible, and she even encouraged me to try it out on May 13th when I get back home.

We really need AI in schools and workplaces to be trained to produce outputs framed as drafts requiring verification, not answers requiring trust. We have to pay more attention because hallucinations are not avoidable. We also need to realize that there is a clear line between "sounds right" and "is right".

2 thoughts on "Post 6"

  1. I agree with you that hallucinations are a big issue, because the LLMs often pass these off as facts. I think Sam Altman has said he likes that ChatGPT hallucinates and thinks it is a fun quirk, but I don’t agree.
