A helpful prompting tip I’ve learned from my own experience is to read the information you provide carefully and ask the AI clear questions. Once, I asked ChatGPT to review a movie I had seen from the perspective of its director. The AI gave accurate information about the characters, but the storyline it described didn’t match the movie I had watched. When I pushed back with a vague follow-up like “Are you sure about that detail?”, it immediately replied, “Sorry for that misinformation, let me fix it for you,” and the revised answer was correct. Then, to test how it handled my input, I asked, “I thought this scene was supposed to be like this, no?” This time it insisted its initial answer had been correct, contradicting the accurate correction it had just given. Since it’s been a while, I can no longer find this conversation in my chat history.
I’ve learned my lesson, and now every time I use ChatGPT I make sure to phrase my question specifically and check my own materials carefully to prevent unnecessary errors. As the reading points out, “An LLM can always produce factual inaccuracies, just like a human” (White et al., 2023). So don’t rely too heavily on AI to answer everything for you.
I agree that ChatGPT doesn’t give accurate answers every time, and that we can’t rely on it blindly.
Double-checking every response from ChatGPT is important. With GPT-3 and GPT-3.5, the training data was more limited, and those models couldn’t check uploaded files or access real-time information, so their answers often fell short of expectations. GPT-4o was trained on far more data, and it added file checking and real-time data access, so I feel the answers are noticeably better than they used to be. Even so, double-checking is still necessary, because no model is accurate all the time.