Up until now, I had mostly been using LLMs to revise my writing, including grammar, structure, and clarity, so the prompting exercises we did in class were fun and interesting. One of the most helpful strategies I learned was the importance of being specific and clear with my prompts (I used Google Gemini). For instance, when I asked it to tell me about WW1, it responded with a follow-up question: “Are you interested in a specific part of the war, like the life of a soldier in the trenches or the political maneuvering behind the scenes?” Additionally, when I told it to act as a teacher explaining the topic to elementary school students, the output used simpler, beginner-friendly language that made the material more accessible. For example: “One day, a leader from a country called Austria-Hungary was hurt while visiting another place. Because of those “club promises” (which adults call alliances), one country joined the fight, then another, and another, until almost the whole world was involved!”

This experience connects to the reading on prompting from OpenAI Academy, which emphasizes clearly stating what you want the model to do and how you want the response delivered: “Be clear about what you need ChatGPT to do. Tell ChatGPT how you’d like the response”. These strategies are especially helpful when learning about unfamiliar topics, since they make complex information more approachable.
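The strategy above (stating the role, audience, focus, and desired format up front instead of waiting for a follow-up question) can be sketched as a small prompt builder. This is just a minimal illustration; the function name and structure are my own, not part of Gemini or any library:

```python
def build_prompt(topic, role=None, audience=None, focus=None, response_format=None):
    """Assemble a specific prompt from the details the model would otherwise ask for."""
    parts = []
    if role:
        # role prompting, e.g. "a teacher"
        parts.append(f"Act as {role}.")
    parts.append(f"Tell me about {topic}.")
    if focus:
        parts.append(f"Focus on {focus}.")
    if audience:
        parts.append(f"Explain it for {audience}.")
    if response_format:
        # tell the model how you'd like the response delivered
        parts.append(f"Deliver the response as {response_format}.")
    return " ".join(parts)

# A vague prompt leaves the model guessing:
vague = build_prompt("WW1")

# A specific prompt spells out the role, focus, audience, and format:
specific = build_prompt(
    "WW1",
    role="a teacher",
    audience="elementary school students",
    focus="the political maneuvering behind the scenes",
    response_format="a short story using simple words",
)
```

The `vague` string is essentially the prompt that triggered Gemini's follow-up question, while the `specific` one packs all of those answers in ahead of time.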
3 thoughts on “Post 3”
I like the idea of asking the LLM to act as if it is teaching elementary students! It's a smart way to get an overview of topics you might be unfamiliar with. Definitely using this more.
This is super helpful, Kyoka. ChatGPT sometimes reverts to giving much higher-level information on the topics you ask about, so whenever I need the most basic and accessible explanation, I add the extra context that it should be explained in the plainest possible language.
I agree that being specific in your prompting is what actually determines how useful the response is, because the LLM only works with exactly what you give it. When I prompt about something like World War I, I need to clearly state the audience, format, and focus, or I'll just get a broad answer. This shows that if I want a precise output, I have to directly tell the model exactly what I want it to produce.