The most helpful prompting strategy I’ve learned is being clear about exactly what I want the output to look like. Not just the topic, but the format, timeline, and specific instructions.
The prompt I used was: “make a daily schedule to help a person gain 10 pounds over 4 weeks no symbols just words” and I tested this on ChatGPT (GPT-5.3).
Here is part of the output I got:
8:00 AM Breakfast
4 eggs
2 slices of toast with peanut butter
1 banana
1 cup of milk
10:30 AM Snack
Protein shake
Handful of nuts or granola
What matters here is how the instructions shape the result. Saying “over 4 weeks” makes it structured and realistic instead of random. Saying “no symbols just words” makes it clean and usable for notes or assignments. The response follows exactly what I asked for.
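To show what I mean about being specific, the idea of spelling out the topic, timeline, and format rules can be sketched as a tiny helper function (this is just my own illustration; the function name and structure are made up, not anything from ChatGPT):

```python
def build_prompt(goal, timeline, format_rules):
    # Combine a goal, a timeline, and a list of format instructions
    # into one clear prompt string.
    return f"{goal} over {timeline}, {', '.join(format_rules)}"

prompt = build_prompt(
    "make a daily schedule to help a person gain 10 pounds",
    "4 weeks",
    ["no symbols", "just words"],
)
print(prompt)
# make a daily schedule to help a person gain 10 pounds over 4 weeks, no symbols, just words
```

The point is that every piece (goal, timeline, format) is stated explicitly, so the model has less to guess about.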
This strategy works because AI is not actually thinking; it is predicting language. As one article explains, these systems “can interact with us through natural language, and we can’t distinguish the real from the fake” (The New York Times, “What Exactly Are the Dangers Posed by A.I.?,” 2023). That means the clearer your input is, the better the output will be.
I think this is useful for students who want organized notes, athletes who need a structured plan, and really anyone who wants direct answers without anything extra. It saves time and makes the output easier to actually use.
Overall, what I’ve learned is that prompting is about control. If you control the format and details, you control the output.
The New York Times, “What Exactly Are the Dangers Posed by A.I.?,” 2023.
I’ve also learned from experimenting with LLMs that the less guessing the AI has to do, the clearer and better the response I get!
I had the same experience when I was working with AI the other day… But it is also important for us to try out different prompts.
I like the question you asked and how it relates to you right now during football. I also like that you made your prompt specific by saying “no symbols just words” to make the output nice to look at!