Post 3

The prompt I used was “make me a workout plan.” I kept it super simple and straight to the point. In class, we discussed how a vague prompt usually leads to a generic answer. I gave the prompt to ChatGPT, and I got a very general workout plan. It was obvious the program didn’t understand what I was asking. It was simply producing something that sounded like a generic workout plan. 

This prompt would be helpful if a person simply wants a quick starting point, like a beginner who does not know where to begin with fitness. However, for a person like me, it is not particularly helpful without more information. For instance, I would want more information regarding specific goals, scheduling, and perhaps equipment availability. This is particularly relevant to our previous discussion in class regarding how the usefulness of an LLM depends on how specific a prompt is. When I improved the prompt by adding more detail, such as asking for “a 5-day gym plan focused on muscle gain with dumbbells and machines,” the response became noticeably more relevant. 
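One way to picture this refinement process is to treat the prompt as something you build up from explicit constraints rather than a single vague sentence. The sketch below is purely illustrative (the `build_workout_prompt` helper and its parameters are my own invention, not anything from the guides): each argument fills in a detail the model would otherwise replace with the most generic default.

```python
def build_workout_prompt(goal, days_per_week, equipment, experience="beginner"):
    """Compose a specific workout-plan prompt from explicit constraints.

    Hypothetical helper for illustration: each parameter removes one
    dimension of ambiguity that the model would otherwise fill with
    its most average guess.
    """
    return (
        f"Make me a {days_per_week}-day workout plan focused on {goal}. "
        f"I have access to {', '.join(equipment)}, "
        f"and my experience level is {experience}."
    )

# The vague version versus the refined version of the same request.
vague = "Make me a workout plan."
specific = build_workout_prompt(
    goal="muscle gain",
    days_per_week=5,
    equipment=["dumbbells", "machines"],
)
print(specific)
```

The point is not the code itself but the habit it encodes: goals, schedule, and equipment are separate decisions, and leaving any of them out hands that decision to the model.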

ChatGPT provided a structure for a weekly routine (e.g., chest day, leg day, etc.), but that structure only became effective once I changed and refined the prompt. This is really the key to understanding that the model is not thinking; it is simply reacting to patterns based on common requests for information. That is also why it can sometimes make recommendations that are off-base or overly general. 

This all ties together with the readings we had in class. The OpenAI guide emphasizes that good prompts should clearly define the task, provide context, and specify the desired output, which I didn’t do at first, leading to a weaker response. Similarly, the Claude prompting guide explains that you should treat the model like a “new employee” who has no prior context and needs clear instructions. That idea really clicked for me here: my original prompt didn’t give the model anything to work with, so it gave me the most average answer possible. 

The output ChatGPT gave me: 

Schedule 

  • Day 1 – Workout  
  • Day 2 – Rest  
  • Day 3 – Workout  
  • Day 4 – Rest  
  • Day 5 – Workout  
  • Weekend – Rest  

Workout A 

  • Squats – 3 × 8  
  • Push-ups (or bench press) – 3 × 8  
  • Lat Pulldown (or assisted pull-ups) – 3 × 8  
  • Plank – 3 × 20–30 sec  

Workout B 

  • Deadlifts – 3 × 5  
  • Dumbbell Shoulder Press – 3 × 8  
  • Dumbbell Rows – 3 × 8  
  • Crunches – 3 × 12 

Citation 

OpenAI. Prompting Guide. OpenAI Academy. 

3 thoughts on “Post 3”

  1. This is really interesting and a clear example of how important specificity is when prompting LLMs. It serves as a helpful reminder to always provide sufficient context in prompts.
    It also made me think about how and whether LLMs actually retain context from previous chats. For example, there have been several times when I have had to remind ChatGPT of something I assumed it would remember from earlier conversations.

  2. This is an excellent point, and I am glad that you brought up the fact that we can be biased in what we are looking for without telling Chat that. If you work out a lot and just ask Chat for a workout, it is going to assume you are the average Joe with very minimal gym experience. I wonder if, in the future, you could specifically show what changes you made to the prompt to get the results you desired.

  3. Your comparison really highlights how the generic isn’t always bad, but it is limited. As you noted, the first output is a good starting point for a beginner or novice, even if it falls short for someone with specific goals. I love the connection to the new-employee analogy from the Claude guide, as it explains why the model defaulted to the most average patterns when you didn’t provide any constraints. It’s a great reminder that an LLM’s utility is often a direct reflection of the context we’re willing to provide.
