April 2nd Response

Bad prompting example:

I was using Google Gemini to revise and reformat text for a formal technical report for my Physics 2 Lab.

Original prompt: “remake the abstract, making sure to follow the guidelines listed in the template very strictly.I also need you to add to the conclusion about how our experimental data for 1/r relation did not match the theoretical. be sure to include how the data range of 3 different distances is not very much data. while you’re doing that, go through my PDF and make sure theres no spelling or grammatical errors.”

I also provided it with a draft of an abstract and an example of what a finished abstract looks like.

Result: The paragraph returned by the AI did not answer the prompt or complete the task fully. I very clearly asked it to add a section to the conclusion, which it fully ignored. It also produced a very long conclusion even though I asked it to keep the entire thing short.

Using the UT guide for prompt engineering, I have reworked the prompt to provide me with a much better result.

Reworked prompt:


Role: You are a Professor and an expert in crafting Physics Technical Reports.

Purpose: I am finalizing a lab report regarding the 1/r relation. Your goal is to provide a polished, submission-ready version of this document.

Tasks: Please perform these tasks in order to ensure the highest quality:

  1. Proofreading: Conduct a comprehensive review of the provided PDF text to correct all spelling and grammatical errors.
  2. Abstract Revision: Remake the abstract so that it adheres strictly to these template guidelines: [Paste Template Guidelines Here].
  3. Conclusion Enhancement: Update the conclusion to address the discrepancy between our experimental data for the 1/r relation and the theoretical model. You must explicitly argue that the limited data range—consisting of only three distances—is statistically insufficient to confirm the theoretical relationship.

Tone & Format:

  • Tone: Maintain an authoritative, formal, and precise scientific voice throughout the revisions.
  • Format: The final output must be a single, fully revised version of the text, integrating all the changes above.

Clarification: Before you begin, do you have any clarifying questions regarding the template guidelines or the specific experimental data provided?
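The reworked prompt above is really a template with labeled sections. As an illustrative sketch (not any official API — just string assembly), the same structure could be built programmatically so the section labels stay consistent across reports; the function name and parameters here are my own invention:

```python
# Sketch: assemble a structured prompt from labeled parts, mirroring the
# Role / Purpose / Tasks / Tone & Format / Clarification sections above.
def build_prompt(role, purpose, tasks, tone, fmt, clarification):
    lines = [f"Role: {role}", "", f"Purpose: {purpose}", "", "Tasks:"]
    # Number the tasks so the model performs them in order.
    lines += [f"  {i}. {task}" for i, task in enumerate(tasks, 1)]
    lines += ["", "Tone & Format:",
              f"  - Tone: {tone}",
              f"  - Format: {fmt}",
              "", f"Clarification: {clarification}"]
    return "\n".join(lines)

prompt = build_prompt(
    role="You are a Professor and an expert in crafting Physics Technical Reports.",
    purpose="Provide a polished, submission-ready version of this lab report.",
    tasks=["Proofread the provided PDF text for spelling and grammar.",
           "Remake the abstract to follow the template guidelines.",
           "Update the conclusion to address the 1/r data discrepancy."],
    tone="Authoritative, formal, precise scientific voice.",
    fmt="A single, fully revised version of the text.",
    clarification="Before you begin, do you have any clarifying questions?",
)
print(prompt)
```

The resulting string can then be pasted into Gemini (or any chat model) as-is; keeping the sections in a function makes it easy to reuse the same scaffold for the next lab report.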

Why this is improved
The UT paper claims that the AI needs to know who it is, who it is talking to, and why it is doing what it is doing (UT Aspire Prompt Literacy, page 2). This new prompt establishes the AI’s identity as an expert, identifies the professor/academic audience, and clearly states the goal of a submission-ready report.

  • By breaking the request into three numbered, sequential tasks, the prompt reduces the “processing resources” required, so the AI does less logic-based work, leading to greater accuracy (UT Aspire Prompt Literacy, page 2).
  • Assigning the AI a specific professional persona helps reduce the likelihood of hallucinations and ensures the language is appropriate (UT Aspire Prompt Literacy, page 2).
  • It tells the AI exactly what to include and what tone to use, preventing broad or general responses (UT Aspire Prompt Literacy, page 2).

Post 3

Up until now, I had mostly been using LLMs to revise my writing, including grammar, structure, and clarity, so the prompting exercises we did in class were fun and interesting. I think one of the most helpful prompting strategies I learned was the importance of being specific and clear with my prompts (I used Google Gemini). For instance, when I asked it to tell me about WWI, it gave me a follow-up question: “Are you interested in a specific part of the war, like the life of a soldier in the trenches or the political maneuvering behind the scenes?” Additionally, when I told it to act as a teacher teaching the topic to elementary school students, the words in the output became simpler and more beginner-friendly, which made the explanation more accessible. E.g., “One day, a leader from a country called Austria-Hungary was hurt while visiting another place. Because of those “club promises” (which adults call alliances), one country joined the fight, then another, and another, until almost the whole world was involved!” This experience connects to the reading on prompting from OpenAI Academy, which emphasizes the importance of clearly stating what you want the model to do and how you want the response delivered: “Be clear about what you need ChatGPT to do. Tell ChatGPT how you’d like the response”. These strategies may be helpful when learning about unfamiliar topics, as they make complex information more approachable.

Post 3: Prompting LLMs

One of the most useful prompting strategies I’ve learned is how to get LLMs to write structured literature reviews. I started doing this because reading empirical economics papers one by one takes a lot of time, especially when you’re trying to go through many of them for a project. Using an LLM helps me move faster and compare papers more efficiently.

At first, I used a very simple prompt:
“Write a literature review of the paper attached (PDF).”

The result honestly wasn’t great. The output was very general, kind of surface-level, and didn’t really engage with the empirical parts of the paper. It also wasn’t structured like a real literature review, and it sometimes included claims that weren’t clearly supported by the text. It felt more like a loose summary than something I could actually use for academic work.

Then I tried a more detailed prompt:
“Write a literature review of the paper attached. You are an economics professor writing for an academic journal. Include sections on: research overview, methodology, key findings, research strengths, and limitations. The review should be 800 words. Do not synthesize or cite sources not present in the source material attached. Do not make unsupported claims.”

This worked much better. The output was clearly organized, more technical, and actually focused on things that matter in empirical economics, like identification strategy and limitations. It also stayed grounded in the paper instead of making things up.

I tested this on ChatGPT (GPT 5.4), and the difference between the two prompts was pretty clear. With the second one, the model produced something that actually resembled a real literature review, with sections like methodology and key findings written in a more academic tone.

This connects directly to what we read about prompt literacy. The UT Aspire guide explains that being specific about things like role, format, and purpose leads to better outputs, while vague prompts usually give broad, less useful answers.

Overall, this kind of prompting is really useful for students or researchers who need to go through a lot of papers quickly. It doesn’t replace actually reading the paper, but it makes the process way more efficient and helps you decide what’s worth spending more time on.

Post 3: Prompting LLMs

The most helpful prompting strategy I’ve learned is being clear about exactly what I want the output to look like. Not just the topic, but the format, timeline, and specific instructions.

The prompt I used was: “make a daily schedule to help a person gain 10 pounds over 4 weeks no symbols just words” and I tested this on ChatGPT (GPT-5.3).

Here is part of the output I got:

8:00 AM Breakfast
4 eggs
2 slices of toast with peanut butter
1 banana
1 cup of milk

10:30 AM Snack
Protein shake
Handful of nuts or granola

What matters here is how the instructions shape the result. Saying “over 4 weeks” makes it structured and realistic instead of random. Saying “no symbols just words” makes it clean and usable for notes or assignments. The response follows exactly what I asked for.

This strategy works because AI is not actually thinking; it is just predicting language. As one article explains, these systems “can interact with us through natural language, and we can’t distinguish the real from the fake” (The New York Times, “What Exactly Are the Dangers Posed by A.I.?”, 2023). That means the clearer your input is, the better the output will be.

I think that this is useful for students who want organized notes, athletes who need a structured plan, and really anyone who wants direct answers without anything extra. It saves time and makes the output easier to actually use.

Overall, what I’ve learned is that prompting is about control. If you control the format and details, you control the output.

The New York Times. “What Exactly Are the Dangers Posed by A.I.?” (2023).

Post 3: Prompting LLMs

Nebiyou
After reading and discussing the prompting guidelines in class on Tuesday, my second prompt with Gemini was “Make me a diet for a gymgoer”. I thought this would be fun to try, and since I go to the gym myself, I was curious about what recommendation I’d get. The response I got at first was that “we need to move past eating clean and focus on performance fuel”, and then it gave me a basic meal plan and nutrition strategy. This was an interesting response, considering I didn’t specify my training goals (powerlifting, athleticism, mass gain, etc.). Towards the end of the response, however, it did ask me whether I’m looking to bulk up, cut down, or maintain my physique while getting stronger.

Then I used the Flipping Technique and asked, “Ask me 5 questions about my goals, resources, and routine”, and it gave me more focused questions, such as my primary objective, dietary dealbreakers, my current training schedule, and my height and weight. I found this to be a lot more helpful and a step in the right direction. This prompt could be very useful for people new to weightlifting, or for people who have been lifting for a while but aren’t seeing the results they’re looking for. I found the flipping technique from Dr. Hayward’s slideshow to be the most useful prompting technique. It’s nice to have the LLM ask questions to home in on what I’m really looking for, since sometimes the prompter themselves might not even be sure what they’re looking for.

For my first prompt, I asked it “Tell me about World War I”, and it gave me an okay overview. My follow-up prompt was “You are a history teacher, and I’m a student in your college-level class. Tell me about the turning points of World War I and the after-effects of World War I in the Western Hemisphere”. I used advice from “Tips to Get Started” and “Best Practices” in UT’s Prompt Literacy guide, like telling Gemini what role it should take on, who the audience is, and what the purpose is: “To reduce the likelihood of hallucinations (outputs that sound plausible, but are incorrect or unrelated), tell it that it’s an expert in the topic you’re prompting. For example, ‘You are an expert in men’s fashion design.’ Other expert role examples include: Copywriter, Public Speaker, Marketing Strategist” (under Best Practices).

Post 3: LLM Prompt

To be honest about my experience with prompting, I was somewhat confused by some of the concepts in this experiment. One thing that caught my attention, though, was the personal prompt, since I’m so used to asking questions about what I can do to improve myself. For the personal experiment, I asked, “help me be more productive.” I used Google Gemini, and it responded with a constrained list of productivity advice: master my to-do list, protect my focus, manage my energy (not just my time), and optimize my environment. When I added a persona and a goal, it mentioned S.M.A.R.T. goals. That was the only thing that improved; otherwise, it seemed to get confused.