Post 3: Prompting LLMs

One of the most useful prompting strategies I’ve learned is how to get LLMs to write structured literature reviews. I started doing this because reading empirical economics papers one by one takes a lot of time, especially when you’re trying to go through many of them for a project. Using an LLM helps me move faster and compare papers more efficiently.

At first, I used a very simple prompt:
“Write a literature review of the paper attached (PDF).”

The result honestly wasn’t great. The output was general and surface-level, and it didn’t really engage with the empirical parts of the paper. It also wasn’t structured like a real literature review, and it sometimes included claims that weren’t clearly supported by the text. It felt more like a loose summary than something I could actually use for academic work.

Then I tried a more detailed prompt:
“Write a literature review of the paper attached. You are an economics professor writing for an academic journal. Include sections on: research overview, methodology, key findings, research strengths, and limitations. The review should be 800 words. Do not synthesize or cite sources not present in the source material attached. Do not make unsupported claims.”

This worked much better. The output was clearly organized, more technical, and actually focused on things that matter in empirical economics, like identification strategy and limitations. It also stayed grounded in the paper instead of making things up.

I tested this on ChatGPT (GPT-5.4), and the difference between the two prompts was pretty clear. With the second one, the model produced something that actually resembled a real literature review, with sections like methodology and key findings written in a more academic tone.
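If you want to reproduce this outside the chat interface, here’s a minimal sketch of the same workflow using the OpenAI Python SDK, with pypdf to pull text out of the PDF. The model name, file path, and helper names are placeholders I’m assuming for illustration, not the exact setup I used.

```python
# Minimal sketch: send the structured literature-review prompt to an LLM API.
# Assumptions: the `openai` and `pypdf` packages are installed, OPENAI_API_KEY
# is set in the environment, and "paper.pdf" is the paper to review. The model
# name is a placeholder -- swap in whichever model you have access to.
from openai import OpenAI
from pypdf import PdfReader

PROMPT = (
    "Write a literature review of the paper below. You are an economics "
    "professor writing for an academic journal. Include sections on: research "
    "overview, methodology, key findings, research strengths, and limitations. "
    "The review should be 800 words. Do not synthesize or cite sources not "
    "present in the source material. Do not make unsupported claims."
)

def extract_text(pdf_path: str) -> str:
    """Pull the raw text out of the PDF so it can go into the prompt."""
    reader = PdfReader(pdf_path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def review_paper(pdf_path: str, model: str = "gpt-4o") -> str:
    """Ask the model for a structured review of one paper."""
    client = OpenAI()
    paper_text = extract_text(pdf_path)
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": f"{PROMPT}\n\n---\n\n{paper_text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_paper("paper.pdf"))
```

The key design point is the same one the prompt comparison shows: the role, the required sections, the length, and the grounding constraints all live in the prompt string, so the structure is specified up front instead of left for the model to guess.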

This connects directly to what we read about prompt literacy. The UT Aspire guide explains that being specific about things like role, format, and purpose leads to better outputs, while vague prompts usually give broad, less useful answers.

Overall, this kind of prompting is really useful for students or researchers who need to go through a lot of papers quickly. It doesn’t replace actually reading the paper, but it makes the process way more efficient and helps you decide what’s worth spending more time on.
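Since the whole point is getting through many papers, here’s a rough sketch of batching the same prompt over a folder of PDFs. It reuses the hypothetical review_paper() helper from the sketch above, and the directory names are placeholders.

```python
# Sketch of batching the structured prompt over a folder of papers.
# Assumes review_paper() from the earlier sketch is defined in the same file;
# "papers" and "reviews" are placeholder directory names.
from pathlib import Path

def review_folder(pdf_dir: str, out_dir: str = "reviews") -> None:
    """Write one review file per PDF so you can skim them side by side."""
    Path(out_dir).mkdir(exist_ok=True)
    for pdf_path in sorted(Path(pdf_dir).glob("*.pdf")):
        review = review_paper(str(pdf_path))
        out_file = Path(out_dir) / f"{pdf_path.stem}_review.md"
        out_file.write_text(review, encoding="utf-8")
        print(f"Wrote {out_file}")

review_folder("papers")
```

Because every paper gets the same sections in the same order, the outputs are directly comparable, which is exactly what makes it easier to decide which papers deserve a full read.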

One thought on “Post 3: Prompting LLMs”

  1. I’ve noticed the same thing: if I don’t clearly structure my prompt, the LLM just gives a surface-level summary instead of something usable. It shows that to get a high-quality output, I need to define exactly what I want the model to produce rather than expecting it to infer the structure on its own.
