This week’s reading, videos, and discussions have improved my understanding of prompting in the context of large language models. I learned that several “prompt patterns” exist that can be applied to improve an LLM’s output quality (see Section III of White et al., 2023).
In the past, I didn’t really think about how to formulate my prompts. I just went by my “gut feeling,” and that worked pretty well. This is probably partly because, as Mollick (2023) notes, the term prompt engineering, as it is often defined, is overly complex and probably more appropriate in the context of computer programming. However, as LLMs continue to advance, their use has become more interactive, allowing for more human-like conversations, for example, by means of iterative prompting (see, e.g., the Mollicks’ video on prompting we watched for our class on April 2; Wharton School, 2023), rather than relying on very specific input to produce useful output. Nevertheless, I think knowing about certain prompt patterns is useful for reaching a high-quality output more quickly.
For the prompting experiment, I tried the flipped interaction pattern combined with the persona pattern. I used the following prompt to compare the outputs of ChatGPT and Perplexity:
“Help me plan my California trip. Imagine you are a travel agent. Ask me as many questions as you need to provide me with a 1-week holiday tailored to my interests.”
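For anyone who wants to reuse this combination of patterns programmatically, the structure of the prompt can be sketched as follows. This is only an illustration: the function name and the OpenAI-style chat message format are my own assumptions, not part of either tool, and the actual model call is omitted.

```python
# Sketch of the flipped interaction + persona pattern as chat messages.
# The persona goes in the system message; the flipped-interaction
# instruction (asking the model to question the user first) goes in
# the user message. No API is called here -- this only builds the prompt.

def build_flipped_interaction_prompt(persona: str, task: str) -> list[dict]:
    """Combine a persona instruction with a flipped-interaction request."""
    return [
        {
            "role": "system",
            "content": f"Imagine you are a {persona}.",
        },
        {
            "role": "user",
            "content": (
                f"Help me with the following task: {task} "
                "Ask me as many questions as you need before "
                "giving me your final answer."
            ),
        },
    ]

messages = build_flipped_interaction_prompt(
    "travel agent",
    "plan a 1-week California holiday tailored to my interests.",
)
for message in messages:
    print(f'{message["role"]}: {message["content"]}')
```

The same message list could then be passed to whichever chat model one is comparing, which keeps the wording identical across tools.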
I think asking an LLM to take on the persona of a travel agent can be useful for anyone planning a trip and feeling a little overwhelmed by the task – especially when they don’t yet know much about their destination. I am quite familiar with ChatGPT but had never used Perplexity before. I was surprised by how good the questions and responses from both LLMs were. The questions I was asked overlapped a lot, and both LLMs suggested similar itineraries. ChatGPT asked me 20 questions, while Perplexity asked 21. Overall, I think ChatGPT has a more user-friendly design – it does a really good job of using emojis or bold-facing important parts, for example. Perplexity seemed a little more “professional” and straightforward in its tone. I also liked that Perplexity included a section called “Other considerations” that let me add anything it might have forgotten to ask.
Sources:
- Mollick, E. (2023). A guide to prompting AI for what you want. One Useful Thing. https://www.oneusefulthing.org/p/a-guide-to-prompting-ai-for-what
- Wharton School. (2023). Practical AI for instructors and students part 3: Prompting AI. YouTube. https://www.youtube.com/watch?v=wbGKfAPlZVA
- White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., & Schmidt, D. C. (2023). A prompt pattern catalog to enhance prompt engineering with ChatGPT. arXiv. https://doi.org/10.48550/arXiv.2302.11382