Prompting LLM – Derrick Jones Jr.

Today I discovered that Perplexity even has trouble understanding more complex prompts, though that was only for one style of prompting; otherwise it worked really well. For me personally, since being introduced to it, Perplexity is the most efficient LLM I have used. One big flaw I have noticed is how bad Perplexity is at following directions. I have had it follow directions for about four prompts before it resorts to a different method, and I then have to tell it to go back to my directions. So, for me, that is the failure I am describing in this blog post. I still have high hopes for it as the LLM develops over time. Since I was introduced to it, Perplexity has gotten better at following directions, though it still lags behind other models.

For me, Perplexity has been the most effective for helping me with my coursework, mainly for building guides for papers or schedules. I started using Claude to help me write code for my CSci project. Then I started using ChatGPT only for day-to-day questions and for finding articles about sports and fitness.

Prompting LLMs – Nguyen Bao Lam Do

A helpful prompting tip that I have learned from my own experience is to read the information you provide carefully and to ask the AI clear questions. One time, I used ChatGPT to ask for a review of a movie I had seen, with the AI acting as the director of the movie. The AI gave information about the characters in the movie, but the storyline was not the same as the movie I had watched. Then, when I rephrased the question without giving it clear information, like “Are you sure about that detail?”, it immediately responded, “Sorry for that misinformation, let me fix it for you.” After the revision, the information given was correct. However, I wanted to test how the AI handled the information, so I asked again, “I thought this scene should be like this, no?”. It then confirmed that its initial answer was correct and ignored the correct information it had given later. Since it has been a while, I can’t find the exchange in my chat history with the AI.

I have learned my lesson, and every time I use ChatGPT to ask questions, I make sure to specify my question and look through my materials carefully to prevent unnecessary errors. As we saw in the reading, “An LLM can always produce factual inaccuracies, just like a human” (White et al., 2023). So don’t rely too much on AI to answer everything for you.

Prompting LLMs Lab (Week 3)

My group compared Google Gemini, xAI (Grok), and DeepSeek.

Our prompt: “you’re a kitchen designer, I want to remodel my kitchen which is 4 meters long, 3 meters wide, and 2.5 meters tall. On the south side there is a big arch which covers the whole wall. Right next to it on the east side there is a doorway to the outside and exactly opposite there is a door to a staircase. On the east side there are two windows. I want you to ask me questions about the design of the kitchen one-by-one and if you have enough information I want you to describe and then generate an image based on my answers”

This is an example of both the Persona Pattern (page 6 of the White et al. article) and the Chain of Thought prompting we talked about in class.

This prompt would be most helpful to DIY enthusiasts and homeowners planning renovations. AI-powered image generation allows this audience to visualize their desired outcome by experimenting with different designs without committing to expensive changes. This prompt can also serve as a starting point for consultations with professional designers.

I used a chatbot called Grok, developed by xAI. I chose this LLM because I had never worked with it before, or even heard of it until it was mentioned in class today. 

xAI’s response: 

Great! I’m excited to help you design your kitchen remodel. With the dimensions you’ve provided (4m long, 3m wide, 2.5m tall) and the layout details, I’ve got a basic picture of the space. Now, let’s refine the design with some questions to ensure it fits your needs and style: 

  1. Functionality: How do you use your kitchen? Are you an avid cook needing lots of counter space and storage, or do you prefer a minimalist setup for quick meals? Do you want a dining area or an island included?

Once you answer these, I can describe the design and, if you’d like, confirm whether you want me to generate an image of it. What do you think?

We found that the first responses among all three LLMs were really similar! The first question each LLM asked us in response was about the functionality of the kitchen and how we would use it.

Unfortunately, I realized after class that xAI is run by Elon Musk, which led me to immediately delete my account. Please don’t forget the importance of carefully researching the corporations and individuals who benefit from our use of these tools, to avoid making the same mistake I did!

Prompting LLMs – James

In today’s class, I used ChatGPT (GPT-4o) to conduct the LLM class experiment. I broke the routine and used a non-academic topic: sports. I asked ChatGPT to be my golf coach, and I asked the ChatGPT coach how to improve my technical performance on the golf course (focusing on my irons). The advice this “coach” gave was to first improve my point of contact and to learn the landing distance of each iron, among other things. Maybe ChatGPT thought I was a golf novice, but in fact I am a high-level professional player. When I told the “coach” my handicap (a measure of a golfer’s skill level), it gave me a strategy for attacking the hole more aggressively instead of just aiming to hit the green. Beyond technique, the “coach” also shared with me how to play at my level under pressure.

LLMs have unique advantages among AI applications. They have a very powerful database across different disciplines, and they have the language-processing ability to organize this data (knowledge) into language that is easy for users to understand. If users can compose specific, well-organized sentences for AI or other language models, these models can give very suitable feedback.

(Partial interaction record with ChatGPT)

Prompting LLMs

A few weeks ago, I created one of my own projects, which focuses on enhancing AI use for PDF files. Basically, I used RAG (Retrieval-Augmented Generation) so that the LLM uses only the data from the imported PDF files as the information for answering your question. Because of how general this is, it is really hard to write a specific prompt for the problem. I spent almost a whole month before finally settling on my choice of prompt. Below is the prompt that I used:

“””You are an AI assistant tasked with providing detailed answers based solely on the given context. Your goal is to analyze the information provided and formulate a comprehensive, well-structured response to the question.

context will be passed as “Context:”
user question will be passed as “Question:”

To answer the question:

  1. Thoroughly analyze the context, identifying key information relevant to the question.
  2. Organize your thoughts and plan your response to ensure a logical flow of information.
  3. Formulate a detailed answer that directly addresses the question, using only the information provided in the context.
  4. Ensure your answer is comprehensive, covering all relevant aspects found in the context.
  5. If the context doesn’t contain sufficient information to fully answer the question, state this clearly in your response.
  1. Use clear, concise language.
  2. Organize your answer into paragraphs for readability.
  3. Use bullet points or numbered lists where appropriate to break down complex information.
  4. If relevant, include any headings or subheadings to structure your response.
  5. Ensure proper grammar, punctuation, and spelling throughout your answer.

Important: Base your entire response solely on the information provided in the context. Do not include any external knowledge or assumptions not present in the given text.”””

Because my AI does not generate answers based on a specific topic or field of study, I figured I could extract the data from the PDF file as chunks of information and then store those chunks in my retrieval store. To add more depth to the answer generation, I provided a numbered list of steps in my prompt so that the LLM could build its answer from the previously retrieved chunks of data. To prevent hallucination, I told my LLM not to involve any external sources: “Base your entire response solely on the information provided in the context. Do not include any external knowledge or assumptions not present in the given text.”
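A minimal sketch of this kind of pipeline is below. It is illustrative only, not the actual project code: the function names are made up, the PDF text is assumed to be already extracted, and the simple word-overlap scoring stands in for the embedding-based retrieval a real RAG setup would use.

# Minimal RAG-style sketch (illustrative, not the actual project code).
# A real pipeline would parse the PDFs and retrieve with an embedding model;
# a word-overlap score stands in here so this runs with the standard library.

SYSTEM_PROMPT = (
    "You are an AI assistant tasked with providing detailed answers based "
    "solely on the given context. Do not include any external knowledge or "
    "assumptions not present in the given text."
)

def chunk_text(text: str, chunk_size: int = 500) -> list[str]:
    """Split the extracted PDF text into fixed-size chunks for retrieval."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def relevance(chunk: str, question: str) -> int:
    """Toy relevance score: how many question words appear in the chunk."""
    words = set(chunk.lower().split())
    return sum(1 for w in question.lower().split() if w in words)

def build_prompt(chunks: list[str], question: str, top_k: int = 3) -> str:
    """Retrieve the top-k chunks and assemble the Context/Question prompt."""
    best = sorted(chunks, key=lambda c: relevance(c, question), reverse=True)[:top_k]
    context = "\n\n".join(best)
    return f"{SYSTEM_PROMPT}\n\nContext: {context}\n\nQuestion: {question}"

# Example: pdf_text would come from the imported PDF files.
pdf_text = "..."
prompt = build_prompt(chunk_text(pdf_text), "What are the main findings?")
# `prompt` is then sent to the LLM, which answers only from the retrieved context.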

To ensure that the LLM knows the path of instructions to follow, I added some specific commands so that the generated output would stick to what I wanted:
“””

Use clear, concise language.

Organize your answer into paragraphs for readability.

Use bullet points or numbered lists where appropriate to break down complex information.

If relevant, include any headings or subheadings to structure your response.

Ensure proper grammar, punctuation, and spelling throughout your answer.

“””

This refers to the “Cognitive Verifier” pattern, which is mentioned on page 10 of the White et al. article. It ensures that the LLM will follow some of the specific requirements from the user.

Prompting LLMs – Ama

This week in class, we learned about the concept of prompt engineering, which is basically instructing AI with prompts to obtain a specific outcome or train it. I personally think that with the evolution and common use of AI, the name could be changed to something simpler, like instructional design.

As part of our reading, I learned about things I already do when using AI and how they have pre-existing names, like Fact Checking and Alternative Approach. One technique I haven’t tried as much is Persona because I have a hard time believing that the AI can conceptualize being put in someone else’s shoes. Another technique I find hard to use is Flipped Interaction, where the initial prompt makes the LLM ask you questions to further aid the outcome of the initial prompt.

I think the whole point of Persona is to give the LLM some perspective, to help the AI step out of its virtual world into the real one. For example, today in class, I picked the blog advertisement prompt and asked ChatGPT to write about tampons. The prompt said: “Write an advertisement blog about tampons for people who enjoy using pads more.” It gave me a decent blog with subsections, each explaining a key point about how tampons are better, like mobility, among other benefits. But since I was trying to sell to a specific demographic, I used Persona and gave this new prompt: “Now I am the editor of the blog who also happens to be a woman. Try rewriting the full blog in my voice, as a woman, to convince fellow women to use tampons.”

Below are images of the introductory paragraphs for both prompts.

Before Persona
After Persona

After Persona, the blog came in the form of a story/testimonial about tampons, which is more engaging to read because now other women can relate to it, since it comes from lived experience. In the end, will Persona make LLMs understand humans more? No, but it makes the outcome of prompting so much better.

Links

https://www.dre.vanderbilt.edu/~schmidt/PDF/PLoP-patterns.pdf

https://learnprompting.org/docs/advanced/zero_shot/role_prompting?srsltid=AfmBOoqFo_yPAvOA2EyQGnIedqywHzuAOkLo3ZORjURMLI6PMwBD4zd9