Feeding my Procrastination: Prompting for an Exam

The prompt that I used for today’s class was: “Imagine I am the LLM and you are a college student. How would you request information about effective study methods and schedules for final exams?” The response it gave me was: “I’m a college student preparing for finals in [subject, e.g., biology, calculus, history]. Could you suggest a:

Study schedule template (e.g., 1-week or 2-week plan) with time blocks for review, practice, and breaks.

Active recall or spaced repetition techniques tailored to this subject.

Common pitfalls to avoid (like cramming or passive highlighting).”

Then I used this prompt to craft a study schedule for my Mathematical Computing class exam, feeding the syllabus into the conversation as additional context. I used DeepSeek as the AI tool for this assignment, which also has a chain-of-thought response feature. This prompt would be useful, even crucial, for students who are largely unfamiliar with the class syllabus and want a head start on its content before an upcoming exam. This is an example of the flipped interaction prompt pattern (White et al.) that we looked at last class.
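For anyone who wants to reproduce this, here is a minimal sketch of how the syllabus could be fed into the prompt programmatically, assuming DeepSeek’s OpenAI-compatible chat endpoint. The model name, API key, and syllabus file path are placeholders, not details from my actual session.

```python
# Minimal sketch: feed a syllabus into the study-schedule prompt.
# Assumes DeepSeek's OpenAI-compatible chat endpoint; the model name,
# API key, and syllabus path below are placeholders.
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_KEY", base_url="https://api.deepseek.com")

with open("math_computing_syllabus.txt") as f:  # hypothetical file
    syllabus = f.read()

prompt = (
    "I'm a college student preparing for finals in Mathematical Computing. "
    "Could you suggest a study schedule template (e.g., a 1-week plan) with "
    "time blocks for review, practice, and breaks, plus active recall or "
    "spaced repetition techniques tailored to this subject? "
    "Here is the syllabus:\n\n" + syllabus
)

response = client.chat.completions.create(
    model="deepseek-chat",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```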

Asking for What You Want in AI

In today’s class, I experimented with using Claude AI as a research assistant to locate scientific papers focused on pKa values of PFAS (per- and polyfluoroalkyl substances). While the AI successfully provided me with several relevant academic papers and resources, I discovered that the results weren’t precisely aligned with my specific research needs.

This experience taught me an important lesson about effectively utilizing AI tools for academic research. I realized that the quality of results directly correlates with the quality of my input. When working with AI systems like Claude, having baseline knowledge of your subject matter is essential, as it enables you to formulate precise, targeted queries.

The more specific and technically accurate my questions became, the more relevant the responses were. I found that breaking down complex research questions into smaller, more focused inquiries yielded better results than broad, general requests. This approach allowed the AI to identify and retrieve the most pertinent information from scientific databases and publications.
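A concrete way to apply this lesson is to script the decomposition: instead of one broad request, send a series of narrow ones. Below is a minimal sketch using the Anthropic Python SDK; the model name and the sub-queries are illustrative placeholders, not my exact session.

```python
# Minimal sketch: break one broad research question into focused sub-queries
# and ask each separately. Assumes the anthropic Python SDK; the model name
# and queries are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

sub_queries = [
    "What experimental methods are used to measure pKa values of "
    "perfluorooctanoic acid (PFOA)?",
    "What pKa ranges have been reported for short-chain PFAS carboxylic acids?",
    "What are the main sources of disagreement in published PFAS pKa values?",
]

for query in sub_queries:
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=500,
        messages=[{"role": "user", "content": query}],
    )
    print(f"Q: {query}\nA: {message.content[0].text}\n")
```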

Prompting LLMs Lab

Here is the prompt I used: “Please act as a financial advisor. I would like you to ask me questions to achieve an optimal portfolio allocation. You should ask questions one at a time until you have sufficient information to make your recommendation.” This is an example of the flipped interaction prompt pattern (page 6 of the White et al. article). This type of prompt would be useful for people pursuing a goal they do not exactly know how to reach on their own. I tested this prompt on ChatGPT. Here is the first response I received: “Great! Let’s start with a few key questions to determine the best portfolio allocation for you.

1. What is your primary investment goal?

– Wealth accumulation for long-term growth

– Stable income (e.g., dividends, bonds)

– Capital preservation (avoiding major losses)

– A mix of growth and stability

Your answer will help shape the risk-return balance in your portfolio.” 
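The same flipped interaction can also be run as a short script, with the model asking one question per turn and the user answering until the interview ends. Here is a minimal sketch, assuming the OpenAI Python SDK; the model name and the ten-turn cap are my own placeholders.

```python
# Minimal sketch of the flipped interaction pattern as a multi-turn loop:
# the model interviews the user one question at a time. Assumes the openai
# Python SDK; the model name and turn cap are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{
    "role": "user",
    "content": (
        "Please act as a financial advisor. Ask me questions one at a time "
        "until you have sufficient information to recommend an optimal "
        "portfolio allocation."
    ),
}]

for _ in range(10):  # cap the interview at ten turns
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    ).choices[0].message.content
    print(reply)
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": input("Your answer: ")})
```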

Post 3: Prompting LLMs (due April 3 by midnight)

What is the most helpful prompting tip you’ve learned, either in this class or elsewhere? For full credit, please be sure to include all of the following elements in your post. After all the posts are up, also remember to post two comments on classmates’ posts!

Your own, original voice – no AI blandness, please! Make specific connections to your own experience, discussions we’ve had in class, etc. to bring your post to life for your readers.
The LLM prompt you used — wording, tone, sequence etc. are all important.
Its use cases or applications — in what circumstances, and for what kinds of users, would this prompt be helpful?
The LLM you tested this prompt on, and an example of the output you received (screenshot or cut and paste).
Support and citation—include at least one connection to our course readings (and a correct citation to that reading) to help classmates connect your post to what we’ve read and covered so far in class. 

AI Ethics – Derrick Jones Jr.

Since AI opened to the public, people have been exploiting it to do their work. It’s very common for students to plagiarize with the help of AI. Using AI properly, by learning how to create strong prompts, can benefit students significantly. Since learning how to create strong prompts, my performance in class has gone up. AI helps me summarize notes, spark ideas, and even get helpful feedback on papers.

So, in class, one of our classmates presented Perplexity to us. As I was experimenting with it in class, I realized how strong and beneficial this engine could be. I gave it both a narrow topic and a broad topic to search, and each time it pulled reliable, scholarly, primary sources for me. I started using it to find articles for one of my research assignments, and it turned an hour-long research process into a roughly 20-minute one. I showed my other professor the engine the next morning and he allowed me to use it for research; he found it fascinating too. So, I’m excited to experiment with different types of prompts to see which method works best.

Going into spring break, our professor gave us a group presentation to work on. From the looks of it, all the group ideas are good. I’m excited to see what my group thinks and brings to the table. I think our idea was the best, and I already have my own opinion on it. I believe AI should be implemented into education. But to do that, we have to teach AI ethics and learn how different students use and benefit from it differently. I think it’s a long enough process that I won’t see the rules around AI in education change myself, but it’s something I have high hopes for.

I hope everyone’s break goes well. Stay safe!

AI Ethics

We have all seen the amazing headlines: AI authoring novels, producing art, maybe transforming every sector. But what most shocked me in my recent investigation of artificial intelligence is not what these systems are capable of, but what they essentially cannot do. This realization has entirely changed my perspective on our AI-powered future. Finding that even the most advanced artificial intelligence systems lack true comprehension opened my eyes. Large language models such as GPT-4 are actually executing quite sophisticated pattern recognition rather than thinking like people do, despite their remarkable results. As artificial intelligence researcher Melanie Mitchell argues in her book “Artificial Intelligence: A Guide for Thinking Humans,” these systems lack the conceptual basis people employ to understand the world. Through statistical associations in data, they can mimic cognition, but this generates a persuasive illusion rather than actual knowledge.

As we bring artificial intelligence into vital spheres of life, this distinction becomes quite important. Research from the Stanford Institute for Human-Centered AI exposes alarming instances of “automation bias”: our inclination to trust computer-generated information more than human judgment, even when the computer is wrong. When we use artificial intelligence in financial systems, legal environments, or healthcare without understanding its basic limits, this becomes harmful. The confident, fluent presentation of artificial intelligence outputs hides their possible inaccuracy in new contexts.

The fact that so few individuals acknowledge this truth worries me most. According to a 2023 Pew Research poll, 72% of Americans say AI systems “think” in ways akin to humans, just faster and more effectively. This misunderstanding leaves a dangerous gap between expectations and reality. According to widely cited work on model limits by AI ethics researcher Timnit Gebru, this misinterpretation results in “inappropriate reliance on systems that cannot deliver on their perceived promises.”

This viewpoint has radically transformed my perception of artificial intelligence development. The issue is not whether machines will grow to be superintelligent entities, but rather whether we will grow wise enough to recognize what these tools really are: potent pattern-matching systems that can augment human intellect without substituting for the unique qualities of human knowledge. This distinction is not only intellectual; it is also necessary for developing technologies that genuinely benefit humanity rather than compromise it through misguided trust as we go forward.