Post 6: What’s Next

Going forward, I plan to use AI as a tool that improves my effort instead of replacing it. The main problem is that AI makes everything easier, but if you rely on it too much, you can lose your ability to think and actually learn. For example, if I use AI to write my entire paper, I might get it done faster, but I won't really understand the topic or be able to explain it fully. A better way to use it is to do my own work first, then use it to fix certain things or suggest improvements based on my work. That way I'm still doing the thinking and learning, and AI is just helping me. This matters because AI is becoming part of school, jobs, and everyday life, so deciding not to use it isn't a realistic option anymore; the real focus should be on how we use it. My solution is to always do the work myself first and use AI as a tool for feedback.

Academic AI

I think it is too much when AI does the thinking for you. If AI is summarizing everything, explaining everything, and basically doing your work for you, you are not actually learning. Dinsmore and Fryer argue that learning depends on building knowledge step by step and practicing those skills, and there are no real shortcuts to that. So once AI replaces that process, it is hurting more than helping.

My concern is that every time we use AI, we put in our ideas, questions, and sometimes personal information. That data can be stored or used to improve the model. Even if it is helpful now, in the long term it can cause privacy issues, especially if students don't know what is being saved.

The gray area is when AI is helping but not fully taking over. Using AI to clean up your writing or check whether an idea makes sense is fine. Using it to come up with the actual idea itself, or to do the whole assignment, is different. It really comes down to whether the work is actually done by you or not.

Colleges shouldn't ban it, but they need to be clear about what is okay, most likely by putting it in the syllabus or course policies. It is about making sure students know they can use AI for brainstorming or for feedback on their work, but not for the whole assignment. That way people can still learn and use it the right way.

I think AI is going to be part of almost every job. So instead of relying on it, I need to learn how to use it the right way while still building my own knowledge. I also think a good way to keep it in check would be to have students show their work on assignments.

Dinsmore, D. L., & Fryer, L. K. (2026). What does genAI mean for student learning? Learning and Individual Differences.


Post 3: Prompting LLMs

The most helpful prompting strategy I’ve learned is being clear about exactly what I want the output to look like. Not just the topic, but the format, timeline, and specific instructions.

The prompt I used was: “make a daily schedule to help a person gain 10 pounds over 4 weeks no symbols just words” and I tested this on ChatGPT (GPT-5.3).

Here is part of the output I got:

8:00 AM Breakfast
4 eggs
2 slices of toast with peanut butter
1 banana
1 cup of milk

10:30 AM Snack
Protein shake
Handful of nuts or granola

What matters here is how the instructions shape the result. Saying “over 4 weeks” makes it structured and realistic instead of random. Saying “no symbols just words” makes it clean and usable for notes or assignments. The response follows exactly what I asked for.

This strategy works because AI is not actually thinking; it is just predicting language. As one article explains, these systems "can interact with us through natural language, and we can't distinguish the real from the fake" (The New York Times, "What Exactly Are the Dangers Posed by A.I.?", 2023). That means the clearer your input is, the better the output will be.

I think that this is useful for students who want organized notes, athletes who need a structured plan, and really anyone who wants direct answers without anything extra. It saves time and makes the output easier to actually use.

Overall, what I’ve learned is that prompting is about control. If you control the format and details, you control the output.
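The "control" idea above can be shown with a small sketch. This is just an illustration, not how ChatGPT works internally: build_prompt is a hypothetical helper I made up to show that the same topic plus explicit timeline and format rules produces a more specific prompt than the topic alone.

```python
# Hypothetical helper to illustrate prompt control: the topic stays the
# same, but timeline and format rules are added as explicit instructions.
def build_prompt(topic, timeline=None, format_rules=None):
    """Combine a topic with optional timeline and formatting instructions."""
    parts = [topic]
    if timeline:
        parts.append(f"over {timeline}")  # e.g. "over 4 weeks"
    if format_rules:
        parts.extend(format_rules)        # e.g. "no symbols just words"
    return " ".join(parts)

# A vague prompt: topic only, no constraints on structure or format.
vague = build_prompt("make a daily schedule to help a person gain 10 pounds")

# A controlled prompt: same topic, plus the constraints from this post.
specific = build_prompt(
    "make a daily schedule to help a person gain 10 pounds",
    timeline="4 weeks",
    format_rules=["no symbols just words"],
)
```

The difference between the two strings is exactly the difference between a random answer and a structured one: the constraints travel inside the prompt itself.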

The New York Times. “What Exactly Are the Dangers Posed by A.I.?” (2023).

Post 2: AI Ethics

AI feels like something that is just helping us, but the more I learn, the more I realize it can actually cause real problems if we are not paying attention. The concern I focused on is bias in AI, like how it can treat people differently based on race or gender.

The most surprising thing I learned was that facial recognition systems are a lot less accurate for darker-skinned women compared to lighter-skinned men. That is a mistake that can affect real-life situations like criminal investigations, job hiring, or security decisions. It made me realize that AI isn't neutral; it's trained on data created by humans, and if that data has any bias, the AI will reflect it.

Another thing is how important prompting is. If you ask a vague question, you get back a basic answer, but if you are specific and ask for multiple perspectives, the responses are a lot better. I wouldn't fully rely on AI for anything really important without checking it first, because it can still sound confident even when it is wrong.

Overall, AI is useful and powerful, but it is not perfect. If we don’t question it, we could end up trusting systems that reinforce the same problems that we are trying to fix.

Damion Cunningham

Hello, I am Damion Cunningham. I am a sophomore from Cleveland, Ohio, and I play football here at Wooster.

I think AI helps us with hard tasks in a good and fast way. Beyond that, it isn't fully reliable because it is programmed by humans.