As AI has become more common in schools and workplaces, it has raised a number of difficult ethical issues. As Johnson and Smith put it, "The challenge isn't whether to use AI tools, but how to use them in ways that enhance rather than diminish human capability and agency."

When deciding how to use AI responsibly, we need to think about both the purpose and the outcome. Are we using AI to skip learning, or to help us understand better? The gray areas appear when AI takes over tasks that normally build foundational skills, such as writing, problem-solving, and critical thinking.

While AI is useful, it cannot take the place of human judgment and experience. Policies that work must be clear and flexible. We don't need strict rules that forbid everything; instead, we need standards that account for AI's potential while still protecting the integrity of education.

Accessibility should also be at the center of policymaking. AI tools can level the playing field for students with disabilities by giving them ways to show what they know that might otherwise be unavailable to them.

The workers of the future will need to know how to use AI and to have skills that AI cannot replicate. How ready our kids are for that reality tomorrow will depend on the rules we set for schools today.
Creative AI
Today I tested the creative abilities of both ChatGPT and Claude, and I noticed some interesting patterns. At first, I asked each for a poem without giving any details about the title or subject. Both AIs wrote abstract poems that were hard to follow, full of metaphors and philosophical musings that, while technically well crafted, didn't make me feel anything.
But when I gave them more specific information, asking for poems about a 50-year-old Japanese father, they immediately produced simple poems that anyone could understand. The imagery became concrete: views of mountains, cherry blossoms, and quiet moments between a father and his child replaced the vague abstractions they had used before.
This experience backs up what I said last week: if you want accurate, useful responses from AI systems, you need to be very clear about what you want. The more information you give, the closer the result will be to what you intended. This holds true not only for creative writing, but also for finding information, writing code, and solving problems.
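To make the comparison concrete, here is a minimal sketch of the two requests as API calls. It assumes the `anthropic` Python SDK, and the model name is a placeholder; my actual test was done through the chat interfaces.

```python
# A sketch of the vague-vs-specific prompt comparison, assuming the
# `anthropic` Python SDK. The model name is a placeholder.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask_for_poem(prompt: str) -> str:
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Vague request: tends to yield abstract, metaphor-heavy verse.
vague_poem = ask_for_poem("Write a poem.")

# Specific request: grounds the poem in concrete imagery.
specific_poem = ask_for_poem(
    "Write a short, plainspoken poem about a 50-year-old Japanese father, "
    "using concrete images such as mountain views, cherry blossoms, and "
    "quiet moments with his child."
)

print(vague_poem, specific_poem, sep="\n\n---\n\n")
```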
I also noticed signs of bias in these systems. Every name they invented for their characters sounded Western or white, a pattern I found interesting and worth investigating further. Even though both systems are designed to reflect diverse perspectives, this subtle bias suggests imbalances in their training data that carry over into the creative work they produce.
This comparison shows both the strengths and the weaknesses of current AI language models. They follow specific instructions with ease, but open-ended creativity is harder for them. They can produce content that sounds remarkably human, yet it still reflects patterns drawn from their training data rather than genuine understanding or cultural awareness.
The lesson for anyone who wants to get the most out of these AI systems is clear: precise prompts lead to precise results. Vague requests get vague answers, but thoughtful, detailed prompts let these systems perform at their best.
Asking AI for What You Want
In today’s class, I experimented with using Claude AI as a research assistant to locate scientific papers focused on pKa values of PFAS (per- and polyfluoroalkyl substances). While the AI successfully provided me with several relevant academic papers and resources, I discovered that the results weren’t precisely aligned with my specific research needs.
This experience taught me an important lesson about effectively utilizing AI tools for academic research. I realized that the quality of results directly correlates with the quality of my input. When working with AI systems like Claude, having baseline knowledge of your subject matter is essential, as it enables you to formulate precise, targeted queries.
The more specific and technically accurate my questions became, the more relevant the responses were. I found that breaking down complex research questions into smaller, more focused inquiries yielded better results than broad, general requests. This approach allowed the AI to identify and retrieve the most pertinent information from scientific databases and publications.
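As an illustration, here is a minimal sketch of that decomposition step, assuming the `anthropic` Python SDK; the sub-questions and model name are illustrative placeholders, not the exact queries I used in class.

```python
# A sketch of splitting one broad research question into focused queries,
# assuming the `anthropic` Python SDK. The sub-questions and model name
# are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()

# Instead of one broad request ("find papers on PFAS pKa values"),
# pose narrower, technically specific questions.
sub_questions = [
    "Which experimental methods are used to measure pKa values of "
    "perfluoroalkyl carboxylic acids such as PFOA?",
    "What computational approaches have been applied to estimate "
    "pKa values of PFAS?",
    "What pKa ranges have been reported for perfluorooctanesulfonic "
    "acid (PFOS) in the peer-reviewed literature?",
]

for question in sub_questions:
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=700,
        messages=[{"role": "user", "content": question}],
    )
    print(question, response.content[0].text, sep="\n", end="\n\n")
```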
AI Ethics
We have all seen the breathless headlines: AI authoring novels, producing art, perhaps transforming every sector. But what most surprised me in my recent investigation of artificial intelligence is not what these systems can do; it is what they fundamentally cannot do. That realization has completely changed my perspective on our AI-powered future.

Discovering that even the most advanced AI systems lack true comprehension opened my eyes. Despite their remarkable output, large language models such as GPT-4 are performing highly sophisticated pattern recognition rather than thinking the way people do. As the AI researcher Melanie Mitchell argues in her book "Artificial Intelligence: A Guide for Thinking Humans," these systems lack the conceptual foundations people use to make sense of the world. Through statistical associations in data they can mimic cognition, but this produces a persuasive illusion rather than actual understanding.

This distinction matters enormously as we bring AI into critical areas of life. Research from the Stanford Institute for Human-Centered AI documents alarming cases of "automation bias": our tendency to trust computer-generated information over human judgment, even when the computer is wrong. Deploying AI in financial systems, legal settings, or healthcare without understanding its basic limits is dangerous. The confident, fluent presentation of AI outputs conceals how unreliable they can be in unfamiliar contexts.
What worries me most is how few people recognize this. According to a 2023 Pew Research poll, 72% of Americans say AI systems "think" in ways similar to humans, just faster and more efficiently. That misconception leaves a dangerous gap between expectations and reality. As the AI ethics researcher Timnit Gebru observes in her widely cited work on model limitations, this misunderstanding leads to "inappropriate reliance on systems that cannot deliver on their perceived promises."

This perspective has radically changed how I view AI development. The question is not whether machines will become superintelligent beings, but whether we will become wise enough to recognize what these tools really are: powerful pattern-matching systems that can augment human intellect without replacing the distinctive qualities of human understanding. That distinction is not merely academic; it is essential if we are to build technology that genuinely benefits humanity rather than undermines it through misplaced trust.
Introductions
Hi, I’m Yoshi Otani—feel free to call me Yoshi. I’m originally from Japan but was born in San Diego, CA. I’m a senior chemistry major and plan to pursue an MBA after college. In my free time, I enjoy playing basketball and going to the gym.
I think AI is a powerful tool with valuable applications in academics and beyond. However, I believe AI is currently overhyped, with many companies pushing it into uses where it isn't really needed.
