Post 6

I think the most important issue in how I plan to integrate AI going forward is making sure it does not replace my thinking. Once it starts replacing my role as a thinker, the line between my learning and the AI's work gets blurred. If I become completely dependent on artificial intelligence to perform tasks such as writing, problem solving, or analyzing, I will fail to develop the skills needed to understand how to learn and apply knowledge.

I see this clearly in my own academic work. For example, in a statistics assignment, I could use AI to generate a full solution and get the correct answer quickly. Obviously that does not mean I understand the concept or could do it again on my own. When I solve the problem myself and use AI to check my answer or explain where I went wrong, I am still going through the learning process. The difference is whether AI is replacing the work or supporting it.

It matters because AI is going to become part of education and future jobs, so not using it is not a good or smart option. If people rely on it too much, they risk losing core skills and becoming dependent on it. My solution is controlled use: use AI for feedback, clarification, and efficiency, but not as a substitute for learning or decision-making.

Blog post 6

c) How do you plan to integrate AI in your life going forward – whether personally or professionally? Do you feel you have a choice here, for example is deciding not to use AI an option?

I plan to continue using AI in my life going forward in a big way. I plan to become a teacher, and during that time I plan to have it help me make rubrics, project ideas, etc. I feel I do have a choice about whether or not to use AI, but I do think it could help and benefit me in a big way.

The case study I will discuss is from last week: AI in academics. The article I really liked was Dinsmore and Fryer's "What does GenAI mean for student learning?". I think this article brings up multiple good points on whether AI is good to use in school/academics. This article would and will be great since I want to teach, and it could help me and my students decide whether or not to use AI.

We should be paying close attention to this topic in academics because as time passes and AI evolves, it will only become easier for kids to pass courses and classes. It is getting harder and harder for some teachers to detect whether a student is using or has used AI on an assignment.

Post 6: What’s next?

a) If you could sit down with the CEO of one of the major LLM platforms (e.g. OpenAI, Anthropic, etc), what would you like them to know about college students’ perspectives on AI – whether about how students are using LLMs, their ethical concerns, fears about future job prospects, or whatever else?

I would let the CEO know that college students understand the benefit AI can provide; however, the way schools are still set up hinders their ability to get the most out of AI to aid their work. Despite the lack of a framework for ethical AI use, students can use LLMs to organize their ideas, generate tables or graphs from data they have already gathered, and receive feedback on their essays or projects. Their ethical concerns arise when schools stand firm on their classic statement, "AI use of any kind is prohibited in coursework," and tie any use of AI directly to violating academic integrity. For example, last year I remember going into all my spring semester classes on the first day in January and seeing the same statement on AI use, as if it had been copied and pasted onto every course syllabus. Additionally, there is general fear among students that AI will take over the job market and that the available jobs will be too limited to employ the population, which is a very scary thought. I would really like to ask the CEO what he thinks about job security and availability as AI continues to grow. Goldman Sachs added to the conversation: "Despite concerns about widespread job losses, AI adoption is expected to have only a modest and relatively temporary impact on employment levels" (Goldman Sachs, 2025). This is mildly calming, but the overarching question and fear remain.

Overall, I think the important question remains how AI will perform in the job market and affect job availability, potentially leaving qualified people unemployed because they are not machines and cannot think as fast as machines. Hopefully, as students engage with AI and learn to use it ethically and effectively, we may reach a human-plus-AI era of society, where both benefit each other and can coexist and function well.

Reference: How will AI affect the global workforce? (2025, August 13). Goldman Sachs. https://www.goldmansachs.com/insights/articles/how-will-ai-affect-the-global-workforce

Post 6: Using AI Professionally or Personally

I plan to use AI going forward in my future, and I don't think I have much of a choice over this. I've always hoped for a career revolving around sports analytics or sports journalism/broadcasting. In those fields, AI is used to facilitate things like percentages, box scores, charts, and other stats. Some people would not be okay with this, but I think it is very cool and shows ways that sports features can be highlighted in different formats for the better. I believe it brings an exciting new dimension to the world of sports. In 2024, the International Olympic Committee (IOC) became the first organization to embed artificial intelligence into sports. The IOC believed AI tools could find prospective talented athletes while implementing proper training programs for them, and that artificial intelligence could assist the judging process to make the Olympic Games fairer. IOC President Thomas Bach said, "We are determined to exploit the vast potential of AI in a responsible way." This highlights the organization's effort to use AI in an ethical manner. We should be paying close attention to a topic like this because nearly 82 percent of sports leagues and companies now use artificial intelligence. That represents extreme growth, and these organizations have done a tremendous job using artificial intelligence in a responsible and ethical manner. In my opinion, this percentage is only going to continue to grow. This is exciting, fun, and creative, and it uniquely shows the ways AI can benefit jobs in sports media or analytics. The good news is that AI is not going to take over these jobs, and many employees are not at risk of losing them during this shift. So I conclude that AI is very beneficial to the sports world of analytics and data. It will be exciting to see how far it goes.

“IOC Unveils Plans For Using Artificial Intelligence In Sports” – https://www.espn.com/olympics/story/_/id/39973988/olympic-organizers-unveil-plans-using-artificial-intelligence

“Sports industry AI adoption rises to 82% amid tangible financial and sporting results” – https://www.sportspro.com/news/sports-tech-report-ai-sportradar-february-2026/

What People Get Wrong About AI: The Hallucination Problem

One of the most important things the general public doesn’t fully understand about generative AI is that it can sound confident while still being wrong. This issue—often called “hallucination”—happens when AI generates information that seems accurate but is actually misleading, outdated, or completely incorrect. The problem isn’t just errors—it’s that the errors are delivered in a way that makes them hard to notice.

A clear example comes from a sports prediction exercise I recently worked on. The AI generated a set of betting picks based on star players like Stephen Curry and Joel Embiid, using reasoning like “high scoring role” or “exceeds this line frequently.” At first glance, everything sounded logical. But when comparing the picks to actual results, none of them hit. One player didn’t even play that day, making the prediction unusable. There were no completely fake stats, but the AI relied on vague trends instead of verified, up-to-date data. That’s what makes hallucinations tricky—they’re not always obvious lies, but often confident generalizations that don’t hold up in reality.

We should be paying attention to this because more people are starting to rely on AI for decisions—whether in school, work, or even money-related choices like betting or investing. If users assume AI is always accurate, they can make poor decisions based on incomplete or misleading information. My solution is simple: AI should be treated as a starting point, not a final answer. Users need to verify important information, and AI systems should be pushed to show sources, use real data, and clearly signal uncertainty. Understanding this limitation isn’t about rejecting AI—it’s about using it more responsibly and effectively.

Post 6

AI is super, super useful, probably beyond the level of the invention of Google. It's fast, easy to use, even easy to read: it summarizes everything for you. It's like an addiction; once you start using it, it's hard to stop. That's not necessarily a bad thing, and I personally use AI every day, even for my own study process (ethically). But at the same time, problems occur. The single most important generative AI issue the public needs to understand is that AIs can "invent" facts, citations, people, or events and deliver them alongside real information. The issue is not that AI makes mistakes, but that it makes them confidently, often mixed in with actual sources, which makes them difficult to distinguish.

Once, my mom brought me information generated by AI which said that she doesn't need a passport to get on a plane, but that with abcde they will still allow you to board. That is absolutely not possible, and she even encouraged me to try it out on May 13th when I get back home.

We really need AI in schools and workplaces to be trained to produce outputs such as drafts requiring verification, not answers requiring trust. We have to pay more attention because hallucinations are not avoidable. We also need to realize that there is a clear line between "sounds right" and "is right".