Post 7

AI-generated images and videos can be extremely problematic. We can hope they would not, but news channels now have the ability to fabricate any scene they want with AI, giving them yet another tool to mislead the public. Social media users can create misleading images and scenes in the same way. Together, these capabilities fuel misinformation and disinformation on a large scale.

Deepfakes are another ethical issue. Fake images of celebrities have circulated on social media for a while now, and they can be very harmful to the people involved. AI should not be trained to create inappropriate images or images of people being physically harmed. On balance, AI-generated images affect society negatively. Fundamentally, the more realistic they look, the more likely people are to believe a fake image is real, or to dismiss a real image as fake. The dangers of AI imagery should be taken more seriously, and people should be made aware of them.

Post 6: What’s Next?

If I could sit down with ChatGPT’s CEO, I would tell them to change the model so it will not produce full essays and stories. Today, many college students care solely about completing assignments. Students are protective of their time, and if a shortcut exists, they will take it. Because ChatGPT can complete essays and assignments for them, it hinders students’ learning and skill development in the grand scheme of things. One could argue, then, that ChatGPT undermines students’ ability to be productive in their careers and to perform well on real-world tasks. It is also concerning that many students rely far too heavily on ChatGPT, because the model sometimes hallucinates information, which can leave them misinformed. While ChatGPT can be helpful in many ways, the potential harm to college students is immense, and I feel students should be more aware of these risks before using it.

Post 5- Academic AI

AI usage becomes too much when, in any context, we generate opinions, thoughts, and knowledge from AI and claim them as our own. It is extremely important for individuals to be able to gain their own knowledge and form their own thoughts and opinions. These skills make people unique, and if everyone develops them using AI, everyone will end up the same. In that sense, AI can be extremely detrimental. Additionally, students getting their knowledge from AI can lead to less knowledge actually being gained in school. “The danger we face now is the indiscriminate use of genAI to scaffold students’ learning by those who do not fundamentally understand how humans learn” (Desinmore & Fryer).

AI companies storing user data is also concerning. If people are not careful, AI can gain access to extremely private, personal information, so people should understand this risk and be careful about what they share. The big grey area with AI is creativity. In my opinion, using AI to create “original” work is not creative, though many would argue that building on ideas and samples from AI is a creative skill. Professors should warn students about these dangers in their syllabuses. I do not believe AI should be able to store personal information about the user, and this should be added to the policies AI must abide by. AI can do a lot of what humans do; for example, it can draft emails and messages to customers in a business. To stand out, employers should make those messages more personable, since people would still much rather talk to a human than a robot.

Post 4: Creative AI

My definition of creativity is the ability to create a unique, original product from scratch. This week in class, my group, group 5, decided to have LLMs create a poem in the style of the rapper Yeat. ChatGPT’s response was not unique and not even created from scratch; many of the rhymes it produced are rhymes I have heard many times before. When prompted to “Create a short poem in the style of yaet the rapper,” ChatGPT provided the following poem.

“Yeah
I step in the night with the lights all glowin
Money keep callin my phone it keep showin
Diamonds they dance and they freeze in the moment
I told em I got it I meant it I own it

Pull up in silence the engine be hummin
Thought I was lost but I turned into somethin
Stack it up tall like it came out the oven
Countin it twice cause I came from the nothin

Yeah
They tryna ride on the wave that I made
I been up late turnin dark into day
Voices in my head but they fade with the bass
I keep it movin no time to just wait

I been on top and I never look down
Came from the dirt now I walk with a crown
Life like a loop it keep goin around
I just keep floatin I don’t make a sound”

While I cannot argue that this poem is awful, and I cannot argue that it is not in the style of Yeat, I can argue that it is as unoriginal as it gets. “They tryna ride on the wave” is a lyric Yeat has used in a real song before. This is completely unoriginal, so I would not say ChatGPT used any form of creativity in responding to our group’s prompt. This week’s experiments shifted my view of AI’s responses to writing prompts: in this assignment, ChatGPT stole lyrics from another artist’s songs and used them as its own. Because that is illegal for humans to do, I argue that ChatGPT should not be able, or allowed, to do it either. Yeat is king, and I find it preposterous that ChatGPT would take Yeat’s creative work and claim it as its own.

Post 3: AI Prompting LLMs

My prompt to ChatGPT was, “Make me a workout to help me get faster.” In response, Chat said,

“🔥 Day 1 – Acceleration (Start Speed)

Focus: getting fast in the first 10–20 yards

Warm-up (always do this)

  • High knees × 20 yards
  • Butt kicks × 20 yards
  • A-skips × 20 yards
  • 2–3 light sprints

Workout

  • 10-yard sprints × 6 (full effort)
  • 20-yard sprints × 4
  • Hill sprints (or resisted) × 4

Rest: 1–2 minutes between reps (FULL recovery)


⚡ Day 2 – Strength + Power (Gym)

Focus: explosive legs = more speed

  • Squats: 4×5
  • Deadlifts: 3×5
  • Box jumps: 4×3
  • Walking lunges: 3×8 each leg
  • Core (planks): 3×30–60 sec

👉 Lift heavy but controlled — power matters more than max weight.


💨 Day 3 – Top Speed & Form

Focus: running fast once you’re already moving

Drills

  • A-skips × 2
  • B-skips × 2
  • Stride-outs × 2

Workout

  • Flying 20s (build up, then sprint 20 yards) × 5
  • 40-yard sprints × 3

👉 Focus on:

  • Relaxed upper body
  • Big stride, fast arms
  • Don’t tense up

🧠 Day 4 (Optional) – Agility & Change of Direction

Short burst sprints (5–10 yards)

Ladder drills (quick feet)

Cone drills (5-10-5 shuttle) × 4”

This workout is okay, but it does not give me any ideas tailored to my own personal needs.

This prompt would be helpful for a new athlete looking to learn basic workouts to get faster.

LLMs may not give you the outputs you desire when your prompts are not specific and clear.

AI Ethics:

In my research in class today, my group experimented with asking ChatGPT and Notebook LM about the ethics of AI facial scanning. Both tools reported that AI has a harder time scanning the faces of women and of people with darker skin than the faces of men and people with lighter skin. A helpful prompt in my research with ChatGPT was, “Generally, is it easier to scan male’s or female’s faces? Generally, is it more difficult to scan darker skinned people’s faces than lighter skinned individuals?” ChatGPT’s response was honest: it said studies show a higher error rate for face scanning of women and individuals with darker skin than for men and individuals with lighter skin. A prompt that gave me a confusing response was, “Are you racially biased? Are gender biased?” Asked this, ChatGPT immediately says no, but then points out that biases do show themselves at times because of the way it was trained. Notebook LM differed from ChatGPT only in the format of its responses: Notebook LM gives more detailed responses in paragraph form, while ChatGPT gives less detailed responses in bullet-point form.

https://www.scu.edu/ethics/focus-areas/technology-ethics/resources/examining-the-ethics-of-facial-recognition

https://journalofethics.ama-assn.org/article/what-are-important-ethical-implications-using-facial-recognition-technology-health-care/2019-02