Extra Credit Post

What ethical dilemmas might arise from the use of AI-generated images or videos, and how can these dilemmas impact society as a whole?

Some ethical dilemmas that may arise from the use of AI-generated images or videos include AI taking bits and pieces of pre-existing images and videos from authors and creators without crediting them. A critical area within this realm that frightens me is the “deep-faking” that users can generate with this new technology. It is particularly worrying that something so false can feel so real, even though the depicted action never occurred. We must stay alert and ensure that generated images and videos undergo some form of verification to confirm they are AI-generated, not real. These dilemmas can directly impact society, as images and videos can be generated to depict actions that never happened but appear authentic. Furthermore, failing to give credit to the original authors is a direct copyright and plagiarism issue, which could lead individuals into trouble in academic and professional settings. “A model trained on millions of images can easily reproduce a creator’s unique style, without giving credit or asking permission” (Deambrosi, 2025). This directly addresses ethical concerns about creators not receiving proper credit for work that is used and learned from in large quantities.

Reference: Deambrosi, A. (2025). 8 ethical concerns raised by AI images and video. Imgix. https://www.imgix.com/blog/8-ethical-concerns-raised-by-ai-images-and-video

Post 6: What’s next?

a) If you could sit down with the CEO of one of the major LLM platforms (e.g. OpenAI, Anthropic, etc), what would you like them to know about college students’ perspectives on AI – whether about how students are using LLMs, their ethical concerns, fears about future job prospects, or whatever else?

I would let the CEO know that college students understand the benefits AI can provide; however, the way schools are still set up hinders their ability to get the most out of AI in their work. Despite the lack of a framework for ethical AI use, students can use LLMs to organize their ideas, generate tables or graphs from data they have already gathered, and receive feedback on their essays or projects. Their ethical concerns arise when schools stand firm on the classic statement, “AI use of any kind is prohibited in coursework,” and equate any use of AI with violating academic integrity. For example, last January I remember walking into all my spring semester classes on the first day and seeing the same statement on AI use, as if it had been copied and pasted onto every course syllabus. Additionally, there is a general fear among students that AI will take over the job market, leaving too few jobs to employ the population, which is a very scary thought. I would really like to ask the CEO what they think about job security and availability as AI continues to grow. Goldman Sachs added to the conversation: “Despite concerns about widespread job losses, AI adoption is expected to have only a modest and relatively temporary impact on employment levels” (Goldman Sachs, 2025). This is mildly calming, but the overarching question and fear remain.

Overall, I think the important question remains how AI will perform in the job market and affect job availability, potentially leaving qualified people unemployed simply because they cannot process information as fast as machines. Hopefully, as students engage with AI and learn to use it ethically and effectively, we can move toward a human-plus-AI era of society, where both benefit each other and co-exist productively.

Reference: How will AI affect the global workforce? (2025, August 13). Goldman Sachs. https://www.goldmansachs.com/insights/articles/how-will-ai-affect-the-global-workforce

Prompt 5 – Academic AI

When it comes to AI use in the classroom, in your professional lives, how do we determine how much is too much?

Regarding the use of AI in the classroom, LLMs should serve as tools that facilitate learning, such as helping generate ideas, helping find relevant sources (checked by you), and summarizing material. What AI should not be doing is acting as your primary learning source or as your teacher. “Expertise provides the clearest use case for automating less complex tasks for the expert (such as summarizing) since the expert should possess the requisite knowledge to spot inconsistencies in a genAI summary” (Dinsmore & Fryer, 2026). It becomes too much when generative AI is used as a primary learning tool by kids who haven’t fully developed their learning and cognitive processes. Additionally, in the professional realm, some jobs require the use of generative AI for tasks such as video, image, and text generation. Using AI to facilitate work is generally fine; employees just need to be aware of how to use these tools well. It is too much when job requirements are completed strictly through AI, treating it as the expert source of learning and knowledge.

Reference: Dinsmore, D. L., & Fryer, L. K. (2026). What does current genAI actually mean for student learning? Learning and Individual Differences, 125, 102834. https://doi.org/10.1016/j.lindif.2025.102834

Prompt 4 – Creative AI

I do not think that the AI model I used, specifically Copilot, demonstrated creativity. Bhatia defines creativity as “something that demonstrates both novelty and usefulness” (Bhatia, 2025). This definition ties creativity to novelty and usefulness, but it lacks the full palette that creativity can show through human experiences, thoughts, and ideas. In class on Thursday, the AI merged our definitions together and more or less described creativity as exactly that, something that demonstrates novelty and usefulness, without capturing what the concept truly encapsulates. This relates to Tuesday’s reading, which discussed Bhatia’s definition and AI’s lack of the human capacity for synthesis.

Regarding how AI did not appear as creative, I asked Copilot with a prompt of: “Please construct a poem in the style of Shel Silverstein regarding the theme of compassion, with a four-stanza maximum.” It responded with this:

In the Pocket of My Pocket

I found a little kindness
In the pocket of my coat—
It must’ve slipped in quietly
Like a whisper on a note.

I handed it to someone
Who looked chilly in the rain,
And suddenly the cloudy world
Felt warm and bright again.

They passed it to another
Who had dropped their heavy day,
And soon that tiny kindness
Grew too big to tuck away.

So now I check my pockets
Every morning, just to see—
’Cause compassion has a funny way
Of finding its way back to me.

The first thing that jumped out at me was the poem’s unexciting and boring nature. I expected the LLM to generate a Silverstein-like poem that would be difficult to distinguish from a real one, but this was not the case. The hallmark of Shel Silverstein is that his poems are stories you can really visualize and take values from. This AI poem almost forces a sense of compassion on you without any buildup or personalization. I do not believe this output was creative at all. The word choice and rhyming were in the realm of Silverstein, but the lack of storytelling and personalization instantly marked this poem as artificial intelligence. Bhatia offers: “This is the aspect of creativity that AI lacks. It doesn’t feel tension. It doesn’t labor through ambiguity. It doesn’t explore. Where AI can offer 10 answers to a problem instantly, humans may wrestle for years to arrive at one — but it could be the only one that matters” (Bhatia, 2025). This AI-constructed poem perfectly illustrates that lack of tension: it operates without any of the internal struggle that every human works through when pouring their relationships, thoughts, and experiences into their own creativity.

My definition of creativity would be the ability to generate original ideas through experiences, personalization, thoughts, and feelings, ideas that can act as problem-solving tools as well as works of art. This definition connects with Bhatia in the sense that AI lacks the human capacity for synthesis that truly encapsulates creativity, but it adds a focus on creativity as a pivotal tool in both problem-solving and artistic work.

The experiments from this week made me think a lot about how creativity should be defined. The term is very ambiguous and personal, but because it is personal, it showed how AI lacks the true nature of creativity and is, at this moment, incapable of the human synthesis of relationships, experiences, and thoughts that contributes so heavily to what creativity is and how it is generated. These experiments allowed me to examine my own writing process and how my own creativity is sparked, which I’ve found often comes from previous experiences that I can draw on both as problem-solving tools and for artistic value. AI would be useful for topic brainstorming or for turning my information into a chart or other organized output, but relying on AI to showcase creativity is not something I would recommend.

Reference: Bhatia, A. (2025, April 7). The artifact isn’t the art: Rethinking creativity in the age of AI. Freethink. https://www.freethink.com/opinion/studio-ghibli-chatgpt-creativity?utm_source=tldrai

Prompting LLMs

The LLM I used for Tuesday’s lab was Microsoft Copilot. The most intriguing prompt was the second phase of the simple “make a workout plan for me” prompt: “Make a workout plan for me. I’m available for 2 hours starting at 4 pm and would like a 5-day plan. Please act as a strength coach.” Based on my own experience and what we’ve discussed in class, updating the original, basic prompt with prompt patterns (persona, audience, format, constraints, flipped interaction, and iteration) made the output increasingly tailored to the specifics of the prompt. Here, I added persona by requesting that Copilot act as a strength coach, format by identifying that I wanted a 5-day plan, and iteration by revising the first prompt to tighten the plan. This prompt is particularly useful for people who have a gap in their schedule to work out and who want to be pushed the way a strength coach would push them, especially with a 5-day plan at two hours a day. A further alteration would be describing your age range and athletic abilities to tailor the regimen even more.
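The layering of prompt patterns described above can be sketched as a small helper that assembles a prompt from optional parts. This is just an illustration of the idea; the function and parameter names are my own invention, not part of any LLM platform:

```python
def build_prompt(task, persona=None, fmt=None, constraints=None):
    """Assemble a prompt from optional pattern layers.

    Each keyword argument corresponds to one prompt pattern:
    persona ("act as a strength coach"), constraints ("2 hours
    starting at 4 pm"), and fmt ("a 5-day plan").
    """
    parts = []
    if persona:
        parts.append(f"Please act as {persona}.")
    parts.append(task)
    if constraints:
        parts.append(f"Constraints: {constraints}.")
    if fmt:
        parts.append(f"Format the answer as {fmt}.")
    return " ".join(parts)

# The layered version of the basic "make a workout plan" prompt:
prompt = build_prompt(
    "Make a workout plan for me.",
    persona="a strength coach",
    constraints="I'm available for 2 hours starting at 4 pm",
    fmt="a 5-day plan",
)
print(prompt)
```

Iteration would then amount to calling the helper again with a tightened task or an extra constraint (age range, athletic ability) and comparing outputs.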

Again, I used Microsoft Copilot for this prompt:

Here’s a clean, structured 5‑day workout plan built around your availability: 2 hours starting at 4 pm, with a smart mix of strength, conditioning, and mobility so you’re not burning out but still progressing fast.

📅 Day 1 — Upper Body Strength (Push Focus)

Warm‑Up (10 min)

  • Light band work
  • Shoulder mobility
  • Push‑ups (2×10)

Main Lifts (60 min)

  • Bench Press — 5×5
  • Overhead Press — 4×6
  • Incline Dumbbell Press — 4×8
  • Dips — 3×10–12

Accessories (30 min)

  • Triceps rope pushdowns — 4×12–15
  • Lateral raises — 4×15
  • Core: Planks 3×1 min

Cool‑Down (10 min)

Chest/shoulder stretching

📅 Day 2 — Lower Body Strength (Squat Focus)

Warm‑Up (10 min)

  • Hip mobility
  • Light goblet squats

Main Lifts (60 min)

  • Back Squat — 5×5
  • Romanian Deadlift — 4×8
  • Leg Press — 4×10
  • Walking Lunges — 3×20 steps

Accessories (30 min)

  • Hamstring curls — 4×12
  • Calf raises — 4×15–20
  • Core: Hanging leg raises — 3×10

Cool‑Down (10 min)

Lower‑body stretching

📅 Day 3 — Conditioning + Core

Warm‑Up (10 min)

Light cardio + dynamic stretching

Conditioning Block (60 min)

Choose one:

  • Rowing intervals (10×2 min on / 1 min off)
  • Treadmill incline intervals
  • Circuit training (kettlebells, sled pushes, battle ropes)

Core Block (30 min)

  • Cable woodchoppers — 3×12 each side
  • Ab‑wheel rollout — 3×10
  • Russian twists — 3×20
  • Back extensions — 3×12

Cool‑Down (10 min)

Breathing + mobility

📅 Day 4 — Upper Body Strength (Pull Focus)

Warm‑Up (10 min)

Band pull‑aparts, light rows

Main Lifts (60 min)

  • Deadlift — 5×3 (heavy but clean)
  • Pull‑ups — 4×8
  • Barbell rows — 4×6–8
  • Lat pulldowns — 3×10

Accessories (30 min)

  • Biceps curls — 4×12
  • Face pulls — 4×15
  • Rear‑delt flyes — 3×15

Cool‑Down (10 min)

Back/lat stretching

📅 Day 5 — Full Body + Mobility

Warm‑Up (10 min)

Dynamic full‑body warm‑up

Full‑Body Strength Circuit (60 min)

Repeat 4 rounds:

  • Goblet squat — 12
  • Push‑ups — 15
  • Kettlebell swings — 20
  • Dumbbell rows — 12 each side
  • Step‑ups — 12 each leg
  • Rest 2–3 min between rounds

Mobility + Stability (40 min)

  • Deep hip mobility
  • Thoracic spine work
  • Shoulder stability
  • Light yoga flow

Cool‑Down (10 min)

Slow breathing + stretching

⭐ Want me to tailor this further?

I can customize it to your goals—muscle gain, fat loss, strength, athletic performance, or a mix. I can also adjust for equipment, injuries, or preferred training styles.

From the Google Cloud reading, one-, few-, and multi-shot prompting relates to the prompt used in this example, as the basic original prompt was amended to generate a response closer to what was wanted. “This method involves providing the model with one or more examples of the desired input-output pairs before presenting the actual prompt. This can help the model better understand the task and generate more accurate responses” (Google Cloud, 2026).
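The few-shot idea from the quote can be made concrete: the prompt itself carries example input-output pairs before the real query. The example pairs and wording below are my own illustration, not taken from the Google Cloud reading:

```python
# Few-shot prompt: show the model desired input-output pairs
# before the actual question, so it infers the task and format.
examples = [
    ("Make a workout plan for me.",
     "Sure! Here's a general full-body plan..."),
    ("Make a workout plan for me. Act as a strength coach; "
     "I have 2 hours a day for 5 days.",
     "Day 1 - Upper Body Strength (Push Focus): ..."),
]

few_shot_prompt = "\n\n".join(
    f"Input: {q}\nOutput: {a}" for q, a in examples
)
# The actual query goes last, with the output left blank
# for the model to complete.
few_shot_prompt += ("\n\nInput: Make a meal plan for me. "
                    "Act as a dietitian.\nOutput:")
print(few_shot_prompt)
```

The whole string would be sent as a single prompt; the examples steer the model toward the structured, coach-style answers shown earlier.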

Prompt 2 – AI as hiring tool, through bias

In trying to use AI as a hiring tool through NotebookLM, the LLM gave me different areas to focus on as a recruiter, such as using AI to measure personality and skillsets through its training data, which it identified as data-driven methods. Additionally, NotebookLM suggested that, used as an AI hiring tool, it could mitigate bias and ensure fairness, and that it could support blind hiring by removing personal identification information. Interestingly, the LLM admitted that collaborative work ensures greater success and established that there should be human oversight attached to its role as an AI hiring tool. Regarding bias, because of its training data, if a company’s historically most successful employees are white men, the LLM will search for candidates who match that standard, creating a high degree of bias. The LLM admitted that even if race and gender are removed from the database, it can pick up on zip codes, names, or hobbies and discriminate against specific groups, which furthers the issue of bias. NotebookLM provided basic advice on what employers should look for, such as identifying potential core values to evaluate, recommending interview questions, and evaluating candidate responses. ChatGPT’s response to AI as a hiring tool was less detail-oriented than NotebookLM’s. Both admitted that, because they are built from mathematical training data, they will carry bias into areas such as hiring. Overall, I would say NotebookLM provided more detailed responses about the specific areas where an LLM could help as a hiring tool.
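The blind-hiring step NotebookLM described, stripping personal identifiers before evaluation, could be sketched with simple pattern-based redaction. This is a minimal sketch with made-up sample data; real systems need far more (for instance, name removal requires named-entity recognition), and as noted above, proxies like zip codes and hobbies can still leak group membership:

```python
import re

def redact_pii(resume_text):
    """Mask a few obvious identifiers before an AI screening pass.

    Only catches surface patterns (emails, phone numbers, US zip
    codes). It does not remove names or proxy signals such as
    hobbies or school names, which is exactly the bias risk the
    LLM admitted to.
    """
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", resume_text)
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)
    text = re.sub(r"\b\d{5}(?:-\d{4})?\b", "[ZIP]", text)
    return text

sample = "Jane Doe, jane@example.com, 555-123-4567, Springfield 62704"
print(redact_pii(sample))
```

Even with this pass applied, the name survives, which illustrates why both LLMs insisted on human oversight rather than fully automated screening.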

Resources:

The sources weren’t able to be imported, so here’s a screenshot.