Ethics of AI by: Brody Snyder

The information we found today was very intriguing. It made me think about AI a little differently, and it helped me understand how each AI platform is helpful in its own way.

As a group, we entered the prompt: “Write a paragraph about the consequences on the labor market, the employment rate, wage inequality, the good of the economy as a whole. use reliable sources and cite them”. We found a few interesting things in the results.

The ChatGPT result was a very wordy, detailed paragraph. It didn’t use much data, include in-text citations, or cite percentages or numbers. NotebookLM, on the other hand, did not use paragraph form at all; it generated bullet points with a title for each category. That was intriguing, considering it didn’t do exactly what the prompt asked.

The NotebookLM response included the claim that 30% of hourly jobs could be taken over by AI by 2030. I had never truly thought about AI taking over real humans’ jobs; it is striking to imagine AI doing that much of the work instead of people.

https://worldatwork.org/publications/workspan-daily/artificial-influencer-research-suggests-ai-is-widening-the-pay-gap

https://economy.ac/news/2025/12/202512285540

AI Ethics in the Modern Day

During class, my group and I researched AI hiring and the tools it uses. We found a lot of new and shocking information that I was not aware of. The biggest finding was that, between a male and a female applicant, the AI tool tends to be more biased toward the male applicant.

The system does take both applicants through the whole process, but in the end it is more biased toward the male applicant than the female one. I also found that AI only matches people to their job fit 40-67% of the time. I think this is a problem: a substantial share of the time, people will not end up doing what they applied for or what their job fit actually is. I do think that in the near future more and more companies will adopt AI hiring, and it will become more accurate and better through the years, but that improvement will only come with time and trial.

Something useful I did learn is that AI saves humans a ton of time in the hiring process by going through hundreds or thousands of applications, where humans can only do so much at a time. In resume screening, AI reportedly has a 92-94% accuracy rate at picking the best candidates for a specific opening.
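One common way to quantify the kind of gender gap in hiring outcomes described above is the “four-fifths rule” used in U.S. employment-discrimination guidance: compare the selection rates of two groups and flag ratios below 0.8. A minimal Python sketch, with all selection numbers invented for illustration:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who advanced in screening."""
    return selected / applicants

# Hypothetical screening outcomes (numbers invented for illustration).
male_rate = selection_rate(60, 100)    # 60 of 100 male applicants advanced
female_rate = selection_rate(36, 100)  # 36 of 100 female applicants advanced

# Adverse-impact ratio: the disadvantaged group's selection rate
# divided by the most-selected group's rate.
impact_ratio = female_rate / male_rate
print(f"Impact ratio: {impact_ratio:.2f}")

# The "four-fifths rule" treats ratios below 0.8 as evidence of adverse impact.
if impact_ratio < 0.8:
    print("Potential adverse impact")
```

With these made-up numbers the ratio is 0.60, well under the 0.8 threshold, which is the sort of disparity an audit of a biased screening tool would surface.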

ChatGPT information verified by: https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine?utm_source=chatgpt.com

https://arxiv.org/abs/2407.20371?utm_source=chatgpt.com  

When one AI explanation isn’t enough…

Why use one AI when you can use two? I enjoyed this research lab because it truly echoed the idea that you can use both ChatGPT and Notebook LM effectively for different aspects of any work or project. Both AI tools have their strengths and weaknesses and also vary in the approach they use to answer questions and find sources.

For today’s lab, the prompt I used was: “What are the applications of AI for facial recognition systems in healthcare?” I asked ChatGPT to summarize an answer to this question and then give me credible sources (such as research papers). ChatGPT gave me five research papers, all of which were credible. The papers, however, varied in subtopic (while some focused on facial analysis in healthcare, others focused on deep learning-based facial image analysis). ChatGPT also gave me a good summary, subcategorized to cover various sides of AI’s use for facial recognition in healthcare (e.g., disease diagnosis, patient recognition, etc.).

I took two of the five sources from ChatGPT and inputted those into Notebook LM and asked it to summarize what they said. Notebook LM did a good job of breaking down both sources into clinical applications, technical and ethical challenges, amongst other subtopics.

This lab taught me that ChatGPT tends to give more generalized information and is very confident regardless of whether it’s right or wrong. Notebook LM, on the other hand, is very technical, highly accurate, and usually very specific. Interestingly, ChatGPT did a good job of giving credible sources, which surprised me.

Overall, I recommend both depending on what your goals are. There are definitely ways to play to the strengths of each, and I would love to learn more.

Sources:

https://pubmed.ncbi.nlm.nih.gov/40041850/

https://pubmed.ncbi.nlm.nih.gov/35877324/

https://www.linkedin.com/pulse/duel-vibing-when-one-ai-isnt-enough-my-journey-app-zero-paul-cleghorn-auxxf

Bias in AI

I researched the ethical issue of bias in artificial intelligence. One interesting thing I learned is that AI systems can repeat social biases that already exist in society if the data they are trained on is biased. For example, if an AI system learns from data that reflects stereotypes or unfair patterns, it may continue those patterns in its decisions. This can affect things like hiring, facial recognition, or loan approvals, which could lead to unfair treatment of certain groups.

To explore this more, I asked two different AI tools the same question: “What social biases can occur?” I asked both ChatGPT and NotebookLM so I could compare their responses. Both tools gave similar answers and explained that social biases in AI can include gender bias, racial bias, and economic bias. These types of biases can happen when some groups are underrepresented or when historical data already reflects inequality.

However, I noticed a difference between the two responses. ChatGPT gave a clear and simple explanation, while NotebookLM gave a more detailed answer. NotebookLM included more examples and explanations because it used the sources that I uploaded into it. Since I was able to control the sources it used, the answer felt more connected to the research I was doing.

Overall, this showed me that even when you ask the same question, different AI tools can give slightly different answers. The level of detail often depends on the sources the AI has access to, which can make a big difference in how much information you get.
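One lightweight way to make a comparison like this concrete is to measure how much two tools’ answers overlap in wording. A minimal Python sketch using word-level Jaccard similarity; both sample answers below are invented for illustration, not actual tool output:

```python
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity: shared words / total distinct words."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

# Invented sample answers standing in for the two tools' responses.
chatgpt_answer = "gender bias racial bias and economic bias can occur"
notebooklm_answer = "gender racial and economic bias arise from unrepresentative data"

score = jaccard(chatgpt_answer, notebooklm_answer)
print(f"Vocabulary overlap: {score:.2f}")
```

A high score would suggest the two tools are saying roughly the same thing; a low score would flag the kind of difference in detail and sourcing noticed above, which is worth reading closely.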

Sources:

arXiv:2402.06196

1504121_1753296366650_en-US.pdf

A.I. Mastery: Your 5-Day Guide

AI and Libraries, Avery Swartz (Founder & CEO, Camp Tech), Rendez-vous des bibliothèques publiques du Québec, May 3, 2024

Books that Can Help Students Learn About Artificial Intelligence (Penguin Random House Higher Education)

Genius Makers by Cade Metz

The 8 best AI courses for beginners in 2026 (Zapier)

Annotated-Bib-Readings.pdf

AI Ethics: Facial Recognition Bias

When comparing ChatGPT’s and Notebook LM’s abilities to act as a research assistant on racial bias in AI facial recognition software, it became evident that Notebook LM offered much more detailed and relevant information than ChatGPT did.

My prompt for ChatGPT was: “Act as a research assistant for me. Researching racial bias within AI facial recognition software using language friendly for college students.” This produced many different subsections, each containing two or three bullet points with brief headings pointing to more that could be found. Some of these subsections covered current “solutions” to the bias problem, the “Gender Shades” study mentioned in class, and an explanation of how AI becomes biased, placing blame on the training professionals as being biased themselves.

After importing the same sources pulled from ChatGPT into Notebook LM, I asked Notebook: “Search sources for the reason for the bias in facial recognition software.” Notebook LM provided much more detail and was able to reference specific sections within the sources that gave statistics on bias in facial recognition. Notebook LM also included a section that places part of the AI bias on the trainers and the data provided, stating that AI is not a perfect tool.

Based on these findings, I can gather that Notebook LM would be more useful when writing a research paper and when it comes to creating in-text citations. Nonetheless, based on what both ChatGPT and Notebook LM provided me, I can safely say that it is not a good choice for police to use AI facial recognition in investigations, as there is still much training to be done to reduce the bias in these systems.
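The gender and skin-tone accuracy gaps reported in studies like Gender Shades come down to a per-group error-rate comparison. A minimal Python sketch; the counts here are invented for illustration, not taken from the cited sources:

```python
# Hypothetical per-group results for a face recognition system
# (counts invented for illustration, not from the cited studies).
results = {
    "lighter-skinned men":  {"errors": 1,  "total": 100},
    "darker-skinned women": {"errors": 35, "total": 100},
}

def error_rate(group: str) -> float:
    """Fraction of this group's test faces the system got wrong."""
    g = results[group]
    return g["errors"] / g["total"]

for name in results:
    print(f"{name}: {error_rate(name):.0%} error rate")

# The disparity is the gap between the worst- and best-served groups.
gap = error_rate("darker-skinned women") - error_rate("lighter-skinned men")
print(f"Error-rate gap: {gap:.0%}")
```

A gap this large is exactly why per-group evaluation matters: a single overall accuracy number would hide how unevenly the errors fall, which is the core concern with police use of these systems.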

Sources:

National Institute of Standards and Technology (NIST). (n.d.). Demographic Effects in Face Recognition. Retrieved from https://pages.nist.gov/frvt/reports/demographic/

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 1–15. Conference on Fairness, Accountability, and Transparency.

Sarridis, I., Koutlis, C., Papadopoulos, S., & Diou, C. (2025). VB-Mitigator: An Open-source Framework for Evaluating and Advancing Visual Bias Mitigation. AEQUITAS 2025: Workshop on Fairness and Bias in AI.

AI Ethics:

In my research in class today, my group experimented with asking both ChatGPT and NotebookLM about the ethics of AI facial scanning. Both tools reported that AI has a harder time scanning the faces of women and of people with darker skin than the faces of men and of people with lighter skin. A helpful prompt in my research with ChatGPT was, “Generally, is it easier to scan male’s or female’s faces? Generally, is it more difficult to scan darker skinned people’s faces than lighter skinned individuals?” When asked this, ChatGPT was honest and said that studies show a higher error rate for face scanning of women and individuals with darker skin than for men and individuals with lighter skin.

A prompt that gave me a confusing response was, “Are you racially biased? Are gender biased?” When asked this, ChatGPT immediately says no. However, it then points out that biases do show themselves at times because of the way it has been trained. NotebookLM differed from ChatGPT only in the format of its responses: NotebookLM tends to give more detailed responses in paragraph format, while ChatGPT gives less detailed responses in bullet-point format.

https://www.scu.edu/ethics/focus-areas/technology-ethics/resources/examining-the-ethics-of-facial-recognition

https://journalofethics.ama-assn.org/article/what-are-important-ethical-implications-using-facial-recognition-technology-health-care/2019-02