Matt Kaley – Post 2: AI Ethics

For this class activity, I explored NotebookLM and its capabilities in summarizing the implications of AI in facial recognition. I asked NotebookLM to scan the web for surface-level sources relevant to this topic. There was an option for a “deep search” as well, but the surface-level search already returned a large number of relevant, peer-reviewed sources. Some sources examined the implications for law enforcement, while others examined the implications for health care.

One of the most troubling implications that NotebookLM laid out for me was the high likelihood that darker-skinned females are misidentified: light-skinned males had an error rate of 0.8%, compared to 34.7% for darker-skinned females. Scraping social media for training data is another concern. Just because an image is publicly viewable does not mean its biometric data should be a free resource for corporations to train their models.
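That gap is exactly what disaggregated evaluation is meant to surface. As a rough illustration (the records below are invented, and this is not the methodology of the study NotebookLM cited), here is a minimal Python sketch of how per-group error rates could be computed so that disparities like 0.8% versus 34.7% stay visible instead of being averaged into one overall accuracy number:

```python
# Illustrative sketch with invented data: compute error rates per demographic
# group instead of a single overall accuracy figure, so disparities stay visible.
from collections import defaultdict

# Each record: (demographic group, whether the system identified the person correctly)
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("lighter-skinned male", False),
    ("darker-skinned female", False), ("darker-skinned female", True),
    ("darker-skinned female", False), ("darker-skinned female", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.1%} ({errors[group]}/{totals[group]})")
```

On this toy data the overall accuracy would look acceptable; only the per-group breakdown reveals the gap.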

I think NotebookLM can be used to synthesize sources. I would ask NotebookLM to find its own sources only sparingly, as I am concerned about the credibility of the sources it generated for me. After finding sources yourself, though, NotebookLM could be a good tool for summarizing the information the sources have in common, though it’s definitely not a substitute for actually reading those sources if you need to write a literature review or something like that.

Group members who explored ChatGPT found it was very confident in all its responses, even though its ideas were more general than NotebookLM’s. One thing I found interesting about NotebookLM is that it often backs up its passages with citations drawn directly from the sources, so you can go back and read the author’s words to make sure it isn’t hallucinating. I think NotebookLM has uses in writing papers: it could help you see connections between sources that are hidden in plain sight.

Understanding Facial Recognition Algorithms | RecFaces

Source: https://notebooklm.google.com/notebook/e8482b60-de6d-4b3c-bd07-23e7e34e3b71

One thing that really stood out to me was learning how facial recognition systems don’t always work the same for everyone. Some studies showed that these systems can make more mistakes when identifying people with darker skin tones. That made me realize that technology isn’t always neutral: it reflects the data it’s trained on. It also made me think about how important it is for developers to test AI systems on diverse groups of people. AI is going to keep growing and being used in things like security, jobs, and schools, so it needs to be fair. In the future, I think AI will become even more common, but people will also push harder for rules and accountability to make sure it’s used responsibly.

During today’s class, I used ChatGPT as a generative AI source, while other members of my group used NotebookLM. We researched AI hiring tools as a topic and compared which sources came up in order to judge reliability. Group members who did not use ChatGPT found sources like “15 best AI hiring tools” and “10 Best AI hiring sources.” Using ChatGPT, I found sites about AI sourcing and outreach such as hireEZ and Zoho Recruit. I believe ChatGPT is more reliable because I have used it to find sources on topics, and I’ve used it to format citations properly. It’s often easier because you can just copy and paste the source into the chat and ask it to format the citation properly. I’m sure others prefer NotebookLM, but I’ve seen both sides and I have used what I prefer. I do not have any plans to use AI differently going forward, but it was brought to my attention that ChatGPT is not great for finding sources, and I have to be more careful with how I use it while avoiding bias. This is something I’m taking heavily into consideration moving forward.

If I’m being honest, I don’t think AI is headed in the right direction; I believe it’s taking over for the worse and not the better. It is taking up things like job opportunities, energy, and data, which could be a massive problem in the future. An argument from another group’s findings was about the trajectory of AI and how people also think it’s getting worse. It’s scary to look at the many ways these systems can project information as well as personal data.

Sources ChatGPT pulled up:

Resumly. “What AI Tools Recruiters Use for Screening: 2025 Guide.” Resumly AI, https://www.resumly.ai/blog/what-ai-tools-recruiters-use-for-screening-2025-guide. Accessed 12 Mar. 2026.

HireGen. “Top 5 AI Recruitment Tools You Should Be Using in 2025.” HireGen, https://hiregen.com/posts/top-5-ai-recruitment-tools-you-should-be-using-in-2025. Accessed 12 Mar. 2026.

Scalar. “The 10 Best AI Recruiting Tools to Supercharge Hiring in 2025.” Scalar, University of Southern California, https://scalar.usc.edu/works/the-10-bestai-recruiting-tools-to-supercharge-hiring-in2025/index. Accessed 12 Mar. 2026.

Prompt 2

ChatGPT is strange unless you’re direct. The strangest thing about ChatGPT’s model was that my prompt had to be really specific in order to find academic sources. First I asked it for academic sources related to facial recognition involving policing, environmental studies, or healthcare. It came up with journal articles and one YouTube video, but nothing from a book or database. After my third ask, I was able to filter down to five sources from a database and then put those into NotebookLM. NotebookLM summarized these sources together and claimed that “Facial recognition technology (FRT) in policing creates a conflict between surveillance efficiency and democratic accountability. Public support is often performative; anonymity reveals many citizens privately harbor reservations about biometric tracking. Empirical data shows FRT deployment correlates with increased racial disparities, specifically raising Black arrest rates while decreasing White rates. This stems from automation bias and pre-existing structural inequities. Global regulations remain fragmented; the US lacks the robust accountability frameworks found in the EU, necessitating urgent, transparent impact assessments to protect civil liberties.”

Through this lab, we learned that both AI models have strengths, but Chat is not going to excel at what Notebook excels at, and vice versa. Chat struggled to find these sources and struggled even further to give me good summaries of the sources it provided, since that took multiple steps. My prediction about where AI is headed is continued reliance, because it comes up with sources instantly rather than requiring a trip to the library or a database. I learned that Chat states its responses very confidently, and if you do not check it for errors, you are using it incorrectly.

Bias in AI Hiring Tools

One of the most surprising things I learned during today’s research lab was how widely AI hiring tools are already used and how much influence they can have on the labor market. Many companies now rely on AI systems to screen resumes, rank candidates, and even evaluate video interviews before a human recruiter ever reviews an application. At first glance, this seems efficient. These tools can process thousands of applications quickly and reduce the time and cost of hiring. However, what surprised me most was that these systems can unintentionally reproduce existing social biases.

Because many AI hiring tools are trained on historical hiring data, they often learn patterns from past decisions that were already biased. For example, an experimental hiring algorithm developed by Amazon reportedly penalized resumes that included the word “women’s,” because the system had been trained on data from a male-dominated tech workforce. This shows how algorithms can replicate discrimination even when companies are trying to automate hiring objectively.
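One common way to quantify this kind of disparate impact is the four-fifths (80%) rule used in US employment-selection guidance: if one group’s selection rate falls below 80% of the highest group’s rate, the screening step is flagged for potential adverse impact. The short Python sketch below uses invented applicant counts purely to illustrate the check; it is not an audit of any real hiring tool.

```python
# Hypothetical four-fifths (80%) rule check on a resume-screening step.
# The applicant counts below are invented for illustration only.

screened = {"group_a": 500, "group_b": 500}   # applicants screened per group
advanced = {"group_a": 200, "group_b": 120}   # applicants advanced to interview

rates = {group: advanced[group] / screened[group] for group in screened}
highest = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / highest
    flag = "potential adverse impact" if ratio < 0.8 else "within the 80% threshold"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

In this made-up example, group_b advances at 24% versus 40% for group_a, a ratio of 0.6, which the rule would flag for closer review.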

The consequences go beyond individual applicants. If biased algorithms systematically filter out qualified candidates from certain groups, this can reduce equal access to employment and reinforce existing inequalities in the labor market. Over time, that can also affect the broader economy. When companies fail to hire the most qualified workers because of biased screening systems, the result can be poorer job matching, lower productivity, and wider wage inequality. Researchers from the Brookings Institution argue that biased algorithmic hiring can limit opportunities for marginalized workers and reduce the overall efficiency of the labor market.

Overall, this research made me realize that while AI hiring tools promise efficiency, they also raise serious ethical and economic questions about fairness, opportunity, and the future of work.

Othmane Oumnad

Hi everyone! My name is Othmane (he/him). I’m a senior from Morocco. I’m really passionate about music and DJing, and when the weather is nice I like to play soccer or go swimming. I’m excited to take this class because AI seems to be advancing much faster than the law, and studying the ethics around it feels especially important right now. Personally, I’m curious to see how AI develops. Ideally, I’d love for AI to take care of things like laundry and dishes so I have more time for art and music, not the other way around.