Bias and Misidentification in AI Facial Recognition

In lab, my group decided to investigate the ethical issues related to AI facial recognition technology. In my research, I found two major problems with the use of this technology:

First, facial recognition technology is not equally accurate for all people. Research indicates that it performs better on white male faces than on the faces of women or darker-skinned individuals. The National Institute of Standards and Technology (NIST) conducted a study showing that some programs have higher false positive rates for certain racial and ethnic groups. This means the system is more likely to incorrectly match a person to someone else’s face in the database.
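The false positive rate NIST measures is simple arithmetic, and the disparity is easy to illustrate with a small sketch. All of the numbers and group names below are made up for illustration; they are not NIST data:

```python
# Toy illustration of per-group false positive rates (FPR).
# A false positive is a "match" verdict for a pair of images
# that actually show two DIFFERENT people (an impostor pair).
def false_positive_rate(false_matches, impostor_pairs):
    """FPR = false matches / number of different-person comparisons."""
    return false_matches / impostor_pairs

# Hypothetical results from the same matcher on two demographic groups.
results = {
    "group_a": {"false_matches": 10, "impostor_pairs": 10_000},
    "group_b": {"false_matches": 100, "impostor_pairs": 10_000},
}

for group, r in results.items():
    fpr = false_positive_rate(r["false_matches"], r["impostor_pairs"])
    print(f"{group}: FPR = {fpr:.3%}")
```

A tenfold gap like this toy one is the kind of demographic differential the NIST study flagged: the same system, run the same way, misidentifies people in one group far more often than another.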

The second problem is misidentification, which can lead to the wrong person being arrested. There are already reports of people being arrested because a facial recognition system misidentified them. The American Civil Liberties Union (ACLU) has documented many such cases.

The most surprising thing I learned was how widely this technology is already used, even though researchers have found major accuracy and fairness problems. This made me realize that AI systems like facial recognition can have serious real-world consequences when they are used in law enforcement and security.

  1. https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt (NIST)
  2. https://www.aclu.org/issues/privacy-technology/surveillance-technologies/face-recognition-technology

AI as a Hiring Tool

In today’s lab our group decided to research AI as a hiring tool. I had never heard about it before coming to this class; I’m not sure whether that’s because I come from a culture where it isn’t used as much, or because it simply isn’t talked about enough. It was honestly surprising and scary at the same time, especially after Professor Hayward told us about the “Kevin situation.”

Since many AI hiring tools are trained on historical hiring data, they learn its patterns. In this case, Kevin was the most common name at the company, so the AI tool automatically ranked Kevin as the best candidate for the job. This makes me wonder: if AI keeps growing and is used in important situations like this while promising efficiency, could it cause more harm and unfairness than good?
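The “Kevin” effect can be sketched in a few lines: if a naive scorer is fit on past hires and the candidate’s name is allowed in as a feature, the most common name among past hires gets the highest score. The names, data, and scoring rule here are all invented for illustration, not taken from any real hiring tool:

```python
from collections import Counter

# Hypothetical historical hires; "Kevin" happens to be the most
# common first name among this company's past successful hires.
past_hires = ["Kevin", "Kevin", "Kevin", "Maria", "Priya", "Kevin", "James"]

# A naive "model": score a candidate by how often their name appeared
# among past hires. This is exactly the kind of spurious pattern a
# tool trained on historical data can pick up.
name_counts = Counter(past_hires)

def score(candidate_name):
    return name_counts[candidate_name] / len(past_hires)

candidates = ["Kevin", "Maria", "Aisha"]
ranked = sorted(candidates, key=score, reverse=True)
print(ranked)  # Kevin ranks first purely because of his name
```

Nothing about Kevin’s qualifications enters the score; the “pattern” is an artifact of who was hired before, which is why training on historical data can quietly reproduce historical bias.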

In this lab we also split into two groups, one working with ChatGPT and the other with NotebookLM, which let me notice some differences. ChatGPT answered in short, summarized bullet points, while NotebookLM gave longer responses. ChatGPT also kept repeating the same source page, just posted by different people, while NotebookLM drew on more diverse sources. This led me to conclude that ChatGPT is more efficient for making short notes with bullet points, lists, workouts, and so on, but NotebookLM seems more useful for actual learning and curiosity.

https://arxiv.org/abs/1906.09208 – the source ChatGPT kept repeating

https://jier.org/index.php/journal/article/view/3262

Post 2: AI Ethics

We’ve all heard of ChatGPT, but not NotebookLM… It honestly took me a while to learn how NotebookLM works before I could experiment with it in today’s lab. But after playing around with it for a while, I could see its usefulness and how it can fit into my research process going forward.

My research topic was Racial Bias in Face Recognition Algorithms. I first went into ChatGPT and asked it to provide multiple sources on this topic. I clicked on the first three and they were all pretty good: I had free access to all of them, I could download them easily, and they were all relevant. Then I asked both ChatGPT and NotebookLM, “What issues do all three sources raise regarding this topic?” Both platforms provided six or seven bullet points, with most points overlapping. This surprised me. I initially expected NotebookLM to do better, since it lets users upload specific documents and discuss them, whereas ChatGPT lets you do pretty much anything.

However, I would still not trust ChatGPT to be 100% accurate, since it may simply be repeating whatever is most common out there. With this in mind, I would use ChatGPT to find sources, given that I can click on the links and verify their accuracy and legitimacy myself. I would also use it for simple, quick searches that do not require analyzing sources or research papers and can instead be answered with general information available on the internet. But if I am trying to dive deeper into each source, or to combine and compare multiple sources I have found, I would use NotebookLM. It is important to keep in mind the purpose or goal of what I am trying to achieve, as well as the strengths and weaknesses of each AI tool, when deciding which one to use.

https://pubmed.ncbi.nlm.nih.gov/33585821/

https://www.mdpi.com/2079-9292/13/12/2317

https://link.springer.com/article/10.1007/s43681-021-00108-6

Prompt 2 – AI as a Hiring Tool, Through the Lens of Bias

In trying out AI as a hiring tool through NotebookLM, the LLM gave me different areas to focus on as a recruiter, such as using AI to measure personality and skill sets through its training data, which it identified as data-driven methods. NotebookLM also suggested that, used as an AI hiring tool, it could mitigate bias and ensure fairness, for example by enabling blind hiring through the removal of personally identifying information. Interestingly, the LLM acknowledged that collaborative work produces greater success, and stated that there should be human oversight attached to its abilities as an AI hiring tool.

Regarding bias: because of its training data, if a company’s historically most successful employees are white men, the LLM will search for candidates who match that profile, creating a high degree of bias. The LLM admitted that even if race and gender are removed from the database, it can fall back on zip codes, names, or hobbies to discriminate against specific groups, which compounds the problem. NotebookLM also provides basic advice on what employers should look for, such as identifying core values to evaluate, recommending interview questions, and evaluating candidate responses.

ChatGPT’s response on AI as a hiring tool was less detail-oriented than NotebookLM’s. Both admitted that, since they are built from mathematical training data, they will carry bias into areas such as AI-assisted hiring. Overall, I would say NotebookLM provided more detailed responses about the specific areas where an LLM could help as a hiring tool.
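The point about zip codes acting as stand-ins for race or gender can be shown with a toy check: even after the protected attribute column is deleted, a remaining field like zip code can reconstruct it almost perfectly when neighborhoods are segregated. The records and zip codes below are fabricated for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical applicant records. Suppose the protected attribute
# ("group") is deleted before training, but zip code remains.
applicants = [
    {"zip": "10001", "group": "A"},
    {"zip": "10001", "group": "A"},
    {"zip": "10002", "group": "B"},
    {"zip": "10002", "group": "B"},
    {"zip": "10001", "group": "A"},
]

# Build the "proxy table" a model can implicitly learn:
# the majority group observed in each zip code.
by_zip = defaultdict(Counter)
for a in applicants:
    by_zip[a["zip"]][a["group"]] += 1

proxy = {z: c.most_common(1)[0][0] for z, c in by_zip.items()}

# With fully segregated zips, zip code alone recovers the deleted
# attribute perfectly, so dropping the column does not remove the
# bias channel.
correct = sum(proxy[a["zip"]] == a["group"] for a in applicants)
print(f"group recovered from zip code: {correct}/{len(applicants)}")
```

This is why “blind hiring” by deleting columns is weaker than it sounds: the model never needs the protected attribute explicitly if correlated features remain in the data.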

Resources:

The sources weren’t able to be imported, so here’s a screenshot.

AI Facial Recognition (environmental)

While learning with new AI apps and researching the topic of facial recognition, I noticed two things:

  1. Learning with a brand-new AI app you have never used before becomes easier the more you experiment with it.

When it comes to AI, we are all very familiar with ChatGPT and how publicly available it is, even though everything it gives you is a generated answer. That leads me to my thoughts on another AI tool I’m newly using, NotebookLM. The idea of NotebookLM is that it imports sources based on what you need to know, like “how does AI affect the environment?” Then you can ask for a report in whatever form you want, and it gives you a clean, shortened summary that condenses the sources into a couple of pages on the topic. NotebookLM is a great resource for students doing research, while ChatGPT just generates answers that it thinks are what you want to hear.

  2. Facial recognition and how it affects the environment.

During class, as I was working with my group, we talked about AI facial recognition. There are different aspects of facial recognition, like how it relates to police work or how it affects the environment. As a student majoring in environmental studies, I wanted to learn more about the environmental aspect. What I gained from NotebookLM is that while facial recognition is a valuable tool, we should be required to treat its potential impact with caution and transparency. Facial recognition can mis-recognize people’s faces because of race and gender bias, which can lead to false accusations of the wrong people in travel and law enforcement. I also learned that there is a surveillance paradox with biometric identifiers: unlike a password, your face is unchangeable, so a breach can increase the risk of identity theft. In short, facial recognition doesn’t just affect people of a particular race or gender; it also affects the systems of law enforcement, travel, and identity on the internet. It also means the companies in charge of these AI systems could face serious charges and penalties for facial recognition practices that people won’t accept.

Ethics of AI, by Brody Snyder

The information that we found today was very intriguing. It made me think about AI a little differently. It also helped me understand how each AI platform is helpful in its own way.

As a group, we entered the prompt: “Write a paragraph about the consequences on the labor market, the employment rate, wage inequality, the good of the economy as a whole. use reliable sources and cite them”. There were a few interesting things we found in the results.

The ChatGPT result was a very wordy, detailed paragraph of multiple sentences. It didn’t use much data, include in-text citations, or give percentages or numbers. On the other hand, NotebookLM did not put its answer in paragraph form at all; it generated bullet points with a title for each category. This is intriguing, considering it didn’t do exactly what the prompt asked.

Using the NotebookLM response, I learned the claim that 30% of hourly jobs could be taken over by AI by the year 2030. I never truly thought about AI taking over real humans’ jobs; it’s crazy to think that many jobs could have AI working them instead of humans.

https://worldatwork.org/publications/workspan-daily/artificial-influencer-research-suggests-ai-is-widening-the-pay-gap

https://economy.ac/news/2025/12/202512285540