When comparing ChatGPT and Notebook LM's abilities to act as a research assistant on racial bias in AI facial recognition software, it became evident that Notebook LM offered far more detailed and relevant information than ChatGPT did.
My prompt for ChatGPT was: "Act as a research assistant for me. Research racial bias within AI facial recognition software using language friendly for college students." This produced several subsections, each containing two or three bullet points with brief headings indicating where more detail could be found. These subsections covered current "solutions" to the bias problem, the "Gender Shades" study mentioned in class, and an explanation of how AI becomes biased, attributing much of the problem to bias among the professionals who train these systems.
After importing the same sources pulled from ChatGPT into Notebook LM, I asked Notebook LM: "Search sources for the reason for the bias in facial recognition software." Notebook LM provided far more detail and was able to reference specific sections within the sources that contained statistics on bias in facial recognition. Like ChatGPT, Notebook LM also attributed part of the AI bias to the trainers and the data provided, noting that AI is not a perfect tool.
Based on these findings, I can gather that Notebook LM would be more useful when writing a research paper, particularly for creating in-text citations. Nonetheless, based on what both ChatGPT and Notebook LM provided me, I can safely say that it is not a good choice for police to use AI facial recognition in investigations, as much more training remains to be done to reduce the bias within these systems.
Sources:
National Institute of Standards and Technology (NIST). (n.d.). Demographic effects in face recognition. Retrieved from https://pages.nist.gov/frvt/reports/demographic/
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15. Conference on Fairness, Accountability, and Transparency.
Sarridis, I., Koutlis, C., Papadopoulos, S., & Diou, C. (2025). VB-Mitigator: An open-source framework for evaluating and advancing visual bias mitigation. AEQUITAS 2025: Workshop on Fairness and Bias in AI.