When one AI explanation isn’t enough

Why use one AI when you can use two? I enjoyed this research lab because it reinforced the idea that ChatGPT and NotebookLM can each be used effectively for different aspects of a project. Both tools have their strengths and weaknesses, and they differ in how they answer questions and find sources.

For today’s lab, the prompt I used was: “What are the applications of AI for facial recognition systems in healthcare?” I asked ChatGPT to summarize an answer to this question and then give me credible sources (such as research papers). ChatGPT gave me five research papers, all of which were credible. The papers varied in subtopic, however: some focused on facial analysis in healthcare broadly, while others focused on deep-learning-based facial image analysis. ChatGPT also gave me a good summary, subcategorized to cover various applications of facial recognition in healthcare (e.g., disease diagnosis, patient recognition).

I took two of the five sources from ChatGPT, uploaded them into NotebookLM, and asked it to summarize what they said. NotebookLM did a good job of breaking both sources down into clinical applications, technical and ethical challenges, and other subtopics.

This lab taught me that ChatGPT tends to give more generalized information and is very confident regardless of whether it’s right or wrong. NotebookLM, on the other hand, is very technical, highly accurate, and usually very specific. Interestingly, ChatGPT did a good job of giving credible sources, which surprised me.

Overall, I recommend both tools depending on what your goals are. There are definitely ways to play to the strengths of each, and I would love to learn more.

Sources:

https://pubmed.ncbi.nlm.nih.gov/40041850/

https://pubmed.ncbi.nlm.nih.gov/35877324/

https://www.linkedin.com/pulse/duel-vibing-when-one-ai-isnt-enough-my-journey-app-zero-paul-cleghorn-auxxf

Bias in AI

I researched the ethical issue of bias in artificial intelligence. One interesting thing I learned is that AI systems can repeat social biases that already exist in society if the data they are trained on is biased. For example, if an AI system learns from data that reflects stereotypes or unfair patterns, it may continue those patterns in its decisions. This can affect things like hiring, facial recognition, or loan approvals, which could lead to unfair treatment of certain groups.

To explore this more, I asked two different AI tools the same question: “What social biases can occur?” I asked both ChatGPT and NotebookLM so I could compare their responses. Both tools gave similar answers and explained that social biases in AI can include gender bias, racial bias, and economic bias. These types of biases can happen when some groups are underrepresented or when historical data already reflects inequality.

However, I noticed a difference between the two responses. ChatGPT gave a clear and simple explanation, while NotebookLM gave a more detailed answer. NotebookLM included more examples and explanations because it used the sources that I uploaded into it. Since I was able to control the sources it used, the answer felt more connected to the research I was doing.

Overall, this showed me that even when you ask the same question, different AI tools can give slightly different answers. The level of detail often depends on the sources the AI has access to, which can make a big difference in how much information you get.

Sources (uploaded to NotebookLM):

2402.06196 (arXiv preprint)

1504121_1753296366650_en-US.pdf

A.I. Mastery: Your 5-Day Guide (PDF)

“AI and Libraries,” Avery Swartz, Founder & CEO, Camp Tech, Rendez-vous des bibliothèques publiques du Québec, May 3, 2024

Books that Can Help Students Learn About Artificial Intelligence (Penguin Random House Higher Education)

Genius Makers by Cade Metz (summary)

The 8 Best AI Courses for Beginners in 2026 (Zapier)

Annotated-Bib-Readings.pdf

AI Ethics: Facial Recognition Bias

When comparing ChatGPT’s and NotebookLM’s abilities to act as a research assistant on racial bias in AI facial recognition software, it became evident that NotebookLM offered far more detailed and relevant information than ChatGPT did.

My prompt for ChatGPT was: “Act as a research assistant for me. Researching racial bias within AI facial recognition software using language friendly for college students.” This produced many subsections, each containing two or three bullet points with brief headings indicating where more detail could be found. These subsections covered, among other things, current “solutions” to the bias problem and the “Gender Shades” study mentioned in class, and explained how AI becomes biased, attributing the bias to the professionals who train the systems.

I then imported the same sources ChatGPT had pulled into NotebookLM and asked it: “Search sources for the reason for the bias in facial recognition software.” NotebookLM provided much more detail and was able to reference specific sections within the sources that gave statistics on bias in facial recognition. NotebookLM also included a section attributing part of the bias to the trainers and the data provided, noting that AI is not a perfect tool.

Based on these findings, I gather that NotebookLM would be more useful when writing a research paper, especially for creating in-text citations. Nonetheless, based on what both ChatGPT and NotebookLM provided, I can safely say that it is not a good choice for police to use AI facial recognition in investigations, as much more training is needed to reduce the bias in these systems.

Sources:

National Institute of Standards and Technology (NIST). (n.d.). Demographic Effects in Face Recognition. Retrieved from https://pages.nist.gov/frvt/reports/demographic/

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 1–15. Conference on Fairness, Accountability, and Transparency.

Sarridis, I., Koutlis, C., Papadopoulos, S., & Diou, C. (2025). VB-Mitigator: An Open-source Framework for Evaluating and Advancing Visual Bias Mitigation. AEQUITAS 2025: Workshop on Fairness and Bias in AI.

AI Ethics:

In my research in class today, my group experimented with asking both ChatGPT and NotebookLM about the ethics of AI facial scanning. Both tools reported that AI has a harder time scanning the faces of women and of people with darker skin than those of men and of people with lighter skin.

A helpful prompt in my research with ChatGPT was: “Generally, is it easier to scan male’s or female’s faces? Generally, is it more difficult to scan darker skinned people’s faces than lighter skinned individuals?” ChatGPT’s response was honest: it said that studies show a higher error rate when scanning the faces of women and individuals with darker skin than when scanning the faces of men and individuals with lighter skin.

A prompt that gave me a confusing response was: “Are you racially biased? Are you gender biased?” When asked this, ChatGPT immediately says no. However, it then points out that biases do show themselves at times, because of the way it has been trained.

NotebookLM differed from ChatGPT only in the format of its responses. NotebookLM tends to give more detailed responses in paragraph format, while ChatGPT gives less detailed responses in bullet-point format.

Sources:

https://www.scu.edu/ethics/focus-areas/technology-ethics/resources/examining-the-ethics-of-facial-recognition

https://journalofethics.ama-assn.org/article/what-are-important-ethical-implications-using-facial-recognition-technology-health-care/2019-02

Matt Kaley – Post 2: AI Ethics

For this class activity, I explored NotebookLM and its capabilities in summarizing the implications of AI in facial recognition. I asked NotebookLM to scan the web for surface-level sources relevant to this topic. There was an option for a “deep search” as well, but the surface-level search returned a large number of peer-reviewed sources, all relevant. Some sources examined the implications for law enforcement, while others examined the implications for health care.

One of the most troubling implications that NotebookLM laid out for me was the high likelihood that darker-skinned females would be misidentified: light-skinned males had an error rate of 0.8%, compared to 34.7% for darker-skinned females. Scraping social media for training data is another concern. Just because an image is publicly viewable does not mean its biometric data should be a free resource for corporations to train their models.
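To see what those two error rates mean in practice, here is a quick back-of-the-envelope sketch (my own illustration: the group size of 10,000 is hypothetical; only the 0.8% and 34.7% figures come from the study cited above):

```python
# Error rates reported for two demographic groups (Gender Shades figures).
error_rates = {
    "lighter-skinned males": 0.008,   # 0.8% error rate
    "darker-skinned females": 0.347,  # 34.7% error rate
}

n = 10_000  # hypothetical number of face scans per group

# Expected number of misidentifications per group at that error rate.
for group, rate in error_rates.items():
    print(f"{group}: ~{round(rate * n)} errors out of {n:,} scans")

# How many times more often the system fails for one group than the other.
disparity = error_rates["darker-skinned females"] / error_rates["lighter-skinned males"]
print(f"disparity: roughly {disparity:.0f}x more errors")
```

At the same scale, the system would misidentify roughly 80 lighter-skinned males but about 3,470 darker-skinned females, a disparity of around 43x.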

I think Notebook LM can be used to synthesize sources. I would ask Notebook LM to find its own sources sparingly, as I am concerned about the credibility of the sources that LM generated for me. I think, after finding sources for yourself, LM could be a good tool for summarizing the information sources have in common, though it’s definitely not a substitute for actually reading these sources if you need to write a literature review, or something like that.

Group members who explored ChatGPT found it was very confident in all its responses, even though its ideas were more general than NotebookLM’s. One thing I found interesting about NotebookLM is that it often backs up its passages with citations taken directly from the sources, so you can go back and read the author’s words to make sure it isn’t hallucinating. I think NotebookLM has uses in writing papers; it could help you see connections between sources that are hidden in plain sight.

Sources:

Understanding Facial Recognition Algorithms (RecFaces)

https://notebooklm.google.com/notebook/e8482b60-de6d-4b3c-bd07-23e7e34e3b71

One thing that really stood out to me was learning that facial recognition systems don’t always work the same for everyone: some studies showed that these systems make more mistakes when identifying people with darker skin tones. That made me realize that technology isn’t always neutral; it reflects the data it’s trained on. This made me think about how important it is for developers to test AI systems on diverse groups of people. AI is going to keep growing and being used in things like security, jobs, and schools, so it needs to be fair. In the future, I think AI will become even more common, but people will also push harder for rules and accountability to make sure it’s used responsibly.