I researched the ethical issue of bias in artificial intelligence. One interesting thing I learned is that AI systems can reproduce social biases that already exist in society if the data they are trained on is biased. For example, if an AI system learns from data that reflects stereotypes or unfair patterns, it may carry those patterns into its own decisions. This can affect areas like hiring, facial recognition, and loan approvals, leading to unfair treatment of certain groups.
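The idea that a model trained on biased decisions will repeat them can be shown with a tiny sketch. The hiring data, group labels, and the naive "model" below are all invented for illustration, not from any real system:

```python
# Hypothetical sketch: a naive model trained on biased historical hiring
# decisions ends up repeating the disparity between groups.
from collections import defaultdict

# Invented past decisions: (group, hired). Group A was favored historically.
history = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

# "Training": learn each group's historical hire rate.
outcomes = defaultdict(list)
for group, hired in history:
    outcomes[group].append(hired)
learned = {g: sum(v) / len(v) for g, v in outcomes.items()}

# "Prediction": hire whenever the learned rate exceeds 0.5, so the model
# simply mirrors the unfair pattern present in its training data.
def predict(group):
    return learned[group] > 0.5

print(learned)                      # {'A': 0.75, 'B': 0.25}
print(predict("A"), predict("B"))  # True False
```

The point of the sketch is that nothing in the code is "prejudiced" on its own; the unfairness comes entirely from the historical data the model learns from.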
To explore this further, I asked two different AI tools the same question: “What social biases can occur?” I put it to both ChatGPT and NotebookLM so I could compare their responses. Both tools gave similar answers, explaining that social biases in AI can include gender bias, racial bias, and economic bias. These biases can arise when some groups are underrepresented in the training data or when historical data already reflects inequality.
However, I noticed a difference between the two responses. ChatGPT gave a clear and simple explanation, while NotebookLM gave a more detailed answer. NotebookLM included more examples and explanations because it used the sources that I uploaded into it. Since I was able to control the sources it used, the answer felt more connected to the research I was doing.
Overall, this showed me that even when you ask the same question, different AI tools can give slightly different answers. The level of detail often depends on the sources the AI has access to, which can make a big difference in how much information you get.
Sources:
A.I. Mastery: Your 5-Day Guide (PDF)
Genius Makers by Cade Metz