In my research in class today, my group experimented with asking ChatGPT and Notebook LM about the ethics of AI facial scanning. Both tools reported that AI has a harder time scanning the faces of women and of people with darker skin than the faces of men and of people with lighter skin. A helpful prompt in my research with ChatGPT was, "Generally, is it easier to scan males' or females' faces? Generally, is it more difficult to scan darker-skinned people's faces than lighter-skinned individuals'?" When asked this, ChatGPT gave an honest response, saying that studies show a higher error rate when face scanning women and individuals with darker skin than when scanning men and individuals with lighter skin. A prompt that gave me a confusing response was, "Are you racially biased? Are you gender biased?" When asked this, ChatGPT immediately says no. However, it then points out that biases do show themselves at times, and that this is because of the way it has been trained. Notebook LM differed from ChatGPT only in the format of its responses: Notebook LM gives more detailed responses in paragraph form, while ChatGPT gives less detailed responses in bullet points.
One thought on “AI Ethics:”
I think that it’s kind of weird that Chat knows the difference between its own biases and the harsh truth of the facial scanning. Pretty cool Louie!