Week 6 : AI Images and Videos

AI-generated images and videos are changing the way we create and interact with digital content, but they raise ethical concerns that must be addressed to ensure responsible use. One of the most pressing concerns is the potential for misinformation: realistic fake images or videos, known as deepfakes, can be used to deceive, manipulate public opinion, or damage reputations. During the pandemic in 2020, for example, many social media posts (particularly on X/Twitter) were deepfakes that portrayed celebrities in inappropriate situations, typically involving nudity. This not only threatens individual privacy and safety but also undermines trust in media and information sources.

Additionally, the data used to train AI models often reflects existing societal biases, which can result in outputs that reinforce harmful stereotypes or exclude marginalized groups. Privacy is another major issue, as AI systems may use images of real people without their consent, enabling the creation of exploitative content. Intellectual property rights are also complicated by AI-generated content, which raises questions about ownership and fair use when AI models are trained on copyrighted works.

To address these challenges, clear guidelines should be implemented, such as labeling AI-generated content, ensuring diverse and representative training data, obtaining consent for the use of personal likenesses, and developing regulations that promote transparency and accountability. While AI-generated art can democratize creativity and provide valuable educational and accessibility tools, its use must be guided by ethical principles that prioritize transparency, fairness, and respect for individual rights, ensuring positive contributions to society.

2 thoughts on “Week 6 : AI Images and Videos”

  1. You bring up interesting points Ama! Regarding the intellectual property issues, there is legal precedent from the Authors Guild v. Google (2015) case that may help AI companies using copyrighted works for training. That case established that Google’s scanning of millions of copyrighted books to create a searchable online database was transformative use, which is considered fair use under the fair use doctrine. In sum, Google did not harm the authors’ market and most likely helped those authors reach a broader audience. AI companies are likely to make similar arguments: that training AI is a transformative use because it creates ‘novel’ applications of works and is therefore fair use. In other words, the output does not substitute for the original works in the training data. This is, in my opinion, a weaker argument than Google’s in 2015. I believe AI can be a potential substitute for human work, which makes it trickier to defend this use under the fair use doctrine.

  2. This is very interesting; I am grateful you brought up the specific point about the pandemic and how it affected pop culture, particularly the problem of celebrities being taken advantage of through foul AI-generated content. It was saddening and disappointing to see celebrities such as Taylor Swift have content generated that mimicked her voice saying things she would never say, such as that she supported Trump or was hateful towards her fans. It is becoming scary how easy it is to sound like a public figure. It is also good that you brought up the point about invasive content and the use of people’s images without their consent. I have done some more extensive research on this for my project, and a great deal of this exploitation is happening in the global south.
