AI-made Videos and Images

Recently, AI tools such as ChatGPT have gained the ability to generate pictures and images. Like the application of AI in academic writing mentioned earlier, using AI to create pictures brings many conveniences as well as potential ethical issues.

First, let’s consider the convenience of AI image generation. Just as it generates articles, AI can generate pictures in any style. For example, I gave ChatGPT the command “Generate a picture of The Return of the Condor Heroes for me”. Such characters do not exist in daily life and are only described in novels, yet ChatGPT can visualize the text as an image. AI is also used for special effects and advertising: Coca-Cola, for instance, has used AI in advertising, shooting, and production. AI can help produce footage that is difficult to shoot on site, such as special-effects transitions. In daily life, ChatGPT can also fine-tune and animate self-portraits.

However, as with the copyright and plagiarism issues of AI mentioned earlier, AI cannot guarantee originality. It scrapes online material and synthesizes it according to the input instructions. If AI becomes more and more mature at generating movies and pictures, it will challenge the employment of artists and film and television practitioners. Can people accept AI artworks? For example, if AI can generate a portrait like the Mona Lisa, will humans accept that the painter is an expressionless AI? Will human artistic skill decline, or even disappear completely within a few decades, because of excessive reliance on AI art creation?

Week 6: Extra Credit

I think there are many ethical considerations to weigh when discussing AI-generated images or videos.

We rarely consider on what grounds artificial intelligence is trained and modified to make it easier for the average user (typically in first-world countries). The main users are in the United States, followed by China. The two nations are currently the world’s technological superpowers and will most likely remain so for a long time. While we as the primary users benefit greatly from large language models and various AI-powered research and writing tools, we have yet, as a society, to consider the other aspects of artificial intelligence and how it is trained.

AI-generated images and videos come from some of the most unethically trained AI models. While LLMs used as writing assistants are certainly not scot-free, the ethical implications of image and video models, and of where their training data comes from, are extremely problematic.

For example, people in third-world countries are offered seemingly good jobs by big AI tech corporations in the United States, working for $1.50 an hour on AI image generation and detoxification of the systems. The workers are sometimes required to use personal pictures, such as photos of their faces, pets, and homes.

Additionally, these same workers are required to filter through disturbing strings of words, phrases, and descriptions involving sexual abuse, slurs, violence, and the like. The workers are then technically ‘able’ to see counselors, but such counseling is rare and neither accessible nor good enough in quality for proper treatment, leaving many underpaid workers disturbed and exploited.

As for AI-generated images, companies such as OpenAI outsource this work to third-world contractors (such as Sama), whose workers deliver labeled images back to the company to help cleanse the image generation models. These images include anything from death to sexual violence. For a batch of nearly 1,500 images, the contracting firm was paid less than $800.

Week 6: AI Images and Videos

AI-generated images and videos are changing the way we create and interact with digital content, but they raise ethical concerns that must be addressed to ensure responsible use. One of the most pressing concerns is the potential for misinformation: realistic fake images or videos, known as deepfakes, can be used to deceive, manipulate public opinion, or damage reputations. During the pandemic era in 2020, many social media posts (specifically on X/Twitter) were deepfakes that portrayed celebrities in inappropriate situations, typically involving nudity. This not only threatens individual privacy and safety but also undermines trust in media and information sources.

Additionally, the data used to train AI models often reflects existing societal biases, which can result in outputs that reinforce harmful stereotypes or exclude marginalized groups. Privacy is another major issue, as AI systems may use images of real people without their consent, leading to the creation of exploitative content. Intellectual property rights are also complicated by AI-generated content, which raises questions about ownership and fair use when AI models are trained on copyrighted works.

To address these challenges, clear guidelines should be implemented, such as labeling AI content, ensuring diverse and representative training data, obtaining consent for the use of personal likenesses, and developing regulations that promote transparency and accountability. While AI-generated art can democratize creativity and provide valuable educational and accessibility tools, its use must be guided by ethical principles that prioritize transparency, fairness, and respect for individual rights, ensuring positive contributions to society.

Week 5: Academic Writing

Now, AI has gradually become an indispensable tool in school and working life. However, it is difficult to determine exactly how much AI is too much. In my opinion, this largely depends on each person’s intended use. For example, when using AI for learning purposes, I feel it is enough for students to see it as a supporting tool that helps them understand lessons more easily. However, if you use AI just to write out answers and submit them to pass, that is already a distorted use of AI. The same probably applies to professional life. We should use AI to understand the questions we are asking, not simply to solve them. Therefore, I feel we cannot set a fixed limit on AI use; let’s just treat it as a support tool.

Treating AI as a support tool also means that we should not depend on it too much. As Tang et al. mention in their article, anyone using AI can encounter artificial hallucination: the model produces a great deal of information that is not entirely accurate. We have mentioned in our class sessions that we should always check the information carefully to avoid inheriting AI’s mistakes. Overall, my point in this post is that no one is stopping us from using AI, but we need the right awareness about it and about when to use it properly.

AI in Academia

The advent of artificial intelligence has brought many conveniences to people, no matter what industry they are in. In academia, the benefits of AI include faster data processing and analysis, easier location of relevant academic articles, and so on. At the same time, AI brings a series of disadvantages, such as long-term use leading to a decline in students’ independent thinking ability, as well as plagiarism and copyright problems.

In academia, originality is very important. The arrival of AI tools such as ChatGPT and DeepSeek has had an unprecedented impact on originality. The definition of plagiarism proposed by Tang et al. (2024) covers not only direct word-for-word copying but also indirect plagiarism, such as plagiarizing ideas. Most AI-generated texts do not contain “human” ideas; they are scraped and generated through the models’ own large training data. Of course, there are policy countermeasures. Turnitin is a widely used plagiarism and AI-usage detection tool in academia; professors can run students’ papers through it to detect potential plagiarism. With the widespread use of AI, many universities have also issued guidelines teaching students how to write with AI correctly, such as declaring the use of AI and using it ethically for writing assistance (University of Kansas).

Sources:

Tang, A., Li, K.-K., Kwok, K. O., Cao, L., Luong, S., & Tam, W. (2024). The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing. Journal of Nursing Scholarship, 56, 314–318. https://doi.org/10.1111/jnu.12938

https://www.forwardpathway.com/135174 (Xuhang Education: The rise and challenges of artificial intelligence in academia. Webpage content in Chinese)

https://cte.ku.edu/ethical-use-ai-writing-assignments (University of Kansas: Using AI ethically in writing assignments)

Week 5: Academic Writing

This week, we encountered a lot of questions about the fairness of AI use in academic fields. As I have decided to further my studies in AI, I consider AI to be just a tool for improvement. By a tool, I mean something similar to Google or Facebook. When Google first came out, the world was blown away; it became a significant moment in the history of the internet. Like Google, I think artificial intelligence is also a revolution in how we access and process information. The key is not whether we should use AI, but how we use it responsibly. To me, AI can enhance learning, boost creativity, and save time when used correctly. It is not a shortcut to avoid thinking, but a powerful aid to deepen our understanding.

When it comes to determining “how much is too much” in the use of AI, it is less about fixed limits than about the intention behind its use. I agree that AI is a great tool for supporting learning, research, and productivity, but it should not replace our critical thinking skills and creativity. Finding a balance in the use of AI is hard, but to me, AI is just a supplement, not a substitute for everything.

Also, the ethical use of AI is a sensitive issue relating to originality, authorship, and fairness. In their article on transparency in academic writing, Tang et al. (2024) consider authorship in academic work to be attributable only to humans, not AI, meaning AI cannot be held accountable for the integrity of the content. Nowadays, there are already a few guidelines on how to cite AI in academic work. However, Tang et al. argue that merely citing AI is not enough if transparency about its role is not made explicit.

References

Tang, A., Li, K.-K., Kwok, K. O., Cao, L., Luong, S., & Tam, W. (2024). The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing. Journal of Nursing Scholarship, 56, 314–318. https://doi.org/10.1111/jnu.12938