Creative AI

I noticed some interesting patterns in the creative abilities of both ChatGPT and Claude as I tested them today. At first, I asked for poems without giving any details about the title or subject. Both AIs wrote abstract poems that were hard to understand. They used a lot of metaphors and philosophical musings that, while technically well written, didn’t make me feel anything.

But when I gave them more specific information, asking for poems about a 50-year-old Japanese father, they immediately came up with simple poems that anyone could understand. The imagery became more concrete: mountain views, cherry blossoms, and special moments between a father and his child took the place of the vague ideas they had used before.

This experience backs up what I said last week: if you want accurate and useful responses from AI systems, you need to be very clear about what you want. The more information you give, the closer the result will be to what you wanted. This is true not only for creative writing but also for finding information, writing code, and solving problems.

I also noticed that these AI systems might be biased. All of the names they came up with for their characters were Western or white-sounding, an interesting pattern that deserves further study. Even though both systems are meant to represent diverse points of view, this small bias suggests that imbalances in their training data affect the creative work they produce.

This comparison shows both the strengths and weaknesses of current AI language models. They can easily follow specific instructions, but they struggle to be creative on their own. They can produce content that sounds very human, yet it still reflects patterns from their training data rather than real understanding or cultural awareness.

The lesson for people who want to get the most out of these AI systems is clear: precise prompts lead to precise results. Vague requests get vague answers, but thoughtful, detailed prompts let these systems perform at their best.

Ethics of AI Images – Gabrielle Adu-Akorsah


Summary:

AI-generated images and videos offer creative possibilities but raise serious ethical concerns like misinformation, unfair use of artists’ work, and a loss of trust in media. To use them responsibly, there should be transparency, consent, and accountability. While AI can be helpful in areas like entertainment, it needs strict limits in sensitive areas like news and politics.

Week 6: Extra credit

I can imagine using AI-generated art while brainstorming and developing ideas. Without spending hours on preliminary drawings, writers, designers, or artists might use AI to quickly produce mood boards, come up with visual concepts, or sketch rough drafts. AI art may also be helpful in social media and marketing, where short, eye-catching pictures are frequently needed for banners, posters, and background images, and where originality matters less.

However, there are several situations in which I would never consider using AI-generated art. These include personal fine arts, such as family paintings or wedding pictures, where the human artist’s emotional depth and connection are crucial. Additionally, because AI may overlook crucial details or inadvertently misrepresent spiritual meanings, I would avoid using AI-generated art for cultural or heritage-based artworks, particularly those dealing with customs and histories.

AI-made Videos and Images

Recently, AI tools such as ChatGPT have added the ability to generate images. Like the use of AI in academic writing mentioned earlier, using AI to create pictures brings both convenience and potential ethical issues.

First, consider the convenience of AI image generation. AI can generate pictures in any style, just as it generates articles. For example, I gave ChatGPT the command “Generate a picture of The Return of the Condor Heroes for me”. Such characters do not exist in daily life and have only been described in novels, yet ChatGPT can visualize the text as an image. AI is also used for special effects and advertising: Coca-Cola, for instance, has used AI in producing and shooting commercials. AI can help create footage that is difficult to shoot on site, such as special-effects transitions. In daily life, ChatGPT can also fine-tune and animate self-portraits.

However, as mentioned earlier regarding copyright and plagiarism, AI cannot guarantee originality. It can scrape online material and synthesize it according to the input instructions. If AI becomes more and more capable of generating films and pictures, it will threaten the employment of art, film, and television practitioners. Can people accept AI artworks? For example, if AI can generate a portrait like the Mona Lisa, will humans accept that the painter is an expressionless AI? Will human artistic skill decline, or even disappear entirely within a few decades, because of excessive reliance on AI for artistic creation?

Week 6: Extra Credit

I think there are many ethical considerations to weigh when discussing AI-generated images and videos.

We rarely realize on what grounds artificial intelligence is trained and modified to make it easier to use for the average population (typically first-world users). The largest user bases are in the United States, followed by China. These two nations are currently the world’s technological superpowers and will most likely remain so for a long time. While we as the primary users benefit greatly from large language models and various AI-powered research and writing tools, we have yet, as a society, to consider the other aspects of artificial intelligence and how it is trained.

AI image and video generators are among the most unethically trained AI models. While LLMs used as writing assistants are hardly scot-free, the ethical implications of these images and videos, and of where their training data comes from, are extremely problematic.

For example, people in third-world countries are offered seemingly good jobs by big AI tech corporations in the United States: working for $1.50 an hour on AI image generation and on detoxifying the systems. The workers are sometimes required to use personal pictures, such as photos of their faces, pets, and homes.

Additionally, these same workers are required to filter through disturbing strings of words, phrases, and descriptions involving sexual abuse, slurs, violence, and more. The workers are then technically “able” to see counselors, but such counseling is rare and neither accessible nor of high enough quality for proper treatment, leaving many underpaid workers traumatized and exploited.

As for AI-generated images, companies such as OpenAI outsource this work to third-world contractors (such as Sama), which deliver labeled images to help cleanse the image-generation models. These images depict anything from death to sexual violence. For nearly 1,500 images, the contracted company was paid less than $800.

Week 6: AI Images and Videos

AI-generated images and videos are changing the way we create and interact with digital content, but they raise ethical concerns that must be addressed to ensure responsible use. One of the most pressing is the potential for misinformation: realistic fake images or videos, known as deepfakes, can be used to deceive, manipulate public opinion, or damage reputations. During the pandemic in 2020, many social media posts (specifically on X/Twitter) were deepfakes that portrayed celebrities in inappropriate situations, typically involving nudity. This not only threatens individual privacy and safety but also undermines trust in media and information sources.

Additionally, the data used to train AI models often reflects existing societal biases, which can result in outputs that reinforce harmful stereotypes or exclude marginalized groups. Privacy is another major issue, as AI systems may use images of real people without their consent, leading to the creation of exploitative content. Intellectual property rights are also complicated by AI content; training models on copyrighted works raises questions about ownership and fair use.

To address these challenges, clear guidelines should be implemented: labeling AI content, ensuring diverse and representative training data, obtaining consent for the use of personal likenesses, and developing regulations that promote transparency and accountability. While AI-generated art can democratize creativity and provide valuable educational and accessibility tools, its use must be guided by ethical principles that prioritize transparency, fairness, and respect for individual rights, ensuring positive contributions to society.