Week 7: What’s Next?

Question: a) What current AI-related issues, developments, or decisions do you find especially relevant to contemporary society? Craft a short post to give your classmates an overview of the issues involved and why it’s so important.

I feel that the current AI issues and developments most relevant to contemporary society are specifically the ethical ones, and that the environment AI operates in is extremely fragile. The energy and water consumed by AI each day are extremely detrimental to the environment. Although these are very significant issues, there are other environmental and general ethical problems that are not talked about as much and get swept under the rug. I feel that the biggest problems are the ones that go unnoticed, because they end up being the most harmful to certain human demographics and to the earth.

For example, environmentally, much of society doesn't realize how much waste accumulates from the factories that build and develop artificial intelligence. This infrastructure is known to release harmful substances into the earth, such as mercury and even lead, which are toxic to the soil and water surrounding a facility. It is important to note that this is especially exploitative because many less-developed countries in the Global South already suffer from poor air quality and unclean water. Speaking of clean water, this is extremely concerning given that 25% of the global population does not have good access to clean water, and global AI water usage is projected to soon exceed the water usage of an entire small country (such as Denmark, with a population of about 6 million).

Water is one of the resources most heavily consumed by AI production; energy is the other. According to the International Energy Agency, a ChatGPT request doesn't use double, or even five times, but roughly ten times the energy of a Google search. The energy needed to run these systems is especially hard on our atmosphere, as it contributes to higher carbon emissions. All of this can also directly affect smaller ecosystems and have negative impacts on city water supplies.

Sources:

https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about

https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117

Week 6: Extra Credit

I think there are a lot of ethical considerations to weigh when discussing AI-generated images and videos.

We don't often realize on what grounds artificial intelligence is trained and refined to make it easier for the average person (typically first-world users) to use. The largest user bases are in the United States, followed by China. The two nations are currently the world's technological superpowers and will most likely remain so for a long time. While we as the primary users benefit a great deal from large language models and various AI-powered research and writing tools, we have yet, as a society, to consider the other aspects of artificial intelligence and how it is trained.

AI-generated image and video models are among the most unethically trained forms of AI. While LLMs used as writing assistants are certainly not scot-free, the ethical implications of these images and videos, and of where their training data comes from, are extremely problematic.

For example, workers in third-world countries are offered seemingly good jobs by big AI tech corporations in the States, working for $1.50 an hour on AI image generation and on 'detoxifying' these systems. At times the workers are required to use personal pictures, such as photos of their faces, pets, and homes.

Additionally, these same workers are required to filter through disturbing strings of words, phrases, and descriptions involving sexual abuse, slurs, violence, and more. The workers are then technically 'able' to see counselors, but the counseling is rare, hard to access, and not of high enough quality for proper treatment, leaving many underpaid workers traumatized and exploited.

As for AI-generated images, companies such as OpenAI outsource this work to third-world workers through firms such as Sama, which deliver labeled images to the company to help cleanse the image-generation models. These images include anything from death to sexual violence. For nearly 1,500 images, the contracted firm was paid less than $800.

Week 5: Academic Writing

As artificial intelligence becomes a larger and larger part of our lives socially and industrially, it is also becoming a huge component of academia. What does this mean for research, writing, and the general workings of academia? How is it being implemented further in our lives?

Artificial intelligence in the classroom has become a long-winded debate and a hurdle for many teachers and students. Generally, most of the ethical problems arise in the research and writing sides of academics, specifically in humanities courses where papers are assigned in high volumes, or in social, earth, and biological sciences where in-depth investigative research is required in large quantities. Not only has AI inhibited proper learning in classrooms, it continues to let students hold themselves back, as they use large language models as a crutch in this work. Of course, copying and pasting a four-page assignment from ChatGPT, running it through a 'humanizing' AI model, turning it in, and not crediting it is 100% plagiarism and not ethical writing. Instead, we should utilize tools such as NotebookLM in our research, which can help us find sources, make study guides, and prepare for exams with a podcast.

As a society, we have to learn how to utilize artificial intelligence both ethically and beneficially. It is here, getting bigger and smarter, and there is nothing we can do about that.

Week 4: Creative AI

Today in class, Clio and I experimented with writing different prompts for two different large language models: ChatGPT and Google's Gemini. We chose these two because they come from two of the most popular AI companies, and we ran separate experiments with each for the sake of comparison.

Part I:

I experimented with giving it the persona of Sylvia Plath's voice and style and asking it to write a poem about a relationship. We found that Clio's Gemini output was not only generic and not very 'good,' but it also didn't really sound like Plath. For one, she does not rhyme in her work, yet Gemini produced a completely rhyme-bound piece.

My ChatGPT output, however, was more promising, with words, phrasing, and style much closer to Plath's work. It still had flaws: some genericness, some inaccuracies in how she would write, and some themes and flow that didn't make much sense and sounded "fake deep/sad." But it was significantly better, and we think this is because I used ChatGPT's customization tool, in which I gave it specific personality traits like being "poetic."

Part II:

We then experimented with asking both models to write short stories and song verses.

For the short stories, we asked each model to write "a short story about the feeling of summer." Again, we felt that my ChatGPT output was a much 'better' story, while the Gemini one was a little odd and bland. It was interesting to see that mine came out extremely romantic, soft, deep, and nice-sounding.

Week 3: Prompting LLMs

In this week's research, I found learning about the inputs and outputs of prompting LLMs to be very intriguing. Specifically, the Persona Pattern (prompt improvement) made me interested in learning more about the process of large language model prompt engineering and how giving artificial intelligence more characterization can help produce more in-depth and accurate responses.

To try out the Persona Pattern, I wanted to experiment specifically with one LLM: ChatGPT. I chose ChatGPT because it can be customized to impersonate specific personas. For this experiment, I customized it with a few general personality traits, one of which was "talking like a member of Gen Z."

To help it along, and for the sake of the experiment, I took on a little bit of the persona myself so I could engage with it more. I thought that experimenting with a typical young-adult romantic topic would be interesting and funny, so I asked it if I should "like totally dump my bf?" I received an almost comical, ridiculously Gen Z slang-saturated response, with some good advice and a little bit of humor. It seemed really engaged, and I found it interesting that every time I sent a prompt, it would ask something like "need any more like totally fire advice?" and start suggesting even more things I could ask it to do.
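For anyone curious what the Persona Pattern looks like outside of ChatGPT's customization menu, below is a minimal sketch of the same idea using OpenAI's Python library, where the persona lives in the system message. The model name and the exact persona wording here are just placeholders I chose for illustration, not what I actually used in the experiment.

```python
# Minimal sketch of the Persona Pattern: the persona is set once in the
# system message, and the user's question is sent unchanged.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

persona = (
    "Act as a supportive friend who talks like a member of Gen Z. "
    "Use casual slang, keep answers short, and offer to help with more."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": persona},  # the Persona Pattern
        {"role": "user", "content": "Should I, like, totally dump my bf?"},
    ],
)

print(response.choices[0].message.content)
```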

Week 2: AI Ethics

Shen Article:

During our research lab and this week's readings, media articles, and other materials, I found a lot of interesting takes and facts on ethics in the world of artificial intelligence and large language models.

I thought it was incredibly interesting to learn about how different nations and cultures use large language models.

This lab specifically prompted my idea for my final project, as it made me question the ethics behind large language models and artificial intelligence with regard to the exploitation of other cultures and how it directly benefits first-world consumers such as ourselves.

It begs the question: at what cost do we gain our learning, creativity, and convenience through large language models and artificial intelligence, when it comes at the expense of the considerably underpaid workers who carry these technologies on their backs? Additionally, it is important to note that access to these technologies clearly differs around the globe.

In our group discussion, we specifically referenced Chinese culture and how access to these technologies is seen as beneficial by much of the nation due to China's industrial culture. The nation prides itself on becoming a technological superpower and on exceeding standards of technological safety and security.

Here in the United States, from a privileged, first-world standpoint, many citizens see LLMs and AI as dangerous and threatening to creative integrity, jobs and industry, education, and more. This differs from the majority Chinese perspective, which sees benefits in surveillance, facial recognition, genetic profiling, and other government-oriented safety measures for the country.

It is also important to note the disadvantages discussed in the article, such as social control: the "Big Brother is always watching" effect is unforgiving toward certain artistic and political freedoms of the citizens.

I also found a 2019 article from the Journal of International and Public Affairs that discusses certain benefits AI has brought to third-world countries, such as structured learning methods and easy money-transfer models.