Post 3: Writing with AI

I took a digital content creation class last semester and learned how to use AI more effectively and efficiently while avoiding bias. The best way to prompt AI is to be specific and provide context so the machine understands the request better. For example, our readings frequently mention how bias impacts results. The internet is full of biased sites and sources carrying inaccurate news, and AI platforms pull information from these biased sites. This means that nearly any piece of information a user finds could be inaccurate, which is why it's so important to word questions and statements to the machine carefully.

After breaking into groups, we used Google's Gemini, a more advanced AI model designed to generate larger, more detailed responses. The people in my group were from Ohio, so we decided to ask about the Browns' history as a football franchise. We found that the AI's answer contained bias too. For example, it said that the Browns had the most championships, which is not true, since the NFL was formed after their original franchise was established. During these findings we discovered that Gemini was pulling information from Bleacher Report, which is usually very reliable, and that made things more confusing. I later looked at the source, and its facts were correct; Gemini simply misrepresented them.

This is proof that AI frequently hallucinates and is not always accurate. We discussed this in class on Tuesday with our readings on how and why AI hallucinates. Hallucinations can come from things like bias, misinformation, and inaccurate data. We also noted that AI is bad with numbers, and this was a prime example of it.

What Gemini Said:

The history of the Cleveland Browns is a unique saga in American sports, defined by early absolute dominance, a heartbreaking mid-1990s relocation, and a modern era of rebuilding.

Founding and Early Dominance (1946–1962)

The franchise was founded in 1946 as a charter member of the All-America Football Conference (AAFC). The team was named after its first head coach, Paul Brown, a revolutionary figure who introduced many modern football staples, such as film study, the practice squad, and the draw play.

  • AAFC Dynasty: The Browns won the league championship in all four years of the AAFC’s existence (1946–1949).
  • NFL Transition: When the team joined the NFL in 1950, they immediately silenced skeptics by winning the NFL Championship in their first season. Led by quarterback Otto Graham, they reached the title game every year from 1950 to 1955, winning three of them.
  • Integration: In 1946, Paul Brown helped break the professional football color barrier by signing Bill Willis and Marion Motley, a year before Jackie Robinson’s MLB debut.

The Jim Brown Era and Last Title (1957–1995)

After Otto Graham retired, the Browns drafted fullback Jim Brown in 1957, who is widely considered one of the greatest players in NFL history.

  • 1964 Championship: Under coach Blanton Collier, the Browns defeated the Baltimore Colts 27-0 to win the 1964 NFL Championship. This remains the franchise’s last league title.
  • The “Kardiac Kids” & Bernie Kosar: The team saw a resurgence in the late 1970s and 80s. Quarterback Brian Sipe led the “Kardiac Kids” to thrilling last-minute wins, and later, Bernie Kosar led the team to three AFC Championship games in the late 80s—all of which ended in painful losses (including “The Drive” and “The Fumble”).

Relocation and the “Deactivation” (1995–1999)

In a move that shocked the sports world, owner Art Modell announced in 1995 that he was moving the team to Baltimore.

  • The Settlement: Following intense legal battles and fan protests, a unique agreement was reached: Modell would move his personnel to Baltimore to form a “new” team (the Baltimore Ravens), but the Browns’ name, colors, and 50-year history would remain in Cleveland.
  • Hiatus: The franchise was officially “deactivated” for three seasons while a new stadium was built on the site of the old Cleveland Municipal Stadium.

The Expansion Era (1999–Present)

The Browns returned to the field in 1999 as an expansion team under owner Al Lerner.

  • Modern Struggles: Since their return, the team has struggled to find consistent success, famously enduring a 0–16 season in 2017.
  • Recent Years: The franchise has shown signs of a turnaround, reaching the playoffs in 2020 and 2023. The team is currently owned by Jimmy and Dee Haslam.

Summary of Championships

  • AAFC Championships: 1946, 1947, 1948, 1949
  • NFL Championships: 1950, 1954, 1955, 1964

Citation: “Cleveland Browns History.” Gemini, Google, 24 May 2024, gemini.google.com/app.

Post 2: AI Ethics

AI feels like something that is just helping us, but the more I learn, the more I realize it can actually cause real problems if we aren't paying attention. The concern I focused on is bias in AI, like how it can treat people differently based on race or gender.

The most surprising thing I learned was how facial recognition systems are a lot less accurate for darker-skinned women compared to lighter-skinned men. That is a mistake that can affect real-life situations like criminal investigations, job hiring, or security decisions. It made me realize that AI isn't neutral; it's trained on data created by humans, and if that data has any bias, the AI will reflect it.

Another thing is how important prompting is. If you ask a vague question, you get back a basic answer, but if you are specific and ask for multiple perspectives, the responses are a lot better. I wouldn't fully rely on AI for anything really important without checking it first, because it can still sound confident even when it's wrong.

Overall, AI is useful and powerful, but it is not perfect. If we don’t question it, we could end up trusting systems that reinforce the same problems that we are trying to fix.

AI Ethics in the Modern Day

During class, my group and I researched AI hiring and the tools it uses. We found a lot of shocking new information that I was not aware of. The biggest finding was about male versus female applicants: the AI tools are more biased toward the male applicant and the male application.

Now, the tool does take both applicants through the whole process, but overall, in the end, it is more biased toward the male applicant than the female. I also found that AI only matches people to a fitting job 40-67% of the time. I think this is a bad thing for AI to be doing, because a third to more than half of the time people are not going to end up doing what they applied for or what their job fit actually is. I do think, though, that in the near future more and more companies will pick up AI hiring, and it will become more accurate and better through the years. I think it will keep developing, but only with time and trial.

Something useful I did learn is that in the hiring process, AI saves humans a ton of time by going through hundreds or thousands of applications, where humans can only do so much at a time. Also, in resume screening, AI has a 92-94% accuracy rate at picking the best candidates for a specific opening.

ChatGPT information verified by: https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine

https://arxiv.org/abs/2407.20371

AI You’re Already Using Without Realizing It

For the most part, people tend to think of artificial intelligence as something futuristic, like robots, ChatGPT, or self-driving cars. But the strange thing I found out during our research lab is that most people are already using artificial intelligence every single day without even realizing it. It is just running in the background of the apps we use daily.

 One of the things that really caught my attention is the music streaming services that we all use, like Spotify. The playlists that we get from Spotify, like Discover Weekly or Daily Mix, are not random. They are generated based on our listening habits compared to millions of users across the platform. In a way, artificial intelligence is dictating the type of music that we will probably listen to without even realizing it. 
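The idea behind those playlists is usually some form of collaborative filtering: compare your listening history to other users' and suggest what similar listeners played. Here is a minimal sketch of that idea with made-up play counts and track names (Spotify's real system is far more sophisticated):

```python
# Toy collaborative filtering: recommend tracks that listeners with
# similar play-count profiles enjoyed. All data is invented.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two play-count dicts."""
    tracks = set(a) | set(b)
    dot = sum(a.get(t, 0) * b.get(t, 0) for t in tracks)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(me, others):
    """Rank tracks I haven't heard, weighted by listener similarity."""
    scores = {}
    for other in others:
        sim = cosine(me, other)
        for track, plays in other.items():
            if track not in me:
                scores[track] = scores.get(track, 0.0) + sim * plays
    return sorted(scores, key=scores.get, reverse=True)

me = {"song_a": 10, "song_b": 5}
others = [
    {"song_a": 8, "song_b": 4, "song_c": 6},  # very similar taste to mine
    {"song_d": 9},                            # no overlap with my listening
]
print(recommend(me, others))  # song_c ranks above song_d
```

Even this toy version shows why the influence is invisible: the recommendation is just arithmetic over everyone's habits, with no obvious "decision" anywhere.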

The ethical issue that really stood out to me, and the biggest surprise of my research, is how invisible this influence can be. When AI systems make decisions for us, for instance about the type of music or news we are exposed to, they can, without our knowledge, influence our preferences as well as our worldview. Sometimes AI does not feel like "technology making decisions"; it just feels as though the app knows you really well.

Another thing I learned from the research lab is that the way you talk to the AI really matters. One very helpful technique for interacting with large language models is giving them context. Instead of asking vague questions, giving them specific prompts can really work for you.

AI is becoming part of everyday life, often quietly in the background. The more we understand how it works, the better we can decide how much influence we want it to have.

Source 

MIT Technology Review. “How Recommendation Algorithms Work.” 

Spotify Engineering. “How Discover Weekly Works.” 

Bias and Misidentification in AI Facial Recognition

In the research lab, my group decided to look into the ethical issues related to AI facial recognition technology. I found that there are two major problems with its use:

First, facial recognition technology is not equally accurate for all people. Research indicates that it performs better on lighter-skinned male faces than on women or darker-skinned individuals. The National Institute of Standards and Technology conducted a study indicating that some programs have higher false positive rates for certain racial and ethnic groups. This means the system is more likely to incorrectly match a person with someone else's face in the database.

Another problem is misidentification, which can lead to the wrong person being arrested. There are already reports of wrongful arrests caused by a facial recognition system incorrectly identifying someone, and the American Civil Liberties Union (ACLU) has documented many such cases.
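The "false positive rate" metric behind those NIST findings is simple to compute: of all the comparisons that should NOT match, how many did the system wrongly flag as a match? Here is a small sketch with invented outcomes for two hypothetical demographic groups (the numbers are illustrative, not NIST's):

```python
# Per-group false positive rate, with made-up match results.
# A "false positive" is a non-match the system wrongly flagged as a match.
def false_positive_rate(results):
    """results: list of (is_actual_match, system_said_match) pairs."""
    non_matches = [r for r in results if not r[0]]
    if not non_matches:
        return 0.0
    wrong = sum(1 for actual, said in non_matches if said)
    return wrong / len(non_matches)

# Hypothetical outcomes: 100 non-matching comparisons per group.
group_a = [(False, False)] * 98 + [(False, True)] * 2    # 2 wrong flags
group_b = [(False, False)] * 90 + [(False, True)] * 10   # 10 wrong flags

print(false_positive_rate(group_a))  # 0.02
print(false_positive_rate(group_b))  # 0.1
```

When the rate for one group is several times higher than for another, as in this toy data, members of that group face a much higher chance of being wrongly matched, which is exactly the fairness problem the studies describe.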

The most surprising thing I learned was how widely this technology is already being used even though researchers have found major accuracy and fairness problems. This made me realize that AI systems like facial recognition can have serious real-world consequences when they are used in law enforcement and security systems.

  1. https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt (NIST)
  2. https://www.aclu.org/issues/privacy-technology/surveillance-technologies/face-recognition-technology

AI as a Hiring Tool

In today’s lab, our group decided to research AI as a hiring tool. I had never really heard about it before coming to this class; I’m not sure if that’s because I come from a different culture where it isn’t used as much, or it’s just not talked about enough. This was honestly really surprising and scary at the same time, especially after Professor Hayward told us about the “Kevin situation.”

Since many AI tools are trained on historical hiring data, they learn patterns from it. In this case, Kevin was the most common name at the company, so the AI tool automatically ranked Kevin as the best candidate for the job. This makes me wonder: if AI keeps growing and is used in important situations like this while promising efficiency, it could cause more harm and unfairness to people than good.
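The "Kevin situation" can be illustrated with a toy scorer: if a naive model is trained on a list of past hires, it can latch onto a spurious feature like a first name simply because that name happens to be common in the data. All names and records below are made up for illustration:

```python
# Toy illustration of the "Kevin" problem: a naive scorer "learns" that
# the most common first name among past hires predicts success, which is
# a spurious pattern in the training data, not real merit.
from collections import Counter

past_hires = ["Kevin", "Kevin", "Kevin", "Maria", "James"]
name_counts = Counter(past_hires)

def naive_score(applicant_name):
    """Score an applicant purely by how often their name appears
    among past hires -- an accident of history, not a qualification."""
    return name_counts.get(applicant_name, 0)

applicants = ["Aisha", "Kevin", "Maria"]
ranked = sorted(applicants, key=naive_score, reverse=True)
print(ranked)  # Kevin ranks first even though a name says nothing about skill
```

Real hiring models are more complex, but the failure mode is the same: whatever pattern dominates the historical data, fair or not, gets baked into the rankings.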

In this lab we also split into two groups, one working with ChatGPT and the other with NotebookLM, which made me notice some differences. ChatGPT answered in shorter, summarized bullet points, while NotebookLM gave longer responses. ChatGPT also kept repeating the same source page, just posted by different people, while NotebookLM gave more diverse responses. This led me to conclude that ChatGPT is more efficient at making short notes with bullet points, lists, workouts, etc., but NotebookLM seems more useful for actual learning and curiosity.

https://arxiv.org/abs/1906.09208 – the source ChatGPT kept repeating

https://jier.org/index.php/journal/article/view/3262