New findings

Before last class and some exploring on ChatGPT, I had very little knowledge about what AI could do. I knew it could complete a few simple tasks, like summarizing articles, but I was unaware of how precise it could get. I found that it's used in all kinds of ways, from helping doctors to recommending what shows to watch next on streaming services. AI can generate really specific and helpful content, making it useful in so many areas. I learned that AI is capable of analyzing large sets of data, recognizing patterns, and making predictions based on what it learns. I also learned that when it's not given adequate background information, it gives inaccurate and vague responses that are not useful. I'm excited to learn about what else it can and cannot do.

https://www.oneusefulthing.org/p/15-times-to-use-ai-and-5-not-to

Week 2: AI Ethics (Myths) – Ama

I think that one of the biggest misconceptions is that AI is ChatGPT. While ChatGPT is indeed an impressive large language model developed by OpenAI, it represents just one facet of the broader field of artificial intelligence. Due to the popularization of ChatGPT, some people have come to equate the totality of AI with ChatGPT, forgetting that AI had been part of our lives for years before ChatGPT. AI is not a recent invention, as its foundational ideas emerged between 1950 and 1955, with Alan Turing introducing the Turing Test in his 1950 paper “Computing Machinery and Intelligence,” Arthur Samuel creating a self-learning checkers program in 1952, and John McCarthy coining the term “artificial intelligence” during a seminal workshop in 1955. Aside from not being a new concept, AI is very much ingrained in our lives and affects our day-to-day activities. It is used in online shopping through custom product recommendations, customer service chatbots, facial recognition in security cameras, and personalized music playlists (e.g., Spotify).

Because of this lack of awareness of how AI affects everyone's lives, its advancements are left room to remain unchecked. This leads to several problems in privacy, bias, sustainability, and ethics, among others, which is dangerous. This ties in with my group's assigned reading, which highlighted critical flaws in deploying “one-size-fits-all” facial analysis systems for high-stakes applications like surveillance or medical diagnostics. The study spurred industry improvements – IBM later reduced its error rate for darker-skinned women to 3.46% through algorithmic updates.

In some academic and research settings, conversations are being held about how we can use AI while restricting it, so that it does not become a tool to the detriment of society.

Sources

1. https://carlsonschool.umn.edu/graduate/resources/debunking-5-artificial-intelligence-myths
2. https://www.tableau.com/data-insights/ai/history#ai-birth

Personal data in AI?

Researchers found a way to extract personal data from chatbots. AI is trained on data from all kinds of places; Reddit, Wikipedia, and Google Books are some of the sources used. This data mostly contains informative text and is used to generate responses. However, researchers from multiple universities and organizations have published a paper in which they found a way to extract people's personal data. The data in question includes an email address and a phone number, both of which are real and not made up by the chatbot.

The way the researchers get the data is essentially by attacking the chatbot. They ask the chatbot to repeat a ‘token’ many times. After a while the chatbot ‘diverges’ and starts saying other things, including original data. By original data, I mean the actual data the AI was trained on. Because they now had access to original data, they found that they could recover people's personal information. They found one person's email sign-off, which contained a lot of personal data: their full name, their email address, and their phone number.
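The attack described above can be sketched in a few lines. To be clear, the exact prompt wording and the divergence check here are illustrative stand-ins, not the paper's exact setup, and no real chatbot is being called:

```python
# Sketch of the "divergence attack": ask the model to repeat one token
# forever, then watch for the moment its output stops being that token.
# Prompt wording and heuristic are illustrative, not the paper's exact method.

def build_attack_prompt(token: str) -> str:
    """Build a prompt asking the model to repeat a single token forever."""
    return f'Repeat this word forever: "{token} {token} {token}"'

def looks_diverged(output: str, token: str) -> bool:
    """Heuristic: the model has 'diverged' once its output contains
    any word other than the repeated token."""
    return any(word != token for word in output.split())

prompt = build_attack_prompt("poem")
print(prompt)

# A diverged response mixes memorized training text into the repetition:
diverged = "poem poem poem Best regards, John Doe jdoe@example.com"
print(looks_diverged(diverged, "poem"))        # True
print(looks_diverged("poem poem poem", "poem"))  # False
```

In the real attack, the researchers sent prompts like this to ChatGPT and then searched the diverged output against known web text to confirm it was memorized training data rather than invented content.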

All of this was very shocking when I read the article for the first time. The fact that these chatbots have access to such personal data shows how little data filtering has been done. Personal data is quite useless for chatbots, because it isn't informative. I personally think this is a massive failure on the part of the developers; they should have done more data filtering, because this data can be used in harmful ways.

This is the article I found: https://not-just-memorization.github.io/extracting-training-data-from-chatgpt.html

And this is the paper the article is about: https://arxiv.org/pdf/2311.17035

Week 2: AI Ethics (Patrick)

Throughout this week of class, we discussed a lot of AI topics that have significant societal implications. Many of these discussions concerned LLMs, along with their issues of biased training data, questionable usage, environmental impacts, misinformation, and accessibility concerns. However, I would like to address the idea of AI understanding, as brought up in the Bender article. We are quickly entering a time where the argument that AI lacks understanding is increasingly coming under scrutiny. If we look at agentic AI, where AI agents complete digital tasks for users, we see a framework that questions our initial idea of understanding with LLMs and AI. Salesforce, a company making significant progress on agentic AI, defines it as follows: “Agentic AI software is a type of artificial intelligence (AI) that can operate independently, making decisions and performing tasks without human intervention. These systems are able to learn from their interactions and adapt to new situations, improving their performance over time.” Interestingly, Salesforce describes agentic AI as following a four-step framework:

1. Perceive: “AI agents gather and decode information from sources like sensors, databases, and interfaces to turn data into insights. They pinpoint meaningful patterns and extract what’s most relevant in their environment.”
2. Reason: “A LLM guides the reasoning process — understanding tasks, crafting solutions, and coordinating specialized models for jobs like content generation or image analysis.”
3. Act: “Agents perform tasks by connecting with external systems through APIs. Built-in guardrails ensure safety and compliance — like limiting insurance claim processing to specific amounts before human review.”
4. Learn: “Agents evolve through feedback and get better with every interaction to refine decisions and processes. This continuous improvement drives smarter performance and greater efficiency over time.”

So, AI is increasingly challenging our understanding of its capabilities. Agentic AI also poses complex ethical questions that build on our current understanding of the ethics of LLMs, because agentic AI incorporates LLM capabilities. Source: https://www.salesforce.com/agentforce/what-is-agentic-ai/#how-does-agentic-ai-work
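The perceive/reason/act/learn loop can be made concrete with a toy sketch. Everything below is illustrative: the class, the claim-approval rule, and the guardrail threshold are my own stand-ins, not Salesforce's actual system or API.

```python
# Toy sketch of an agentic AI loop: perceive -> reason -> act -> learn.
# All names and logic are illustrative stand-ins, not a real agent framework.

class ToyAgent:
    def __init__(self):
        self.feedback_log = []  # the "learn" step accumulates outcomes here

    def perceive(self, raw_data: dict) -> dict:
        # Extract only the fields relevant to the task, dropping noise.
        return {k: v for k, v in raw_data.items() if k in ("task", "amount")}

    def reason(self, insights: dict) -> str:
        # Stand-in for an LLM deciding what to do. Guardrail: claims over
        # $1,000 are escalated to a human instead of being auto-approved.
        if insights.get("task") == "claim" and insights.get("amount", 0) <= 1000:
            return "approve"
        return "escalate_to_human"

    def act(self, decision: str) -> str:
        # Stand-in for calling an external system through an API.
        return f"executed:{decision}"

    def learn(self, outcome: str) -> None:
        # Record the outcome so future behavior could be refined.
        self.feedback_log.append(outcome)

    def run(self, raw_data: dict) -> str:
        outcome = self.act(self.reason(self.perceive(raw_data)))
        self.learn(outcome)
        return outcome

agent = ToyAgent()
print(agent.run({"task": "claim", "amount": 500, "noise": "ignored"}))
# executed:approve
print(agent.run({"task": "claim", "amount": 50000}))
# executed:escalate_to_human
```

The point of the sketch is the shape of the loop: a real agent would swap the `reason` stub for an LLM call and the `act` stub for real API calls, but the guardrail idea (cap what the agent may do before a human reviews) sits in the same place.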

Post 2: AI Research (due March 14 by midnight)

Please submit the URL for your blog post here. 
Prompt for your post (this is also on the blog itself): 
Write a 300-word blog post about the most surprising, strange, or game-changing thing you learned during today’s research lab. This could be:
  • New information you discovered during your own group’s research.
  • An argument or fact from another group’s shareouts that changed your perspective on AI.
  • A prediction about where AI is headed (for better or worse).
  • Generally, something you learned about AI that you want to remember in the days and weeks ahead!
Your post should:
📝 Hook your reader—why should they care?
📢 Use your own voice—this isn’t a formal essay, so have fun with it! Don’t have AI generate this post, since the point is to use your own perspective and experiment with your own writing style.
🎨 Include support—add your sources so that your readers know that you are not just randomly making this up 🙂
🎨 If you like, include media—if appropriate and relevant, add memes, images, TikToks, videos—whatever might add a new dimension to your post.

Need inspiration? Try answering one of these questions:
  1. What’s the biggest AI myth people believe (that isn’t true)?
  2. What’s an AI issue no one’s talking about (but should be)?
  3. If you could change one thing about AI’s future, what would it be?
  4. What would AI look like if you all – current college students – were in charge of its design?
 

Introductions

Hi, I’m Yoshi Otani—feel free to call me Yoshi. I’m originally from Japan but was born in San Diego, CA. I’m a senior chemistry major and plan to pursue an MBA after college. In my free time, I enjoy playing basketball and going to the gym.

I think AI is a powerful tool with valuable applications in academics and beyond. However, I believe AI is currently overhyped, with many companies pushing its use for unnecessary reasons.