Week 7

a) What current AI-related issues, developments, or decisions do you find especially relevant to contemporary society? Craft a short post to give your classmates an overview of the issues involved and why it’s so important.

Currently, I think there is an issue with how the US is approaching the dissemination of its AI technology. The Biden-Harris Administration implemented an interim final rule (meaning it took effect immediately, without the usual notice-and-comment period) called the AI Diffusion Rule. The AI Diffusion Rule restricts the export of US-made computing chips (GPUs) and the weights of closed-weight (proprietary) AI models trained above a certain compute threshold (10^26 FLOPs). The point of this rule is to protect US national security from AI-driven threats (surveillance, misinformation, weapons) by limiting adversarial countries' access to advanced, US-made AI technology. Furthermore, the rule seeks to maintain or even increase US dominance in the global AI market. However, I believe this rule achieves neither of those goals.
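To get a feel for the scale of that 10^26 FLOP threshold, here is a rough back-of-the-envelope calculation in Python. The per-GPU throughput figure (about 10^15 sustained floating-point operations per second for a modern accelerator) is my own assumption for illustration, not a number from the rule itself:

```python
# Rough scale of the 10^26 FLOP training-compute threshold.
# Assumption (mine, not the rule's): a modern accelerator sustains
# about 1e15 floating-point operations per second during training.

THRESHOLD_FLOPS = 1e26      # training-compute cutoff in the AI Diffusion Rule
GPU_FLOPS_PER_SEC = 1e15    # assumed sustained throughput of one GPU

seconds_single_gpu = THRESHOLD_FLOPS / GPU_FLOPS_PER_SEC
years_single_gpu = seconds_single_gpu / (60 * 60 * 24 * 365)

print(f"One GPU: ~{years_single_gpu:,.0f} GPU-years")                   # ~3,171
print(f"A 10,000-GPU cluster: ~{years_single_gpu / 10_000:.1f} years")  # ~0.3
```

In other words, under these assumptions the threshold only comes into play for training runs on very large clusters, which is exactly the frontier-model territory the rule is aimed at.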

In very broad terms, to actually restrict these exports, the AI Diffusion Rule creates a tiered system that categorizes countries based on their perceived geopolitical alignment with the US. Tier 1 contains 18 trusted allies who get unrestricted access to US AI tech. Tier 2 comprises roughly 150 countries, some of which are EU allies, and these countries face significant limitations on their access to US AI technology. Tier 2 countries are essentially being told that they will never achieve the same level of AI access or capability as the Tier 1 countries. Lastly, Tier 3 countries are adversaries of the US and are unable to access US AI technology at all.

The rule also places limits on where and how much US tech companies can expand their AI operations. While I do not have the word count to explain the exact limitations, they are significant and have pissed off quite a few companies (Oracle, Nvidia, Microsoft, and Google). The main issue is that the AI Diffusion Rule stifles innovation by shrinking the market in which US-based tech companies can raise capital for research and development. This could significantly hurt the chances of the US maintaining its leadership in the global AI market, which would, in turn, hurt US national security. Also, the tiered framework alienates many Tier 2 countries by placing a US-imposed ceiling on their AI ambitions despite some of them being US allies (Israel, Poland, etc.).

President Trump has recently commented on this rule in passing but has not said what changes, if any, will be made. I think this is something important to watch.

https://www.federalregister.gov/documents/2025/01/15/2025-00636/framework-for-artificial-intelligence-diffusion

https://www.cfr.org/blog/what-know-about-new-us-ai-diffusion-policy-and-export-controls

b) How will you integrate AI in your life – if at all? 

Capable generative AI is a new technology whose full potential and applicability are not yet entirely understood, and it is evolving rapidly. Nonetheless, I believe in a few core principles that should remain consistent. My principles for AI use in academia are transparency, citation consistency, AI literacy, and equitable access. I hold these principles in the hope that the stigma around the use of AI in academia can be reduced. No matter how the technology evolves, these principles should remain constant in the academic use of generative AI.

Transparency involves acknowledging the specific role AI played in one's work. Citation consistency refers to citing AI use consistently; while citation style may differ across disciplines, the goal of transparency should be met in every case. AI literacy refers to a baseline understanding of the benefits of AI (research, brainstorming, etc.) and its limitations (bias, hallucination, data privacy issues, etc.); anyone who uses AI in academia should possess this foundational knowledge in order to use it ethically and effectively. Lastly, equitable access to AI is key to having its use accepted in an academic domain where a fair playing field matters. I plan to integrate AI into my life in this manner as I continue my academic studies.

Academic Writing

When it comes to AI use in the classroom, in your professional lives, how do we determine how much is too much?
AI use becomes too much when it replaces our critical thinking. AI should be a tool, not a crutch.

Where are the grey areas in use ethically, creatively, etc?
In terms of ethical grey areas, there are a few. Where exactly is the line between AI help and plagiarism? Do the students who have access to better AI have an unfair advantage? Is the use of AI checkers ethical? Is it fair to have an algorithm grade creative, academic assignments? Is it ethical to use AI in hiring practices?

How do we communicate those issues in our policies or our guidelines adequately?
Communicating these issues in policies and guidelines requires at least an adequate understanding of the purpose and capabilities of the technology in question. Each of these questions relies on an understanding of what AI could do in each scenario.

Could AI as an accessibility tool be impacted in how we approach our policies and guidelines?
Policies and guidelines should always be crafted with fairness in mind. This means that policies should recognize the nature of AI as a tool when used properly. AI can be used as an accessibility tool, whether that means broadening access to education or supporting those with disabilities. Policies can and should be shaped by that fact.

How will current and future jobs be impacted by these policies?
Current and future jobs will see some restructuring, and perhaps even growing pains, as AI increasingly finds its way into the workforce. It is my belief that once various stigmas around the use of AI are removed and the performance of AI in various professions improves a bit more, we will begin to see these policies take hold. We will see a more efficient workforce and a field of academia that can do far more than before AI. This starts with transparency in AI usage, as Tang et al. discuss. Specifically, it starts with the "declaration of generative AI usage" (Tang et al., 317) in academic and professional settings.

AI and Creativity

For my AI prompt, I asked AI to write song lyrics in the style of Gregory Alan Isakov. The image below contains the lyrics of an actual Gregory Alan Isakov song, "All Shades of Blue," for reference:

I prompted Claude AI to do the following: “Write a song in the style of Gregory Alan Isakov.” Below is Claude’s attempt:

It gets the themes right; these themes are present in various Gregory Alan Isakov songs, so in essence the style is correct. However, the way the AI puts these themes together feels, to me, lifeless. It also went a little overboard on the nature imagery. The AI is trying to capture Gregory Alan Isakov's profound ability to use clever, folksy abstractions that somehow connect to common human experience, but it falls short in small ways. I also ran tests with Gemini and ChatGPT, and all of these models have a general idea of Gregory Alan Isakov's style. That being said, I believe creativity relies on novelty and value; requiring that it also be surprising is too much of a stretch, since the world is mostly built on tiny increments of innovation. Therefore, I believe this to be creative output. None of these lyrics are copied from actual songs. The AI model knows the general style of Gregory Alan Isakov, and it is writing an original song from its general idea of what he could sound like. The song is new. While the song may not have much value to me, I believe the writing is technically and thematically strong enough to possibly be valuable to others.

Prompting LLMs Lab

Here is the prompt I used: "Please act as a financial advisor. I would like you to ask me questions to achieve an optimal portfolio allocation. You should ask questions one at a time until you have sufficient information to make your recommendation." This is an example of the flipped interaction prompt pattern (page 6 of the White et al. article). This type of prompt is useful for people seeking a solution to a goal they do not exactly know how to reach on their own. I tested this prompt on ChatGPT. Here is the first response I received: "Great! Let's start with a few key questions to determine the best portfolio allocation for you.

1. What is your primary investment goal?

– Wealth accumulation for long-term growth

– Stable income (e.g., dividends, bonds)

– Capital preservation (avoiding major losses)

– A mix of growth and stability

Your answer will help shape the risk-return balance in your portfolio.” 
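The same pattern can also be scripted rather than typed into the chat window. Below is a minimal sketch of a flipped interaction loop, assuming the OpenAI Python client; the model name and the "RECOMMENDATION:" stop convention are my own illustrative choices, not part of the White et al. pattern:

```python
# Minimal sketch of the flipped interaction pattern: the system prompt
# tells the model to drive the conversation by asking one question at a
# time. Assumes the OpenAI Python client; the model name and the
# "RECOMMENDATION:" convention are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": (
        "Act as a financial advisor. Ask me questions, one at a time, "
        "until you have enough information to recommend a portfolio "
        "allocation. When ready, start your reply with 'RECOMMENDATION:'."
    )},
    {"role": "user", "content": "I'd like an optimal portfolio allocation."},
]

while True:
    reply = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages
    ).choices[0].message.content
    print(reply)
    if reply.startswith("RECOMMENDATION:"):
        break  # the model has flipped back from questions to an answer
    # Feed the user's answer back in so the model can ask its next question.
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": input("> ")})
```

The key design point is that the user supplies the goal while the model supplies the questions, which is what makes the interaction "flipped."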

Week 2: AI Ethics (Patrick)

Throughout this week of class, we discussed many AI topics that have significant societal implications. A lot of these discussions concerned LLMs and their issues of biased training data, questionable usage, environmental impacts, misinformation, and accessibility concerns. However, I would like to address the idea of AI understanding that was brought up in the Bender article. We are quickly entering a time where the argument that AI lacks understanding is increasingly coming under scrutiny. If we look at agentic AI, where AI agents complete digital tasks for users, we see a framework that challenges our initial idea of understanding with LLMs and AI. Salesforce, a company making significant progress on agentic AI, defines it as follows: "Agentic AI software is a type of artificial intelligence (AI) that can operate independently, making decisions and performing tasks without human intervention. These systems are able to learn from their interactions and adapt to new situations, improving their performance over time." Interestingly, Salesforce's agentic AI follows this four-step framework:

1. Perceive: "AI agents gather and decode information from sources like sensors, databases, and interfaces to turn data into insights. They pinpoint meaningful patterns and extract what's most relevant in their environment."

2. Reason: "An LLM guides the reasoning process — understanding tasks, crafting solutions, and coordinating specialized models for jobs like content generation or image analysis."

3. Act: "Agents perform tasks by connecting with external systems through APIs. Built-in guardrails ensure safety and compliance — like limiting insurance claim processing to specific amounts before human review."

4. Learn: "Agents evolve through feedback and get better with every interaction to refine decisions and processes. This continuous improvement drives smarter performance and greater efficiency over time."

So, AI is increasingly challenging our understanding of its capabilities. Agentic AI also poses complex ethical questions that build on our current understanding of the ethics of LLMs, because agentic AI incorporates LLM capabilities.

Source: https://www.salesforce.com/agentforce/what-is-agentic-ai/#how-does-agentic-ai-work
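To make the four steps concrete, here is a toy sketch of the perceive-reason-act-learn loop in Python. Every function name and detail here is my own illustration of the control flow described in the quote, not Salesforce's actual API; the insurance-claim guardrail example comes from the quote itself:

```python
# Toy sketch of the perceive -> reason -> act -> learn loop described
# above. All names are my own illustration, not Salesforce's API.

def perceive(environment: dict) -> dict:
    """Extract the relevant facts from raw inputs (sensors, DBs, UIs)."""
    return {k: v for k, v in environment.items()
            if k in ("claim_amount", "policy_id")}

def reason(observations: dict) -> str:
    """Stand-in for the LLM step: decide what to do from observations."""
    # Guardrail from the quote: small claims auto-process, large ones
    # escalate to human review before any money moves.
    return "auto_approve" if observations["claim_amount"] <= 1000 else "human_review"

def act(decision: str) -> str:
    """Carry out the decision, e.g., by calling an external API."""
    return f"executed: {decision}"

feedback_log = []  # "learn": keep outcomes so future decisions can be tuned

for env in [{"claim_amount": 250, "policy_id": "A1"},
            {"claim_amount": 5000, "policy_id": "B2"}]:
    obs = perceive(env)
    decision = reason(obs)
    outcome = act(decision)
    feedback_log.append((obs, decision, outcome))  # learn from each interaction
    print(outcome)
```

Even in this toy form, you can see why the ethical questions compound: the "reason" step is an LLM, so every known LLM issue (bias, hallucination) now feeds directly into autonomous actions.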

Patrick Johnson Introduction

Hi, my name is Patrick Johnson. I use He/Him pronouns. I am a senior and computer science major. As for hobbies…I enjoy playing guitar, watching TV/Movies, exercising, and hiking. I have some background with the technological side of AI through my major and some work as a software developer focusing on AI. However, I will be attending law school next fall for intellectual property/technology law, and I would like to dive deeper into the ethical and societal implications of AI as this course has planned. That is why I am taking this course.
AI is going to spur societal restructuring. LLMs have certainly started this, but there will be an entirely new set of ethical and practical considerations with Agentic AI. Agentic AI is not too far off. Nonetheless, regardless of the type of AI, I believe AI in general will have a significant impact on income distribution, our purpose in professional life, and the way we approach work.