Academic Writing

When it comes to AI use in the classroom and in our professional lives, how do we determine how much is too much?
AI use becomes too much when it replaces our critical thinking. AI should be a tool, not a crutch.

Where are the grey areas in AI use, whether ethical, creative, or otherwise?
In terms of ethical grey areas, there are a few. Where exactly is the line between AI help and plagiarism? Do the students who have access to better AI have an unfair advantage? Is the use of AI checkers ethical? Is it fair to have an algorithm grade creative, academic assignments? Is it ethical to use AI in hiring practices?

How do we communicate those issues adequately in our policies or our guidelines?
Communicating these issues in policies and guidelines requires at least an adequate understanding of the purpose and capabilities of the technology in question. Each of these questions relies on an understanding of what AI can do in each scenario.

Could AI's role as an accessibility tool affect how we approach our policies and guidelines?
Policies and guidelines should always be crafted with fairness in mind. This means that policies should recognize the nature of AI as a tool when used properly. AI can serve as an accessibility tool, whether by broadening access to education or by supporting people with disabilities. Policies can and should be shaped by that fact.

How will current and future jobs be impacted by these policies?
Current and future jobs will see some restructuring, and perhaps even growing pains, as AI increasingly finds its way into the workforce. It is my belief that once various stigmas around the use of AI are removed and the performance of AI in various professions improves somewhat, we will begin to see these policies take effect. We will see a more efficient workforce and field of academia capable of far more than before AI. This starts with transparency in AI usage, as Tang et al. discuss. Specifically, it starts with the “declaration of generative AI usage” (Tang et al., 317) in academic and professional settings.

AI and Creativity

For my AI prompt, I asked AI to write song lyrics in the style of Gregory Alan Isakov. The image below contains the lyrics of an actual Gregory Alan Isakov song, “All Shades of Blue,” for reference:

I prompted Claude AI to do the following: “Write a song in the style of Gregory Alan Isakov.” Below is Claude’s attempt:

It gets the themes correct; these themes are present in various Gregory Alan Isakov songs. In essence, the style is correct. However, the way the AI puts these themes together feels, to me, lifeless, and it went a little overboard on nature imagery. The AI is trying to capture Gregory Alan Isakov’s profound ability to use clever, folksy abstractions that somehow connect to common human experience, but it falls short in small ways. I also ran tests with Gemini and ChatGPT, and all of these models have a general idea of Gregory Alan Isakov’s style. That being said, I believe creativity relies on novelty and value; requiring that it also be surprising is too much of a stretch, since the world is mostly built on tiny increments of innovation. Therefore, I believe this to be creative output. None of these lyrics are copied from actual songs. The AI model knows the general style of Gregory Alan Isakov, and it is writing an original song from its general idea of what Gregory Alan Isakov could sound like. The song is new. While the song may not have much value to me, I believe the writing is technically and thematically strong enough to possibly be valuable to others.

Prompting LLMs Lab

Here is the prompt I used: “Please act as a financial advisor. I would like you to ask me questions to achieve an optimal portfolio allocation. You should ask questions one at a time until you have sufficient information to make your recommendation.” This is an example of the flipped interaction prompt pattern (page 6 of the White et al. article). This type of prompt would be useful for people seeking a solution to a goal they do not exactly know how to reach on their own. I tested this prompt on ChatGPT. Here is the first response I received: “Great! Let’s start with a few key questions to determine the best portfolio allocation for you.

1. What is your primary investment goal?

– Wealth accumulation for long-term growth

– Stable income (e.g., dividends, bonds)

– Capital preservation (avoiding major losses)

– A mix of growth and stability

Your answer will help shape the risk-return balance in your portfolio.” 
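A flipped-interaction session like the one above can be sketched as a simple conversation loop. This is my own illustrative sketch, not code from the White et al. article; the message format mimics common chat-API conventions, and `start_session` and `record_turn` are hypothetical helper names.

```python
# Minimal sketch of the flipped interaction pattern: the opening prompt
# instructs the model to ask the questions, and each question/answer
# exchange is appended to the history until the model has enough
# information to make its recommendation.

FLIPPED_PROMPT = (
    "Please act as a financial advisor. I would like you to ask me "
    "questions to achieve an optimal portfolio allocation. You should "
    "ask questions one at a time until you have sufficient information "
    "to make your recommendation."
)

def start_session():
    """Begin a flipped-interaction conversation history."""
    return [{"role": "user", "content": FLIPPED_PROMPT}]

def record_turn(history, model_question, user_answer):
    """Append one question/answer exchange to the history."""
    history.append({"role": "assistant", "content": model_question})
    history.append({"role": "user", "content": user_answer})
    return history

# Example: the model asks about goals (as ChatGPT did above), the user answers.
history = start_session()
record_turn(history, "What is your primary investment goal?",
            "Wealth accumulation for long-term growth")
print(len(history))  # 3 messages so far: prompt, question, answer
```

In a real session, each `model_question` would come from an LLM API call on the current `history`; the key design point of the pattern is that the roles are flipped, so the user supplies answers rather than questions.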

Week 2: AI Ethics (Patrick)

Throughout this week of class, we discussed many AI topics that have significant societal implications. Many of these discussions concerned LLMs along with their issues of biased training data, questionable usage, environmental impacts, misinformation, and accessibility concerns. However, I would like to address the idea of AI understanding, as brought up in the Bender article. We are quickly entering a time where the argument that AI lacks understanding is increasingly coming under scrutiny. If we look at agentic AI, where AI agents complete digital tasks for users, we see a framework that questions our initial idea of understanding with LLMs and AI. Salesforce, a company making significant progress on agentic AI, defines agentic AI as follows: “Agentic AI software is a type of artificial intelligence (AI) that can operate independently, making decisions and performing tasks without human intervention. These systems are able to learn from their interactions and adapt to new situations, improving their performance over time.” Interestingly, Salesforce agentic AI follows this four-step framework:

1. Perceive: “AI agents gather and decode information from sources like sensors, databases, and interfaces to turn data into insights. They pinpoint meaningful patterns and extract what’s most relevant in their environment.”

2. Reason: “A LLM guides the reasoning process — understanding tasks, crafting solutions, and coordinating specialized models for jobs like content generation or image analysis.”

3. Act: “Agents perform tasks by connecting with external systems through APIs. Built-in guardrails ensure safety and compliance — like limiting insurance claim processing to specific amounts before human review.”

4. Learn: “Agents evolve through feedback and get better with every interaction to refine decisions and processes. This continuous improvement drives smarter performance and greater efficiency over time.”

So, AI is increasingly challenging our understanding of its capabilities. Agentic AI also poses complex ethical questions that build on our current understanding of the ethics of LLMs, because agentic AI incorporates LLM capabilities. Source: https://www.salesforce.com/agentforce/what-is-agentic-ai/#how-does-agentic-ai-work
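The perceive, reason, act, learn cycle that Salesforce describes can be illustrated with a toy agent loop. This is my own sketch, not Salesforce code; every name here (`perceive`, `reason`, `act`, `learn`, the `CLAIM_LIMIT` threshold) is hypothetical, and the guardrail simply mirrors the insurance-claim example from the quote.

```python
# Toy agent loop illustrating the perceive -> reason -> act -> learn cycle.
# The guardrail mirrors the quoted example: claims above a limit are
# escalated for human review instead of being auto-approved.

CLAIM_LIMIT = 1000  # hypothetical guardrail threshold (dollars)

def perceive(raw_event):
    """Turn raw input into a structured observation."""
    return {"type": "insurance_claim", "amount": raw_event["amount"]}

def reason(observation):
    """Decide on an action (an LLM would guide this step in a real system)."""
    if observation["amount"] <= CLAIM_LIMIT:
        return "approve"
    return "escalate_to_human"

def act(decision):
    """Execute the decision, e.g. via an external API call."""
    return {"action": decision, "status": "done"}

def learn(log, outcome):
    """Record the outcome so future decisions can be refined."""
    log.append(outcome)
    return log

log = []
for event in [{"amount": 400}, {"amount": 5000}]:
    observation = perceive(event)
    decision = reason(observation)
    outcome = act(decision)
    learn(log, outcome)

print([o["action"] for o in log])  # ['approve', 'escalate_to_human']
```

The ethical questions follow directly from this structure: the `act` step touches real external systems without a human in the loop, which is exactly why the guardrail in `reason` matters.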

Patrick Johnson Introduction

Hi, my name is Patrick Johnson. I use he/him pronouns. I am a senior computer science major. As for hobbies, I enjoy playing guitar, watching TV and movies, exercising, and hiking. I have some background on the technological side of AI through my major and through work as a software developer focusing on AI. However, I will be attending law school next fall for intellectual property and technology law, and I would like to dive deeper into the ethical and societal implications of AI, which is exactly what this course covers. That is why I am taking it.
AI is going to spur societal restructuring. LLMs have certainly started this, but agentic AI, which is not far off, will bring an entirely new set of ethical and practical considerations. Regardless of the type of AI, I believe AI in general will have a significant impact on income distribution, our sense of purpose in professional life, and the way we approach work.