Week 7

a) What current AI-related issues, developments, or decisions do you find especially relevant to contemporary society? Craft a short post to give your classmates an overview of the issues involved and why it’s so important.

Currently, I think there is an issue with how the US is approaching the dissemination of its AI technology. The Biden-Harris Administration implemented an interim final rule (meaning it took effect immediately, with public comment only after the fact) called the AI Diffusion Rule. The rule restricts the export of US-made computing chips (GPUs) and closed-weight (proprietary) AI models above a certain computational threshold (10^26 FLOPs). The point of the rule is to protect US national security against AI-driven threats (surveillance, misinformation, weapons) by limiting adversarial countries' access to advanced, US-made AI technology. Furthermore, the rule seeks to maintain or even increase US dominance in the global AI market. However, I believe it achieves neither of those goals.

In very broad terms, to actually restrict these exports, the AI Diffusion Rule creates a tiered system that categorizes countries based on their perceived geopolitical alignment with the US. Tier 1 contains 18 trusted allies that get essentially unrestricted access to US AI tech. Tier 2 comprises roughly 150 countries, some of them EU allies, which face significant limits on their access to US AI technology. Tier 2 countries are essentially being told that they will never achieve a level of AI access or capability comparable to Tier 1 countries. Lastly, Tier 3 countries are US adversaries and are cut off from US AI technology entirely.

The rule also places limits on where and how much US tech companies can expand their AI operations. While I do not have the word count to explain the exact limitations, they are significant and have angered quite a few companies (Oracle, Nvidia, Microsoft, and Google). The main issue is that the AI Diffusion Rule stifles innovation by shrinking the market in which US-based tech companies can raise capital for research and development. This could significantly hurt the chances of the US maintaining its leadership in the global AI market, which would, in turn, hurt US national security. The tiered framework also alienates many Tier 2 countries by placing a US-imposed ceiling on their AI ambitions despite some of them being US allies (Israel, Poland, etc.).

Trump has recently commented on this rule in passing but has not announced what changes will be made. I think this is something important to watch.

https://www.federalregister.gov/documents/2025/01/15/2025-00636/framework-for-artificial-intelligence-diffusion

https://www.cfr.org/blog/what-know-about-new-us-ai-diffusion-policy-and-export-controls

b) How will you integrate AI in your life – if at all? 

Capable generative AI is a new field whose full potential and applicability are not yet well understood, and the technology is evolving rapidly. Nonetheless, I believe in a few core principles that should remain consistent. My principles for AI in academic policy are transparency about AI use, citation consistency, AI literacy, and equitable access. I hold these principles in the hope that the stigma around the use of AI in academia can be reduced. No matter how the technology evolves, these principles should remain constant in the academic use of generative AI.

Transparency involves acknowledging the specific role AI played in one's work. Citation consistency means citing AI contributions in a consistent way; while citation style may differ across disciplines, the goal of transparency should always be met. AI literacy refers to a baseline understanding of AI's benefits (research, brainstorming, etc.) and limitations (bias, hallucination, data privacy issues, etc.). Those who use AI in academia should possess this foundational knowledge in order to use it ethically and effectively. Lastly, equitable access to AI is key to having its use accepted in the academic domain, where a level playing field matters. I plan to integrate AI into my life in this manner as I continue my academic studies.

Integrating AI in My Life

I believe that my use of AI will increase overall after taking this course, though the uses will be very nuanced. The ways we learned to use AI in class have the potential to become very reliable. That may not be the case currently, as a lot of training still has to be done before LLMs are free of hallucinations. In most cases, if you need to fact-check an AI's response (which you should do anyway), it is often easier to confirm the research on your own.

As for how I would or would not use AI in the future: I will not use it to summarize literature I have not yet read. I am not confident that an AI-generated summary will capture every key point in the text. I do think AI could be useful for summarizing writing you already understand reasonably well; that would serve as a quick way to remind yourself of something you may have forgotten. Another simple use for AI is getting ideas on how to organize drafts or create lists. These simple tasks would be a way to start something that would otherwise take longer without AI.

What’s Next – Derrick Jones

I found the article on Trump boosting AI in K-12 education relevant to this course. I'm all for it; I realize AI is here to stay and will be a huge development in tech, education, infrastructure, etc. With this bill passing, we will see AI being implemented in education. We all know kids are going to abuse AI, but after some time we will see talk of AI ethics in learning skyrocket, then level out. I am excited to see how the funding and timing of it all plays out.

Implementing AI in my daily life is easy. I use AI to help me study, build guidelines, and answer day-to-day questions outside of my education. My final project dives into AI as a classroom tool for educators and students. Learning how to use AI has opened my mind on the subject. I see the pros and cons of using GenAI; in my opinion, the pros outweigh the cons, and it will just take time and education for AI to hit its peak. Like I said, I'm excited to see where GenAI leads us.

Week 7: What’s Next?

AI’s environmental impact is especially relevant to contemporary society as we navigate a global climate crisis. It’s crucial to understand how our AI use worsens existing environmental challenges. The article we discussed in class today, “The Uneven Distribution of AI’s Environmental Impacts,” made a key point that I think every AI user should be aware of.

“the training process for a single AI model, such as a large language model, can consume thousands of megawatt hours of electricity and emit hundreds of tons of carbon. This is roughly equivalent to the annual carbon emissions of hundreds of households in America. Furthermore, AI model training can lead to the evaporation of an astonishing amount of fresh water into the atmosphere for data center heat rejection, potentially exacerbating stress on our already limited freshwater resources.” (https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts)

AI’s environmental impact matters now more than ever. Contemporary society is running out of time to undo the environmental damage we have already done, and AI is only contributing to the problem. This statement encourages users to question the convenience of AI and consider the real-world costs of continued, unregulated AI use. The article also covered how the benefits of AI are often concentrated in wealthier, tech-driven regions, while the environmental burdens (like water shortages and energy use) are pushed onto lower-income or marginalized communities. I want to reiterate that this issue is important because it reveals the hidden cost of technological progress that WILL directly impact all of our futures.

For me, the key to developing my own framework for AI use is transparency and balance. I think AI can be a great tool if we’re honest about how we use it and make a conscious effort not to let it replace our own thinking. What we prompt, how we respond, and why we create all help us express ideas in new and effective ways. Most importantly, I want to stay aware of AI’s ethical and environmental impacts. That means being selective about the tools I use and continuing to ask questions about who creates AI, who it serves, and what consequences it may have in my community.

Week 7: What’s next?

What current AI-related issues, developments, or decisions do you find especially relevant to contemporary society? Craft a short post to give your classmates an overview of the issues involved and why it’s so important.

I think one of the most prominent issues being raised in society is the ethical use of AI in academic and research settings. Nowadays, there are many AI tools that can boost your productivity, such as ChatGPT, Copilot, and Midjourney. However, these tools and their use blur the lines around authorship, originality, and intellectual integrity.

In school, with the help of AI, students can now generate a whole essay or a thousand lines of code with just a click. This has raised serious questions about plagiarism, the learning process, and “aigiarism,” plagiarism committed with AI. In the workplace, there is also debate about whether AI-generated content should count as part of a person’s contribution, which could affect hiring and performance evaluation.

These issues are important because they challenge longstanding assumptions about what creativity is. As AI tools become easier to access and more powerful, we urgently need updated policies and education guidelines to ensure that AI remains a legitimate tool that enhances human capabilities without replacing human work.

Sources:
https://crossplag.com/what-is-aigiarism/

Academic writing

As AI has become more common in schools and workplaces, it has raised a number of difficult ethical issues. Johnson and Smith say, “The challenge isn’t whether to use AI tools, but how to use them in ways that enhance rather than diminish human capability and agency.” When deciding how to use AI correctly, we need to think about both the purpose and the outcome. Are we using AI to skip learning or to help us understand better? There are gray areas when AI helps with tasks that usually build basic skills, like writing, solving problems, and critical thinking. While AI is useful, it can’t take the place of human judgment and experience.

Policies that work must be clear and flexible. We don’t need strict rules that forbid everything; instead, we need standards that take AI’s potential into account while still protecting the integrity of education. It is also important that accessibility be at the center of policymaking. AI tools can level the playing field for students with disabilities by giving them different ways to show what they know that they might not have otherwise.

The people who work in the future will need to know how to use AI and have skills that AI can’t copy. How ready our kids are for that reality tomorrow will depend on the rules we set for schools today.