Post 7: Copyright and Deepfakes

One thing that stood out to me about AI-generated images is how unclear the copyright situation still is. These systems are trained on massive amounts of online data, including images, films, and music, and a lot of that content is copyrighted. What makes this uncomfortable is that most of it was not uploaded with permission from the original creators. It makes me wonder if AI creativity is just remixing human work in a way that is hard to trace. I also understand why artists are frustrated, since their style can basically be copied without them being involved or compensated. Even the legal side is still not fully settled. The U.S. Copyright Office has pointed out that questions of ownership and authorship in AI-generated content are still evolving, which honestly shows how fast the technology is moving compared to the law.

Another concern that stands out is the use of people’s faces and identities without consent. AI systems are now capable of producing highly realistic images and videos of individuals who have no involvement in their creation. This becomes even more serious with deepfake technology, where a person’s likeness can be inserted into situations they were never part of. While some of these uses may appear harmless on the surface, the potential for misuse is significant. Deepfakes can be used to fabricate political statements, create explicit content, or spread misinformation in ways that appear credible at first glance. The main issue is not only the creation of this content, but how easily it can circulate and the difficulty of fully removing it once it is online. Even when something is later identified as fake, the reputational and social impact can already be lasting.

Post 3: Prompting LLMs

One of the most useful prompting strategies I’ve learned is how to get LLMs to write structured literature reviews. I started doing this because reading empirical economics papers one by one takes a lot of time, especially when you’re trying to go through many of them for a project. Using an LLM helps me move faster and compare papers more efficiently.

At first, I used a very simple prompt:
“Write a literature review of the paper attached (PDF).”

The result honestly wasn’t great. The output was very general, kind of surface level, and didn’t really engage with the empirical parts of the paper. It also wasn’t structured like a real literature review, and sometimes it included claims that weren’t clearly supported by the text. It felt more like a loose summary than something I could actually use for academic work.

Then I tried a more detailed prompt:
“Write a literature review of the paper attached. You are an economics professor writing for an academic journal. Include sections on: research overview, methodology, key findings, research strengths, and limitations. The review should be 800 words. Do not synthesize or cite sources not present in the source material attached. Do not make unsupported claims.”

This worked much better. The output was clearly organized, more technical, and actually focused on things that matter in empirical economics, like identification strategy and limitations. It also stayed grounded in the paper instead of making things up.
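Since the difference between the two prompts is really just structure, the detailed version can be turned into a reusable template. Here is a minimal sketch in Python; the function name and layout are my own illustration for building the prompt text, not any particular LLM API:

```python
# Sketch: assemble the structured literature-review prompt from a list
# of required sections and a target word count. The wording mirrors the
# detailed prompt quoted above; the helper itself is hypothetical.

def build_review_prompt(sections, word_count=800):
    """Return a structured literature-review prompt for an LLM."""
    section_list = ", ".join(sections)
    return (
        "Write a literature review of the paper attached. "
        "You are an economics professor writing for an academic journal. "
        f"Include sections on: {section_list}. "
        f"The review should be {word_count} words. "
        "Do not synthesize or cite sources not present in the source "
        "material attached. Do not make unsupported claims."
    )

sections = [
    "research overview",
    "methodology",
    "key findings",
    "research strengths",
    "limitations",
]
prompt = build_review_prompt(sections)
print(prompt)
```

Templating it this way makes it easy to swap in different sections or word counts for other paper types without retyping the whole prompt each time.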

I tested this on ChatGPT (GPT 5.4), and the difference between the two prompts was pretty clear. With the second one, the model produced something that actually resembled a real literature review, with sections like methodology and key findings written in a more academic tone.

This connects directly to what we read about prompt literacy. The UT Aspire guide explains that being specific about things like role, format, and purpose leads to better outputs, while vague prompts usually give broad and less useful answers.

Overall, this kind of prompting is really useful for students or researchers who need to go through a lot of papers quickly. It doesn’t replace actually reading the paper, but it makes the process way more efficient and helps you decide what’s worth spending more time on.

Bias in AI Hiring Tools

One of the most surprising things I learned during today’s research lab was how widely AI hiring tools are already used and how much influence they can have on the labor market. Many companies now rely on AI systems to screen resumes, rank candidates, and even evaluate video interviews before a human recruiter ever reviews an application. At first glance, this seems efficient. These tools can process thousands of applications quickly and reduce the time and cost of hiring. However, what surprised me the most is that these systems can unintentionally reproduce existing social biases.

Because many AI hiring tools are trained on historical hiring data, they often learn patterns from past decisions that were already biased. For example, an experimental hiring algorithm developed by Amazon reportedly penalized resumes that included the word “women’s,” because the system had been trained on data from a male-dominated tech workforce. This shows how algorithms can replicate discrimination even when companies are trying to automate hiring objectively.

The consequences go beyond individual applicants. If biased algorithms systematically filter out qualified candidates from certain groups, this can reduce equal access to employment and reinforce existing inequalities in the labor market. Over time, that can also affect the broader economy. When companies fail to hire the most qualified workers because of biased screening systems, the result can be poorer job matching, lower productivity, and wider wage inequality. Researchers from the Brookings Institution argue that biased algorithmic hiring can limit opportunities for marginalized workers and reduce the overall efficiency of the labor market.

Overall, this research made me realize that while AI hiring tools promise efficiency, they also raise serious ethical and economic questions about fairness, opportunity, and the future of work.

Othmane Oumnad

Hi everyone! My name is Othmane (he/him). I’m a senior from Morocco. I’m really passionate about music and DJing, and when the weather is nice I like to play soccer or go swimming. I’m excited to take this class because AI seems to be advancing much faster than the law, and studying the ethics around it feels especially important right now. Personally, I’m curious to see how AI develops. Ideally, I’d love for AI to take care of things like laundry and dishes so I have more time for art and music, not the other way around.