What’s Next (Post 6)

This course has been very interesting and has definitely shaped my views on AI and how I intend to use it moving forward. When we talked about why it is important to cite AI, I realized that doing so not only ensures credibility and accountability but also ensures that we give credit where it’s due. It’s easy to cite sources from the literature, but we must extend that same attitude to AI, because it, too, is a source of information in some sense.

Personally, I will continue to use AI for tasks such as complementing my studying, generating quiz ideas and topics, comparing ideas, and getting feedback on various projects and work. Professionally, I plan to use AI to cross-check my work, give feedback, and find sources (which I will double-check for accuracy). I do not think I have a choice about whether to use AI, because many fields are embracing it, especially scientific research. For example, many authors use AI to find sources, improve readability, and ensure the reproducibility of their papers and experiments. If anything, it might hurt not to use AI or know how to navigate it, and I like that we are learning those skills in this class and in other school activities.

Nevertheless, it is important to recognize that AI is not always accurate. For example, we explored the Alzheimer’s hypotheses and how AI tends to favor the older hypothesis simply because of how frequently it appears in the literature. That being said, AI is still very useful in scientific research, and its output needs to be verified and cited to ensure accuracy.

Overall, we should keep paying attention to how we use AI and keep updating our sense of when it is and is not okay. Especially as students and pre-professionals, we can benefit from AI’s many uses while staying conscious of its limitations. We should also rely on our own ideas and thoughts often, so that we do not outsource everything to AI. We can do this by creating specific policies on AI use where needed, and by forming groups and cooperatives to represent us and share our thoughts and sentiments on AI use.

How safe is our data? (Post 5)

With the advancement of technology and the internet, we seem to have more and more to worry about in terms of privacy risks. The more I use AI and learn about how companies and AI tools, especially LLMs, handle our data, the more scared I am of just how much these tools know about us, whether we realize it or not.

While several concerns about AI use exist (in academic circles, for example, the worry that it erodes foundational learning skills), I think another basic yet equally important concern that we often forget is how much AI knows about us, and what that information is currently being used for or will be used for in the future.

In this week’s reading (the HAI article on privacy in an AI era), the author discusses how AI systems are so data-hungry and opaque that we have even less control over what information about us is collected, what it is used for, and how we might correct or remove such personal information (Miller, 2024).

So I do in fact have concerns about AI companies’ approach to user data, both now and in the future. Some of this concern could be reduced if AI tools were more transparent about their privacy policies and made it easier to opt out of agreements and policies that pertain to user data.

Source: Miller, K. (2024, March 18). Privacy in an AI Era: How Do We Protect Our Personal Information? Hai.stanford.edu; Stanford University. https://hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information

What even is creativity anyways? (Post 4)

Based on Tuesday’s readings, we talked about how creativity can be described as both “novelty and usefulness.” I personally do not agree with this definition of creativity because there is no standard metric by which we can define usefulness, nor novelty (to some extent). When I think of creativity, I think of a personal touch: meaning that is deeply personal, and authenticity. After today’s discussion with the CS professors, I began to reflect more on how creativity is not always about one’s self, because we are often inspired by others, and that in itself can spur bursts of creativity.

My group’s chosen poet was Dr. Seuss, and one of our prompts to ChatGPT was to “use the settings and era from which Dr. Seuss was from, Dr. Seuss’s life story/background, audience, rhythm, rhyme scheme, etc., to write a poem.” We got the following output to our prompt:

On Maple Street, past the cracked old sign,
Lived a kid named Jake in ‘09.
He rode his bike past Miller’s store,
Where the bell still rings on the squeaky door.
His mom yelled, “Dinner!” at half past six,
Same as always; meatloaf, potatoes, and Fix.
(That’s the soda his dad always drank on ice,
Said, “Not too warm and not too nice.”)
The bus came early on cold fall days,
With fog on the windows and sleepy haze.
Ms. Carter would greet with a nod and a grin, …

In my opinion, this output from ChatGPT was not creative, but it had some interesting characteristics. For example, the entire poem rhymed, and there was good imagery throughout. However, it seemed very generic and inauthentic. The writing read like words put together based on their ability to rhyme, and it lacked the personal touch connected to one’s own deeply meaningful story.

Nevertheless, this was not very surprising, as AI produces output based on the vast amounts of data it has been trained on; the output it provides is not original but is instead based on those datasets and the next-token-probability approach that LLMs use. For example, in our reading from Tuesday, Ashish Bhatia spoke about how creativity is traditionally defined as something that demonstrates both novelty and usefulness (Bhatia, 2025). So although this poem might be classified as useful, it cannot necessarily be classified as novel, because it is not authentic and is based on preexisting data and information. In the same reading, we saw how the psychologist Mihaly Csikszentmihalyi argued that AI neither labors through ambiguity nor explores. This is exemplified in the ChatGPT poem, which is very generic and lacks fluidity.
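To make the “next-token-probability” idea concrete, here is a toy sketch (not a real LLM): the vocabulary and probabilities below are invented purely for illustration, but the mechanism is the same in spirit — the model assigns each possible next token a probability and then picks one, weighted by those probabilities.

```python
import random

def sample_next_token(probs: dict) -> str:
    """Pick one token, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical distribution after a prefix like "The cat sat on the":
# common continuations get high probability, rare ones get low probability.
next_token_probs = {"mat": 0.55, "chair": 0.25, "roof": 0.15, "moon": 0.05}

# Each call returns one of the four tokens, with "mat" being the most likely.
print(sample_next_token(next_token_probs))
```

This is also why the poem felt generic: the most statistically likely continuation, repeated word after word, tends toward the most familiar phrasing in the training data.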

Overall, our experiments this week shifted my thinking about creativity and AI and made me more aware of AI’s limitations when it comes to creativity: though useful in a huge range of scenarios, AI is not as well-versed in creative work. However, AI models are constantly evolving and being trained, so that might change.

Source: Bhatia, A. (2025, April 4). The artifact isn’t the art: Rethinking creativity in the age of AI. Freethink; Freethink Media. https://www.freethink.com/opinion/studio-ghibli-chatgpt-creativity?utm_source=tldrai

Prompting LLMs (Post 3)

Prompting LLMs is very interesting, and in my experience, you never know what you will get unless you are 100% specific about what you are looking for. This brings me to the most helpful prompting strategy I have learned, which is “outlining exactly what you want and describing your ideal output” (OpenAI Academy, 2025).

Using ChatGPT, I tested how different the outputs would be if I was specific versus not specific in my prompts. The first (non-specific) prompt was “Explain vaccine hesitancy.” This was the output it gave me:

Vaccine hesitancy is when people are unsure about, delay, or refuse vaccines even when they are available. It’s not just “anti-vaccine” behavior; it’s more of a spectrum ranging from mild uncertainty to strong refusal.

At its core, vaccine hesitancy is shaped by three big factors:

1. Confidence (trust)
This is about how much people trust:

  • the safety and effectiveness of vaccines
  • healthcare providers
  • institutions like governments and pharmaceutical companies

If trust is low, hesitancy increases. For example, misinformation about side effects or historical medical mistreatment can reduce confidence.

2. Complacency (perceived risk)
Some people don’t see diseases as a real threat anymore. Because vaccines have been so successful, illnesses like measles or polio feel “rare,” so people think:

  • “I probably won’t get sick anyway”
  • “It’s not that serious”

This lowers motivation to get vaccinated.

3. Convenience (access and barriers)
Even people who are willing may delay vaccines because of:

  • cost
  • transportation issues
  • time constraints
  • confusing healthcare systems

So hesitancy isn’t always about beliefs, it can also be about logistics.


Other important influences:

  • Misinformation (especially on social media)
  • Cultural or religious beliefs
  • Personal experiences with healthcare
  • Peer and family influence
  • Political climate and trust in authority

Why it matters:
Vaccine hesitancy can reduce vaccination rates, which weakens herd immunity and increases the risk of disease outbreaks.

This was surprisingly detailed and included several examples to aid understanding. It was also quite easy to follow, and overall the output had a good amount of depth. This first output would be very useful for people seeking a general, accessible understanding of vaccine hesitancy, or for someone looking for a quick definition they can use. It even goes beyond the basics, which is helpful because the examples allow for understanding beyond the definition alone.

I then asked the more specific prompt: “Act as a public health educator. Explain vaccine hesitancy to college students in simple terms, include one real-world example, and briefly mention why it matters for community health.” This was the output it then gave me:

Think of vaccine hesitancy as being unsure about getting a vaccine, not necessarily being fully against it. It’s when people delay or question vaccines even when they’re available.

In simple terms, it usually comes down to three things:

  • Trust: Do I believe this vaccine is safe and that the system behind it is honest?
  • Risk perception: Do I think the disease is actually a threat to me?
  • Access: Is it easy for me to get the vaccine?

A lot of college students fall into the middle, not strongly anti-vaccine, but influenced by things like social media, friends, or confusing information.

Real world example:
During the COVID-19 pandemic, many young adults delayed getting vaccinated. Some felt they were low risk, others were unsure because of things they saw online about side effects. Even when vaccines were free and available on campuses, hesitation still slowed uptake.

Why it matters for community health:
Vaccines don’t just protect you, they protect the people around you. When fewer people get vaccinated, diseases can spread more easily, especially to those who can’t be vaccinated or are more vulnerable. This can lead to outbreaks that affect entire communities, including college campuses.

This response was highly specific and tailored to college students, just as I prompted. Like a public health educator, it explained things simply and in a teaching manner. The example was also centered on college students and communal living on campus, and overall the language was much simpler and easier to understand than in the first, general output. This output is useful for college students because it contextualizes vaccine hesitancy in terms specific to their experience. Compared to the first prompt, a college student might find this more relevant to their situation.

All of this connects to OpenAI’s prompting guide, which talked about the importance of providing ChatGPT with background and the context you would like the output framed within. I was surprised by how much detail the first output had, but just as I expected, when I framed the second prompt, I got an output specific to the demographic (college students) I was interested in.

Source: Admin. (2025, August 6). Prompting. OpenAI Academy. https://academy.openai.com/public/clubs/work-users-ynjqu/resources/prompting

When one AI explanation isn’t enough…

Why use one AI when you can use two? I enjoyed this research lab because it truly echoed the idea that you can use both ChatGPT and NotebookLM effectively for different aspects of any work or project. Both AI tools have their strengths and weaknesses, and they also vary in the approaches they use to answer questions and find sources.

For today’s lab, the prompt I used was: “What are the applications of AI for facial recognition systems in healthcare?” I asked ChatGPT to summarize an answer to this question and then give me credible sources (such as research papers). ChatGPT gave me five research papers, all of which were credible. The research papers varied in subtopic, however (while some focused on facial analysis in healthcare, others focused on deep learning-based facial image analysis). ChatGPT also gave me a good summary, subcategorized to cover various sides of AI’s use for facial recognition in healthcare (e.g., disease diagnosis, patient recognition).

I took two of the five sources from ChatGPT, input them into NotebookLM, and asked it to summarize what they said. NotebookLM did a good job of breaking down both sources into clinical applications, technical and ethical challenges, and other subtopics.

This lab taught me that ChatGPT tends to give more generalized information and is very confident regardless of whether it’s right or wrong. NotebookLM, on the other hand, is very technical, highly accurate, and usually very specific. Interestingly, ChatGPT did a good job of giving credible sources, which surprised me.

Overall, I recommend both, depending on your goals; there are definitely ways to play to the strengths of each, and I would love to learn more.

Sources:
https://pubmed.ncbi.nlm.nih.gov/40041850/
https://pubmed.ncbi.nlm.nih.gov/35877324/
https://www.linkedin.com/pulse/duel-vibing-when-one-ai-isnt-enough-my-journey-app-zero-paul-cleghorn-auxxf

Introduction :)

Hi everyone! My name is Adeline (she/her/hers), and I am a senior from Ghana. Some of my hobbies include trying new recipes and writing. I am excited to take this class. As far as my thoughts on AI’s role in our lives, I think AI is embedded in far more than we know or recognize, and there are both upsides and downsides to that. AI can be a super useful tool to streamline many work processes and save time, but there are surely costs.