ChatGPT-4.5: A True Leap or Just Hype?

AI has been evolving at an incredible pace, but the recent release of ChatGPT-4.5 feels like a significant shift. While previous iterations brought major improvements, GPT-4.5 introduces refinements that make AI interactions feel more fluid, reliable, and, dare I say, almost human.

So, what makes ChatGPT-4.5 stand out?

One of the biggest changes is its enhanced contextual understanding. GPT-4.5 processes longer conversations with better memory, meaning it can recall details more accurately throughout a discussion. In contrast, older versions struggled with maintaining consistency, often forgetting information or repeating answers unnecessarily.
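
To make the "memory" point concrete: chat models don't remember anything between turns on their own. The client resends the whole conversation with each request, so recall is bounded by the context window. Here is a minimal sketch using the OpenAI Python SDK; the model name `gpt-4.5-preview` and the example messages are assumptions for illustration, not confirmed details of GPT-4.5:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

MODEL = "gpt-4.5-preview"  # assumed model name, for illustration only

# Turn 1: the user states a fact.
history = [{"role": "user", "content": "My dog is named Biscuit."}]
reply = client.chat.completions.create(model=MODEL, messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# Turn 2: the model can "recall" the name only because the earlier
# messages are sent again as part of the request.
history.append({"role": "user", "content": "What is my dog's name?"})
reply = client.chat.completions.create(model=MODEL, messages=history)
print(reply.choices[0].message.content)  # should mention "Biscuit"
```

Drop the first two entries from `history` and the model has no way to answer; a longer context window raises that ceiling but does not remove it.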

Another notable upgrade is the reduction of hallucinations—when AI generates misleading or entirely false information. While GPT-4 still had moments of confident inaccuracy, GPT-4.5 shows marked improvement in factual accuracy, thanks to better training data filtering and reinforcement learning.

Moreover, the multilingual capabilities have been refined. While GPT-4 supported multiple languages, GPT-4.5 delivers more natural translations and culturally aware responses. This makes it an even better tool for global users.

Performance-wise, GPT-4.5 is also faster and more efficient, even with its increased complexity. It generates responses more quickly and requires less computation per query, making interactions smoother.

However, it’s not just about speed and accuracy—the conversational tone is more dynamic. GPT-4.5 adapts better to emotions and writing styles, making responses feel more natural and engaging.

So, is GPT-4.5 a game-changer? It’s definitely a step forward, though we’re still far from a perfect AI. But one thing is clear: the gap between human and AI interactions is shrinking faster than ever.

Feel free to check out news about GPT-4.5 at https://www.wired.com/story/openai-gpt-45

AI Ethics (James)

This week we mainly talked about AI ethics. The rapid development of AI has brought many conveniences, but every new technology raises ethical questions. Broadly, the ethical issues around AI include copyright, plagiarism, and bias. In this week's group study, we learned about the risks that come with the growth of AI language models. As large language models scale, they become biased against minority languages such as Dhivehi: model training draws mainly on the world's most common languages, so minority languages have only a small base of text to learn from, and that imbalance gets built into the resulting models. The Coursera article points out that scholars can work at the intersection of modern political, social, and economic issues and artificial intelligence to reduce the bias of AI language models.

The training data of ChatGPT and DeepSeek also covers a limited time range. They launched in 2022 and 2025 respectively, and each model's knowledge is bounded by its training cutoff. For example, if ChatGPT is instructed to "find a few peer-reviewed articles," its limited database means it will sometimes generate citations to completely non-existent documents. If I could create an open artificial intelligence, I would try to collect the literature of the 20th and 21st centuries to build an AI that helps locate and summarize papers (similar to Perplexity).
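
One practical mitigation for fabricated citations is to check each title an LLM returns against a real bibliographic index. Below is a minimal sketch assuming the public Crossref works API; the matching rule is deliberately crude, and the example title is just a well-known paper used for illustration:

```python
import requests

def citation_exists(title: str) -> bool:
    """Ask Crossref whether any indexed work closely matches the title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items or "title" not in items[0]:
        return False
    # Crude match: the queried title should appear in the top hit's title.
    return title.lower() in items[0]["title"][0].lower()

# A real paper should pass; a fabricated citation generally won't.
print(citation_exists("Attention Is All You Need"))
```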

Source: https://www.coursera.org/articles/ai-ethics

The Future of AI

AI is becoming a lot more accurate than I originally thought. By the end of the decade, I suspect AI will be able to recite anything on the spot with precision. During Thursday's lab period, in which the class had an AI summarize three different sources without uploading any documents, I was surprised at the results. The common guess among the class was that the AI would not be able to pull specific details from the articles without a PDF uploaded to the LLM. My group (Group 2), which had a very popular paper, was shocked to see how accurate ChatGPT's summary of the paper was.

Sure, it helped when a document was uploaded, but the majority of the information was provided without a PDF upload. The other groups had somewhat similar results, though Group 3 saw more hallucinations from the AI because its source was not as popular. As these models train on more material over the years, I expect every source will become incorporated into them, allowing a simple query to work without a PDF to read from.

The issue with this is that we cannot be sure what is actually "known" by AI. This simply means that whatever an AI such as ChatGPT gives us cannot be considered fully accurate in its current state. Until models can reliably produce concise summaries across several papers without a PDF upload, it will be necessary to always double-check the paper itself for accuracy. Even then, our summary was more accurate with the PDF attached, and it is always good to cross-check papers when they are summarized by AI, PDF or not.

Week 2: AI Ethics (Ishani)

During my group's research, I discovered that AI diminishes our creativity. It lacks human artistic expression and experiences, and hence it takes away originality. While AI is capable of creating complex and striking pieces of art, it lacks the human consciousness and emotional depth that are essential components of artistic expression. AI tools like DALL-E 2 and Midjourney can mimic artists' styles without permission, potentially undermining their creative endeavors and financial livelihoods, and this has raised concerns within the art community.

There are also legal drawbacks to using AI: in the US, copyright protection for AI-generated works is difficult to obtain because U.S. law demands human authorship.

After the group share-outs, I also gained a different perspective on AI, since other groups talked about how AI tends to make responses longer and wordier than they should be.

Reference: https://www.neilsahota.com/ai-art-creativity-controversy-and-the-question-of-originality/

AI’s Blind Spot

When we ask our beloved gen-AI assistant a simple question about world history, it confidently spits out an answer. But what if that answer is slightly fabricated? Or, even better, entirely made up? Or what if the fabrication rests on preconceived biases baked into the data the AI was trained on, making certain perspectives merely seem factual? These were some of the things we discussed in our research lab in class today, and they made me think a bit differently about what AI's future might hold.

AI models are not trained to form opinions; they simply learn from massive datasets collected from sources such as books, articles, and online resources. But in the real world as we know it, every medium for sharing information and knowledge comes with built-in biases, whether political, cultural, or, in some cases, accidental. This creates the inherent problem of reinforcing propaganda rather than neutral information. For instance, Chinese AI models might align more with state ideologies, while Western models reflect their own forms of bias. Neither achieves true 'objectivity'.
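
A classic way to watch corpus bias surface in a model is through word embeddings, which, like LLMs, are fit purely to co-occurrence statistics in text. An illustrative sketch using gensim's pretrained GloVe vectors; the specific analogy probe is chosen for demonstration, not a claim about any particular chatbot:

```python
import gensim.downloader as api

# Downloads pretrained 50-dimensional GloVe vectors on first use.
vectors = api.load("glove-wiki-gigaword-50")

# Analogy probe: "man is to doctor as woman is to ___?"
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))
# Associations such as "nurse" ranking highly reflect patterns in the
# training corpus, not any ground truth about the professions.
```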

So what is the scary part? The issue isn't solely political. Bias in AI shows up in hiring algorithms that favor certain demographics, facial recognition systems that struggle with non-white faces, and even medical AI that may underdiagnose illness in minority groups (see the sketch below for one way such disparities are measured). Thus, if AI is predicted to be the future, we need to ask: whose future will it shape?
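
One standard way to quantify the hiring-algorithm problem is the disparate impact ratio: the selection rate of the protected group divided by that of the reference group, with 0.8 as a common rule of thumb. A minimal sketch with entirely made-up numbers:

```python
def selection_rate(decisions: list[int]) -> float:
    """Fraction of positive outcomes (1 = selected, 0 = rejected)."""
    return sum(decisions) / len(decisions)

# Hypothetical screening outcomes for two applicant groups.
reference_group = [1, 0, 1, 1, 0, 1, 0, 1]  # 5/8 selected
protected_group = [0, 0, 1, 0, 0, 1, 0, 0]  # 2/8 selected

ratio = selection_rate(protected_group) / selection_rate(reference_group)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40, below the 0.8 rule of thumb
```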

The answer might be even more complicated. The problem arises partly from how cutting-edge LLMs are built: they are optimized to produce statistically plausible, well-formed text, not to verify that what they say is true. If I could change one thing about AI's future, it would be to make it maximally curious: designed to question its own training data, seek out diverse perspectives, and recognize gaps in its own knowledge. AI should be perceived as a tool for discovery, not just a mirror reflecting the biases of the past and repeating the same mistakes.
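
That training objective is easy to inspect with an open model: at each step an LLM just ranks candidate next tokens by probability, and nothing in that step checks truth. A sketch using GPT-2 via Hugging Face transformers; GPT-2 stands in here as a small open model, not as a proxy for any production chatbot:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}  p={p.item():.3f}")
# The ranking comes from corpus statistics; no step here checks factual truth.
```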

So can we make AI more self-aware, or are we stuck with its blind spots forever?


Sources

https://americanedgeproject.org/new-report-chinese-ai-censors-truth-spreads-propaganda-in-aggressive-push-for-global-dominance/

https://medium.com/@jangdaehan1/algorithmic-bias-and-ideological-fairness-in-ai-fbee03c739c7

https://www.scientificamerican.com/article/police-facial-recognition-technology-cant-tell-black-people-apart/

https://www.forbes.com/councils/forbestechcouncil/2023/09/25/ai-bias-in-recruitment-ethical-implications-and-transparency/

New findings

Before last class and some exploring on ChatGPT, I had very little knowledge about what AI could do. I knew about a few simple tasks it could complete, like summarizing articles, but I was unaware of how precise it could get. I found that it's used in all kinds of ways, from helping doctors to recommending what shows to watch next on streaming services. AI can produce really specific and helpful content, making it useful in so many areas. I learned that AI is capable of analyzing large sets of data, recognizing patterns, and making predictions based on what it learns. I also learned that when it's not given adequate background information, it gives inaccurate and vague responses that are not useful. I'm excited to learn about what else it can and cannot do.

https://www.oneusefulthing.org/p/15-times-to-use-ai-and-5-not-to