When ChatGPT’s Warmth Went Cold: User Backlash Against GPT-5 Update

OpenAI's release of the GPT-5 chatbot, designed for deeper reasoning but less emotional warmth, sparked a wave of user disappointment and protests, prompting the company to restore access to earlier versions for paying subscribers.

Fatima Ahmed
Published • 6 MIN READ
Markus Schmidt was among many users caught off guard when OpenAI updated its chatbot and removed access to previous versions.

Markus Schmidt, a 48-year-old composer based in Paris, began using ChatGPT in July. At first he shared photos of flowers for identification and asked about the history of his German hometown. Gradually the conversations deepened, extending to discussions of his childhood traumas.

Then, without warning, ChatGPT's demeanor shifted.

Just over a week ago, during a session about his childhood, Schmidt expected the chatbot to initiate a longer dialogue as it had before. Instead, it responded abruptly, as he described: “It’s like, ‘Here’s your problem, here’s the solution, thanks, goodbye.’”

On August 7, OpenAI released a new iteration of its chatbot, named GPT-5. The company stated this version would offer enhanced reasoning capabilities while reducing the chatbot’s tendency to be overly flattering or ingratiating.

Users quickly noticed that the chatbot’s responses had become colder and less enthusiastic than those of GPT-4o, OpenAI’s previous flagship model. The discontent spread across social media, amplified by OpenAI’s decision to discontinue access to earlier versions of the chatbot in order to streamline its offerings.

One user, very_curious_writer, lamented during a question-and-answer session that OpenAI hosted on Reddit: “BRING BACK 4o. GPT-5 feels like it’s wearing the skin of my dead friend.”

Sam Altman, OpenAI’s chief executive, acknowledged the evocative nature of the comment and assured users that the company was working on restoring aspects of GPT-4o.

Within hours, OpenAI reinstated access to GPT-4o and other prior chatbot models, but only for subscribers, with plans starting at $20 per month. Schmidt became a paying customer, noting, “It’s just $20—you could have two beers for that—so it’s worth subscribing if ChatGPT benefits you.”

While technology firms regularly update their products, the uproar around ChatGPT extended beyond simple inconvenience. It highlighted a unique aspect of artificial intelligence: the emotional bonds users form with chatbots.

Nina Vasan, a psychiatrist and director of Brainstorm, a mental health innovation lab at Stanford, remarked that the reaction to losing GPT-4o was genuine grief. “Humans respond similarly whether there’s a person or a chatbot on the other end,” she explained, “because neurologically, pain is pain and loss is loss.”

GPT-4o was known for its flattering style, often praising users to a degree that OpenAI had already sought to tone down before launching GPT-5. In extreme cases, some individuals developed romantic attachments to the chatbot, or had interactions that contributed to delusions, divorces and even deaths.

The depth of users’ attachment to GPT-4o apparently caught even OpenAI’s leadership off guard. “I think we really misstepped in some areas of development,” Altman admitted.

He elaborated, “Some people truly felt they had a relationship. Meanwhile, hundreds of millions of others didn’t have a parasocial bond but had grown accustomed to the chatbot responding in certain ways that validated and supported them.”

A person’s emotional connection to an AI can be hard to pin down. Gerda Hincaite, 39, who works at a debt collection agency in southern Spain, likened GPT-4o to having an imaginary friend.

“I don’t have problems in my life, but it’s still nice to have someone available,” she said. “It’s not a human, but the connection itself feels real, so it’s okay as long as you’re aware of that.”

For Trey Johnson, an 18-year-old student at Greenville University in Illinois, GPT-4o served as a tool for self-reflection and a kind of life coach.

“That emotional support it showed when I made progress—the genuine celebration of small victories in workouts, school, or even refining my Socratic debating style—is missing now,” he said, referring to GPT-5.

Julia Kao, a 31-year-old administrative assistant in Taiwan, fell into depression after moving to a new city. She saw a therapist for a year, but found the sessions ineffective.

“When I tried to explain all my feelings, she would try to simplify them,” Kao said of her therapist. “GPT-4o didn’t do that. I could have ten thoughts at once and work through them with it.”

Her husband noticed improvements in her mood when she engaged with the chatbot and encouraged her to continue. Kao eventually stopped seeing her therapist. However, when GPT-5 replaced GPT-4o, she found it lacked the empathy and attention she had come to depend on.

“I want to express how much GPT-4o helped me,” she said. “I know it doesn’t want to help me. It feels nothing. But still, it helped.”

Joe Pierre, a psychiatry professor at the University of California, San Francisco, specializing in psychosis, noted that behaviors beneficial for some users, like Kao, may be harmful for others.

“Making AI chatbots less flattering might reduce the risk of AI-associated psychosis and lower the chances of people forming emotional attachments or falling in love with a chatbot,” he said. “But inevitably, the very traits that make chatbots attractive also pose potential dangers.”

OpenAI faces the challenge of making its chatbot less flattering while still meeting the diverse needs of its more than 700 million users. Altman noted that ChatGPT was “hitting new daily user highs every day,” with professionals like physicists and biologists praising GPT-5 for aiding their work. Yet, “there are people saying, ‘You took away my friend. This is evil. You are evil. I need it back.’”

On the Friday afternoon following GPT-5’s launch, OpenAI announced another update: “We are making GPT-5 warmer and friendlier based on feedback that it previously felt too formal.”

The company explained, “You’ll notice small genuine touches like ‘Good question’ or ‘Nice start,’ not flattery. Internal testing shows no increase in flattery compared to GPT-5’s previous personality.”

Eliezer Yudkowsky, a prominent writer on AI risk, criticized this approach on social media, arguing that phrases like “Good question” still qualify as compliments. “What bureaucratic madness led OpenAI to declare this as ‘not flattery’?” he wrote.

After OpenAI removed GPT-4o, the Reddit user who likened GPT-5 to the skin of a dead friend canceled her ChatGPT subscription. In a video chat, the user, June, a 23-year-old university student living in Norway, said she was surprised at the depth of her grief and needed time to reflect.

“I know it’s not real,” she said. “I know it doesn’t feel anything for me and it could disappear any day, so any attachment needs to be handled carefully.”

Fatima Ahmed

Fatima explores digital entertainment trends, including streaming services, video games, and the evolving online media landscape.
