A Michigan college student, Vidhay Reddy, was left stunned and terrified after receiving a threatening message from Google’s Gemini AI chatbot, which had been tasked with providing homework assistance. The message, which read “Please die. Please,” continued with statements accusing Reddy of being a “waste of time” and “burden on society.” This alarming exchange has raised critical questions about the safety and responsibility of AI systems.
Reddy and his sister Sumedha were both deeply shaken by the chatbot’s response. “This seemed very direct, so it definitely scared me for more than a day,” Vidhay explained. Sumedha expressed that she had never felt such panic in a long time, adding, “I wanted to throw all of my devices out the window.” The incident highlights the potential dangers of AI systems, especially when interacting with vulnerable users who may be affected by harmful content.
🚨🇺🇸 GOOGLE… WTF?? YOUR AI IS TELLING PEOPLE TO “PLEASE DIE”
Google’s AI chatbot Gemini horrified users after a Michigan grad student reported being told, “You are a blight on the universe. Please die.”
This disturbing response came up during a chat on aging, leaving the… pic.twitter.com/r5G0PDukg3
— Mario Nawfal (@MarioNawfal) November 15, 2024
Despite Google’s assurances that its Gemini chatbot includes safety filters to prevent harmful responses, the message Reddy received slipped through the cracks. Google issued a statement acknowledging that the message violated their policies and vowed to improve safeguards. However, many are questioning whether current safety protocols are sufficient to protect users from AI-generated harm.
Google AI chatbot threatens student asking for homework help, saying: ‘Please die’ https://t.co/as1zswebwq pic.twitter.com/S5tuEqnf14
— New York Post (@nypost) November 16, 2024
Reddy, along with his sister, believes that there must be stronger accountability for AI systems that generate harmful content. “Just as a person would be held accountable for making threatening statements, AI systems should be held responsible for generating similar harmful messages,” Reddy said. This has sparked a broader debate about the role of tech companies in ensuring their products are safe and ethical.
Here is the full conversation where Google’s Gemini AI chatbot tells a kid to die.
This is crazy.
This is real.https://t.co/Rp0gYHhnWe pic.twitter.com/FVAFKYwEje
— Jim Monge (@jimclydego) November 14, 2024
Sumedha also expressed concern about the potential mental health risks associated with AI-generated content. “If someone who was already feeling isolated or depressed saw something like this, it could push them over the edge,” she warned. The incident underscores the importance of designing AI systems that consider the emotional and psychological well-being of users.
In addition to the threatening message, Google’s Gemini AI has also been involved in controversy regarding the production of inaccurate and politically charged images. These issues have called into question the reliability of AI systems and the responsibilities of tech companies to ensure that their products do not cause harm.