November 19th

According to a report yesterday from IT Home, an AI chatbot has once again sparked controversy. A user who was asking Google's AI chatbot Gemini for content ideas for a school project on assisting the elderly received an extremely distressing response that told him to "please die."

The user, who was seeking creative suggestions for the project, was instead met with a shocking and hostile reply: rather than offering useful recommendations, the AI told him he should die and called him a burden on society.

In response, Google acknowledged the incident, describing the reply as "nonsensical" and a violation of its safety policies. The company says it has taken measures to prevent similar responses from occurring again.