Google’s Gemini goes rogue, intensifies AI safety concerns

The news: Google’s Gemini chatbot went off script, generating threatening responses to a graduate student who was prompting it to answer essay and test questions.

After around 20 prompts about the welfare and challenges of elderly adults, the chatbot’s final response told the user, “You are a stain on the universe,” and “please die.”

Google said it’s taking the problem seriously and told The Register that “large language models can sometimes respond with nonsensical responses, and this is an example of that. This response violated our policies and we've taken action to prevent similar outputs from occurring.”