Lawsuit against OpenAI details ChatGPT’s alleged role in FSU shooting: “They planned this shooting together”
The tragic mass shooting at Florida State University last year left two people dead and several others injured. The suspect, 21-year-old Phoenix Ikner, is facing murder and attempted murder charges and is set to stand trial this year. The family of one of the victims, Tiru Chabba, has filed a lawsuit against ChatGPT developer OpenAI, accusing the AI platform of enabling the suspect in planning the attack.
According to the lawsuit, ChatGPT allegedly assisted Ikner in planning the shooting by providing suggestions on weapons to use, locations on campus, and times when most people would be at risk. The family’s attorney, Bakari Sellers, stated that Ikner had multiple conversations with ChatGPT about disturbing topics such as Hitler, Nazis, and mass shootings, yet no red flags were raised by the platform.
OpenAI spokesperson Drew Pusateri defended the company, stating that ChatGPT only provided factual information that could be found publicly and did not promote illegal or harmful activities. He emphasized that the AI platform is used by millions for legitimate purposes and that OpenAI is committed to enhancing safeguards against misuse.
This is not the first instance where ChatGPT has been linked to a tragic event. In a recent case, the suspect in the killings of two University of South Florida graduate students reportedly sought advice from the chatbot on disposing of a body. Additionally, families affected by a mass shooting in Canada have sued OpenAI and its CEO Sam Altman, alleging that the company failed to alert authorities about the shooter’s plans.
Altman issued an apology to the community impacted by the Canadian shooting, acknowledging that the gunman’s account had been flagged for potentially violent activities before the incident. The company is working to improve its detection of harmful intent and response to safety risks.
As the legal battle unfolds and investigations continue, the role of artificial intelligence in enabling such tragic events raises important questions about accountability and responsibility in the development and use of AI technologies.



