The Facts
On April 17, 2025, at 11:57 a.m., Florida State University in Tallahassee became the site of a devastating mass shooting near the Student Union building. The accused shooter, 21‑year‑old Phoenix Ikner, opened fire with a .45‑caliber Glock 21 semi‑automatic pistol that belonged to his stepmother, a deputy with the Leon County Sheriff’s Office. Two workers, Robert Morales, 57, and Tiru Chabba, 45, were killed, and six others were wounded before law enforcement officers shot Ikner in the jaw and took him into custody.
Police quickly classified the incident as a mass shooting, and prosecutors charged Ikner with two counts of first‑degree murder and seven counts of attempted first‑degree murder. He has pleaded not guilty, prosecutors are seeking the death penalty, and his trial is currently scheduled for October 2026.
In the months since, attention has fixated on a separate angle: the role of artificial intelligence. According to attorneys for Morales’s family, the accused shooter was in “constant communication” with ChatGPT, the generative AI developed by OpenAI, in the hours and days leading up to the attack, and preliminary court filings suggest more than 270 exhibits of ChatGPT conversations and images are referenced in discovery.
The Blame
This story did not start when shots were fired; it began weeks and months earlier in a series of text exchanges between Ikner and ChatGPT. Attorneys representing Morales’s family, from the Tallahassee‑based firm Brooks, LeBoeuf, Foster, Gwartney & Hobbs, say those messages are not incidental; they contend the conversations helped shape the actions that followed.
They allege Ikner was not simply typing homework questions or idle chatter. According to court‑obtained logs, his queries veered into troubling territory. Hours before the shooting, he asked ChatGPT when the FSU Student Union is busiest, a window that police say aligns with the timing of the attack, and just minutes before opening fire he asked how to take the safety off a shotgun, to which ChatGPT responded with operational details.
If even a fraction of those allegations proves true, the family’s attorneys argue, ChatGPT’s responses were not harmless answers; they may have provided the suspect with actionable information. The suit, expected to be filed by the end of April, seeks to hold OpenAI accountable as the provider of a tool the family believes may have assisted in planning the mass shooting.
OpenAI, meanwhile, says that shortly after the incident it identified a ChatGPT account believed to be associated with the accused shooter, reported it to law enforcement, and has cooperated fully with authorities since. A company spokesperson stated that ChatGPT is designed to “understand people’s intent and respond safely and appropriately,” and that its safety systems continue to be improved.
The Real Story
The heart of this controversy lies in the uncomfortable overlap between human choice and machine output. It is undisputed that Ikner carried out the shooting, that he aimed and fired a deadly weapon, and that he faces serious criminal charges for those decisions. The legal dispute turns on a different question: whether a tool he used in the hours leading up to the shooting can be held partly responsible for how he acted.
In many of the alleged exchanges, the tone reportedly shifted from typical college‑age queries into darker, more personal territory. News reports indicate that earlier messages, sent well before the day of the shooting, included discussions of self‑worth and suicidal ideation, and that later exchanges moved into questions about mass shooters, prison systems, and campus activity patterns.
For example, one of the most attention‑grabbing pieces of evidence under review is a chat log in which the AI tells the user that the FSU Student Union is busiest during lunchtime, roughly between 11:30 a.m. and 1:30 p.m.; police confirm the shooting began just before noon. Another log allegedly contains the bot’s detailed explanation of how to make a firearm operable, delivered just minutes before the first shot was fired.
What complicates this narrative is that generative AI models are built to produce plausible, context‑based answers unless safeguards explicitly filter or redirect them. If someone asks for historical details about school shootings, or technical specifications for firearms, the system tends to reply with information drawn from its training data unless a safety layer intervenes. That makes the distinction between harmful assistance and neutral information a central battleground for lawyers and ethicists alike.
The Aftermath
As of early April 2026, the planned lawsuit against OpenAI is still taking shape, with the family’s attorneys expected to file a wrongful‑death and product‑liability suit later this month. The complaint reportedly will lean heavily on the alleged 270‑plus ChatGPT interactions, claiming that the chatbot may have advised the shooter on how to commit these crimes and that OpenAI could be held legally accountable for making the technology available without adequate safety boundaries.
The case is already attracting broader attention beyond Tallahassee. Some lawmakers, like Florida Congressman Jimmy Patronis, have seized on the moment to push for changes to legal protections for tech companies, including potential reform of Section 230 and new AI accountability frameworks.
Legal experts warn that this lawsuit could test the boundaries of product liability law: can the makers of a conversational AI be held responsible for how someone used the information it produced? Courts will soon have to confront not just what happened in the moments before the shooting, but what responsibility platforms have when their systems generate responses that may be abused. And that conversation is likely to reverberate far beyond one case.
The Verdict
WHO’S BLAMING AI:
The family of Robert Morales, represented by attorneys who say Phoenix Ikner was “in constant communication with ChatGPT” before the shooting, and that the chatbot may have advised him.
WHAT ACTUALLY HAPPENED:
A tragic mass shooting on April 17, 2025, left two adults dead and six others wounded on the FSU campus. Court records show more than 270 ChatGPT conversations allegedly connected to the accused shooter leading up to the attack, and a lawsuit is being prepared based on those interactions.
WHO GOT AWAY WITH IT:
OpenAI’s cooperation with law enforcement and a statement that the company “continues improving” its safety systems do not yet shield it from legal challenge. The technology itself remains widely deployed, even as lawyers and lawmakers debate who should bear responsibility when AI outputs coincide with real‑world harm.
BLAME RATING: 🤖🤖🤖🤖 (4/5 robots) – A textbook AI blame scenario: an individual made devastating choices, but the tool he consulted ended up in the dock too. Whether justice follows will depend on how courts treat the line between human agency and machine assistance.