Three days before two University of South Florida doctoral students vanished, someone typed a chilling question into ChatGPT: “What happens if a human has [been] put in a black garbage bag and thrown in a dumpster?” When the AI responded that the scenario sounded dangerous, the user reportedly replied: “How would they find out.”
That digital breadcrumb trail now sits at the center of a double murder case against Hisham Abugharbieh, 26, who faces first-degree premeditated murder charges for the deaths of Zamil Limon and Nahida Bristy, both 27-year-old doctoral students from Bangladesh. Court documents reveal the ChatGPT exchange occurred on April 13—three days before the victims were last seen alive on April 16.
Physical Evidence Matches Digital Queries
The suspect’s AI conversation became a roadmap that investigators followed to real-world crime scenes.
Digital traces often reveal more than people realize, and Abugharbieh's case shows how AI interactions can become smoking guns. Police discovered Limon's remains wrapped in a black plastic bag near the Howard Frankland Bridge, bearing multiple stab wounds.
Investigators recovered his student ID, glasses, and credit cards from a dumpster, alongside a CVS receipt for trash bags, Lysol wipes, and Febreze. Similar bags were found beneath Abugharbieh’s bed, according to court filings.
The charges paint a picture of methodical planning:
- Two counts of first-degree murder
- Unlawful moving of a dead body
- Tampering with evidence
- False imprisonment
- Battery
- Failure to report a death
Bristy’s body has not been recovered.
Regulatory Spotlight on AI Safety
This case has pushed Florida’s attorney general to launch a criminal investigation into OpenAI itself.
Florida Attorney General Ashley Moody announced a criminal investigation into OpenAI following the case, signaling broader scrutiny of how AI companies handle potentially dangerous queries. This isn't a typical "AI gone wrong" story; it's about whether tech companies should monitor conversations that could indicate criminal intent.
The case raises uncomfortable questions about digital privacy versus public safety. You might assume your ChatGPT conversations remain private, but this investigation demonstrates how they can surface in criminal proceedings. As AI becomes more integrated into daily life, the intersection of digital footprints and criminal evidence will only intensify.
The Abugharbieh case represents a new frontier where your digital interactions—even with AI—can become the most damning evidence against you.