Florida Attorney General Launches Investigation into OpenAI’s Technology
In a groundbreaking move, Florida Attorney General James Uthmeier announced on Thursday that his office will investigate OpenAI over serious allegations concerning its artificial intelligence, especially regarding its impact on minors and potential national security risks.
“ChatGPT may likely have been used to assist the murderer in the recent mass school shooting at Florida State University that tragically took two lives,” Uthmeier stated in a social media video, prompting widespread attention and concern.
Details of the Investigation
The investigation centers around the events of a tragic shooting at Florida State University (FSU) last April. During this incident, the suspect reportedly inquired via ChatGPT about potential reactions to a shooting at the university and sought details about peak hours at the FSU student union. These communications could serve as crucial evidence in an upcoming trial set for October.
Wider Implications and Concerns
Uthmeier voiced concerns that ChatGPT has also been linked to instances of suicide encouragement, as highlighted in several lawsuits brought by grieving families against OpenAI. The attorney general further expressed worries that the technology could be exploited by the Chinese Communist Party, posing a significant threat to U.S. security.
“As big tech rolls out these technologies, they should not — they cannot — put our safety and security at risk,” Uthmeier emphasized. “We support innovation. But that doesn’t give any company the right to endanger our children, facilitate criminal activity, empower America’s enemies, or threaten our national security.”
Legislative Actions on AI Safety
In light of these concerns, Uthmeier urged the Florida legislature to act swiftly to safeguard children from the potential dangers of artificial intelligence technologies that may have adverse effects.
OpenAI Responds
In response to the unfolding situation, a spokesperson for OpenAI stated, “Each week, more than 900 million people use ChatGPT to improve their daily lives through uses such as learning new skills or navigating complex healthcare systems.” The company reaffirmed its commitment to safety measures, stating its ongoing efforts are crucial for delivering benefits while supporting scientific research.
Additionally, OpenAI has pledged to cooperate fully with the attorney general’s investigation. They recently unveiled a Child Safety Blueprint aimed at enhancing protections for minors in relation to AI technologies.
The Growing Scrutiny on AI Technologies
The scrutiny on OpenAI comes amid heightened pressure for chatbot developers to address their role in potentially creating harmful materials, including child sexual abuse material (CSAM). A report from the Internet Watch Foundation revealed over 8,000 reports of AI-generated CSAM in the first half of 2025, marking a 14% increase from the previous year.
Recommendations for Greater Safety
OpenAI’s Child Safety Blueprint further recommends legislative updates to strengthen protections against AI-generated abuse materials, refining the reporting process to law enforcement, and implementing better safeguards against the misuse of AI technologies.
Conclusion
As investigations unfold, the implications for both OpenAI and AI technologies at large remain significant. With growing concerns about safety and security, the outcomes could shape the future of AI regulation in the United States. For further insights and updates on technology matters, explore our articles at Axom Live.



