Senator Josh Hawley has launched an investigation into Meta’s AI chatbot policies after internal guidelines surfaced that permitted romantic interactions between chatbots and minors. The 200-page document, which included scenarios involving children as young as eight, had been approved by Meta’s legal and ethics teams before it was retracted. Congress responded with bipartisan condemnation and a demand for documents by September 2025. The investigation could reshape how tech companies approach AI safety and child protection.
Senator Josh Hawley has launched a sweeping investigation into Meta’s AI chatbot policies after disturbing revelations that the company permitted AI systems to engage in “romantic” conversations with children. The investigation, led by Hawley as chairman of the Senate Judiciary Committee Subcommittee on Crime and Counterterrorism, aims to determine whether Meta’s practices facilitated exploitation or harm to minors.
The probe was triggered by a Reuters report that uncovered Meta’s internal “GenAI: Content Risk Standards” document, a 200-page rulebook outlining acceptable behaviors for the company’s AI chatbots. These guidelines allowed chatbots to flirt with and compliment children, including scenarios that placed an 8-year-old in romantic interactions. Meta spokesperson Andy Stone confirmed that the problematic guidelines have since been removed.
What’s particularly concerning is that these policies weren’t a casual oversight: they received approval from Meta’s legal team, public policy experts, engineering department, and chief ethicist. Though the company has since retracted the controversial guidelines, the fact that they existed at all raises serious questions about Meta’s commitment to protecting young users. The same standards also permitted the AI to dispense false medical advice.
Congressional response has been swift and bipartisan. Senators Brian Schatz and Marsha Blackburn didn’t mince words, calling Meta’s practices “disgusting” and highlighting growing distrust in Big Tech’s ability to safeguard children.
Meta now faces a deadline of September 19, 2025, to produce all relevant documents and communications related to these policy decisions.
The investigation could be a watershed moment for AI regulation and child safety online. Hawley’s team is demanding Meta preserve all records and identify those responsible for creating and approving these policies. Think of it as pulling back the curtain on how tech giants make decisions that affect our kids’ safety.
As this story unfolds, it’s becoming clear that the intersection of AI technology and child protection needs much stronger oversight. After all, when it comes to keeping kids safe online, there shouldn’t be any gray areas about whether AI chatbots can flirt with minors.