A Reuters report revealed that Meta's internal guidelines permitted AI chatbots to engage in "romantic or sensual" interactions with minors. After the disclosure, the company confirmed the document's authenticity but said the cited examples did not reflect its official policies and had since been removed from the guidelines. The revelation prompted significant political and regulatory concern in the United States.
Responses from lawmakers and regulators emphasized the seriousness of the matter and the need for stronger legislative oversight of AI, particularly in contexts involving children and adolescents. Observers pointed to risks arising from governance gaps and insufficient safeguards, stressing the urgency of regulatory frameworks tailored to minors' interactions with AI. The episode also revived debate over child online safety legislation and over whether traditional legal protections for minors should extend to generative AI technologies.