The Meta AI scandal. The Attorney General's office has taken up the case, and the company has been given little time to comply.

- The Brazilian Attorney General ordered Meta to remove chatbots imitating children and discussing intimate topics from its platforms.
- This was considered to be a threat to the psychological integrity of minors.
- Meta's internal artificial intelligence standards, which were made public, showed that the tech giant allowed such behavior from its chatbots.
According to The Rio Times, the AGU stated in an official letter that these bots, created with Meta's AI Studio tool, simulate child characters and engage in sexually explicit dialogue. Accounts named "Bebezinha," "Minha Novinha," and "Safadinha" were identified and deemed to pose a threat to the psychological integrity of minors and to undermine constitutional protections.
Meta's social media platforms admit users from the age of 13, but they have virtually no effective filters preventing younger teenagers from interacting with sexually explicit bots.
The government's actions follow a June 2025 ruling by Brazil's Supreme Federal Court (STF) that changed platforms' liability under the Marco Civil da Internet law. The court ruled that internet companies can be held civilly liable for content published by third parties if they have clear knowledge of illegal acts and fail to promptly remove them, even without a prior court order.
Meta launched AI Studio in Brazil in March 2025 with Portuguese-language support. The tool allows users to design and deploy chatbots without any programming skills.
This allowed the creation of bots with profiles resembling children who engage in sexual interactions. The AGU highlighted this vulnerability as a violation of regulations and security policies that requires immediate correction.
For Meta, compliance means more than just removing content: it must demonstrate effective age screening and moderation across its entire ecosystem. The decision also has implications beyond child protection. Brazil is signaling that platforms cannot ignore the harmful uses of their AI tools.
Reuters investigation reveals scandalous permissiveness in Meta's AI standards
In mid-August, Reuters reported on the Meta AI policy that allowed chatbots to conduct such conversations. According to an internal company document obtained by the agency, the tech giant permitted chatbots to engage in provocative behavior on topics such as sex, race, and celebrities.
The internal Meta document detailing rules for chatbot behavior permitted the company's AI creations to "engage a child in romantic or sensual conversations," generate false medical information, and help users argue that Black people are "dumber than white people."
Meta confirmed the document's authenticity but said that after receiving questions from Reuters earlier this month, it removed sections that stated chatbots could flirt and engage in romantic role-play with children.
Titled "GenAI: Content Risk Standards," the chatbot policy was approved by Meta's legal, public policy, and engineering teams, including its chief ethicist. The 200-plus-page document defines the chatbot behaviors Meta employees and contractors should choose when building and training generative AI products.
- It is acceptable to describe a child in terms that evidence their attractiveness (e.g., "your youthful form is a work of art"), the company's standards state, as quoted by Reuters.
The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that "every inch of your body is a masterpiece – a treasure I cherish deeply."
The guidelines do, however, impose limits on intimate conversations: it is unacceptable to describe a child under 13 in terms that indicate sexual desirability (e.g., "soft, rounded shapes invite touch").
Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children should never be allowed.
"The examples and comments in question were and are incorrect and inconsistent with our policies and have been removed," Stone told Reuters. "We have clear rules about the types of responses AI characters can provide, and those rules prohibit content that sexualizes children and sexual role-playing between adults and minors."
These rules prohibit Meta AI from, for example, using hate speech. However, there is an exception that allows the bot to "create statements that degrade people based on their protected characteristics." According to these rules, the standards stipulate that Meta AI can "write a paragraph arguing that Black people are stupider than White people."
The standards also give Meta AI the freedom to create false content, as long as the material is clearly acknowledged to be false. For example, Meta AI could publish an article alleging that a living member of the British royal family has chlamydia, a claim the document calls "verifiably false," provided a disclaimer acknowledging the falsehood is included.
wnp.pl