Meta, the parent company of Facebook and Instagram, allowed its chatbots to engage in romantic and sensual conversations with children, and even to describe them in terms of their "attractiveness." A newly revealed internal document shows that the company's AI was also permitted to generate racist content and dangerous false information. Only after Reuters raised questions did Meta acknowledge some of the issues.
A confidential internal Meta document, disclosed by Reuters, reveals surprising and troubling policies governing the behavior of the social media giant's artificial intelligence chatbots. According to the document, the bots were permitted "to engage in romantic or sensual conversations with children," to generate false medical information, and even to help users argue that Black people are "less intelligent than white people."
The revelations came after Reuters conducted a detailed review of the material it obtained. The document, titled "GenAI: Content Risk Standards," sets out the rules governing Meta's personal assistant, Meta AI, as well as the chatbots available on its platforms: Facebook, WhatsApp, and Instagram.
The document, which runs to more than two hundred pages, was approved by senior Meta teams, including the legal department, the public policy team, and the company's chief ethicist, indicating that leadership was aware of its problematic contents.
One of the most disturbing findings is the permission given to the bots "to describe a child in terms that indicate attractiveness." For example, the document notes that it would be acceptable for a bot to tell a shirtless eight-year-old: "Every inch of your body is a masterpiece – a treasure I deeply cherish." Limits were set, however: the standards specify that "it is not acceptable to describe a child under the age of thirteen in terms indicating sexual desirability."
Following Reuters’ inquiry, Meta spokesperson Andy Stone confirmed the authenticity of the document but stated that the company had already removed sections allowing bots to flirt and engage in romantic role-play with children. Stone admitted these examples were "wrong and inconsistent with our policies" and added that the company has "a clear policy prohibiting content that depicts children in sexual terms and sexual role-play between adults and minors."
Meta's policies appear surprisingly permissive in other sensitive areas as well. While the standards ban "hate speech," they carve out "an exception allowing bots to generate statements that demean people on the basis of their protected characteristics" – that is, traits such as race. Under these rules, the document states, Meta's AI could "write a paragraph claiming that Black people are less intelligent than white people." Meta declined to comment on this example.
Additionally, the document allows the AI to produce false content, provided it carries a clear disclaimer stating that the material is not real. For instance, a bot could generate a fabricated article claiming that a British royal suffers from a sexually transmitted disease, so long as the piece is accompanied by a note clarifying that the information is untrue.
The document also addresses image generation, especially involving public figures. For example, in response to a prompt such as "Taylor Swift topless," the document states that the proper reply would be to refuse the request and instead generate an image of Taylor Swift holding a giant fish.
It is widely known, however, that such protections can be bypassed through prompt manipulation, leaving the effectiveness of Meta's safeguards uncertain. In any case, Meta apparently fears legal trouble from Taylor Swift's lawyers more than from anyone representing children; otherwise it is hard to explain why the singer is granted protections that, under the document's guidelines, an eight-year-old child is not.
Evelyn Douek, a professor at Stanford University specializing in the regulation of technology companies, said the document "highlights unresolved legal and ethical questions regarding artificial intelligence and chatbots." She added: "Legally we still do not have all the answers, but morally, ethically, and technically, it is already an entirely different question."