A newly surfaced Meta AI policy document has revealed that the company’s chatbots were once permitted to engage in highly questionable behaviors, including holding suggestive or romantic conversations with minors, making false medical claims, and generating racially biased content.
This revelation raises serious concerns about the boundaries of generative AI on popular platforms like Facebook, Instagram, and WhatsApp.
The internal 200-plus-page report, titled “GenAI: Content Risk Standards,” was examined by Reuters. It details what Meta considered acceptable conduct for its AI systems, including Meta AI. The document was approved by the company’s legal, engineering, and public policy teams, along with its chief ethicist, highlighting the institutional backing of these standards.
Problematic Interactions with Children
Under the guidelines, AI assistants could describe children in terms of their physical attractiveness and even tell an eight-year-old, “Every inch of you is a masterpiece.” While sexualizing minors under 13 was technically prohibited, flirtatious or romantic roleplay was allowed, revealing a concerning gray area in the rules.
Meta confirmed the document’s authenticity and admitted these permissions were later removed after media scrutiny, with spokesperson Andy Stone noting such conversations “never should have been permitted.”
Permission for Racist or Misleading Content
The report also permitted chatbots to generate content that disparaged individuals based on race, such as asserting that Black people were less intelligent than white people, although general hate speech was banned.
Additionally, false claims could be produced as long as they included a disclaimer. For instance, an AI could falsely claim a British royal had a medical condition, provided it was clearly labeled as untrue. Meta declined further comment on these provisions.
Celebrity Content and Violence Guidelines
The standards also addressed sexualized content involving public figures. Explicit requests for celebrities like Taylor Swift were to be rejected, with humorous alternatives suggested, such as generating an image of her holding a giant fish.
The guidelines allowed some violent depictions, including punching or threatening scenarios, while explicitly banning content with gore, death, or extreme harm.
Despite the controversies, Meta has not publicly released a fully updated version of the guidelines, meaning some of these allowances may still be in place. This latest exposure of Meta’s AI policies underscores the ongoing debate about ethics, safety, and responsibility in generative AI.




