Islamabad, Aug 15, 2025: An internal Meta Platforms document outlining the company's AI rules has revealed troubling chatbot permissions, including romantic chats with minors, the spread of false medical claims, and support for discriminatory arguments. The guide, more than 200 pages long, was approved by Meta's legal, policy, and ethics teams.

The guidelines, titled “GenAI: Content Risk Standards”, permitted the bots to describe children's appearance in suggestive terms and even to send them romantic messages. After Reuters raised questions, Meta deleted some of these sections and acknowledged that the permissions should never have existed in the first place. A company spokesperson described them as erroneous and contrary to policy.

Other provisions allowed the bots to produce demeaning racial arguments, including claims that one group is less intelligent than another, and to generate sensational falsehoods so long as the content was explicitly labelled as untrue. For example, a bot could post a fabricated story about a British royal having an STI, provided a caveat was attached.


The standards limited sexualised images of celebrities, directing the bots to deflect such requests with humour, for instance by turning a request to draw Taylor Swift topless into a picture of her holding an enormous fish. The violence rules permitted depictions of non-lethal attacks but barred excessive gore.

Critics say the rules point to serious ethical failings at Meta, particularly because the platform itself generates content of this nature. Experts stress that even where legal standards remain vague, the moral and reputational risks are real.
