Meta's Oversight Board on Thursday said the company's rules were "not sufficiently clear" in barring sexually explicit AI-generated depictions of real people and called for changes to stop such imagery from circulating on its platforms.
The board found both images violated Meta's rule barring "derogatory sexualized photoshop," which the company classifies as a form of bullying and harassment, and said Meta should have removed them promptly. The user appealed, but the company again declined to act, reversing course only after the board took up the case, it said.

"Restrictions on this content are legitimate," the board said. "Given the severity of harms, removing the content is the only effective way to protect the people impacted."
The board also criticized Meta for declining to add the Indian woman's image to a database that enables automatic removals, like the one that occurred in the American woman's case.