Facebook’s moderation policies allow users to attack “public figures,” including with “calls for death,” without being suspended, The Guardian reported on Tuesday, citing leaked guidelines.
The internal moderator guidelines define public figures as well-known people who may become subjects of discussion on social media. The category ranges from politicians of all stripes to local celebrities, journalists who write or speak publicly, and users with audiences of over 100,000 people. It also includes “people who are mentioned in the title, subtitle or preview of 5 or more news articles or media pieces within the last 2 years.”
According to the Guardian, Facebook accepts that such people may become targets of certain types of abuse, since discussions “often include critical commentary of people who are featured in the news.” Attacks on famous people are considered legitimate as long as they are not “purposefully exposed,” for example by being tagged in a post; the same attacks are prohibited when directed at private individuals.
“For public figures, we remove attacks that are severe as well as certain attacks where the public figure is directly tagged in the post or comment. For private individuals, our protection goes further: we remove content that’s meant to degrade or shame, including, for example, claims about someone’s sexual activity,” it says.
According to Imran Ahmed, founder of the Center for Countering Digital Hate, the distinction between private and public individuals is “flabbergasting,” as it endangers the safety of officials and other well-known people.
“Despite high-profile attacks in recent years, including the murder of Jo Cox MP and the US Capitol domestic terrorist attacks, promoting violence against public servants is sanctioned by Facebook if they aren’t tagged in the post,” Ahmed said.
“Highly visible abuse of public figures and celebrities acts as a warning – a proverbial head on a pike – to others. It is used by identity-based hate actors who target women and minorities to dissuade participation by the very groups that campaigners for tolerance and inclusion have worked so hard to bring into public life. Just because someone isn’t tagged doesn’t mean that the message isn’t heard loud and clear,” he said.
A Facebook spokesperson told the Guardian that the platform believes it is important to allow critical discussion of politicians and other public figures, but that this does not mean Facebook permits “people to abuse or harass them” on its apps.
“We remove hate speech and threats of serious harm no matter who the target is, and we’re exploring more ways to protect public figures from harassment. We regularly consult with safety experts, human rights defenders, journalists and activists to get feedback on our policies and make sure they’re in the right place,” they stated.
Content moderation is a long-standing challenge for Facebook, repeatedly drawing the company into controversies over issues ranging from election interference to the spread of coronavirus misinformation. More recently, the platform has been accused of failing to provide a “safe” environment for users. Last week, the Paris-based advocacy organization Reporters Without Borders filed a lawsuit alleging that Facebook’s policies are “largely mendacious” and contradicted by “the large-scale dissemination of hate speech and false information on its networks.”