Leaks 'expose peculiar Facebook moderation policy'
How Facebook censors what its users see has been revealed by internal documents, the Guardian newspaper says.
It said the manuals revealed the criteria used to judge whether posts were too violent, sexual, racist or hateful, or supported terrorism.
The Guardian said Facebook's moderators were "overwhelmed" and had only seconds to decide if posts should stay.
The BBC understands the documents seen by the newspaper closely resemble those Facebook currently uses to guide staff.
The leak comes soon after British MPs said social media giants were "failing" to tackle toxic content.
Careful policing
The newspaper said it had managed to get hold of more than 100 manuals used internally at Facebook to educate moderators about what could, and could not, be posted on the site.
The manuals cover a vast array of sensitive subjects, including hate speech, revenge porn, self-harm, suicide, cannibalism and threats of violence.
Facebook moderators interviewed by the newspaper said the policies Facebook used to judge content were "inconsistent" and "peculiar".
The decision-making process for judging whether content about sexual topics should stay or go was among the most "confusing", they said.
The Open Rights Group, which campaigns on digital rights issues, said the report started to show how much influence Facebook could wield over its two billion users.
"Facebook's decisions about what is and isn't acceptable
have huge implications for free speech," said an ORG statement. "These
leaks show that making these decisions is complex and fraught with
difficulty."
It added: "Facebook will probably never get it right but at the very least there should be more transparency about their processes."
'Alarming' insight
In a statement, Monika Bickert, Facebook's head of global policy management, said: "We work hard to make Facebook as safe as possible, while enabling free speech.
"This requires a lot of thought into detailed and
often dfficult questions, and getting it right is something we take
very seriously," she added.
As well as human moderators who review potentially contentious posts, Facebook is known to use AI-derived algorithms to screen images and other material before they are posted. It also encourages users to report pages, profiles and content they feel are abusive.
In early May, the UK parliament's influential Home Affairs Select Committee strongly criticised Facebook and other social media companies as being "shamefully far" from tackling the spread of hate speech and other illegal and dangerous content.
The government should consider making sites pay to help police content, it said.
Soon after, Facebook said it planned to hire more than 3,000 extra people to review content.
British charity the National Society for the Prevention of Cruelty to Children (NSPCC) said the report into how Facebook worked was "alarming to say the least".
"It needs to do more than hire an extra 3,000 moderators," said a statement from the organisation.
"Facebook, and other social media companies, need to be independently regulated and fined when they fail to keep children safe."