Facebook reports spike in violent content

In the first quarter, Facebook disabled about 583 million fake accounts and removed 837 million pieces of spam, the report said.

Facebook's new Community Standards Enforcement Report "is very much a work in progress and we will likely improve our methodology over time", Chris Sonderby, VP and deputy general counsel, wrote in a blog post about the report. Facebook presents these figures as a victory, saying close to 100 percent of the fake accounts and spam were detected by its own algorithms before users reported them.

Continued conflict in Syria may be one factor, said Alex Schultz, vice president of data analytics: "Whenever a war starts, there's a big spike in graphic violence". The company says more than 96 percent of the posts removed by Facebook for featuring sex, nudity or terrorism-related content were flagged by monitoring software before any users reported them. "This is especially true where we've been able to build artificial intelligence technology that automatically identifies content that might violate our standards".

During the press call, Schultz noted that the company's content review team will be a mix of full-timers and contractors spread across 16 locations around the world.

Several categories of violating content outlined in Facebook's moderation guidelines - including child sexual exploitation imagery, revenge porn, credible violence, suicidal posts, bullying, harassment, privacy breaches and copyright infringement - are not included in the report.

The posts that keep Facebook's reviewers busiest are those showing adult nudity or sexual activity - quite apart from child pornography, which is not covered by the report.

The response to extreme content on Facebook is particularly important given that it has come under intense scrutiny amid reports of governments and private organizations using the platform for disinformation campaigns and propaganda.


Facebook also took down 837 million pieces of spam in Q1, nearly all of which were identified and flagged before anyone reported them.

Hate speech: In Q1, the company took action on 2.5 million pieces of such content, up about 56% from 1.6 million during Q4.

The company has a policy of removing content that glorifies the suffering of others.

"These kinds of metrics can help our teams understand what's actually happening to 2-plus billion people", he said.

The social network's global scale - and the extensive efforts it undertakes to keep the platform from descending into chaos - was outlined Tuesday in its first-ever transparency report of this kind. While it blocked more than 500 million fake accounts, Facebook estimates that about 3 to 4 percent of accounts on the site are fake.

Facebook banned "about 583 million fake accounts - most of which were disabled within minutes of registration". Of that content, 86 percent was flagged by the company's own technology.
