Facebook revealed Tuesday that it removed more than half a billion fake accounts and millions of pieces of violent or obscene content during the first three months of 2018, pledging more transparency while shielding its chief executive from new public questioning about the company's business practices.
Using new artificial-intelligence-based technology, Facebook can find and moderate content more rapidly and effectively than human reviewers can, at least when it comes to detecting fake accounts and spam.
Graphic violence: During Q1, Facebook took action against 3.4 million pieces of content for graphic violence, up 183% from 1.2 million during Q4.
Facebook has been in hot water following allegations of data privacy violations by Cambridge Analytica, an election consultancy that improperly harvested information from millions of Facebook users for the Brexit campaign and Donald Trump's US presidential bid.
Facebook reports increased action on graphic content
Hate speech: In Q1, the company took action on 2.5 million pieces of such content, up about 56% from 1.6 million during Q4. One hypothesis for the increase, Schultz said, is that "in [the most recent quarter], some bad stuff happened in Syria".
The report covers the six months from October 2017 to March 2018 and also addresses graphic violence, nudity and sex, terrorist propaganda, spam and fake accounts.
Most of the content was found and flagged before users had a chance to spot it and alert the platform. For every 10,000 content views, an estimated 22 to 27 contained graphic violence and 7 to 9 contained adult nudity and sexual activity that violated the rules, the company said.
Of course, the authors note, while such AI systems are promising, it will take years before they are effective at removing all objectionable content.
Getting rid of racist, sexist and other hateful remarks on Facebook is challenging for the company because computer programs have difficulty understanding the nuances of human language, the company said Tuesday.
The company said most of the increase was the result of improvements in detection technology.
In total, the social network took action on 3.4 million posts or parts of posts that contained such content. During Q1, Facebook found and flagged 85.6% of such content it took action on before users reported it, up from 71.6% in Q4.
While Facebook uses what it calls "detection technology" to root out offending posts and profiles, the software has difficulty detecting hate speech. The content screening has nothing to do with privacy protection, though, and is aimed at maintaining a family-friendly atmosphere for users and advertisers.
"Hate speech content often requires detailed scrutiny by our trained reviewers to understand context and decide whether the material violates standards, so we tend to find and flag less of it, and rely more on user reports, than with some other violation types", the report says.
Facebook says AI has played an increasing role in flagging this content.
The social networking giant also said that it disabled 583 million fake accounts in the first quarter of the year and now estimates that between 3% and 4% of all active accounts during the period were fake. "While not always ideal, this combination helps us find and flag potentially violating content at scale before many people see or report it".
Facebook has not fully answered questions on data privacy - UK lawmakers
In its latest response, Facebook added a little more colour, although the answer is unlikely to satisfy privacy experts. Zuckerberg had made it clear that any app that either refused or failed an audit would be banned from Facebook.