FB shares data on child abuse, terror content on Instagram
Facebook has for the first time shared data on how it takes action against child nudity and child sexual exploitation, terrorist propaganda, illicit firearm and drug sales, and suicide and self-injury on its photo-sharing app Instagram.
San Francisco
In Q2 2019, Facebook removed about 512,000 pieces of content related to child nudity and child sexual exploitation on Instagram.
"In Q3 (July-September period), we saw greater progress and removed 754,000 pieces of content, of which 94.6 per cent we detected proactively," Guy Rosen, VP Integrity, said in a statement on Wednesday.
Instagram, like Facebook, has also become a platform for such content.
"For child nudity and sexual exploitation of children, we made improvements to our processes for adding violations to our internal database in order to detect and remove additional instances of the same content shared on both Facebook and Instagram," Rosen explained.
In its "Community Standards Enforcement Report, November 2019," the social networking platform said it has been detecting and removing content associated with Al Qaeda, ISIS and their affiliates on Facebook above 99 per cent.
"The rate at which we proactively detect content affiliated with any terrorist organisation on Facebook is 98.5 per cent and on Instagram is 92.2 per cent," informed the company.
In the area of suicide and self-injury, Facebook took action on about 2 million pieces of content in Q2 2019.
"We saw further progress in Q3 when we removed 2.5 million pieces of content, of which 97.3 per cent we detected proactively.
"On Instagram, we saw similar progress and removed about 835,000 pieces of content in Q2 2019, of which 77.8 per cent we detected proactively, and we removed about 845,000 pieces of content in Q3 2019, of which 79.1 per cent we detected proactively," said Rosen.
In Q3 2019, Facebook removed about 4.4 million pieces of drug sale content. It removed about 2.3 million pieces of firearm sales content in the same period.
On Instagram, the company removed about 1.5 million pieces of drug sale content and 58,600 pieces of firearm sales content.
On the spread of hate speech across its platforms, Facebook said it can detect such harmful content before people report it and, sometimes, before anyone sees it.
"With these evolutions in our detection systems, our proactive rate has climbed to 80 per cent, from 68 per cent in our last report, and we've increased the volume of content we find and remove for violating our hate speech policy," said Rosen.