Facebook is intensifying efforts to crack down on child abuse content on its platform, with the social media giant tightening norms, improving detection capabilities, and updating tools to prevent the sharing of content that victimises children.
Reiterating its zero-tolerance stance on child sexual exploitation content, Facebook said using its apps to harm children is "abhorrent and unacceptable".
"...we're announcing new tools we're testing to keep people from sharing content that victimises children, and recent improvements we've made to our detection and reporting tools," Antigone Davis, Global Head of Safety at Facebook, said in a blog post.
Facebook said it is developing solutions, including new tools and policies to curb sharing of such content on its platform.
"We've started by testing two new tools — one aimed at the potentially malicious searching for this content and another aimed at the non-malicious sharing of this content," Davis explained.
The first is in the form of a pop-up shown to people who search on its apps for terms associated with child exploitation. The pop-up offers ways to seek help from offender diversion organisations and warns about the consequences of viewing illegal content.
"The second is a safety alert that informs people who have shared viral, meme child exploitative content about the harm it can cause and warns that it is against our policies and there are legal consequences for sharing this material," Davis added.
The safety alert is in addition to removing the content and reporting it to the US-based National Center for Missing and Exploited Children (NCMEC).
"Accounts that promote this content will be removed. We are using insights from this safety alert to help us identify behavioural signals of those who might be at risk of sharing this material, so we can also educate them on why it is harmful and encourage them not to share it on any surface — public or private," Davis wrote.
Facebook has scaled up efforts to detect and remove networks that violate the platform's rules and has updated child safety policies.
"...We've updated our child safety policies to clarify that we will remove Facebook profiles, pages, groups and Instagram accounts that are dedicated to sharing otherwise innocent images of children with captions, hashtags or comments containing inappropriate signs of affection or commentary about the children depicted in the image," the blog post further said.
Under the new policy, while the images alone may not break the rules, the accompanying text will help the platform assess whether the content sexualises children and whether the associated profile, page, group or account should be removed.
Facebook said it has made it easier to flag content for violating child exploitation policies.
"To do this, we added the option to choose "involves a child" under the "Nudity and Sexual Activity" category of reporting in more places on Facebook and Instagram," it said, adding that such reports would be prioritised for review.
Facebook's analysis of 150 accounts it reported to NCMEC for uploading child exploitative content in July and August 2020 and in January 2021 revealed that an estimated more than 75 per cent of the people did not exhibit malicious intent.
"Instead, they appeared to share for other reasons, such as outrage or in poor humour...Based on our findings, we are developing targeted solutions, including new tools and policies to reduce the sharing of this type of content," Davis said in the blog post.
Speaking to reporters, Karuna Nain, Director of Global Safety Policy at Facebook, said the company has a strict zero-tolerance policy for child exploitation on its platform.
Nain, however, did not comment on how the changes proposed by India in IT intermediary rules -- that seek to raise the accountability of social media platforms -- would impact the firm.
Issues ranging from provocative and inflammatory posts to misinformation, data breaches and privacy lapses have placed social media companies, including Facebook, WhatsApp and Twitter, in the line of fire in India and other markets.
The Indian government has confronted social media companies on multiple occasions and asserted that they need to comply with the rules of the land and crack down on misuse of their platforms.
IT and Communications Minister Ravi Shankar Prasad has warned social media platforms of strict action for failure to crack down on provocative content, saying they have to fully comply with the country's laws.
The intermediary guidelines, which apply to digital platforms like Facebook, are in the final stages of drafting and are expected to be announced soon.
The proposed rules aim to curb misuse of social media by possibly tightening the timelines for taking action on malicious content, and will raise the accountability of digital platforms to users and Indian law.