Meta's apps are still promoting child predation content, report finds

The company says it's taken new steps to identify "potentially suspicious adults."


Meta is failing to stop vast networks of people using its platforms to promote child abuse content, a new report in The Wall Street Journal says, citing numerous disturbing examples of child exploitation it uncovered on Facebook and Instagram. The report, which comes as Meta faces renewed pressure over its handling of children’s safety, has prompted fresh scrutiny from European Union regulators.

In the report, The Wall Street Journal detailed tests it conducted with the Canadian Centre for Child Protection showing how Meta’s recommendations can suggest Facebook Groups, Instagram hashtags and other accounts that are used to promote and share child exploitation material. According to those tests, Meta was slow to respond to reports about such content, and its own algorithms often made it easier for people to find abuse content and connect with others interested in it.

For example, the Canadian Centre for Child Protection told the paper a “network of Instagram accounts with as many as 10 million followers each has continued to livestream videos of child sex abuse months after it was reported to the company.” In another disturbing example, Meta initially declined to take action on a user report about a public-facing Facebook Group called “Incest.” The group was eventually taken down, along with other similar communities.

In a lengthy update on its website, Meta said that “predators are determined criminals who test app, website and platform defenses,” and that it had improved many of its internal systems to restrict “potentially suspicious adults.” The company said it had “expanded the existing list of child safety related terms, phrases and emojis for our systems to find” and had employed machine learning to uncover new search terms that child predators could potentially exploit.

The company said it’s using technology to identify “potentially suspicious adults” in order to prevent them from connecting with each other, including in Facebook Groups, and from seeing each other’s content in recommendations. Meta also told The Wall Street Journal it “has begun disabling individual accounts that score above a certain threshold of suspicious behavior.”

The social network is facing a growing backlash over its handling of child safety. The Wall Street Journal also recently reported that Instagram Reels recommendations are serving content aimed at people who “might have a prurient interest in children.” Dozens of states recently sued Meta for allegedly harming the mental health of its youngest users, and failing to keep children younger than 13 off its apps. Mark Zuckerberg will no doubt face intense questions about these allegations next month when he appears at a Senate Judiciary Committee hearing focused on child safety online. His counterparts from TikTok, Snap, X and Discord are also slated to testify.

Meanwhile, Meta is also facing new pressure from regulators abroad. European Union officials are using a new law to probe the company’s handling of child abuse material, following The Journal’s report. The company has been given a December 22 deadline to turn over data to the bloc.