Instagram under fire for images of sexualized children | Online abuse
Instagram is failing to remove accounts that attract hundreds of sexualized comments for posting photos of children in swimsuits or partially clothed, even after they are reported through the app's built-in reporting tool.
Instagram’s parent company, Meta, says it takes a zero-tolerance approach to child exploitation. But accounts flagged as suspicious through the in-app reporting tool have been deemed acceptable by its automated moderation technology and remain online.
In one case, an account posting photos of children in sexualized poses was reported by a researcher using the in-app reporting tool. Instagram responded the same day, saying that “due to high volume” it had been unable to view the report, but that its “technology discovered that this account was likely not in violation of our community guidelines”. The user was prompted to block or unfollow the account, or to report it again. The account remained live on Saturday, with more than 33,000 followers.
Similar accounts – known as “tribute pages” – were also discovered on Twitter.
One account, which posted photos of a man performing sex acts over images of a 14-year-old TikTok influencer, was deemed not to break Twitter’s rules after it was reported using the tools built into the app – even though the account holder suggested in his posts that he was looking to connect with people to share illegal material. “Looking to swap some younger stuff,” one of his tweets said. The account was taken down only after the campaign group Collective Shout spoke about it publicly.
The findings raise concerns about the platforms’ built-in reporting tools, with critics saying the content appears to be allowed to remain online because it does not meet a criminal threshold – despite being linked to suspected illegal activity.
The accounts are often used for “breadcrumbing” – where offenders post technically legal images, then arrange to move into private messaging groups to share other content.
Andy Burrows, head of online safety policy at the NSPCC, described the accounts as a “shop window” for paedophiles. “Companies should proactively identify this content and then remove it themselves,” he said. “But even when it is brought to their attention, they judge it not to be a threat to children and allow it to stay on the site.”
He called on MPs to close “loopholes” in the Online Safety Bill – which aims to regulate social media companies and is due to be debated in Parliament on April 19. The bill should, he said, require companies to tackle not only illegal content but also content that is clearly harmful yet may not meet the criminal threshold.
Lyn Swanson Kennedy of Collective Shout, an Australia-based charity that monitors abusive content globally, said the platforms were relying on outside organizations to moderate their content for them. “We call on platforms to address some of these very concerning activities, which put underage girls in particular at serious risk of harassment, exploitation and sexualisation,” she said.
Meta said it has strict policies against content that sexually exploits or endangers children, and that it removes such content when it becomes aware of it. “We’re also focused on preventing harm by banning suspicious profiles, preventing adults from messaging kids they’re not connected with, and defaulting under 18s to private accounts,” a spokesperson said.
Twitter said the accounts reported to it have now been permanently suspended for violating its rules. A spokesperson said: “Twitter has zero tolerance for any material that depicts or promotes the sexual exploitation of children. We aggressively combat CSE online and have invested heavily in technology…to enforce our policy.”
Imran Ahmed, chief executive of the Center for Countering Digital Hate, a nonprofit think tank, said the platforms’ failure to act was “an abrogation of the fundamental duty to protect children”.