
AI-Generated Child Abuse Content Spreads on TikTok Despite Platform Rules


Video clips created using artificial intelligence that appear to show underage girls in sexually suggestive clothing or poses have accumulated millions of likes on TikTok, despite the platform’s rules against such content, new research from an online safety nonprofit has found.

Researchers uncovered over a dozen accounts sharing videos of AI-generated girls in tight clothes, underwear and school uniforms, sometimes in suggestive poses. The accounts collectively have hundreds of thousands of followers. Comments on many of the videos contained links to chats on the messaging platform Telegram in which child pornography was being sold, the report said.

Thirteen of those accounts were still active as of Wednesday evening, after all 15 had been reported last week using TikTok’s in-app tool for flagging content, said Carlos Hernández-Echevarría, the report’s lead researcher and assistant director and head of public policy at Maldita.es. The report was published Thursday by the Spain-based nonprofit, which examines online disinformation and pushes for transparency in the media.

The report highlights potential blind spots in TikTok’s enforcement of its own policies on AI content, even when that content appears to depict sexualized images of computer-generated minors. Tech platforms are coming under mounting pressure to protect young users as more jurisdictions implement online safety laws such as Australia’s under-16 social media ban, which took effect this week.

“There’s really nothing nuanced about this at all,” Hernández-Echevarría told CNN. “No one that is, you know, an actual person doesn’t find this gross and want it removed.”

TikTok says it maintains a zero-tolerance policy for content that “depicts, promotes or engages in child sexual abuse or exploitation.” Its community guidelines specifically ban “accounts focused on AI images of under-18 individuals in unsuitable adult-like situations, or involving nudity or sex.” Another section of its policies states that TikTok does not permit “sexual content involving a young person, including anything that shows or facilitates sexual activity or abuse,” which encompasses “AI-generated images” and anything that “sexualizes or fetishizes a real life ‘body’ from anyone who is, or appears to be, under 18.”

The company has said it uses a mixture of visual, audio and text-based tools and human teams to moderate content. From April to June, TikTok removed more than 189 million videos and banned around 108 million accounts, the company said. It says 99 percent of content violating its policies on nudity and body exposure, including of young people, was removed proactively, as was 97 percent of content violating its policies on AI-generated content.

A TikTok spokesman did not offer a comment in response to the report.

Maldita.es found the TikTok videos through test accounts it uses to monitor for potential disinformation and other harmful content in its work.

“One of our team members sort of began to notice that there was this trend of these (AI-generated) really, really young kids dressed as adults and particularly when you kind of started going into the comments, it turned out we had a little bit of money incentive,” Hernández-Echevarría said.

Some of the accounts identified their videos in their bios as depicting “delicious-looking (female) high school students” and “junior models,” according to the report. “It’s not just girls twerking in the videos, even tamer clips of teenage girls licking ice cream are filled with profane sexual comments,” it says.

In some instances, the accountholders used TikTok’s “AI Alive” feature — a tool that animates still images — to transform AI-generated pictures into videos, Hernández-Echevarría said. Other videos appear to have been created using outside AI tools, he added.

Comments on a number of the videos contained links to private Telegram chats that were promoting child pornography, the report said.

“Some of those accounts answered direct messages on TikTok with links to external websites selling AI-generated videos and images that sexualize minors, ranging from 50 to 150 euros ($71-$213),” the report says.

The group did not complete any transactions, and it reported the websites and Telegram accounts to police in Spain.

“Telegram is deeply committed to ensuring that CSAM does not appear on its platform, and takes a strict zero-tolerance stance,” Telegram spokesperson Remi Vaughn said in a statement to CNN. He said Telegram checks all media uploaded to its public channels against a database of child sexual abuse content, and that the company uses both AI and human moderators to catch material that slips past user reports. Telegram also allows scrutiny from NGOs around the world so the company can enforce its terms of service, he added.

Telegram has taken down more than 909,000 public groups and channels dedicated to child sexual abuse material in 2025, Vaughn said.

The organization said it reported 15 accounts and 60 related videos to TikTok using the app’s reporting tools on Dec. 12, categorizing them as involving “sexually suggestive behavior by youth.” The accounts collectively had nearly 300,000 followers, and their 3,900 videos had been liked more than 2 million times in total, the report said.

By Friday, TikTok had replied to say 14 of the accounts were not in violation and that one account had been “restricted,” Maldita.es said. The group appealed every decision, but “exactly 30 minutes” after each appeal, TikTok upheld its original determinations, the report says.

Of the 60 videos the group had reported, TikTok said in its Friday response that 46 did not violate its policies; it removed or restricted the other 14. Upon the researchers’ appeal, TikTok took down three additional videos and restricted another. It was not immediately clear how restricting a video differs from removing it.

Among the videos that weren’t taken down were one showing an AI-generated young girl in skimpy clothing taking a shower and other AI-generated depictions of what looked like young girls posing provocatively in lingerie or bikinis, the group said.

“There’s no way a human being looks at this and sees what’s happening,” Hernández-Echevarría said. “The comments are super crude, are full of the most disgusting people on earth making comments.”

By Wednesday, at least one account and one video that TikTok’s review process had earlier said did not violate its rules could no longer be accessed. It was not clear, he said, why they were not taken down when people first reported them.

Thursday’s report follows a study released in October by Global Witness, a UK-based nonprofit, which found that TikTok had guided young users to sexually explicit content via its suggested search terms. That report determined that TikTok was suggesting highly sexualized searches to users who said they were as young as 13, even when they were searching in “restricted mode.” In response, TikTok said it had taken down content that violated its policies and made improvements to the search suggestion feature.