THE MAKERS OF TIKTOK, the Chinese video-sharing app with hundreds of millions of users around the world, instructed moderators to suppress posts created by users deemed too ugly, poor, or disabled for the platform, according to internal documents obtained by The Intercept. These same documents show moderators were also told to censor political speech in TikTok livestreams, punishing those who harmed “national honor” or broadcast streams about “state organs such as police” with bans from the platform.
These previously unreported Chinese policy documents, along with conversations with multiple sources directly familiar with TikTok’s censorship activities, provide new details about the company’s efforts to enforce rigid constraints across its reported 800 million or so monthly users while it simultaneously attempts to bolster its image as a global paragon of self-expression and anything-goes creativity. They also show how TikTok controls content on its platform to achieve rapid growth in the mold of a Silicon Valley startup while simultaneously discouraging political dissent with the sort of heavy hand regularly seen in its home country of China.
On TikTok, livestreamed military movements and natural disasters, video that “defamed civil servants,” and other material that might threaten “national security” has been suppressed alongside videos showing rural poverty, slums, beer bellies, and crooked smiles. One document goes so far as to instruct moderators to scan uploads for cracked walls and “disreputable decorations” in users’ own homes — then to effectively punish these poorer TikTok users by artificially narrowing their audiences.
Today, The Intercept and The Intercept Brasil are publishing two internal TikTok moderation documents, recreated with only minor redactions, below. One lays out bans for ideologically undesirable content in livestreams, and another describes algorithmic punishments for unattractive and impoverished users. The documents appear to have been originally drafted in Chinese and later — at times awkwardly — translated into English for use in TikTok’s global offices. TikTok is owned by ByteDance, a Beijing-headquartered company that operates a suite of popular sites and social apps, a sort of Chinese analog to Facebook. ByteDance, founded in 2012, has come under scrutiny by the U.S. government over its ties to the Chinese Communist Party and numerous reports that the app’s censorship tactics mirror those of Beijing; Sens. Chuck Schumer and Josh Hawley have both worked to limit TikTok’s use by government personnel, arguing that it presents a risk to national security.
TikTok spokesperson Josh Gartner told The Intercept that “most of” the livestream guidelines reviewed by The Intercept “are either no longer in use, or in some cases appear to never have been in place,” but would not provide specifics. Regarding the policy of suppressing videos featuring unattractive, disabled, or poor users, Gartner stated that the rules “represented an early blunt attempt at preventing bullying, but are no longer in place, and were already out of use when The Intercept obtained them.”
Sources indicated that both sets of policies were in use through at least late 2019 and that the livestream policy document was created in 2019. Gartner would not explain why a document purportedly aimed at “preventing bullying” would make zero mention of bullying, nor why it offers an explicit justification of attracting users, not protecting them.
Excluding Undesirable Users From the “For You” Fire Hose
One moderation document outlining physical features, bodily and environmental, deemed too unattractive spells out a litany of flaws that could be grounds for invisibly barring a given clip from the “For You” section of the app, where TikTok videos are funneled to a vast audience based on secret criteria. Although what it takes to earn a spot on the “For You” section remains a mystery, the document reveals that it took very little to be excluded, all based on the argument that uploads by unattractive, poor, or otherwise undesirable users could “decrease the short-term new user retention rate,” as stated in the document. This is of particular importance, the document stresses, for videos in which the user “is basically the only focus of the video … if the character’s appearance or the shooting environment is not good, the video will be much less attractive, not worthing [sic] to be recommended to new users.”
Under this policy, TikTok moderators were explicitly told to suppress uploads from users with flaws both congenital and inevitable. “Abnormal body shape,” “ugly facial looks,” dwarfism, “obvious beer belly,” “too many wrinkles,” “eye disorders,” and many other “low quality” traits are all enough to keep uploads out of the algorithmic fire hose. Videos in which “the shooting environment is shabby and dilapidated,” including but “not limited to … slums, rural fields” and “dilapidated housing,” were also systematically hidden from new users, though “rural beautiful natural scenery could be exempted,” the document notes.
The document, presented in both English and Chinese, advised TikTok’s moderators that for videos shot in someone’s house with “no obvious slummy charactor [sic],” special care should be given to check for slummy features such as a “crack on the wall” or “old and disreputable decorations.” The mere appearance of residential disrepair or crooked teeth in the frame, the document shows, could mean the difference between worldwide distribution and relative invisibility.
The justification here, as with “ugly” uploaders, was again that TikTok should retain an aspirational air to attract and hold onto new users: “This kind of environment is not that suitable for new users for being less fancy and appealing.” Social startups, eager to build on their momentum rather than disappear into the app heap of history, commonly consider growth and user retention to be by far their top priority, but rarely is the public privy to the details of this kind of nakedly aggressive expansion.
It’s unclear how widespread this exclusionary practice has been. Gartner, the TikTok spokesperson, told The Intercept that “the policies mentioned appear to be the same or similar to those published by” the German publication Netzpolitik in December, in a story about how TikTok artificially suppressed access to videos created by disabled, overweight, and LGBT users. Gartner said those rules represented an effort “at preventing bullying, but are no longer in place, and were already out of use when The Intercept obtained them.”
However, the TikTok documents reviewed by The Intercept include a range of policies beyond those reported by Netzpolitik, involving among other things the suppression of content from poor, old, and “ugly” users. Furthermore, these documents contain no mention of any anti-bullying rationale, instead explicitly citing an entirely different justification: the need to retain new users and grow the app.
In stark contrast to its practice of strategically suppressing the unattractive, infirm, and despondent, TikTok also conducted closed-door outreach to its more popular users, one TikTok moderation source told The Intercept. “Operators who spoke directly to influencers and official content creators always make video conferences with groups to pass ‘safety rules’, thus reducing the chances of creating videos that go against what [ByteDance] think is right,” the source told us. According to this source, their office held regular video conferences between operators, a person from the “safety team,” and select TikTok “influencers” to provide advance warning of changes to the app’s content policies, helping ensure that they wouldn’t run afoul of any new rules or prohibitions as they made their way across two billion smartphones. Gartner did not comment when asked about this outreach.
Censoring Political Speech
While TikTok policies around the “For You” section had to do with suppression, that is, keeping certain content from becoming too popular, a second document obtained by The Intercept is concerned with censorship, laying out rules for outright removing content from the company’s video livestreaming feature. The rules go far beyond the usual Beijing bugbears like Tiananmen Square and Falun Gong. Crucially, these rules could be easily interpreted to proscribe essential components of political speech by classifying them as dangerous or defamatory.
Any number of the document’s rules could be invoked to block discussion of a wide range of topics embarrassing to government authorities: “Defamation … towards civil servants, political or religious leaders” as well as towards “the families of related leaders” has been, under the policy, punishable with a terminated stream and a daylong suspension. Any broadcasts deemed by ByteDance’s moderators to be “endangering national security” or even “national honor and interests” were punished with a permanent ban, as was “uglification or distortion of local or other countries’ history,” with the “Tiananmen Square incidents” cited as only one of three real-world examples. A “Personal live broadcast about state organs such as police office, military etc” would knock your stream offline for three days, while documenting military or police activity would get you kicked off for that day (would-be protestors, take note).
Gartner refused to clarify whether the substance and intent of these restrictions are still in effect under different phrasing, for example, whether there is any current rule whatsoever against “harming national honor” or documenting police movements. “Like all platforms, we have policies that protect our users, and protect national security, for example banning any accounts that promote hate speech or terrorism, as outlined in our Community Standards,” Gartner wrote in an emailed statement.