The app TikTok has been in the spotlight of late amid reports that it is losing its war against disinformation. These reports cite the lack of metadata in the app, Russian-owned accounts that remain visible on the platform, and the absence of real-time data on the app's content moderation process. As a result, governments and researchers have had no reliable way of knowing whether offending accounts have actually been removed.
Russian-owned and even state-run accounts still visible on the platform
Despite TikTok's suspension of new uploads in Russia, Russian-owned and even state-run accounts are still visible on the platform, and many users have found ways to circumvent the restriction.
A report by Tracking Exposed, a European non-profit research group, examined the effects of TikTok's restrictions on Russian users. It found a vast network of pro-Putin, Russian-owned accounts on the platform, and that TikTok's recommendation algorithm was promoting content from the Russian government.
TikTok is owned by the Chinese company ByteDance and is the last non-Russian-owned global social media platform still operating in the country. The platform announced that it would limit Russian-based content, citing a newly enacted “fake news” law that punishes what the Kremlin deems disinformation.
After the new law took effect, the Kremlin began blocking access to most of the major Western platforms operating in the country, and coverage from inside Russia was sharply curtailed: CNN and other broadcasters suspended operations there, companies such as Microsoft and Apple paused business in the country, and the New York Times moved its journalists out of Russia.
Alongside these censorship measures, the Russian government also restricted access to Twitter and Facebook. Although the restrictions did not specifically target TikTok, many Russian users nevertheless posted videos of anti-war protests in Moscow and other Russian cities.
Some US-based TikTokers used the platform to express support for President Vladimir Putin and the Russian military, while others documented their experiences under siege in Ukraine and detailed the violence in major Ukrainian cities.
The TikTok algorithm also allows users to amplify videos previously posted inside Russia, meaning that many users can still view and comment on content uploaded before the March 6 suspension.
Algorithm pushes obscure content into main feeds
The TikTok app serves users and creators alike. For creators, it offers a way to get paid on their own terms; for users, its main draw is ubiquity. Its For You feed is built to surface videos from accounts a viewer does not follow, which is how obscure content can reach main feeds, and the app has become a natural platform for the next generation of vlogging. Unlike Instagram, TikTok has not abandoned the notion of letting its users create their own content, which allows a wealth of interesting material to emerge.
Aside from the usual suspects, the app is home to thousands of unique creators and fans. TikTok has also made a show of openness, from announcing comment-control policies to providing detailed information about its Transparency and Accountability Center. It has claimed an impressive number of “firsts,” such as hosting one of the world's largest collections of creator content, and it now lets its most prominent creators build and curate their own landing pages; as of this writing, just over a thousand had done so.
Of course, it is no secret that TikTok has a vested interest in maintaining a healthy ecosystem for its creators and fans, which is where the real magic happens. To keep users happy, the app has gone through several algorithm updates and has released surveys aimed at figuring out what works and what doesn't. It has also become a testing ground for new features, a fact that has not gone unnoticed by competitors such as Instagram and YouTube.
Lessons from government responses during the first few months of the war
During the first few months of Russia's invasion of Ukraine, governments responded in a variety of ways: some focused on debunking, others on information sharing and fact-checking, and others on raising awareness of the threat. These responses include the G7 Rapid Response Mechanism (RRM), established in 2018, while the United States' Global Engagement Center has been monitoring and reporting on Russian influence efforts since before the war began.
International organisations have also provided assistance. The European Centre of Excellence for Countering Hybrid Threats (Hybrid CoE), an international collaborative effort based in Helsinki, Finland, supports Ukrainian counter-disinformation exercises and focuses on analyzing the effects of Russian disinformation on European security. As part of its mission, it plans to expand its programming to Ukraine.
Despite the increased pressure on Russia, state-backed media have continued to spread disinformation. A false Russian Foreign Ministry narrative about secret biological weapons laboratories in Ukraine was amplified by RT Arabic. RT, a state-run channel, generated up to USD 27 million in advertising revenue on YouTube between 2017 and 2019; however, YouTube banned RT's channels across Europe on 1 March, following similar moves by Twitter and Reddit, which had begun labelling accounts affiliated with Russian state media.
Social media platforms such as Facebook, Twitter, and Instagram have been a focus of disinformation efforts. Facebook, for example, uncovered evidence of a Russian disinformation campaign operating on Instagram, and RT and Sputnik were subsequently blocked across the EU.
Governments have also used public communication to fill information gaps. After the invasion of Crimea in 2014, for example, a Government Information Cell advised up to 30 NATO allies.
Real-time data on content moderation remains inaccessible
The biggest challenges for social media platforms are identifying and removing content from unknown sources and spotting material that is fake or stolen from elsewhere. It is difficult to tell, for example, whether a copy of a terrorist video has been tampered with to create a false reality, and harder still to trace the origin of a viral video that has made the rounds on Facebook or Twitter. A recent study by the Washington Post found that the top three most popular disinformation accounts were active on all four of the major social networks. Thankfully, some platforms have taken measures to combat the misinformation.
TikTok, a short-form video platform, has also faced heavy scrutiny over its ability to moderate the content that flows across it. The company has been criticized, for example, for allowing accounts to impersonate high-profile political figures during Germany's last national election, though it has not commented on the allegations. TikTok still employs over 7,000 moderators around the world, some of whom have been accused of dubious practices, and although the company claims to have removed over 500,000 pieces of content from its platforms, it is not clear what it is doing to keep such material from spreading further.
Reliable data about TikTok's content moderation policies and performance is not easy to come by, though a cursory search of the company's website yields some interesting tidbits. Among other things, the company's chief executive has said that its “top-tier” moderators are rewarded for their contributions with a bonus and a free dinner at a local restaurant, but it is unclear how those rewards are distributed.
App lacks crucial metadata
There has been a huge increase in the amount of disinformation on TikTok, and it is a serious concern. The company's recommendation algorithm, which is designed to serve content most likely to hold users' attention, also makes it easier for rumors to spread.
While the company has taken some steps to stop the proliferation of disinformation, it still faces a number of problems. One major issue is AI-enabled deepfakes: videos that misleadingly depict real-life events. Some footage omits bodies, or shows bombed buildings with no visible casualties, giving the impression that there have been no deaths and therefore no need for further scrutiny.
Alongside the dramatic increase in misinformation on TikTok, there have been improvements in the platform's algorithms, including a new system designed to catch recycled content. Even with these changes, however, the number of misleading videos on the platform remains alarming.
Besides the algorithms, there are other reasons TikTok has become vulnerable to false information. For one, the platform relies on pseudonymous accounts, which often lack biographical and geographic details. This missing metadata makes it difficult to trace a video's source.
It is also difficult to distinguish truth from rumor. Despite TikTok's efforts to limit misinformation, the platform continues to amplify and encourage the propagation of rumors and propaganda, particularly around Russia. Moreover, some state-controlled media accounts remain visible on the platform, even though the EU has banned access to such accounts.