The Rise of AI-Enhanced Misinformation on TikTok: New Report Reveals Coordinated Effort

In recent years, TikTok has become a breeding ground for misinformation, with users often falling victim to false narratives and conspiracy theories. However, a new report from NewsGuard reveals that the spread of misinformation on the platform has taken a more sinister turn, thanks to the use of AI tools by bad actors.

According to the report, at least 41 TikTok accounts have been posting false, AI-enhanced content in both English and French. These accounts have collectively posted 9,784 videos, amassing over 380 million views between March 2023 and June 2024. This equates to an average of one to four AI-narrated videos each day. What’s particularly alarming is that many of these videos used identical scripts, suggesting a coordinated effort to spread misinformation.

The content shared by these accounts covers a range of topics, including U.S. and European politics, as well as the Russia-Ukraine war. False narratives such as NATO deploying combat troops in Ukraine and the U.S. being responsible for a terrorist attack in Moscow have been propagated through these AI-enhanced videos. What’s even more concerning is that some of these accounts qualified for monetization through TikTok’s Creator Fund, allowing them to profit from their misleading content.

This isn’t the first time AI has been used to spread misinformation on TikTok. Last year, NewsGuard documented the rise of a network of TikTok accounts that used AI-facilitated text-to-speech tools to spread celebrity conspiracy theories. These accounts gained 336 million views and 14.5 million likes in just three months.

The latest report points to a significant increase in AI-boosted content on TikTok, this time with political motivations, and to the growth of incentivized AI content farms on the app. Content farms are operations that churn out large volumes of low-quality content to attract views and ad revenue, and generative AI tools make that output faster and cheaper to produce, which makes the spread of misinformation and disinformation a growing concern for TikTok and its users.

TikTok has taken notice of the issue and has pledged to label and watermark content made with generative AI more effectively. Even so, political misinformation persists on the platform, aided by the power of AI, and it’s clear that more needs to be done to combat its rampant spread on social media platforms like TikTok.

Furthermore, the problem of AI-facilitated misinformation extends beyond TikTok. The Justice Department recently announced the takedown of an AI-powered Russian bot farm that operated on X (formerly Twitter), using over 1,000 pro-Kremlin accounts. This highlights the extent to which AI technology is being used to manipulate public opinion and spread disinformation.

Even the United States itself has used AI tools and bot farms to push counter-narratives and run its own disinformation campaigns: in 2020, an initiative aimed at curbing foreign influence reportedly did so by spreading misinformation about COVID-19. As the upcoming election draws near, concerns about targeted, AI-boosted disinformation on social media platforms continue to grow.

In conclusion, the rise of AI-enhanced misinformation on TikTok is a pressing issue that demands attention. The coordinated efforts of bad actors to spread false narratives, combined with the power of AI, pose a significant threat to public discourse and democratic processes. As users, it is crucial to remain vigilant, fact-check information, and advocate for stronger measures to combat the spread of misinformation on social media platforms.
