
OpenAI Bans Iranian Influence Operation Using AI-Generated Content in US Election

State-affiliated actors have once again been caught using artificial intelligence to spread misinformation and manipulate public opinion, this time during the U.S. presidential election. OpenAI, the organization behind ChatGPT, has recently banned a cluster of accounts linked to an Iranian influence operation. These accounts were producing AI-generated articles and social media posts, though they did not appear to gain much traction with their audience.

This is not the first time OpenAI has taken action against accounts using ChatGPT maliciously. In May, the company disrupted five campaigns that were attempting to manipulate public opinion. These incidents are reminiscent of previous election cycles, in which state actors used social media platforms like Facebook and Twitter to influence voters. Now it seems that similar groups, or perhaps the same ones, have turned to generative AI to flood social channels with misinformation.

OpenAI has adopted a whack-a-mole approach, similar to that of social media companies, banning accounts associated with these efforts as they arise. Its investigation into this cluster of accounts was aided by a Microsoft Threat Intelligence report, which identified the group behind the operation as Storm-2035. According to Microsoft, Storm-2035 is an Iranian network that imitates news outlets and actively engages with U.S. voter groups on opposing ends of the political spectrum. Its messaging focuses on polarizing issues such as the U.S. presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.

The goal of these state-affiliated actors is not necessarily to promote a specific policy or candidate but to sow dissent and conflict. They aim to create division among the population and undermine trust in institutions. OpenAI discovered that Storm-2035 operated through five website fronts masquerading as both progressive and conservative news outlets. These websites, with convincing domain names like “evenpolitics.com,” published ChatGPT-generated articles, including one that falsely claimed “X censors Trump’s tweets.” It’s worth noting that Elon Musk’s platform has not censored Trump’s tweets; in fact, Musk has encouraged the former President to engage more on the platform.

On social media, OpenAI identified a dozen X accounts and one Instagram account controlled by the operation. These accounts used ChatGPT to rewrite political comments, which were then posted on the platforms. One tweet falsely attributed to Kamala Harris a claim linking “increased immigration costs” to climate change, followed by the hashtag #DumpKamala.

Despite these efforts, OpenAI found that Storm-2035’s articles were not widely shared, and the majority of its social media posts received little to no engagement. Low reach is common for operations of this kind, which are quick and inexpensive to spin up with AI tools like ChatGPT. As the election approaches and online partisan bickering intensifies, however, we can expect to see more of them.

It is crucial for AI providers like OpenAI, as well as social media companies, to remain vigilant and take swift action against these malicious actors. The spread of misinformation can have severe consequences for democracy and public discourse. By detecting and banning accounts associated with influence operations, organizations like OpenAI play an important role in protecting the integrity of elections and combating the manipulation of public opinion.
