(CTN News) – In a recent press briefing, OpenAI revealed it has identified and thwarted five covert influence operations in the past three months.
These operations, originating from China, Russia, Iran, and Israel, used OpenAI’s artificial intelligence products to manipulate public opinion and shape political outcomes while concealing their true identities.
The report by OpenAI coincides with growing global concerns regarding the impact of AI on elections scheduled for this year.
According to the findings, these influence networks leveraged AI tools to generate large volumes of text and images with fewer errors than purely human-generated content, making their attempts to deceive the public more convincing.
Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, emphasized the significance of these findings: “Over the last year and a half there have been a lot of questions around what might happen if influence operations use generative AI.
With this report, we really want to start filling in some of the blanks.”
OpenAI Identifies Covert Influence Operations Using AI Tools
OpenAI defines its targets as covert “influence operations,” a category distinct from disinformation networks: influence operations can disseminate factually correct information, but do so in a deceptive manner.
While propaganda networks have traditionally used social media platforms, their utilization of generative AI tools represents a novel development.
The company noted that these tools were employed alongside more conventional methods, such as manually written texts or memes posted on major social media sites.
The identified operations included groups like the pro-Russian “Doppelganger,” the pro-Chinese network “Spamouflage,” and an Iranian operation known as the International Union of Virtual Media (IUVM).
Additionally, OpenAI flagged previously unknown networks from Russia and Israel.
The new Russian group, dubbed “Bad Grammar” by OpenAI, utilized the startup’s AI models and the messaging app Telegram to establish a content-spamming pipeline. This operation involved debugging code to automate posting on Telegram and generating comments across dozens of accounts.
In one example cited by OpenAI, a comment posted by an identified account argued against U.S. support for Ukraine, stating, “I’m sick of and tired of these brain damaged fools playing games while Americans suffer.”
OpenAI’s Insights and Future Plans
OpenAI identified some of the AI-generated content by recognizing common AI error messages included in the comments.
Despite their efforts, OpenAI reported that these operations generally failed to gain significant traction. Nimmo cautioned against complacency, noting that history shows such operations can unexpectedly gain momentum if undetected.
Nimmo acknowledged the possibility of other undetected groups using AI tools: “I don’t know how many operations there are still out there. But I know that there are a lot of people looking for them, including our team.”
OpenAI stated that it is actively sharing threat indicators with industry peers and plans to release more reports in the future to aid in the detection of and defense against such influence operations.
Other major tech companies, such as Meta Platforms Inc., have also regularly disclosed similar activities by influence operations.
The findings underscore the challenges posed by AI-driven manipulation in the digital age and highlight the ongoing efforts by organizations like OpenAI to safeguard against misuse of AI technologies.