OpenAI shuts down covert influence networks from China, Russia, Iran and Israel

OpenAI said it has cut off five covert influence operations in the past three months, including networks based in China, Russia, Iran and Israel that accessed the ChatGPT maker’s artificial intelligence products to try to manipulate public opinion or shape political outcomes while obscuring their true identities.

The new report comes at a time of widespread concern about the role of AI in global elections slated for this year. In its findings, OpenAI listed the ways in which influence networks have used its tools to deceive people more efficiently, including using AI to generate text and images in greater volume and with fewer language errors than human operators could have produced on their own.

But the company said that, in its assessment, these campaigns ultimately failed to significantly increase their reach as a result of using OpenAI’s services.

“Over the last year and a half there have been a lot of questions around what might happen if influence operations use generative AI,” said Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, in a press briefing Wednesday. “With this report, we really want to start filling in some of the blanks.”

The company said it defined its targets as covert “influence operations” that are “deceptive attempts to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them”. The groups are different from disinformation networks, Nimmo said, because they often promote factually accurate information, but present it in a deceptive manner.

While propaganda networks have long used social media platforms, their use of generative AI tools is relatively new. OpenAI said that in all of the operations it identified, AI-generated material was used alongside more traditional formats, such as manually written texts or memes on major social media sites.

In addition to using AI for generating images, text and social media bios, some influence networks also used OpenAI’s products to increase their productivity by summarising articles or debugging code for bots.

The five networks identified by OpenAI included groups such as the pro-Russian “Doppelganger,” the pro-Chinese network “Spamouflage” and an Iranian operation known as the International Union of Virtual Media, or IUVM. OpenAI also flagged previously unreported networks originating in Russia and Israel that the start-up says it was the first to identify.

The new Russian group, which OpenAI dubbed “Bad Grammar”, used the start-up’s AI models as well as the messaging app Telegram to set up a content-spamming pipeline, the company said.

First, the covert group used OpenAI’s models to debug code that automates posting on Telegram, then used them to generate comments in Russian and English, which dozens of accounts posted in reply to those Telegram posts.

An account cited by OpenAI posted comments arguing that the United States should not support Ukraine. “I’m sick of and tired of these brain damaged fools playing games while Americans suffer,” it read. “Washington needs to get its priorities straight or they’ll feel the full force of Texas!”

OpenAI identified some of the AI-generated content because the comments included telltale boilerplate phrases that AI models commonly produce, such as “As an AI language model, I am here to assist.” The company also said it is using its own AI tools to identify and defend against such influence operations.
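OpenAI has not published the detection tooling behind this finding, but the giveaway it describes lends itself to a simple illustration. The following is a minimal, hypothetical heuristic in Python; the phrase list and sample comments are invented for this sketch and are not taken from OpenAI’s report or tools.

    # Minimal illustrative heuristic: flag comments containing boilerplate
    # phrases that chatbots commonly emit. The phrase list and sample comments
    # are invented for this sketch; OpenAI's actual detection methods are not
    # public and are far more sophisticated.
    TELLTALE_PHRASES = (
        "as an ai language model",
        "i am here to assist",
        "i cannot fulfill this request",
    )

    def looks_ai_generated(comment: str) -> bool:
        """Return True if the comment contains a known giveaway phrase."""
        text = comment.lower()
        return any(phrase in text for phrase in TELLTALE_PHRASES)

    sample_comments = [
        "Washington needs to get its priorities straight!",
        "As an AI language model, I am here to assist.",
    ]

    for comment in sample_comments:
        print(looks_ai_generated(comment), "-", comment)

Real-world detection typically combines many weak signals, such as posting cadence, account clusters and shared infrastructure; a phrase match of this kind is only the most visible tell.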

In most cases, the networks’ messaging did not appear to gain wide traction, or human users identified the posted content as AI-generated. Despite the operations’ limited reach, “this is not the time for complacency”, Nimmo said.

“History shows that influence operations which spent years failing to get anywhere can suddenly break out if nobody’s looking for them.”

Nimmo also acknowledged that there were likely groups using AI tools that the company is not aware of.

“I don’t know how many operations there are still out there,” Nimmo said. “But I know that there are a lot of people looking for them, including our team.”

Other companies such as Meta Platforms Inc. have regularly made similar disclosures about influence operations in the past. OpenAI said it is sharing threat indicators with industry peers, and part of the purpose of its report is to help others do this kind of detection work. The company said it plans to share more reports in the future.
