A recent OpenAI report has raised concerns that foreign actors, potentially based in China and Iran, are misusing American artificial intelligence models for harmful activities such as covert influence campaigns.
The report details two incidents involving suspected Chinese actors who attempted to use AI models developed by OpenAI and Meta. In one instance, OpenAI banned a ChatGPT account that produced content criticizing Chinese dissident Cai Xia. These comments, shared on social media by accounts claiming to be from India and the U.S., failed to gain significant traction.
The same actor also used ChatGPT to generate long-form Spanish-language news articles disparaging the U.S., which were subsequently published by established Latin American news outlets. The articles were attributed to an individual and, in some cases, to a Chinese company. OpenAI believes this marks the first known case of a Chinese actor successfully placing long-form articles in mainstream media to target Latin American audiences with anti-U.S. sentiment.

Ben Nimmo, Principal Investigator on OpenAI’s Intelligence and Investigations team, revealed in a press briefing that at least one of the translated articles was labeled as sponsored content, indicating potential payment for its placement.
Separately, OpenAI disabled a ChatGPT account that generated tweets and articles later published on platforms linked to known Iranian influence operations. Although those operations had previously appeared distinct, the overlap raises concerns about potential collaboration between Iranian influence networks.

In another incident, OpenAI banned several ChatGPT accounts that used its models to translate and generate content for a romance scam network operating across platforms including X, Facebook, and Instagram. Meta's subsequent investigation suggested that the activity originated from a recently established scam operation in Cambodia.

Since last year, OpenAI has been at the forefront of publishing reports on preventing the misuse of AI by malicious actors. The company emphasizes the importance of information sharing among AI companies, hosting and software providers, and social media platforms to effectively combat these threats. OpenAI remains committed to identifying and disrupting attempts to exploit its models for harmful purposes.