The Generative AI Boom Could Fuel a New International Arms Race

News Room

Governments around the world are rushing to embrace the algorithms that breathed some semblance of intelligence into ChatGPT, apparently enthralled by the enormous economic payoff expected from the technology.

Two new reports out this week show that nation-states are also likely rushing to adapt the same technology into weapons of misinformation, in what could become a troubling AI arms race between great powers.

Researchers at RAND, a nonprofit think tank that advises the United States government, point to a Chinese military researcher with experience in information campaigns who has publicly discussed how generative AI could aid such work. One research article, from January 2023, suggests using large language models such as a fine-tuned version of Google’s BERT, a precursor to the larger, more capable language models that power chatbots like ChatGPT.

“There’s no evidence of it being done right now,” says William Marcellino, an AI expert and senior behavioral and social scientist at RAND, who contributed to the report. “Rather someone saying, ‘Here’s a path forward.’” He and others at RAND are alarmed at the prospect of influence campaigns getting new scale and power thanks to generative AI. “Coming up with a system to create millions of fake accounts that purport to be Taiwanese, or Americans, or Germans, that are pushing a state narrative—I think that it’s qualitatively and quantitatively different,” Marcellino says.

Online information campaigns, like the one that Russia’s Internet Research Agency waged to undermine the 2016 US election, have been around for years. They have mostly depended on manual labor—human workers toiling at keyboards. But AI algorithms developed in recent years could potentially mass-produce text, imagery, and video designed to deceive or persuade, or even carry out convincing interactions with people on social media platforms. A recent project suggests that launching such a campaign could cost just a few hundred dollars.

Marcellino and his coauthors note that many countries—the US included—are almost certainly exploring the use of generative AI for their own information campaigns. And the wide accessibility of generative AI tools, including numerous open source language models that anyone can obtain and modify, lowers the bar for launching an information campaign. “A variety of actors could use generative AI for social media manipulation, including technically sophisticated non-state actors,” they write.

A second report issued this week, by another tech-focused think tank, the Special Competitive Studies Project, also warns that generative AI could soon become a way for nations to flex on one another. It urges the US government to invest heavily in generative AI because the technology promises to boost many different industries and provide “new military capabilities, economic prosperity, and cultural influence” for whichever nation masters it first.

Like the RAND report, the SCSP’s analysis also draws some gloomy conclusions. It warns that generative AI’s potential is likely to trigger an arms race to adapt the technology for use by militaries or in cyberattacks. If both are right, we are headed for an information-space arms race that may prove particularly difficult to contain.

How can the nightmare scenario of an internet overrun with AI bots programmed for information warfare be avoided? It starts with humans talking to one another.

The SCSP report recommends that the US “should lead global engagement to promote transparency, foster trust, and encourage collaboration.” The RAND researchers recommend that US and Chinese diplomats discuss generative AI and the risks around the technology. “It may be in all of our interests not to have an internet that’s totally polluted and unbelievable,” Marcellino says. I think that’s something we can all agree on.
