Leading artificial intelligence companies have agreed to allow governments including the UK, US and Singapore to test their latest models for national security and other risks before they are released to businesses and consumers.
Companies including OpenAI, Google DeepMind, Anthropic, Amazon, Mistral, Microsoft and Meta on Thursday signed a “landmark” but not legally binding document, closing a two-day AI safety summit in the UK.
The document was signed by governments that also included Australia, Canada, the EU, France, Germany, Italy, Japan and South Korea. China was not a signatory.
An international panel of experts will also publish an annual report on the evolving risks of AI, including bias, misinformation and more extreme “existential” risks such as aiding in the development of chemical weapons.
“I believe the achievements of this summit will tip the balance in favour of humanity,” said Rishi Sunak, UK prime minister and host of the inaugural event. “We will work together on testing the safety of new AI models before they are released.”
“We are ahead of any other country in developing the tools and capabilities to keep people safe,” Sunak said from Bletchley Park, the site that was home to second world war codebreakers.
US vice-president Kamala Harris and European Commission president Ursula von der Leyen attended the summit, which also brought together other allied nations to discuss sensitive issues of national security.
Sunak, when asked whether the UK needed to go further by setting out binding regulations, said that drafting and enacting legislation “takes time”.
“It’s vital that we establish ways to assess and address the current challenges AI presents, as well as the potential risks from technology that does not yet exist,” said Nick Clegg, president of global affairs at Meta.
The US government issued an executive order on Monday, the administration’s broadest step yet to tackle AI threats. The US also said this week that it plans to set up its own institute to police AI. The UK agreed partnerships with the US AI Safety Institute, and with Singapore, to collaborate on AI safety testing.
The AI risk report, agreed to by 20 countries, will be modelled on the Intergovernmental Panel on Climate Change, with the first report to be chaired by Yoshua Bengio, professor of computer science at the University of Montreal.
A UK government AI task force ran a series of safety tests on the leading AI models to assess any potential flaws or risks of abuse, according to multiple people at one of the summit’s sessions. These included whether the models could make it easier to spread disinformation, co-ordinate cyber attacks, or plan biological and chemical weapon attacks.
Jack Clark, a co-founder of AI start-up Anthropic, told the Financial Times that there needed to be an external, independent “referee” to test the safety of models in development.
“We’ll still do our own tests,” he said, “but I really want there to be a third party legitimate testing authority that we can throw tests to and hear results from.”