The writer is founder of Sifted, an FT-backed site about European start-ups.
It is rapidly emerging as one of the most important technological, and increasingly ideological, divides of our times: should powerful generative artificial intelligence systems be open or closed? How that debate plays out will affect the productivity of our economies, the stability of our societies and the fortunes of some of the world’s richest companies.
Supporters of open-source models, such as Meta’s LLaMA 2 or Hugging Face’s Bloom, which enable users to customise powerful generative AI software themselves, say they broaden access to the technology, stimulate innovation and improve reliability by encouraging outside scrutiny. Far cheaper to develop and deploy, smaller open models also inject competition into a field dominated by big US companies such as Google, Microsoft and OpenAI. These companies have invested billions in developing massive, closed generative AI systems, which they closely control.
But detractors argue open models risk lifting the lid on a Pandora’s box of troubles. Bad actors can exploit them to disseminate personalised disinformation on a global scale, while terrorists might use them to manufacture cyber or bio weapons. “The danger of open source is that it enables more crazies to do crazy things,” Geoffrey Hinton, one of the pioneers of modern AI, has warned.
The history of OpenAI, which developed the popular ChatGPT chatbot, is itself instructive. As its name suggests, the research company was founded in 2015 with a commitment to develop the technology as openly as possible. But it later abandoned that approach for both competitive and safety reasons. “Flat out, we were wrong,” Ilya Sutskever, OpenAI’s chief scientist, told The Verge.
Once OpenAI realised that its generative AI models were going to be “unbelievably potent”, it made little sense to open source them, he said. “I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.”
Supporters of open models hit back, ridiculing the idea that open generative AI models enable people to access information they could not otherwise find on the internet or obtain from a rogue scientist. They also highlight the competitive self-interest of the big tech companies in shouting about the dangers of open models. These companies’ sinister intent, critics suggest, is to capture regulators, imposing higher compliance costs on insurgents and thus entrenching their own market dominance.
But there is an ideological dimension to this debate, too. Yann LeCun, chief scientist of Meta, which has broken ranks with the other Silicon Valley giants by championing open models, has likened rival companies’ arguments for controlling the technology to medieval obscurantism: the belief that only a self-selecting priesthood of experts is wise enough to handle knowledge.
In the future, he told me recently, all our interactions with the vast digital repository of human knowledge will be mediated through AI systems. We should not want a handful of Silicon Valley companies to control that access. Just as the internet flourished by resisting attempts to enclose it, so AI will thrive by remaining open, LeCun argues, “as long as governments around the world do not outlaw the whole idea of open source AI”.
Recent discussions at the Bletchley Park AI safety summit suggest at least some policymakers may now be moving in that direction. But other experts are proposing more lightweight interventions that would improve safety without killing off competition.
Wendy Hall, regius professor of computer science at Southampton university and a member of the UN’s AI advisory body, says we do not want to live in a world where only the big companies run generative AI. Nor do we want to allow users to do anything they like with open models. “We have to find some compromise,” she suggests.
Her preferred solution, which is gaining traction elsewhere, is to regulate generative AI models much as we regulate cars. Regulators impose strict safety standards on car manufacturers before they release new models. But they also impose responsibilities on drivers and hold them accountable for their actions. “If you do something with open source that is irresponsible and that causes harm you should go to jail — just like if you kill someone when driving a car,” Hall says.
We should certainly resist the tyranny of the binary when it comes to thinking about AI models. Both open and closed models have their benefits and flaws. As the capabilities of these models evolve, we will constantly have to tweak the weightings between competition and control.