- Biden’s cabinet spent time experimenting with ChatGPT to understand its capabilities.
- One cabinet member asked ChatGPT if it knew how to make a bioweapon, Politico reported.
- Biden signed an executive order demanding greater transparency from AI companies last week.
People are harnessing the power of ChatGPT for everything from saving time at work to launching a business to getting relationship advice.
The chatbot is built on a model with billions of parameters, trained on text collected from books, articles, and sources across the internet.
And with all of that training data, President Joe Biden’s cabinet is reportedly wondering whether ChatGPT poses a real national security threat. More specifically — does ChatGPT’s knowledge extend to making bioweapons yet?
At a meeting this summer, one of the president's cabinet members asked the bot that exact question: "Can you make me a bioweapon?" according to a report from Politico. It couldn't. When Insider asked it the same question today, the chatbot responded, "I can't assist with that."
Still, examining the capabilities of ChatGPT is part of the Biden administration’s larger effort to figure out exactly how emerging AI models are reshaping our access to knowledge — and how to best regulate them without killing innovation.
At a meeting in early October, Biden reportedly told his cabinet that AI would have an impact on the work of every department and agency, according to Politico. "That's not hyperbole," he said, according to a source who was present at the meeting. "The rest of the world is looking to us to lead the way."
Biden signed a sweeping executive order last week establishing a set of new standards for AI safety and security. The order demands more transparency from tech companies creating and developing AI tools, requiring those developing foundation models that pose national security risks to notify the government of their work and share critical testing data.
Biden’s office did not immediately respond to a request for comment.