Google Expands Bug Bounty Program To Include Generative AI Vulnerabilities

News Room

In a blog post published late last week, Google announced that it is expanding its Vulnerability Rewards Program (VRP) to include bugs and vulnerabilities found in generative AI systems, the company's latest step toward securing the rapidly emerging technology.

The expanded program will incentivize security researchers to uncover potential issues in Google's own generative AI systems, including Google Bard and Google Cloud's Contact Center AI. Google has released new guidelines outlining the types of vulnerabilities it wants researchers to identify, including unfair bias, model manipulation, and misinterpretations of data.

“As we continue to integrate generative AI into more products and features, our Trust and Safety teams are taking a comprehensive approach to anticipate and test for these potential risks,” Laurie Richardson and Royal Hansen, vice presidents at Google working on trust and safety, wrote in a joint blog post.

“But we understand that outside security researchers can help us find, and address, novel vulnerabilities that will in turn make our generative AI products even safer and more secure.”

The move comes after Google joined other tech giants at a White House summit earlier this year, where they pledged to promote the discovery of AI vulnerabilities. It also follows Google’s involvement in a large-scale “Generative AI Red Team” event at the DEF CON hacking conference in Las Vegas, where researchers probed systems for flaws.

In addition to the VRP expansion, Google is introducing measures to secure the AI supply chain itself. These build on SLSA, a framework for securing the software supply chain, and Sigstore, an open-source toolchain for signing and verifying software artifacts, both of which can help verify the integrity of AI components.
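
To make the integrity idea concrete, here is a minimal Python sketch of the basic step such tooling builds on: checking that a downloaded artifact matches a pinned digest. The file name and expected hash below are hypothetical placeholders, and this is not the Sigstore API or SLSA provenance format; those tools layer signatures, provenance attestations, and transparency-log checks on top of this kind of comparison.

```python
import hashlib
from pathlib import Path

# Hypothetical example values: in practice the expected digest would come from
# a signed provenance attestation (SLSA) or a Sigstore-verified signature bundle.
MODEL_PATH = Path("model_weights.bin")
EXPECTED_SHA256 = "0" * 64  # placeholder digest for illustration


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large model artifacts fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    actual = sha256_of(MODEL_PATH)
    if actual != EXPECTED_SHA256:
        raise SystemExit(f"Integrity check failed: expected {EXPECTED_SHA256}, got {actual}")
    print("Artifact digest matches the pinned value.")
```

In practice, Sigstore replaces the manually pinned digest with a verifiable signature recorded in a public transparency log, so tampering with either the artifact or its published hash becomes detectable.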

“Our hope is that by incentivizing more security research while applying supply chain security to AI, we’ll spark even more collaboration with the open source security community and others in industry, and ultimately help make AI safer for everyone,” Richardson and Hansen wrote.

Google’s VRP offers bug bounties to security researchers who find vulnerabilities in the company's products and services, rewarding the responsible disclosure of bugs that could compromise user privacy and data.

In 2022, Google’s VRP paid researchers more than $4.8 million across over 700 submissions spanning Google services including Android, Chrome, and Google Cloud.

The program has helped Google identify and fix thousands of security flaws before they could be exploited by hackers.

By expanding the VRP to include generative AI systems, like Bard, Google aims to further secure emerging technologies and collaborate with the security community to build trustworthy AI.

Experts say that uncovering vulnerabilities will be critical as generative AI becomes more ubiquitous. A recent survey by consulting firm KPMG found that 93% of Canadian CEOs are worried that the emergence of AI will make them even more vulnerable to breaches.

The expanded VRP for AI is now live on Google’s website. Eligible vulnerabilities will qualify for bounty rewards of up to $30,000.
