
Google launches bug bounties for generative AI attack scenarios

Google expands its bug bounty programme to encompass generative AI and takes steps to grow its commitment to supply chain security as it relates to the emerging technology

Google is taking steps to address cyber risks associated with generative artificial intelligence (GenAI) by expanding its bug bounty scheme, the Vulnerability Rewards Program (VRP), to encompass attack scenarios specific to generative AI.

Laurie Richardson, vice-president of trust and safety, and Royal Hansen, vice-president of privacy, safety and security engineering, said the firm believed taking this step would not only bring potential security issues to light more quickly, making AI safer for everyone, but also incentivise the wider community to do more research into AI safety and security.

“As part of expanding VRP for AI, we’re taking a fresh look at how bugs should be categorised and reported. Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretations of data [or] hallucinations,” they said.

“As we continue to integrate generative AI into more products and features, our Trust and Safety teams are leveraging decades of experience and taking a comprehensive approach to better anticipate and test for these potential risks.

“But we understand that outside security researchers can help us find, and address, novel vulnerabilities that will in turn make our generative AI products even safer and more secure. In August, we joined the White House and industry peers to enable thousands of third-party security researchers to find potential issues at DEF CON’s largest-ever public Generative AI Red Team event.

“Now, since we are expanding the bug bounty programme and releasing additional guidelines for what we’d like security researchers to hunt, we’re sharing those guidelines so that anyone can see what’s ‘in scope.’ We expect this will spur security researchers to submit more bugs and accelerate the goal of a safer and more secure generative AI,” they said.

At the same time, Google is introducing new measures to better secure the AI supply chain, announcing a number of enhancements to its Secure AI Framework (SAIF), which it launched in June 2023.

SAIF was designed to support the industry in creating trustworthy AI applications; its founding principle is to secure the critical supply chain components that underpin those applications against threats such as tampering, data poisoning and the production of harmful content.

In addition, Google is expanding its open source security work, building on a prior collaboration with the Open Source Security Foundation. Through this partnership, Google’s own Open Source Security Team (GOSST) will use the SLSA (Supply-chain Levels for Software Artifacts) framework to improve supply chain resilience, and Sigstore to help verify that software in the AI supply chain is what it claims to be. Google has already made prototypes available for attestation verification with SLSA and model signing with Sigstore.
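
Sigstore signatures and SLSA attestations go well beyond a simple checksum, binding an artefact to a verifiable identity, build provenance and a public transparency log, but the basic integrity property they automate can be sketched in a few lines of Python. The file name and digest below are hypothetical placeholders:

    import hashlib
    import sys

    def verify_model_digest(path: str, expected_sha256: str) -> None:
        """Refuse to use a downloaded model artefact unless its SHA-256
        digest matches the value published alongside it."""
        digest = hashlib.sha256()
        with open(path, "rb") as artefact:
            # Hash the file in 1MB chunks so large models do not exhaust memory.
            for chunk in iter(lambda: artefact.read(1 << 20), b""):
                digest.update(chunk)
        if digest.hexdigest() != expected_sha256:
            sys.exit(f"Integrity check failed for {path}: possible tampering")
        print(f"{path} matches the pinned digest")

    # Hypothetical artefact name and digest, for illustration only.
    verify_model_digest(
        "model.safetensors",
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    )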

“These are early steps toward ensuring the safe and secure development of generative AI – and we know the work is just getting started,” said Richardson and Hansen.

“Our hope is that by incentivising more security research while applying supply chain security to AI, we’ll spark even more collaboration with the open source security community and others in industry, and ultimately help make AI safer for everyone.”

Endor Labs security researcher Henrik Plate, who specialises in open source software (OSS) security and AI, commented: “Applying the same security principles and, where possible, tooling to AI/ML is a great opportunity to develop secure systems from the ground up.

“Compared to the emerging AI/ML space, OSS or component-based software development exists for a longer time span, which sometimes makes it more difficult to bolt security onto well-established technologies without disrupting existing software development and distribution processes.

“There are many similarities between the production and sharing of software components and AI/ML artifacts: Training data can be compared with software dependencies, the trained model with binary artifacts, and artifact registries like Maven Central or PyPI with model registries like Hugging Face. And from the viewpoint of developers consuming third-party models, those models can be considered like any other upstream dependency.
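
As an illustration of treating a third-party model like any other pinned dependency, the short Python sketch below uses the Hugging Face hub client to fetch an artefact at an exact repository revision, much as a lockfile pins a package version. The repository, file name and commit hash are hypothetical placeholders:

    from huggingface_hub import hf_hub_download

    # Pinning the revision to a specific commit hash means the artefact cannot
    # change silently upstream, mirroring dependency pinning in a lockfile.
    model_path = hf_hub_download(
        repo_id="example-org/example-model",  # hypothetical repository
        filename="model.safetensors",         # hypothetical artefact name
        revision="0123456789abcdef0123456789abcdef01234567",  # pinned commit
    )
    print(f"Fetched pinned model artefact: {model_path}")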

“Some attacks are also very similar, e.g. the deserialisation of data from untrusted sources, which has haunted some OSS ecosystems for some time already: Serialised ML models can also contain malicious code that executes upon deserialisation (a problem of the prominent pickle serialization format already presented at BlackHat 2011). Hugging Face and open source projects try to address this by dedicated model scanners.”
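
The deserialisation risk Plate describes is straightforward to demonstrate: Python’s pickle format lets an object dictate what code runs the moment it is loaded, so a “model file” from an untrusted source can compromise any machine that merely opens it. The sketch below is deliberately benign, with a payload that only echoes a message, but a real attacker would substitute something far more damaging:

    import os
    import pickle

    # A class whose __reduce__ hook instructs pickle to call os.system on load.
    class MaliciousPayload:
        def __reduce__(self):
            return (os.system, ("echo 'arbitrary code ran during model load'",))

    # The "model file" an attacker might publish to a registry.
    poisoned_blob = pickle.dumps(MaliciousPayload())

    # The victim merely loads the artefact; no further call is needed for the
    # embedded command to execute.
    pickle.loads(poisoned_blob)

Safer serialisation formats such as safetensors avoid this by storing only tensor data rather than arbitrary objects, and the dedicated model scanners Plate mentions look for exactly this kind of embedded payload in pickle-based artefacts.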

