With artificial intelligence the biggest story in technology in years, attackers and other miscreants are increasingly using AI to improve their attack techniques. In response to the rising threat, Google LLC today announced an expansion of its Vulnerability Rewards Program to encompass threats specific to generative artificial intelligence.
Generative AI specializes in creating content that closely mimics human-generated output. The generated content can range from text and images to more complex media, handy for legitimate use cases but also a potent tool for attackers aiming to deceive systems or individuals. The capability of generative AI to produce convincing fake content opens the door to a host of potential cyberthreats, from deepfake videos to counterfeit textual data.
By broadening the scope of the VRP to include generative AI, Google is signaling its recognition of these emerging challenges. The VRP traditionally rewards external security researchers for discovering and reporting potential security issues within Google's ecosystem. Starting today, researchers who identify vulnerabilities or threats linked to generative AI models and applications are eligible for rewards. In doing so, Google is leveraging the collective expertise of the global security community to pinpoint and rectify weak spots in the technology.
Google is also taking a fresh look at how bugs should be categorized and reported. Generative AI raises concerns different from those of traditional digital security, such as the potential for unfair bias, model manipulation and the misinterpretation of data, known as hallucinations. Google's Trust and Safety teams are taking a comprehensive approach to the protections they build, to better anticipate and test for potential risks as generative AI is integrated into more products and features.
The expansion of Google’s VRP is part of a broader trend where large tech companies and organizations are working to counter the novel challenges posed by AI. Earlier this year, Google, along with other leading AI companies, met at the White House to discuss and strategize on mitigating vulnerabilities inherent to AI systems.
Also announced today are two new ways to strengthen the AI open-source supply chain. Building on its existing partnership with the Open Source Security Foundation, Google is directing efforts to reinforce AI supply chain security. The Google Open Source Security Team is set to deploy two strategic tools: Supply-chain Levels for Software Artifacts, or SLSA, aimed at hardening the supply chain, and Sigstore, designed to ensure signature transparency.
By incorporating supply chain security into the machine learning development lifecycle now, while the industry is still determining AI risks, Google said it aims to jumpstart work with the open-source community.
Eric Doerr, vice president of engineering and cloud security at Google, spoke with theCUBE, SiliconANGLE Media's livestreaming studio, in September about the industry's need to focus on simplifying security and enhancing the capabilities of security professionals amid the emergence of generative AI.