Google LLC will require users to disclose if videos they upload to YouTube contain realistic-looking content generated with artificial intelligence tools.
YouTube product management executives Jennifer Flannery O’Connor and Emily Moxley detailed the new policy in a blog post published today. The policy, which is set to roll out in the coming months, will apply to all videos that contain realistic-looking synthetic content. This includes AI-generated content as well as real clips that have been digitally altered.
After the policy is implemented, users will have access to a set of new options in YouTube’s video upload tool. Those options will make it possible to specify whether a clip contains synthetic content. According to Google, creators who fail to indicate that a video contains such content could face penalties.
“Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties,” Flannery O’Connor and Moxley wrote.
When a user indicates that a newly uploaded clip contains synthetic content, YouTube will add a disclosure to the clip’s description. If the synthetic content is about a sensitive topic, the Google unit will embed a more prominent disclosure directly in the video player. “Some synthetic media, regardless of whether it’s labeled, will be removed from our platform if it violates our Community Guidelines,” Flannery O’Connor and Moxley added.
Google is rolling out the new policy alongside a mechanism for requesting the removal of AI-generated content. According to the company, YouTube will allow users to ask that a clip be taken down if it contains a realistic-looking synthetic depiction of a person. Music labels, in turn, will gain the ability to request the removal of clips that mimic an artist’s voice.
Google’s efforts to address the risks posed by synthetic content on YouTube also encompass its Dream Screen generative AI tool. Introduced in September, the tool allows YouTube users to create background images for videos with natural language instructions. Google will add disclosures to content generated with Dream Screen.
The company also intends to integrate guardrails into future additions to YouTube’s generative AI feature set. When Dream Screen was announced in September, Google disclosed plans to roll out an AI-powered video remixing feature down the road. The search giant is also working on another machine learning capability that will make it possible to generate new clips using text prompts.
The announcement of YouTube’s synthetic content policy comes a few months after Google introduced disclosure rules for AI-generated political ads. Under those rules, political organizations will have to “prominently disclose” if their ads contain realistic-looking synthetic content. Meta Platforms Inc., Google’s top competitor in the digital advertising market, recently previewed a similar policy for Instagram and Facebook that will go into effect next year.