In the age of AI-generated content, where distinguishing between real and fake has become increasingly challenging, YouTube is taking a stand against the spread of misinformation and deepfakes on its platform. As a first step towards combating this issue, YouTube has implemented new measures requiring creators to disclose when a video has been altered or generated using artificial intelligence (AI) tools.
Policy Update and Transparency Measures
The announcement, made via a blog post on Monday, follows YouTube’s update to its content policy in November 2023, which aimed to create transparency around AI-generated videos uploaded to the platform. To address concerns over misleading content, YouTube has introduced labels that indicate when a video has been significantly altered or synthetically generated, yet appears realistic.
Creator Disclosure Requirements
In the same blog post, YouTube unveiled a new tool designed to enforce transparency in content creation. Creators will now be prompted during the upload process to disclose whether their content has been meaningfully altered or generated using AI tools. The disclosure section, titled “Altered Content,” includes three questions:
- Does the video make a person say or do something they did not say or do?
- Does the video alter footage of a real event or place?
- Does the video contain a realistic-looking scene that did not occur?
Creators must mark “Yes” if any of these elements are present in their video, and YouTube will automatically add a label to the description of the uploaded video.
Implementation and Enforcement
The disclosure tool will be integrated into the video-uploading workflow, appearing prominently on the first page alongside other important disclaimers. Once marked, the label will be added to the description of the video, indicating that the content has been altered or generated synthetically.
These new labels will be visible on both long-form videos and Shorts, with Shorts receiving a more prominent tag displayed above the channel’s name. Initially, the labels will appear on the Android and iOS apps, with plans to expand to the web interface and TV platforms.
Penalties for Non-Compliance
Failure to disclose AI-generated content will result in penalties from YouTube, including content removal, suspension from the YouTube Partner Programme, and other punitive measures. While YouTube acknowledges that creators may need time to adjust to these new requirements, it emphasizes the importance of compliance in maintaining the integrity of content on the platform.
YouTube announced its updated content policy and focus on AI-generated content in November 2023 in light of rising instances of deepfakes. It highlighted that it would be introducing disclosure tools and an option for viewers to request the removal of “AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice.” A separate set of rules was also announced to protect the content of music labels and artists.