You can now request the removal of AI-generated content mimicking you on YouTube
The new YouTube policy allows you to report AI-generated video content that violates your privacy.
YouTube has long emphasized protecting the rights of the people, both content creators and subscribers, who use its platform.
Now, with AI-generated content becoming widespread, YouTube is following through on that commitment by rolling out a policy change that lets you request the removal of AI-generated or other synthetic content that mimics your face or voice.
The new policy sits under YouTube's privacy guidelines and expands the company's previously announced approach to responsible AI, first introduced in November last year.
You can now report AI-generated video content that violates your privacy, though such a report is handled differently from misleading content like a deepfake, which is often categorized as a violation of YouTube's community guidelines instead.
Once you submit a report, YouTube evaluates it against a variety of factors.
For example, it will check whether the content was made with AI and whether it can be considered satire, parody, or something otherwise in the public interest.
It will also check whether the content shows a prominent figure engaging in criminal or violent activity. If YouTube concludes that the content does indeed violate your privacy, it will give the uploader 48 hours to take it down.
To comply, the uploader must fully remove your name and personal information from the video and take the video down completely; simply making it private is not an option. If the uploader fails to act within the 48-hour window, YouTube will step in and review the video itself.
Uploaders who violate this privacy policy face no immediate penalties, but YouTube has confirmed that it may take action against accounts that violate it repeatedly.
Interestingly, YouTube did not make a public announcement about this policy change, although it mentioned the plan back in March this year, when it launched a new labeling tool to help users identify AI-generated content.
This underscores YouTube's commitment to the strict privacy policies and guidelines that govern content on a platform with over 2 billion users, per data from Statista.