Video-sharing giant YouTube has just released a statement about its upcoming safeguards and restrictions on AI-generated content. AI content has taken the world by storm this past year, but it has also raised major concerns. Deepfakes and disturbing imagery have been circulating online for months now. Furthermore, many copyright concerns have arisen since the advent of GPT, DALL-E, and other generative AI tools. In a recent blog post, YouTube highlights measures it will be taking concerning AI going forward.
Labels for realistic AI content
YouTube’s Community Guidelines are getting an update to accommodate the rise of AI content. Specifically targeting realistic depictions, YouTube will now require creators to disclose when content is synthetic. This includes videos or imagery depicting events that never happened, or real people saying or doing things they never actually did. AI videos will carry a discreet label identifying them as such, while videos on more sensitive topics will get more prominent labels.
According to the blog post, this safeguard will be put in place to clearly label fake content as generated by AI. Creators who repeatedly fail to tag AI content as AI-generated will have their content removed. They may also be removed from the YouTube Partner Program in addition to other penalties. There will also be restrictions on certain types of AI content. Violent or gory depictions, for example, will not be tolerated and will be removed.
Removal requests for AI deep fakes
YouTube is also going to add a new safeguard: an option to request a video’s removal. These requests can be made for AI-generated deepfakes of people. YouTube says not all requests will be entertained, and content will only be removed in certain cases. If the video is of a public figure, different considerations will apply.
Artists will now be able to request the removal of videos simulating their voices in music. This safeguard will prevent people from making and uploading new content using someone’s voice without their permission. Voice actors will most likely also be able to make use of this feature, as AI-generated imitations of well-known actors’ voices have become increasingly common. This will help prevent voice actors from being phased out of the industry, at least on YouTube.
AI moderation tools and safeguards
The blog post also mentions that YouTube is developing its own AI-powered moderation tools. Similar to the copyright detection system YouTube employs today, these moderation services will be able to identify and remove content that violates the guidelines before it is published. YouTube is also working on AI tools for content creators, allowing them to more easily generate AI content directly on the platform.
AI technology is improving at a rapid pace, and many companies are trying to play catch-up before they become irrelevant. YouTube’s statement emphasizes the company’s drive to remain ahead of the competition and provide state-of-the-art tools to its creators.
The post YouTube is implementing safeguards and restrictions on AI content appeared first on Android Headlines.