Technology
YouTube Boosts AI Feature, Rolls Out New Tools To Enhance Content Creation
YouTube has announced a major expansion of its artificial intelligence (AI) capabilities, with CEO Neal Mohan identifying AI as one of the platform’s four strategic priorities for 2025 in a blog post.
The video platform is implementing new features focused on content creation, language translation, and age-appropriate content delivery.
New Features Target Creator Tools and Content Protection
The company plans to extend its auto-dubbing feature to all YouTube Partner Program participants this month, enabling creators to translate their content into multiple languages. This expansion builds upon YouTube’s existing AI-powered creator tools, which include assistance with video ideation, thumbnail creation, and language translation services.
The effectiveness of YouTube’s audio translation is best demonstrated by GLITCH’s success with the “Amazing Digital Circus” cartoon pilot on the platform, which features 20 audio tracks. Since its release in October 2023, the video has garnered over 367M views.
To address content authenticity concerns, YouTube is developing new safeguards for AI-generated content and expanding its collaboration with the Creative Artists Agency (CAA). The partnership focuses on technology for identifying and managing AI-generated content that features personal likenesses.
The initiative extends YouTube’s Content ID system, which previously focused on copyright protection, to now include the detection of AI-simulated faces and voices.
The platform is also introducing machine-learning technology to estimate user ages, aiming to deliver more appropriate content and recommendations to viewers. YouTube has not disclosed specific details about the age-estimation methodology or about correction mechanisms for inaccurate determinations.
Beyond AI initiatives, Mohan’s letter outlines three additional strategic focus areas for 2025: positioning YouTube as a cultural hub, elevating creators to mainstream entertainment status, and emphasizing television-based viewing. The company notes that television has surpassed mobile devices as the primary viewing platform for YouTube content in the United States.
Community Response
A recent study by Radius, commissioned by YouTube, highlights creators' widespread adoption of AI in their work and their optimism about the technology's potential to transform creative industries.
Key findings indicate that 92% of creators already use AI tools, with 74% reporting "a great deal" or "a fair amount" of knowledge about the technology. Despite this high adoption rate, 90% of creators felt they were not using AI to its fullest extent, suggesting room for growth and education.
These developments come as platform users report increasing encounters with AI-generated content, particularly in YouTube Shorts.
Community feedback indicates growing concerns about content originality, with users noting patterns in AI-generated voices, thumbnails, and recycled content. Some argue that AI tools may compromise the platform’s educational and entertainment value, while others view them as production aids rather than creative replacements.
“Almost all shorts and videos I see now are made with a voice I have already heard, with a thumbnail I have already seen, and with reused content I have already seen,” reports one user on Reddit. Another observes that “95% of shorts are just garbage AI videos,” noting that many AI-generated videos achieve “10M+ views.”
In a different thread, users predominantly express concerns about these developments. “Part of being on YouTube is failing, learning, getting over the fear and judgment,” argues one user, advocating for organic content development.
Others point to specific issues with AI implementation: “AI voiceover… takes away the relatability aspect, and it’s just lazy and takes away all aspects into what makes video making good.”
Still, some users defend AI-generated content: “Using AI in any form in a video does not mean the video was made without effort or that it’s bad. For instance, I have a small channel with animated videos on historical topics. Each video takes dozens of hours to create, even though their average length is 8–10 minutes. And yes, I use AI voiceovers because I’m not a native English speaker and simply cannot speak in English yet.”
AI Model Training Controversy
Major AI companies sought to harvest content from thousands of YouTube videos to train AI models without creators’ knowledge or consent, a WIRED investigation revealed last year.
Subtitles from 173,536 YouTube videos, sourced from over 48,000 channels, were utilized by prominent tech firms, including Anthropic, Nvidia, Apple, and Salesforce.
Dave Wiskus, CEO of Nebula, a streaming service partially owned by creators, expressed concern about the practice. “It’s theft,” he stated, adding that it’s “disrespectful” to use creators’ work without consent, especially given the potential for generative AI to replace artists.