YouTube Has Several AI Features. Here's Who They're Actually For.
Every few months, YouTube announces a new AI feature. Every few months, we're supposed to get excited. And to be fair, sometimes it's warranted. But lately I've been noticing something: YouTube's AI isn't one thing. It's a portfolio of features built for very different masters. Some are genuinely for you. Some are for creators. And a few are things nobody asked for, including the people whose content got edited without their knowledge.
So let's stop treating "YouTube's AI" as a monolith and actually look at what's happening, feature by feature. The helpful, the invasive, and the one that caused a full creator meltdown last August.
The One That's Actually Great: Auto-Dubbing
In February 2026, YouTube expanded auto-dubbing to all creators, now supporting 27 languages. The headline number: in December, YouTube averaged more than 6 million daily viewers watching at least 10 minutes of auto-dubbed content. That's not a test. That's a real behavior shift happening at scale.
What makes this AI feature different from most is that it's genuinely bilateral. Creators get discovery in new markets without having to record 27 separate voiceovers. Viewers get access to content they'd otherwise scroll past because of a language barrier. The Expressive Speech upgrade - now live in English, French, German, Hindi, Indonesian, Italian, Portuguese, and Spanish - tries to preserve the creator's original tone and cadence rather than generating a flat robot voice. YouTube is even testing lip sync technology to make the dubbed audio match mouth movements. That's a detail almost nobody notices until it's wrong, and then it's all they can see.
Importantly: creators keep full control. They can disable it, provide their own dub, or filter specific videos out. There's no negative algorithmic impact on the original. As the official announcement put it: "Auto dubs are all gain and no pain."
I believe them on this one. The incentives actually align. YouTube wants more global watch time. Creators want wider reach. Viewers want accessible content. When everyone's pulling in the same direction, the feature tends to be good. Auto-dubbing is YouTube AI doing what it should.
The One That's Useful But Quietly Unsettling: Age Verification AI
In August 2025, YouTube began rolling out an AI-powered age verification system in the U.S. The idea: instead of asking you for ID, the system infers whether you're under 18 by analyzing your viewing history, search patterns, and account behavior.
The intent is reasonable. YouTube is under real pressure to protect minors. Requiring ID uploads creates friction and its own privacy problems. So using behavioral signals makes some operational sense.
But here's where it gets uncomfortable. Security researchers and privacy advocates have pointed out that the system goes well beyond simple verification. It analyzes vast behavioral datasets to create psychological profiles - not just what you watch, but how your viewing patterns correlate with age demographics. The model has an estimated two-year error window, meaning a 20-year-old can get flagged as potentially underage and suddenly have recommendations throttled, personalized ads disabled, and digital wellbeing features turned on.
In other words, the system doesn't just check your age. It builds a profile of what age-related behavior looks like, then maps you onto it. That's not age verification. That's continuous behavioral surveillance with age as the output variable.
The bigger issue is transparency. Critics have noted there's no public documentation of what data the model uses, how long it's stored, or how to challenge a wrong determination. You might find your YouTube experience quietly changed and have no idea why. That's the kind of AI deployment that makes people distrust the platform - not because the goal is wrong, but because the implementation is a black box.
The One Nobody Asked For: Secret Shorts Editing
And then there's this.
In August 2025, music YouTuber Rick Beato noticed something off about one of his recent uploads. His hair looked strange. His skin appeared smoother than usual. On closer inspection, he said, it almost looked like he was wearing makeup. He called fellow guitarist Rhett Shull. Shull had noticed the same thing in his own videos. Both traced it back to YouTube.
What had happened: YouTube was running a quiet experiment on select Shorts, using machine learning to denoise, unblur, and "enhance" the video during processing - without telling anyone. Reports by PetaPixel and SiliconANGLE confirmed that complaints had been circulating on Reddit since June. YouTube only acknowledged the experiment after Beato and Shull went public.
Shull's video breaking down the before/after of what YouTube's AI did to his footage drew 600,000 views. That's not a niche story.
YouTube's defense: it's not generative AI, just traditional machine learning - the same kind of processing your phone applies when you take a photo. The company promised an opt-out was coming, then reversed course entirely after the backlash.
But Rhett Shull's quote captures why this hit differently than a normal product bug: "I did not consent to this. The most important thing I have as a YouTube creator is that you trust what I'm making... Replacing or enhancing my work with some AI upscaling system erodes that trust in YouTube."
He's right. The trust Shull is talking about isn't just emotional - it's the entire value proposition of a creator-platform relationship. When a creator posts a video, they're saying "this is what I made." If YouTube reserves the right to silently modify that after the fact, even with good intentions, the implicit contract breaks down. And that's not a philosophical concern. That's a business problem for YouTube.
"The most important thing I have as a YouTube creator is that you trust what I'm making." - Rhett Shull, on YouTube secretly altering his Shorts
The CEO's Vision vs. What Actually Shipped
In YouTube CEO Neal Mohan's 2026 annual letter, AI is framed as "a tool for expression, not a replacement." He compares it to synthesizers, Photoshop, and CGI - the kind of technologies that felt threatening at first and then became foundational. He's not wrong about the historical pattern. He's also not wrong that over 1 million channels used AI creation tools daily in December, or that more than 20 million users used the "Ask" tool to learn about videos they were watching.
But reading that letter alongside the Shorts editing scandal is a little uncomfortable. The letter emphasizes consent, transparency, creator control, mandatory disclosure of AI content. Those are the right values. They just weren't consistently applied when the engineering team decided to run an undisclosed enhancement experiment on creator Shorts.
YouTube's AI is not a unified strategy executed with consistent principles. It's a large company running different bets in parallel, some of which conflict. Auto-dubbing and secret Shorts editing both came from the same platform in the same year. That tells you something about how these decisions actually get made.
So: Helpful or Creepy?
Both. But the more useful question is: who is each feature actually designed to serve?
- Auto-dubbing: serves viewers (discovery), creators (reach), and YouTube (more global watch time). Aligned incentives. Works well. Good AI.
- AI creation tools (Generate Video, Ask, etc.): serves creators and curious viewers. Opt-in. Transparent. Also good AI.
- Age verification AI: serves YouTube's regulatory compliance, and arguably minors - but at the cost of profiling everyone else's behavior continuously. Opaque. Mixed verdict.
- Secret Shorts enhancement: served... YouTube's internal curiosity about ML enhancement? Nobody asked for this. Nobody consented. It got reversed. Bad AI deployment, regardless of the underlying tech.
The pattern: YouTube's AI is good when the people affected had a say, and sketchy when they didn't. That's not a technical principle. It's a consent principle. And it applies to every new AI feature that ships from here on out.
Mohan's letter is probably sincere. The "AI for expression, not replacement" framing is something the platform should hold itself to. The question is whether the team running Shorts enhancement experiments read it.
YouTube is building something genuinely powerful. The 6 million daily viewers watching dubbed content - that's real. The 20 million people using "Ask" to learn while they watch - that's real. This platform can be profoundly useful in ways it's never been before.
But platforms that want you to trust them can't run silent experiments on your content. They can't infer your age through behavioral surveillance without explaining how. They can't frame everything as being for you while some of it is clearly for them.
Helpful or creepy? Ask yourself who got to decide.