OpenAI Killed Sora. Here’s What It Means for AI
The App That Was Too Weird to Live
You know what? I still remember the first time I opened Sora. It was late September 2025, and I’d finally scored an invite after weeks of waiting. I fired it up on my phone, and there it was—a vertical feed of AI-generated videos that looked eerily like TikTok, except everything was fake. Deepfakes of celebrities, cartoon characters smoking weed, and yes, that infamous video of Sam Altman walking through a pig slaughterhouse asking if his “piggies” were enjoying their slop.
I couldn’t look away. Not because it was good, but because it was absolutely unhinged.
Six months later, OpenAI announced they’re shutting the whole thing down. No explanation, no detailed timeline—just a tweet saying “we’re saying goodbye” and a promise to share more details later. You know the drill. It’s the kind of corporate speak that tells you everything and nothing at once. I read that tweet and thought, “Yeah, sure, ‘more details later’—we all know how that goes.”
But here’s what fascinates me: this isn’t just one app failing. It’s a signal about where AI is actually heading versus where the hype wants it to go.
When the Hype Meets Reality
Let me be clear about something. The underlying Sora 2 technology? Absolutely incredible. The videos it generated were scarily realistic. The audio synchronization was miles ahead of competitors. From a pure technical standpoint, OpenAI had built something genuinely impressive.
So why did the social app fail?
Because nobody actually wanted an AI-only social feed. Think about it. When you scroll TikTok or Instagram, you’re looking for connection—for glimpses into real people’s lives, for authentic moments, for that feeling of “this person gets it.” Sora offered none of that. It was just an endless stream of synthetic content created by algorithms, moderated poorly, and populated mostly by people making deepfakes of public figures who never consented to be there.
The “cameo” feature (later renamed “characters” after Cameo.com sued and won—honestly, kind of hilarious they didn’t see that coming) let users scan their faces and generate realistic videos of themselves. Sounds cool in theory, right? In practice, it became a playground for creating unauthorized deepfakes of celebrities, politicians, and dead people. Martin Luther King Jr.’s daughter had to publicly beg users to stop making AI videos of her father. Robin Williams’ family faced the same nightmare.
I don’t know about you, but that doesn’t sound like the future I signed up for.
The Kentucky Woman Who Said No
Here’s where things get really interesting. While OpenAI was quietly killing its creepiest app, something else was happening in the real world.
An 82-year-old woman in Kentucky was offered $26 million to turn her farm into an AI data center. She said no. Not because she didn’t need the money—I’m sure she did—but because she recognized that something fundamental was being asked of her. Her land, her community, her say in what gets built next door.
That same company is now trying to rezone 2,000 acres nearby anyway. The tactics are predictable: if you can’t buy them, work around them.
But her refusal matters. It represents a growing resistance to AI’s physical expansion into our lives. We’re seeing it everywhere now. Communities pushing back against data centers that consume massive amounts of water and electricity. Artists suing AI companies for training on their work without consent. Workers organizing against AI-driven layoffs.
The AI industry has spent years telling us that resistance is futile—that this technology is inevitable, that adaptation is the only option. But the Kentucky woman proves that’s not true. Sometimes people just say no. And honestly? I’m here for it.
Why VCs Are Still Betting Billions
Now here’s the paradox that keeps me up at night. OpenAI is shutting down Sora. Meta’s Horizon Worlds VR platform is struggling despite massive investment. The consumer appetite for AI-generated social content seems limited at best.
And yet Kleiner Perkins just raised $3.5 billion to go all-in on AI. Other VCs are pouring billions more into the sector. The money has never flowed faster.
What gives?
I think we’re witnessing a fundamental shift in where AI creates value. The consumer social play—the “AI TikTok” model—is failing because it misunderstands what people want from social platforms. But enterprise AI, infrastructure plays, and vertical applications? Those are booming.
Look at the drone startups that are actually finding traction. Zipline raised another $200 million for medical supply delivery. Lucid Bots got $20 million for window-washing drones. Brinc launched a police surveillance drone that claims it can replace helicopters. These aren’t social apps. They’re solving real problems in specific industries.
The VCs aren’t stupid. They see what’s working and what isn’t. The next wave of AI investment isn’t going to consumer entertainment—it’s going to infrastructure, automation, and enterprise tools that actually make money.
The Deepfake Dystopia We Almost Got
Let me tell you about the content that dominated Sora’s feed in its final months. After users got bored of making Sam Altman steal Nvidia chips from Target (yes, that was a real trend—and honestly, sort of funny if it weren’t so weird), they pivoted to copyright infringement as performance art.
Mario smoking weed. Naruto ordering Krabby Patties. Pikachu doing ASMR videos. It was like watching copyright lawyers’ worst nightmares come to life in real-time.
The moderation was a joke. OpenAI claimed they didn’t allow public figures who hadn’t opted in, but the guardrails were trivial to bypass. Anyone with a few minutes and some creativity could generate whatever they wanted.
I found myself wondering: is this what we want AI to be? A tool for making slightly higher-quality shitposts? For creating fake videos of dead celebrities? For generating content that exists solely to dodge copyright enforcement?
The technology was impressive. The use case was depressing.
What This Means for Builders
If you’re building in AI right now, Sora’s failure is a gift. It’s a clear signal about what not to do.
Don’t build AI for AI’s sake. Build it to solve real problems for real people. The vertical applications—healthcare, logistics, manufacturing, creative tools for professionals—those are where the sustainable businesses live.
Don’t underestimate the moderation challenge. If your AI tool can generate harmful content, users will find a way to do exactly that. Plan for it from day one, not as an afterthought.
Don’t assume consumer social is the default path to success. TikTok works because it’s human. Instagram works because it’s human. Sora failed because it wasn’t. The lesson isn’t that AI can’t enhance social platforms—it’s that AI can’t be the platform.
And maybe most importantly: respect the “no.” The Kentucky woman who turned down $26 million isn’t an obstacle to overcome. She’s a signal that communities have agency, that consent matters, and that building the future requires bringing people along rather than working around them.
The Real Wave Is Just Starting
Here’s my honest take: Sora’s death isn’t a sign that AI is failing. It’s a sign that the first wave of AI consumer apps is maturing, and the market is getting smarter about what works.
The technology behind Sora—the video generation, the synchronized audio—isn't going anywhere. It will resurface, almost certainly, in tools built for professionals rather than feeds built for scrolling. The app is dead. The capability is just getting started.