OpenAI Says It Cares About Humanity. Its Critics Say Otherwise — Including Its Own Employees

Image by Teemu Paananen on Unsplash

Last week at the Elevate conference in Toronto, I had 20 minutes face-to-face with Chris Lehane, OpenAI’s VP of Global Policy. If you don’t know the name, Lehane is a seasoned political fixer: the guy you call when storms start brewing and your whole brand is on the line. From serving as Al Gore’s press secretary during the Clinton era to navigating regulatory nightmares at Airbnb, he has built a career in damage control.

Now, he’s helping OpenAI walk an increasingly fine line: preaching the gospel of democratized artificial intelligence while the company faces backlash over copyright infringement, environmental impact, ethics, and transparency. And that balancing act? It’s getting harder by the day.


How Sora Sparked a Firestorm

Image by BoliviaInteligente on Unsplash

This all kicked off with Sora 2, OpenAI’s powerful video-generation tool. Just days before our chat, it shot to the top of the App Store. People were creating digital versions of themselves, of dead celebrities like Tupac Shakur, and of cartoon icons like Pikachu. Sounds fun, until you ask where all that content came from.

That’s where it gets messy.

Initially, OpenAI told rights holders they’d have to “opt out” if they didn’t want their work showing up in Sora, which isn’t typically how copyright law works: permission is supposed to be granted, not assumed. But once it noticed how much users loved playing with recognizable characters, OpenAI conveniently “evolved” toward an opt-in model. That’s not innovation. That’s testing boundaries.

And it’s pitting OpenAI against major rights holders, with lawsuits already piling up from publishers like The New York Times and the Toronto Star. Lehane calls Sora “a general-purpose technology,” like the printing press. But when users are digitally resurrecting the dead or churning out AI-generated deepfakes of real people, that metaphor starts to break down.


Can You Democratize AI Without Paying Creators?

When I asked Lehane why publishers weren’t benefiting from the content OpenAI is training on, he pointed to “fair use,” a U.S. legal doctrine meant to balance public access with creators’ rights. He went so far as to describe fair use as the “secret weapon” of U.S. tech dominance.

But here’s the thing: while OpenAI’s business thrives on this legal grey area, human creators are being sidelined. AI tools are now capable of replacing the very people whose work trained them in the first place — journalists, artists, even actors. One painful example that hit home recently? Zelda Williams, daughter of the late Robin Williams, pleading on Instagram for people to stop sending her AI-generated videos of her father.

“You’re not making art,” she wrote. “You’re making disgusting, over-processed hotdogs out of the lives of human beings.”


AI Needs Power — And a Lot of It

Image by Roman Budnikov on Unsplash

Beyond the human toll, there’s a looming infrastructure question: the energy demand of AI is staggering.

OpenAI already runs a massive data center campus in Abilene, Texas, and is building another in Lordstown, Ohio, with Oracle and SoftBank. The company’s stated ambition is to bring roughly a gigawatt of new capacity online every week, about the output of one nuclear reactor. For context: China added around 450 gigawatts of capacity last year, alongside 33 nuclear facilities. OpenAI’s answer to this? Democracy needs to keep up, or fall behind.
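To make that pace concrete, here’s a quick back-of-envelope comparison, annualizing the rough figures above. This is my own illustrative arithmetic, not an official accounting from OpenAI or anyone else:

```python
# Back-of-envelope only: annualize a ~1 GW/week build-out target and
# compare it with China's reported ~450 GW of grid additions last year.
openai_gw_per_week = 1       # stated target, approximate
china_gw_per_year = 450      # reported additions, approximate

openai_gw_per_year = openai_gw_per_week * 52
share_of_china = openai_gw_per_year / china_gw_per_year

print(f"OpenAI target, annualized: ~{openai_gw_per_year} GW/year")
print(f"Roughly {share_of_china:.0%} of China's build-out over the same period")
# Output:
# OpenAI target, annualized: ~52 GW/year
# Roughly 12% of China's build-out over the same period
```

In other words, even at a gigawatt a week, one company would be adding about an eighth of what China’s entire grid grew in a single year. That gap is exactly what Lehane is gesturing at when he says democracy needs to keep up.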

It’s a compelling argument. But what happens when communities like Lordstown are footing the bill, literally? Lehane didn’t promise lower utility bills, just modernized energy systems and reindustrialization. Inspiring, sure. But promises like that don’t always trickle down.

Also worth noting: video generation is among the most power-hungry AI applications out there. So while Sora may be fun to play with, it’s not exactly eco-friendly.


Internal Cracks Are Starting to Show

Lehane is polished on stage: he admits uncertainty, talks about sleepless nights, even acknowledges how high the stakes really are. But are those thoughtful pauses or just skillful PR?

Because at the same time I was talking to him in Toronto, OpenAI was making moves that suggest a far grimmer picture.

A nonprofit lawyer named Nathan Calvin, who works on AI regulation in California, was served a subpoena at his home by a sheriff’s deputy during dinner. The request came from OpenAI and demanded his private communications with college students, policymakers, and former OpenAI employees. Why? OpenAI reportedly suspected Calvin’s nonprofit was funded by Elon Musk (it isn’t). More importantly, Calvin had publicly supported SB 53, a California bill aimed at making AI safer, which is the very kind of regulation OpenAI claims to support on paper.

After the subpoena, Calvin called Lehane “the master of the political dark arts.” Whether that’s a compliment or a red flag depends on who you ask.


Even OpenAI Employees Are Speaking Out

Here’s the part that really caught my attention: OpenAI’s own people are starting to speak publicly about their concerns.

Boaz Barak, a researcher at OpenAI and Harvard professor, wrote that Sora 2 is “technically amazing” but warned that it’s too soon for celebration, especially given the dangers of deepfakes.

Then came something even more remarkable. Josh Achiam, OpenAI’s head of mission alignment, took to social media and said, “We can’t be doing things that make us into a frightening power instead of a virtuous one.” He added that he was aware his honesty might put his career at risk.

That’s not protesters on the outside. That’s someone on the inside, raising the alarm. When your own mission alignment lead is concerned that your company is becoming “a frightening power,” something is deeply off track.


So What Now?

Image by Markus Winkler on Unsplash

Lehane admitted during our interview that there’s no rulebook for what OpenAI is trying to do. And he’s right. AI is racing ahead of regulation, ethics, and public understanding.

But in the absence of a rulebook, the question becomes who gets to write the story: the users, the creators, the regulators, or the companies themselves.

Right now, OpenAI says it’s building AI that benefits all of humanity. But when employees, creators, and watchdogs all raise the same concerns, from how training data is used to who gets harmed to how much power is being concentrated, it’s fair to ask: who’s actually benefiting?

If even its own executives are worried, maybe it’s time the rest of us paid closer attention.


Keywords: OpenAI, Chris Lehane, Sora, AI Video Tool, Copyright Infringement, AI Policy, Digital Ethics, SB 53 California, Gigawatt Energy, AI Fair Use, Deepfakes, OpenAI Criticism, AI Regulation


Read more of our stuff here!
