In 1939, Albert Einstein famously sent a letter warning President Roosevelt that Germany might harness uranium for "extremely powerful bombs."
That letter convinced Roosevelt to move forward with what became the Manhattan Project, ushering in the nuclear age.
But by the time the Nazi threat was eliminated in 1945, the scientists who had helped create this power had changed their minds about how it should be used.
In July 1945, 70 Manhattan Project scientists led by Leo Szilard signed a petition to then-President Truman, pleading that atomic bombs not be used against Japan without first giving the Japanese an opportunity to surrender.
To them, the original justification for building the bomb — defeating Nazi Germany — no longer existed. Joseph Rotblat, a Polish physicist and Manhattan Project scientist, later recalled:
"there was never any idea [among scientists] that [the bomb] would be used against Japan. We never worried that the Japanese would have the bomb. We always worried what [Werner] Heisenberg and other German scientists were doing".
The petition never reached Truman — it was classified and not made public until 1961.
In the shadow of Hiroshima and Nagasaki, atomic scientists struggled with the grave consequences of their own creation. They spent their remaining years establishing institutions like the International Atomic Energy Agency and issuing the Russell-Einstein Manifesto, desperately trying to prevent the very arms race their creation had unleashed.
This legacy of conscience over profit stands in sharp contrast to the behavior of many of today's AI industry leaders, who sound similar alarms about extinction yet simultaneously pursue massive military contracts and profits while doing little to stop the very dangers they claim these systems pose.
The Retirement Fund Test: Where's the Urgency?
Leading artificial intelligence researchers are sounding alarm bells about humanity's future, warning that AI will drive up unemployment (we'll see), with some taking the drastic step of abandoning their retirement savings because they believe the world won't be around long enough to need them (this seems silly).
👇 I recently wrote about AI job displacement and corporate retraining schemes here
Nate Soares, president of the Machine Intelligence Research Institute, told The Atlantic he no longer contributes to his 401(k). "I just don't expect the world to be around," he said.
Dan Hendrycks, director of the Center for AI Safety, told The Atlantic that by the time he reaches retirement age, he expects "everything is fully automated — that is, if we are still around".
Of course, media outlets love these dramatic quotes. They're more engaging than dry policy discussions. But even accounting for sensationalism, the broader pattern remains troubling.
Where is the organized advocacy?
Nuclear scientists didn't just worry privately about atomic weapons and drop quotes in the press about not saving for a future — they petitioned presidents, formed advocacy coalitions, and sacrificed lucrative industry relationships.
Some AI safety researchers do testify before Congress and publish academic papers, but the contrast in urgency and independence remains stark.
Follow the Money: Safety Theater Exposed
At this point you might be saying: researchers like Soares and Hendrycks work at independent safety organizations, not at OpenAI or Google directly. So how much influence can the companies really have over them?
As they say — follow the money.
These "independent" safety organizations are funded by the same billionaires who hold major stakes in the AI companies they're meant to oversee.
A conflict of interest we'll explore in detail later.
But whatever their intentions, researchers like Soares and Hendrycks operate within a funding ecosystem that creates perverse incentives.
Their apocalyptic narratives can be weaponized by the very companies they critique to justify racing ahead with AI development — not slowing it down or regulating it.
When safety experts warn of extinction risks, they inadvertently argue that we need to move faster to "solve alignment" before a spooky other does.
Sound familiar?
Their fatalistic predictions — abandoning retirement savings, expecting full automation — create urgency that benefits the companies they claim to fear.
If these researchers truly believed their own warnings, they wouldn't just stop contributing to their 401(k)s. They'd be organizing development moratoria, demanding transparency, or building coalitions for immediate regulation.
Instead, they've created what critics call "safety theater" — the appearance of concern while the very people funding them maintain control of both the technology and the regulatory process.
The Lobbying Machine
While warning of apocalypse, AI companies have built a sophisticated influence machine in Washington D.C.
They spent $36 million on federal lobbying in just the first half of 2025. Politico reported that OpenAI's spending jumped 30% year-over-year to $620,000 in Q2 2025, while Anthropic's spiked to $910,000 — a massive increase from $150,000 the previous year.
Politico also found that nearly 500 different entities lobbied on AI policy in Q2 2025 alone.
These lobbyists prioritize their employers' valuations and VC funding over public safety. If the companies truly cared about safety, they'd build less risky systems and stop marketing AI as job-replacement technology.
But you don't get those valuations by selling a new Microsoft Office suite.
Also, according to The Grey Hoodie Project, 88% of AI faculty and 97% of ethics faculty have received funding from or been employed by Big Tech.
Companies like Meta have been caught gaming safety benchmarks to make their models appear safer than they are. Meanwhile, research shows that 59% of papers published in top journals addressing AI ethics include at least one author with financial ties to Big Tech companies, and many safety benchmarks highly correlate with general capabilities, potentially enabling "safetywashing" — where capability improvements are misrepresented as safety advancements.
Companies fund the researchers, influence the politicians, and control the narrative.
But the most damning contradiction isn't in Washington — it's on the battlefield.
Building the Terminator While Warning About Skynet
Then there's the whole Terminator-vibes part of it.
Many of the executives warning of AI-driven human extinction are simultaneously building the military applications that could fulfill their darkest predictions.
Unlike nuclear scientists who opposed weaponization after witnessing atomic devastation, AI leaders embrace militarization using unproven AI systems while manufacturing fear about their own technology.
Geoffrey Hinton warns of a 10–20% chance of AI extinction within decades, yet Google — where he previously worked — now holds Pentagon contracts to integrate AI into "warfighting domains".
Hinton's warnings would carry more weight if he used his influence to challenge Google's military partnerships instead of just issuing dire predictions from the sidelines.
Sam Altman calls for regulation "like nuclear weapons," but OpenAI actively develops battlefield AI and lobbies against meaningful oversight that might threaten profitability.
AI is Already Out Here Killing People
The military applications are already deadly real. Israel uses AI for "real-time targeting" in Gaza, while Ukraine deploys AI-powered systems against Russian forces.
Autonomous weapons like the Israeli Harop drones have killed at least 11 people in documented attacks, operating independently for hours after launch before selecting and engaging targets.
These systems represent what experts call a preview of "the autonomous future of warfare," where machines make life-and-death decisions with minimal human oversight. Pentagon officials acknowledge that AI is "speeding up the execution of kill chain" and reducing human involvement in targeting decisions.
I, for one, don't want AI systems deciding who gets killed.
Rather than working to prevent an AI arms race, AI industry leaders are actively fueling and profiting from it. The Pentagon's 2025 budget requests billions for "autonomous wingman fighter drones" and AI weapons development.
Companies compete for military contracts while warning that their own technology could end civilization.
How is this possible? How can the same people warning of extinction simultaneously profit from building weapons of mass destruction?
The answer lies in who's really pulling the strings.
The Billionaire Puppet Masters
Behind the AI safety movement lie the serious conflicts of interest I mentioned above.
According to 80,000 Hours' analysis of AI safety funding, more than half of all philanthropic AI safety money comes from a single source: Good Ventures, the foundation of Facebook co-founder Dustin Moskovitz, which gives via Open Philanthropy. The same billionaires funding "AI safety" organizations also hold major investments in the AI companies those organizations are supposedly overseeing.
This doesn't make researchers like Soares and Hendrycks villains — many genuinely believe in their work. But it does raise questions about whether truly independent AI safety research is possible when the field depends so heavily on tech billionaire philanthropy.
Effective Altruism billionaires like Jaan Tallinn and Dustin Moskovitz fund both AI companies AND the safety organizations meant to regulate them.
Imagine if the International Atomic Energy Agency had been funded by weapons manufacturers while warning of nuclear apocalypse.
Misdirection: Future Fears vs. Present Harm
While nuclear scientists addressed the immediate, documented devastation they had witnessed, AI leaders engage in a clever misdirection. By focusing on hypothetical extinction scenarios, they avoid accountability for the documented current harms their systems already cause.
This allows them to appear concerned about humanity's future while ignoring the measurable damages happening today.
This isn't to say AI safety research is worthless; there are genuine technical challenges around alignment and control. But the conflicts of interest make me doubt the industry is taking safety as seriously as it claims.
The nuclear scientists showed us a different way. They proved that scientific integrity could triumph over profit, that researchers could choose humanity over their own financial interests.
It's time to demand the same courage from those building tomorrow's weapons of mass destruction.
For the full oral history of the Manhattan Project told through the scientists' own voices, including their moral struggles and advocacy efforts, check out "The Devil Reached Toward the Sky" — a multicast audiobook production I edited.
Wesley is a 20+ year media professional with Grammy and Emmy nominations who investigates AI's impact on creative industries.