📬 This article was first published on The Nov Tech newsletter 🔗 with early access to paid subscribers. Subscribe for exclusive analysis. Medium non-members can read 2 chapters over there, including the introduction, for free.
Three weeks ago, Elon Musk posted two messages on X that broke the internet.
The first: "We have entered the singularity."
A few hours later, he doubled down: "2026 is the year of the singularity."
The timing is fascinating because these declarations came right after xAI, his artificial intelligence company, closed a $20 billion funding round. And if you connect this to what happened last week at Davos, you'll understand why this claim deserves serious attention, even if you're skeptical.
Because Elon Musk isn't the only AI CEO talking about the singularity anymore.
This article departs from my usual content. Rather than analysis alone, I want to offer you tangible signs that we are approaching this landmark moment.
What the Singularity Actually Means
The singularity isn't Terminator. But it's not Silicon Valley marketing either.
It's a precise concept that mathematician and science fiction author Vernor Vinge formalized in 1993 in a paper titled "The Coming Technological Singularity." His idea was simple: when superhuman intelligence is created, the era of human dominance will end. He compared this moment to the event horizon of a black hole, that boundary beyond which you literally can no longer see what's happening.
Then, Ray Kurzweil popularized the concept in 2005 with his book "The Singularity Is Near." His definition is more technical. For him, it's the moment when artificial intelligence can improve itself, then that improved version improves itself again, and progress accelerates so fast that human life is irreversibly transformed.
Kurzweil predicted it would happen around 2045. Elon Musk says it's now.
What's interesting is that the singularity isn't just about the level of intelligence. It's about the speed of improvement. When AI progresses faster than we can understand it, we're in the singularity.
Why Is Musk Making This Claim Now?
The answer partly lies in what's happening at xAI, his AI company.
On January 6th, the company announced it had raised $20 billion, far exceeding the initial $15 billion target. NVIDIA, Cisco, Fidelity, the Qatar Investment Authority, Abu Dhabi's sovereign wealth fund: all these financial giants are putting colossal sums behind Musk's ambitions.
xAI's valuation hovers around $230 billion, placing it at the same level as OpenAI and Anthropic.
And with this money, they're building something unprecedented: the Colossus supercomputer. Located in the Memphis area, it already houses over one million H100-equivalent GPUs, and the expansion project targets 1.5 million. To give you an idea of the scale, the electricity needed to run it all could power 1.5 million homes.
Grok 5, their next model planned for Q1 2026, will have 6 trillion parameters. ChatGPT-4, which amazed everyone at its release, had only about 1.8 trillion.
What makes xAI particularly dangerous for competitors is its access to real-world, real-time data. Millions of Teslas circulating collect driving data continuously. Platform X provides a real-time information feed on what's happening in the world. OpenAI and Google don't have access to this kind of data.
The Numbers That Give Weight to Musk's Claims
There's a benchmark test for AI called GPQA Diamond, composed of 198 doctoral-level questions in biology, chemistry, and physics. These are questions that only experts with years of study can solve.
Claude Opus 4.5 scored around 87%. GPT-5.2 Pro from OpenAI hit 93%. Gemini 3 Deep Thinking from Google reached 93.8%.
We're talking about doctoral-level scores on questions designed to differentiate experts from non-experts.
In programming, the SWE-bench benchmark evaluates real software engineering tasks rather than academic exercises: the kinds of problems working developers face every day. In 2024, the best models plateaued around 50%. Today, Claude 4.5 scores 80.9%, edging out ChatGPT's roughly 80%. That's a 30-point jump in a single year.
OpenAI's GDPval benchmark compares model performance to that of human professionals across 44 occupations. On it, GPT-5.2 reportedly equals or surpasses the best experts in the sector on 71% of tasks. Lawyers, accountants, analysts, marketers: all professions that thought themselves safe because they required a degree.
In mathematics, the best models now score above 90% on AIME 2025, the invitational exam reserved for the country's most brilliant high-school math students.
What remains difficult for AI, however, is scientific discovery. On Nobel-level research benchmarks, scores hover around 11%. AI hasn't yet replaced researchers making fundamental breakthroughs.
But the trajectory is what really matters here. Two years ago, AI failed basic programming job interviews. Today, it surpasses senior engineers.
What Happened at Davos Should Concern Everyone
Dario Amodei, CEO of Anthropic, made declarations that shook the assembly. According to him, AI will replace almost all software developer work in the next 6 to 12 months. Models will reach Nobel level in several domains by 2026 or 2027. And 50% of junior white-collar jobs could disappear in the next one to five years.
He revealed that at Anthropic, engineers barely write code by hand anymore. AI does everything, and humans review and adjust.
Facing him was Demis Hassabis, CEO of Google DeepMind. He's more cautious, but admits a 50% probability of reaching AGI before 2030.
Sam Altman at OpenAI recently wrote that they now know how to build AGI as it's always been understood, and that OpenAI is now focusing on superintelligence.
These leaders don't agree on everything, and there are dissenting voices worth listening to. Yann LeCun, deep learning pioneer and Turing Award winner, thinks current LLMs will never lead to human-level intelligence. For him and other researchers in the industry, a completely different approach is needed. He left Meta in part because his skepticism about this technological path made him unpopular.
Hassabis himself says there are still one or two breakthroughs missing before true AGI, notably the ability to learn from a few examples and reason over the long term.
Then there are Musk's past optimistic forecasts that didn't happen on schedule. Tesla's full self-driving was supposed to be ready years ago. Sure, it's now very advanced: cars already drive themselves in some parts of the US, but the technology hasn't yet spread worldwide.
The "low-hanging fruit" argument also deserves attention: the easy discoveries may already have been made. We see this in pharmaceutical development, where each new drug costs more and more to bring to market. AI could hit a similar plateau.
Four Signals That Will Tell You We're Actually There
If you want to know if we're really approaching the singularity, here are the indicators to watch.
- First signal: the economy. One definition of the singularity is economic growth exceeding 20% annually. The most dynamic economies today grow by 5 to 7%. Watch AI investment flows and productivity figures.
- Second signal: AI self-improvement. The singularity assumes AI can make itself more intelligent without human intervention. AI already helps design next-generation chips and optimize neural network architectures, but we don't yet have totally autonomous loops.
- Third signal: benchmark saturation. When AI scores close to 100% on all tests we can submit to it, including those requiring creative reasoning, it's a powerful sign. We're almost there in math, progressing quickly in science and code.
- Fourth signal: brain-computer interfaces. Kurzweil's complete vision implies a fusion between human and AI. Neuralink is already working on this. And there's Optimus, Tesla's humanoid robot. Musk predicts that in three years, robots will surpass surgeons. When AI has a human form with hands to manipulate the physical world, we'll be in truly unfamiliar territory.
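To see why the first signal's 20% threshold would be so extraordinary, here's a quick back-of-the-envelope calculation comparing it to today's fastest-growing economies (a minimal sketch; only the 20% and 5-7% rates come from the article, the ten-year horizon is my own illustration):

```python
# Compound growth: singularity-level (20%/yr) vs. today's fastest
# economies (~7%/yr), over one decade.
def compound(initial, rate, years):
    """Value after compounding `initial` at annual `rate` for `years` years."""
    return initial * (1 + rate) ** years

baseline = 1.0  # normalized size of an economy today
fast_today = compound(baseline, 0.07, 10)   # roughly doubles
singularity = compound(baseline, 0.20, 10)  # roughly sextuples

print(f"7% growth over 10 years:  {fast_today:.2f}x")
print(f"20% growth over 10 years: {singularity:.2f}x")
```

At 7% a year, an economy takes about a decade to double; at 20%, it grows more than sixfold in the same window. That gap is why sustained 20% growth would be an unmistakable break from all recorded economic history.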
What This Actually Means for Your Life
If all this materializes, the consequences on our lives will be massive.
McKinsey estimates that up to 30% of the global workforce could be displaced by automation by 2030. More speculatively, some experts project that figure could reach 47% by 2034.
These aren't just factory workers. These are lawyers, accountants, radiologists, and programmers. Jobs we thought were protected because they required degrees.
The optimistic version says we're entering an era of abundance. If AI does the work, humans can work less. This allows for increased time for creativity, family, and leisure activities. AI could also solve our biggest challenges: diseases, climate change, and food production.
The pessimistic version says that those who control AI will become extraordinarily powerful while others are left behind.
Both scenarios can coexist. What will make the difference is the choices we make now as a society.
What You Can Actually Do About This
First, understand the technology. Use ChatGPT, Claude, Grok, all these tools. Test them. See what they can do. I say this all the time, but those who refuse to engage will quickly be left behind.
Think about your skills. What do you do that AI can't do today? Creativity, empathy, leadership, and building human relationships. Once you've identified that, double down on what's profoundly human.
Stay flexible. The job you do today may not exist in five years, or at least not in its current form. But that's not a reason to panic. It's a reason to stay adaptable.
AI leaders, whether optimistic like Amodei or much more cautious like LeCun, all converge on similar timelines. Something major is coming. Whether you call it the singularity, AGI, or the AI revolution doesn't matter. The direction is clear. Capabilities are improving exponentially.
In every major technological transition in history, there have been winners and losers. Those who saw the internet coming. Those who understood mobile. And those who waited until it was obvious to everyone.
The particularity of this one is that it's moving so fast that waiting to see is no longer an option.
Thanks for reading. Are you preparing for this shift or hoping it slows down? Let me know in the comments.
(Originally published at https://www.thenovtech.com)
Sources: Vernor Vinge, "The Coming Technological Singularity" (1993); Reuters; xAI Colossus data center announcements; GPQA Diamond benchmark; SWE-bench; WEF AI panel coverage via CNBC; Google DeepMind blog; McKinsey Global Institute; Neuralink.