I had just been accepted to university for my dream bachelor's degree programme, and I attended with high hopes. But despite the excitement, the thing I heard most was "AI is here, we might not need as many developers as in previous years." I knew AI can generate some code, but we still need to know the fundamentals to work with it; developing a real solution from a simple prompt alone isn't possible. I discovered this firsthand: knowing almost nothing, I tried it and failed miserably.
How did all of this get started?
In 2019, Forbes predicted that software engineering jobs might increase by 22% by 2029. The US Bureau of Labor Statistics also projects a 15% increase from 2024 to 2034. These figures are significantly higher than average growth across other industries. Flexible hours, competitive pay, and strong benefits were key factors attracting people to this career path. Statistics don't lie. According to the US News & World Report:
For the second year in a row, software developer takes the №1 spot as the Best Job overall.
But despite these figures pointing to a strong career path, there was a steady stream of news about mass layoffs at local and international companies such as IFS, Microsoft, Google, Meta, and Oracle. According to a PwC B2B SaaS strategy publication from November 2024, companies were predicted to replace staff with AI within 3–5 years.
Is AI actually sustainable?
It's 2026 now, and the scene has changed beyond what we expected. OpenAI, the company that started the whole AI bubble, reported a projected annual loss of $14 billion starting in 2026. OpenAI's product isn't failing for lack of users; its weekly user count is 800 million, which means it works. The losses come from how AI differs from traditional software. In traditional software, the main cost is human: you pay better developers to write cleaner code. AI, by contrast, is governed by scaling laws, the mathematical rules that relate a model's capability to the compute, data, and parameters used to train it. The catch is that the relationship isn't linear: even to double a model's capability, you have to pay far more than double in compute. Reports suggest GPT-4 has between 1 trillion and 1.8 trillion parameters, and CEO Sam Altman confirmed it cost more than $100 million to train.
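To see why "double the capability" costs far more than double, here is a toy sketch of a power-law scaling law. The functional form is in the spirit of published scaling-law papers, but the exponent is an illustrative assumption, not a measured value for any real model.

```python
# Toy power-law scaling law (illustrative numbers, not real model constants).
# Assume reducible loss falls as a small power of training compute C:
#     loss(C) = (C0 / C) ** alpha
# Then halving the loss requires multiplying compute by 2 ** (1 / alpha).

alpha = 0.05  # assumed scaling exponent (hypothetical, for illustration)
compute_multiplier = 2 ** (1 / alpha)

print(f"To halve the reducible loss: ~{compute_multiplier:,.0f}x more compute")
```

With an exponent of 0.05, halving the loss needs roughly a million times more compute; with a larger exponent the multiplier shrinks, but it stays far above 2x for any realistic value. That is the "pay exponentially" problem in one line of arithmetic.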

To train its models, OpenAI needs specialised chips such as Nvidia's B200. A single chip costs around $30,000 to $40,000, and one chip comes nowhere near meeting demand: you need many thousands of them, and you cannot keep them for 20 years. You have to replace them as soon as the next chip generation arrives on the market.
Again, this is only one component; a data centre has other necessities, the main one being power. A single B200 draws around 1,000 W. Data centres in the US consumed approximately 176 terawatt-hours (TWh) in 2023 alone, and an estimated 10–20% of that went to AI workloads.
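The figures above can be turned into back-of-the-envelope arithmetic for a hypothetical cluster. The chip count is an assumption for illustration; the price and wattage are the numbers quoted above.

```python
# Back-of-the-envelope cost and power for a hypothetical training cluster.
# Chip count is assumed; price and wattage come from the figures above.
chips = 10_000                  # assumed cluster size (illustrative)
price_per_chip = 35_000         # USD, midpoint of the $30k-$40k range
watts_per_chip = 1_000          # ~1,000 W per chip

hardware_cost = chips * price_per_chip               # USD, chips alone
power_draw_mw = chips * watts_per_chip / 1e6         # megawatts
annual_energy_gwh = power_draw_mw * 24 * 365 / 1000  # GWh per year

print(f"Hardware: ${hardware_cost / 1e6:.0f}M")
print(f"Continuous draw: {power_draw_mw:.0f} MW")
print(f"Energy: {annual_energy_gwh:.1f} GWh/year (chips only, before cooling)")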
The other necessity is cooling. With such high energy consumption, these chips need massive cooling, and the off-the-shelf coolers we use won't do: data centres pump liquid through the chips or immerse them in coolant to manage heat. A Stanford report states that 60 data centres in Phoenix, Arizona, consume about 670 million litres (177 million gallons) of fresh water a day. That is from a single state. The same report notes this is less than what farmers use for agriculture, but agriculture produces food, which is essential for all living beings, and nearby communities risk losing access to clean drinking water.
Several states have banned the construction of new data centres because of their resource consumption. Given everything discussed above, AI is not a sustainable solution on the business end; to me, it's a money pit.
AI is going bad
Why do I talk about the sustainability of AI? Because without understanding the disaster, we cannot focus on what happened next. Now comes the plot twist. The MIT NANDA report states that generative AI has failed to deliver a measurable return on investment. Companies labelled the layoffs as cost-cutting, but the AI investment itself has become a liability: money spent to lose money for nothing. AI-generated code is usually simpler, more repetitive, and less structurally diverse; it cannot make a system more robust.
Some may say AI reduces coding time, but that's the biggest misunderstanding here. While AI may help a junior developer finish a basic task 35% faster initially, the long-term outcome is different: the final product becomes harder to maintain. Instead of creating elegant, reusable logic, AI clones existing blocks. This creates layers of code that work, but nobody knows why, and when something breaks, the code is nearly impossible to debug.
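A small, hypothetical example of the cloning pattern described above. The functions and VAT rates are invented for illustration; the point is the shape of the code, not the domain.

```python
# Cloned blocks (the shape AI assistants often emit): each copy must be
# found and fixed separately when a rule changes, which is how layers of
# code "that work, but nobody knows why" accumulate.
def total_with_vat_books(price):
    return round(price + price * 0.05, 2)

def total_with_vat_food(price):
    return round(price + price * 0.05, 2)  # near-identical clone

# Reusable logic: one table, one function, one place to change and test.
VAT_RATES = {"books": 0.05, "food": 0.05, "electronics": 0.15}

def total_with_vat(price, category):
    return round(price * (1 + VAT_RATES[category]), 2)

print(total_with_vat(100, "electronics"))  # 115.0
```

Both versions "work" today, but only the second one survives a tax-rule change without a repo-wide hunt for every clone.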
The security risk that AI code brings is also worth considering before jumping to any conclusion. Veracode's 2025 GenAI Code Security Report states that 45% of code samples failed security tests, with Java the riskiest language at a 72% failure rate. One study even found that using AI slowed experienced developers down by around 19%.

CodeRabbit concluded that "AI-authored code contains worse bugs than software crafted by humans." AI-generated pull requests contain an average of 10.8 issues, while human-written ones contain 6.4. The core problem is that AI code is syntactically correct while the logic is broken, so developers spend around 11 hours per week debugging AI-generated code.
The real problem
The above shows there is still high demand for senior developers in the IT field. The real issue is that companies are no longer hiring juniors for new positions; they are trying to replace junior developers with AI and hire only seniors. This is killing the pipeline for future senior developers. To become a senior, you must first work as a junior, writing boilerplate code to build a foundational understanding, but companies now expect candidates to arrive with senior-level skills already.
There has also been an approximately 9% decline in tech worker salaries, as the market is flooded with developers from recent layoffs. Management justifies it like this: "AI is doing the heavy lifting, you only have to oversee the project, and we cannot justify 2022 salaries for that."
There's also a huge problem with handing responsibility over to AI. We saw this first with self-driving vehicles: if the car makes a mistake while in self-driving mode, who takes the blame? The same question was raised when Zoom proposed an AI agent attending meetings on behalf of the real person: if the agent makes a commitment, who is responsible for it? The same applies to generative AI. Antigravity made a real-world mistake: a user asked the AI to delete the project cache, and the AI deleted the entire drive. The AI's response was "I made a catastrophic error in judgment." Deleting a drive without confirmation, with no accountability afterwards, doesn't make any sense.
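One common mitigation for incidents like this is to never let an agent execute a destructive action directly, but to gate it behind an explicit confirmation. The helper below is a hypothetical sketch, not any real agent framework's API; the names are invented for illustration.

```python
# Hypothetical confirmation gate for agent actions: destructive operations
# are refused unless a human has explicitly confirmed them first.
import shutil

DESTRUCTIVE = {"delete", "drop", "format"}

def run_action(action: str, target: str, confirmed: bool = False) -> str:
    if action in DESTRUCTIVE and not confirmed:
        # Refuse and report what would have happened instead of acting.
        return f"REFUSED: '{action} {target}' requires explicit confirmation"
    if action == "delete":
        shutil.rmtree(target)  # only reachable with confirmed=True
        return f"deleted {target}"
    return f"ran {action} on {target}"

print(run_action("delete", "/project/cache"))
```

The design choice is that the unsafe path requires an extra, deliberate argument: an agent that forgets to ask gets a refusal string back, not an empty drive.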
So, does this mean we should avoid AI entirely?
Throughout this article, I have focused on the challenges of AI, but AI is still useful. I use it as a thinking partner: a way to get a second perspective when I need one, and to think through problems from a different angle. Working alongside AI is the best option. Handling big data is another major advantage: since we cannot memorise every pattern ourselves, we can hand that over to AI and understand what's happening in real time.