I was invited to a podcast some weeks ago and given a topic to discuss: our dependency on AI. It really caught my attention, because it is something our present world needs to address. Our world has evolved, endorsing and promoting the rapid growth of various technologies. In the span of 100–200 years, we have gone from a manual to a mechanical to a machine-based way of living. Tasks that once took hours, like drying your clothes, checking the grammar in your write-ups, or searching for books, are now much easier and faster. But over time, it seems these systems are starting to replace us rather than merely assist us. In this article, I'll be focusing on AI.

According to IBM, Artificial Intelligence (AI) is a technology that enables computers and machines to simulate human learning, comprehension, problem-solving, decision-making, creativity, and autonomy.

AI is meant to assist us, handling tedious tasks at a much faster rate, right? So why does it look like it's replacing us, doing what we ought to do ourselves? Could it be how we are using it?

In my sophomore year, there was this lecturer who was so fierce and was known to be one of the most feared lecturers in the whole faculty. He would always give the toughest questions to drill our minds. One question could take hours to solve, and sometimes you wouldn't even arrive at the correct answer. I vividly remember a time he gave us about twenty tough questions to solve, to be submitted the next week. Those questions were freaking tough. I had to consult textbooks, videos, and other materials to understand and solve each one. I spent hours on a single question, filling four pages and still not finishing. It was really crazy. At some point, I was tempted to use AI, but each time I consulted it, it gave me different answers to the same question, leaving me confused and unsure which answer to go for. I later found out that 80–90% of the students who did the assignment consulted AI, and close to 80% of their answers were wrong, not because using AI is inherently bad, but because AI is limited and prone to errors.

Well, of course, not everything works as perfectly as expected. AI was created by humans, so it operates within certain limits and can't always give us everything we want.

Major Disadvantages of Over-Dependency on AI

In this article, I'm going to outline four major disadvantages of over-dependency on AI that could threaten our society.

1. Diminished Human Decision-Making Skills

I agree that AI can process data and provide recommendations faster than humans, but when people lean too heavily on it, our natural decision-making skills begin to erode. For generations, humans have thrived on problem-solving, debating, and weighing consequences; those skills are what have solved our most pressing problems. Why should we limit our thinking to what a machine says? If every decision is outsourced to algorithms, there is a high chance that people may lose the ability to think critically, which could in turn lead to serious consequences.

Imagine workplaces where employers use AI to recruit, merely scanning CVs or résumés without conducting traditional interviews to understand each candidate's personality and ability to do the job. That could end in disaster for the company.

All I'm trying to point out here is that humans should always be included in every system of technology. AI can be good at offering suggestions and answering questions, but there should always be a human in the loop, taking charge of critical decisions in their respective fields.
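To make the hiring example concrete, here is a minimal sketch of what "human in the loop" can look like in code. Everything in it is hypothetical: the keyword score stands in for whatever model a company might actually use, and the point is only the shape of the pipeline, where the AI ranks candidates but never issues a final decision on its own.

```python
def ai_screen(cv_text):
    # Hypothetical AI step: a toy keyword score standing in for a real model.
    keywords = {"python", "teamwork", "testing"}
    return len(keywords & set(cv_text.lower().split())) / len(keywords)

def shortlist(candidates, top_n=2):
    # The AI only *orders* candidates for human attention; it never
    # rejects anyone outright on its own.
    return sorted(candidates, key=lambda c: ai_screen(c["cv"]), reverse=True)[:top_n]

def final_decision(candidate, interviewer_approves):
    # The critical decision stays with a person: the interviewer's
    # judgement, not the score, determines the outcome.
    return interviewer_approves(candidate)
```

Notice the design choice: the AI's output is an ordering, not a verdict, so a human can always overrule it.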

2. Vulnerability to AI Limitations and Errors

No AI system is perfect, because it's only as good as the data and algorithms that shape it. Bias in data, flaws in code, or unexpected scenarios can all lead to errors. The danger comes when humans blindly trust AI outputs without oversight. Take the example I gave earlier: the students who relied on AI to answer complex questions, and it couldn't get everything right. Just as we, the creators of AI, aren't perfect, neither is AI. Mistakes and errors are always bound to happen.

Imagine a doctor who diagnoses an illness using an AI application without properly examining the patient, relying on the system instead; he could miss subtle but deadly signs. The key takeaway is that AI should be supervised by humans who understand its limitations and can step in when errors occur.
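One common way to build that supervision in is to act on a model's output only together with its confidence, and to escalate anything uncertain to a human. The sketch below is purely illustrative: the 0.80 threshold is an assumption I chose for the example, not a clinical standard, and in both branches a doctor still makes the final call.

```python
def triage(ai_finding, confidence, threshold=0.80):
    # Route an AI diagnostic suggestion based on the model's confidence.
    # The threshold value is an illustrative assumption, not a medical one;
    # note that neither branch lets the AI act without a human.
    if confidence < threshold:
        return "uncertain: full human examination required"
    return f"suggests {ai_finding}: doctor to confirm before acting"
```

The point is not the numbers but the structure: low-confidence output is never silently trusted.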

3. Reduced Human Creativity and Innovation

This was one of the disadvantages I pointed out during the podcast, because gradually depending on these systems could reduce human creativity, authenticity, and innovation. AI can generate text, music, designs, and even research insights at impressive speed. While this is beneficial, over-reliance risks stifling human creativity. If we let AI handle all artistic or innovative tasks, humans may become complacent and lose the drive to imagine beyond the machine's output.

For instance, students who copy-paste AI-generated essays may graduate with weak writing and reasoning skills. Artists and writers using AI art or writing tools might skip the effort of developing unique styles. Businesses that rely solely on AI for product ideas risk producing generic, uninspired work that lacks a human touch. Each of these shortcuts carries its own cost.

Moreover, AI's creativity is derivative — it recombines patterns from data it has seen before. True innovation, however, often comes from intuition, cultural context, or even accidents — things machines don't experience. The airplane, the theory of relativity, or the invention of the internet all came from bold human leaps, not data-driven extrapolation.

Remember this: AI was created to assist us and offer recommendations or suggestions when necessary, not to replace us.

4. Privacy, Security, and Ethical Concerns

The last and most concerning: privacy, security, and ethical concerns. AI gives us answers and recommendations based on the data fed into it. But have you ever wondered where that data comes from? Who accesses it? Whose data is being used? Well, it's ours. This makes it deeply concerning in terms of security and the handling of sensitive information.

AI thrives on data, and the more we depend on it, the more personal and sensitive information we hand over. This creates significant risks:

  • Privacy: AI-driven apps (like recommendation systems or smart assistants) collect massive amounts of personal data. If that data is misused or breached, users are exposed to surveillance and exploitation.
  • Security: Cybercriminals are increasingly weaponizing AI to create deepfakes, phishing emails, or automated hacking attempts. Heavy reliance on AI could make systems more vulnerable to sophisticated attacks.
  • Ethics: AI decisions lack moral reasoning. For instance, an autonomous drone programmed to eliminate "targets" might do so without weighing ethical consequences. Similarly, an AI hiring system might filter applicants purely on numbers, ignoring the human stories behind them.

In effect, we are trading our data for convenience. As AI integrates deeper into society, we must demand transparency, accountability, and strong ethical boundaries.

In Conclusion

After the assignment was submitted, it turned out that few students performed as well as expected, especially since the lecturer expected the work to be done the way he had taught it in class, to confirm we truly understood his lessons. I was glad that I didn't depend on AI like some other students did, but instead chose to solve the problems and think critically myself. That effort later made the exam questions feel less difficult.

I am not condemning AI, but rather addressing certain harms it could impose on society, and how over-reliance on it can lead to more severe consequences.

AI is powerful, but over-dependence weakens our judgment, creativity, and control, and takes more of our data. The more we let machines think for us, the less capable we become of thinking for ourselves.

Let AI assist, not replace. Trust it, but always question it. Because if we surrender too much, we risk losing the very human insight that makes technology meaningful.

The warning is clear: AI should empower us — not make us powerless.

What are your thoughts on AI dependency? I'd love to hear your perspectives in the comments. If you found this valuable, consider sharing it with your network and following for more insights👍.