Early in my journey into technology, I still remember sitting in a controlled Parikshak environment for an internal evaluation at NCST (now CDAC). While the problem statement itself has faded, the experience remains vivid. In a span of 45 to 60 minutes, I had to understand a data-related problem, design a solution, write the code, test it, deploy it, and submit it.
Once submitted, the code was evaluated automatically against six or seven different test cases, each designed to validate correctness across various permutations and combinations. If all test cases passed, I moved forward. If not, it meant a backlog.
That one hour defined everything — clarity of thought, problem-solving ability, understanding of fundamentals, and the ability to perform under pressure.
When I saw that my code had passed all the test cases, I jumped out of my chair. I looked at my friends with a mix of relief and pride. For the first time, I clearly felt it: wow, I can do this. That moment changed something inside me. From there, my love for programming evolved… and evolved… and evolved — bringing us to a present where we describe what we want, and systems produce results in minutes.
The World That Shaped How We Build
Around that same time, I was introduced to the Software Development Life Cycle — the classic Plan, Analyze, Design, Build, Test, Deploy model. These weren't just steps in a process; they were foundational lessons. We were taught that this was the correct and responsible way to build software.
Each phase had defined deliverables. Requirements documents, design artifacts, test plans, deployment guides — every artifact had a purpose. Over time, the names of these artifacts changed. Methodologies evolved. Tools improved. But the underlying reason for their existence remained the same.
The SDLC existed because of humans.
Humans played different roles across different phases. Humans joined and left teams. Humans needed shared understanding, continuity, and clarity. Documents were created by humans, for humans, so that other humans could understand what needed to be built, tested, deployed, operated, and scaled.
These artifacts were not accidental. They were created to help humans collaborate across time, roles, and responsibilities.
The Inflection Point
Today, we stand at a clear inflection point — one where the fundamentals of how software is built have changed.
The 60 minutes that once defined an engineer's capability have now shrunk to seconds or minutes. Code and designs are generated almost instantly. Change is no longer planned weeks in advance — it happens continuously.
We moved from the Waterfall model to Agile. Then to Hybrid Agile. But what we are seeing now does not fit neatly into any of these categories. This is not Agile in sprints. This is Agile every day, every hour — sometimes every minute. Systems can adapt to new inputs almost immediately.
As large language models become cheaper and more accessible, this pace will only accelerate. More people will use AI to build code and entire products. The barrier to creation is collapsing.
Which brings us to a set of critical questions.
Do we still need to produce and maintain the same artifacts for months or years? How often are these documents actually referenced? And by whom?
In a world where answers are a click away, where context can be reconstructed instantly, and where systems can explain themselves, static documentation begins to lose its central role.
Four Lenses: People, Process, Technology, and Containment
To understand where we are heading, this transformation must be viewed through four lenses: people, process, technology, and containment.

Among these, people are the most critical.
People: From Builders to Governors of Autonomous Systems
Today, with web-based coding, low-code platforms, and AI-assisted development, almost anyone can become a programmer. Millions already build and share code on platforms like GitHub. With AI-driven development, that number could scale tenfold or even hundredfold.
Business analysts, product owners, and domain experts can now assemble features and build solutions end to end. This democratization is powerful — but it also introduces risk.
When you place a person inside an automated, autonomous system — where everything appears to work seamlessly — what happens when something fails?
AI can do many things, but complete dependence on AI is neither realistic nor safe. There will always be edge cases, unexpected failures, and ambiguous states where automation alone is insufficient.
Imagine placing someone who does not know how to drive inside an autonomous car. As long as everything works perfectly, the journey is smooth. But if the system fails and requires human intervention, that person may be unable to respond. What could have been a minor issue can quickly escalate into a serious failure.
The same applies to autonomous software systems.
As we move forward, we will need people who deeply understand how these systems work — not to do everything manually, but to govern, supervise, debug, and recover autonomous agents. These individuals will play a critical role in ensuring reliability, safety, and resilience.
Process: From Prescriptive Workflows to Adaptive Feedback Loops
From a process perspective, as we move into an AI-first SDLC, the very nature of software development changes. Code is no longer produced in discrete cycles or scheduled sprints. It is generated continuously — sometimes every minute. Teams no longer wait for handoffs; they work on refining intent in real time. Code and tests are generated in parallel, not sequentially.
Much of this execution happens autonomously.
In this new world, traditional process checkpoints lose relevance. What becomes critical is not controlling each step, but observing the system as it operates end to end. The focus shifts from managing phases to monitoring workflows — from intent intake, through agent execution, validation, and deployment.

This makes process visibility the single most important control mechanism.
Dashboards become the new process layer. They are no longer status trackers for phases; they are real-time observability systems that reflect how agents behave, how intent is interpreted, how validation evolves, and where risk accumulates.
An effective process dashboard in an AI-first SDLC must:
- Monitor the full lifecycle of agent execution
- Surface intent fulfillment progress, not task completion
- Highlight validation confidence and drift
- Expose guardrail breaches and policy violations
- Track stability indicators such as rollback readiness, token usage, and hallucination corrections
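As a minimal sketch, the signals above could be modeled as a single observability record per agent run. Every name here (`AgentRunStatus`, the field names, the 0.8 confidence threshold) is an illustrative assumption, not an existing dashboard API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRunStatus:
    """Illustrative snapshot of one agent run for a process dashboard."""
    run_id: str
    intent_fulfillment: float      # 0.0-1.0, progress toward the stated intent
    validation_confidence: float   # 0.0-1.0, confidence in generated validations
    validation_drift: float        # change in confidence vs. previous runs
    guardrail_breaches: list = field(default_factory=list)
    rollback_ready: bool = False
    tokens_used: int = 0
    hallucination_corrections: int = 0

    def risk_flags(self) -> list:
        """Derive the simple risk indicators a dashboard might surface."""
        flags = list(self.guardrail_breaches)
        if self.validation_confidence < 0.8:
            flags.append("low-validation-confidence")
        if not self.rollback_ready:
            flags.append("no-rollback-path")
        return flags

status = AgentRunStatus("run-42", intent_fulfillment=0.7,
                        validation_confidence=0.65, validation_drift=-0.1)
print(status.risk_flags())  # both risk conditions trip for this run
```

The point of the sketch is that the dashboard watches derived risk, not phase completion: the record says nothing about which "stage" the work is in, only how trustworthy and recoverable the run currently is.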
The role of process, therefore, is no longer to slow systems down or enforce rigid gates. Its role is to ensure stability, predictability, and trust in a continuously running, autonomous environment.
In an AI-first SDLC, process does not disappear — it evolves into orchestration, observability, and governance, enabling humans to supervise autonomous systems with confidence.
Technology: From Center Stage to Enabler
Over the last six decades, enterprise technology has evolved in distinct waves — beginning with centralized mainframe computing, moving through client–server architectures and the internet-driven dot-com era, and then transitioning into distributed, API-driven systems powered by cloud computing. This shift enabled service-based models such as SaaS, PaaS, and Security as a Service, fundamentally changing how software is built, deployed, and scaled.
Each phase reshaped how software was built.
And then came Generative AI.
This is not just another step in the same evolutionary path. GenAI has disrupted how we think about software itself. The focus is shifting from how software is built to what needs to be achieved.
This raises an important question: does technology really matter anymore?
At the end of the day, what organizations want is a working product — one that performs well, can withstand load, scales efficiently, and can be ported easily across cloud providers, whether AWS, GCP, or Azure.
As we look ahead, the ways to build software will only multiply. Java, .NET, Python, TypeScript — the options are already vast and will continue to expand. What truly matters is not the technology itself, but how quickly it enables outcomes.
Technology choices should be driven by:
- Speed of development
- Scalability
- Deployment efficiency
- Measurable return on investment
In the future, we may reach a point where switching from one tech stack to another can be done in minutes. Technology becomes transient. Intent remains durable.
What matters is how well we meet the intent, how effectively we scale, how we perform under load, and how we measure success.
Containment: Redefining Control in an AI-First SDLC
In the traditional SDLC world, the word containment had a very specific and limited meaning. It usually referred to containing defects — preventing defect leakage, enforcing code reviews, addressing review comments, and putting mitigation mechanisms in place to ensure issues stayed within acceptable limits.
Containment was reactive. It was localized. And it was largely technical.
In an AI-first SDLC, containment takes on a much broader and more critical meaning. It is no longer about keeping metrics within thresholds. It is about defining and enforcing boundaries for autonomous systems.

In this new world, containment is proportional to capability. As AI systems become more powerful, more autonomous, and more deeply embedded in the software lifecycle, the scope of what must be contained expands dramatically.
Requirement Containment
The first dimension of containment starts at the requirements level.
AI systems must not invent requirements beyond what has been explicitly approved. They should operate strictly within a defined domain of intent. Any interpretation, expansion, or derivation must remain bounded by organizational constraints and business approval.
Without this, systems risk drifting — optimizing for outcomes that were never intended or approved.
Data Containment
From a data perspective, containment is non-negotiable.
AI systems must not leak sensitive information, training data, or proprietary knowledge. They must only access and reason over the knowledge base they are explicitly allowed to see. The boundary between permitted knowledge and restricted data must be clear, enforced, and auditable.
This is not just a compliance requirement — it is foundational to trust.
Model Containment
Model behavior itself must be contained.
AI models should not behave unpredictably. They must remain aligned to their defined purpose and constraints. Correctness, consistency, and bounded behavior are essential. An intelligent system that occasionally behaves unexpectedly is not intelligent — it is dangerous.
Tool Containment
As AI agents gain access to tools, containment becomes even more critical.
Agents should not have unrestricted tool access. They must be limited to a defined, approved set of tools. They should never execute actions blindly or invoke dangerous capabilities outside their scope.
Tool access must be intentional, governed, and constrained.
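A deny-by-default allow-list is the simplest way to make tool access intentional. The sketch below assumes hypothetical tool names and a hypothetical `invoke_tool` entry point; it is not a real agent framework:

```python
# Tool containment sketch: agents may only invoke tools on an explicit
# allow-list; anything else is rejected before it ever executes.

APPROVED_TOOLS = {"read_docs", "run_tests", "open_pull_request"}

class ToolAccessError(Exception):
    pass

def invoke_tool(agent_id: str, tool: str, *args):
    if tool not in APPROVED_TOOLS:
        # Deny by default: unknown or dangerous tools never run.
        raise ToolAccessError(f"{agent_id} is not permitted to call {tool!r}")
    print(f"{agent_id} -> {tool}{args}")

invoke_tool("agent-7", "run_tests")          # allowed
# invoke_tool("agent-7", "drop_database")    # would raise ToolAccessError
```

The design choice worth noting: the check happens at the single choke point through which every tool call flows, so governance does not depend on each agent behaving well.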
Code and Artifact Containment
From a code generation perspective, AI must operate within policy-driven boundaries.
Any code or artifact generated should follow organizational standards that have been built and refined over years for good reason. This includes:
- Lint checks
- Security scans
- Code reviews
- Controlled merges
AI must never bypass these engineering guardrails. Automation does not mean exemption from discipline.
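The guardrails above can be composed into a single merge gate that AI-generated code must pass just like human-written code. The check functions below are illustrative stubs standing in for real lint, security, and review tooling:

```python
# Policy-driven containment sketch for generated code: a change only
# merges after every gate passes. The checks are toy stand-ins.

def lint_ok(diff: str) -> bool:
    return "eval(" not in diff            # stand-in for a real linter

def security_scan_ok(diff: str) -> bool:
    return "password=" not in diff        # stand-in for a real scanner

def review_approved(diff: str) -> bool:
    return True                           # stand-in for a human review record

def may_merge(diff: str) -> bool:
    # AI never bypasses the gates; all must pass for a controlled merge.
    return all(check(diff) for check in (lint_ok, security_scan_ok,
                                         review_approved))

print(may_merge("def total(xs): return sum(xs)"))   # True
print(may_merge("db.connect(password='hunter2')"))  # False
```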
Test and Validation Containment
Testing introduces another critical containment dimension.
When AI generates tests and validation outcomes, it must not hallucinate correctness. Validation must be accurate, transparent, and reproducible. In an intent-driven SDLC, the validation or assessment score becomes a primary KPI — but only if it is trustworthy.
Humans will trust and act on these results. That trust must be earned through accuracy and explainability.
Deployment Containment
Deployment is where containment becomes most visible — and most risky.
AI systems must never push unsafe changes to production. Deployments must be:
- Controlled
- Reversible
- Governed by human approval where required
There should be clear mechanisms for rollback, automated safeguards, and human-in-the-loop checkpoints. Even in highly autonomous systems, production remains sacred.
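One way to sketch such a deployment gate: no change ships without a recorded rollback target, and risky changes additionally wait for a human. The function and its parameters are assumptions for illustration, not a real release tool:

```python
from typing import Optional

# Deployment containment sketch: reversibility first, then human approval
# for anything flagged as risky.

def deploy(change_id: str, risky: bool, human_approved: bool,
           rollback_target: Optional[str]) -> str:
    if rollback_target is None:
        return "blocked: no rollback target recorded"
    if risky and not human_approved:
        return "blocked: awaiting human approval"
    return f"deployed {change_id} (rollback -> {rollback_target})"

print(deploy("chg-101", risky=True, human_approved=False, rollback_target="v1.8"))
print(deploy("chg-101", risky=True, human_approved=True,  rollback_target="v1.8"))
```

Note the ordering: the rollback check comes before the approval check, because even an approved change should never reach production without a way back.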
Agent Guardrails
At the agent level, containment means guardrails.
Agents should not run endlessly. There must be limits on:
- Execution steps
- Token usage
- API calls
Infinite loops, runaway costs, or uncontrolled execution are unacceptable. Autonomy without limits is instability.
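These limits can be enforced as a hard execution budget that the agent loop charges against on every step. The class and limit values below are an illustrative sketch, not a real agent runtime:

```python
# Agent guardrail sketch: a runaway loop is cut off the moment any
# budget (steps, tokens, API calls) is exhausted.

class BudgetExceeded(Exception):
    pass

class ExecutionBudget:
    def __init__(self, max_steps: int, max_tokens: int, max_api_calls: int):
        self.limits = {"steps": max_steps, "tokens": max_tokens,
                       "api_calls": max_api_calls}
        self.used = {k: 0 for k in self.limits}

    def charge(self, kind: str, amount: int = 1):
        self.used[kind] += amount
        if self.used[kind] > self.limits[kind]:
            raise BudgetExceeded(f"{kind} limit of {self.limits[kind]} exceeded")

budget = ExecutionBudget(max_steps=100, max_tokens=50_000, max_api_calls=20)
try:
    while True:                # a runaway agent loop...
        budget.charge("steps") # ...is stopped by the budget, not by luck
except BudgetExceeded as e:
    print(e)
```

Because the budget raises rather than merely warns, an infinite loop becomes a bounded, observable failure instead of unbounded cost.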
Observability and Auditability
Every AI decision must be observable, explainable, and auditable.
There should be full traceability of:
- Decisions made
- Actions taken
- Data accessed
- Tools invoked
Without observability, containment cannot be verified. Without auditability, trust cannot exist.
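Concretely, this traceability can be as simple as an append-only log of structured events covering all four categories above. The field names are illustrative assumptions:

```python
import json
import time

# Observability sketch: every decision, action, data access, and tool
# invocation becomes one structured, replayable audit event.

audit_log = []

def record(run_id: str, kind: str, detail: str):
    audit_log.append({"ts": time.time(), "run_id": run_id,
                      "kind": kind, "detail": detail})

record("run-42", "decision", "chose model retraining over a rule change")
record("run-42", "tool",     "invoked run_tests")
record("run-42", "data",     "read the approved sales knowledge base")

# An auditor can reconstruct exactly what happened, in order.
for event in audit_log:
    print(json.dumps(event))
```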
Organizational Containment
Finally, containment must exist at the organizational level.
Today, teams use AI in wildly different ways. Some follow best practices — context engineering, validation, hallucination checks. Others do not. This inconsistency creates risk.
Organizations need enterprise-wide AI capabilities:
- AI policy engines
- Role-based AI access
- Standardized guardrails
- Reusable governance patterns
These lessons must be applied consistently across every agentic application built within the organization.
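Role-based AI access, for instance, can live in one shared policy table that every agentic application consults, rather than in per-team ad hoc rules. The roles and capabilities below are assumptions for illustration:

```python
# Organizational containment sketch: a single enterprise-wide policy
# table mapping roles to the AI capabilities they may use.

AI_POLICY = {
    "business_analyst": {"draft_requirements", "generate_reports"},
    "engineer":         {"generate_code", "generate_tests", "draft_requirements"},
    "sre":              {"generate_runbooks", "approve_deploy"},
}

def allowed(role: str, capability: str) -> bool:
    # Unknown roles get no capabilities: deny by default.
    return capability in AI_POLICY.get(role, set())

print(allowed("engineer", "generate_code"))          # True
print(allowed("business_analyst", "generate_code"))  # False
```

Centralizing the table is what makes the governance pattern reusable: changing a policy once changes it for every application.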
What Containment Really Means
To summarize, containment in an AI-first SDLC is about boundaries.
Boundaries that define:
- What AI can know
- What it can do
- How far it can go
- When humans must intervene
AI must operate inside these boundaries — not outside them. And critically, humans must own and control these boundaries. Containment is no longer about limiting defects. It is about enabling autonomy safely, responsibly, and predictably.
In an autonomous future, containment is not a constraint on innovation. It is what makes innovation sustainable.
Rethinking SDLC: From Phases to Intent
So what does SDLC look like in this new world?

It begins with a shift from process-driven development to intent-driven execution.
You no longer start with detailed requirement documents. You start with intent.
For example, in a retail organization, the intent might be simple:
Increase sales by 5% this quarter.
Now imagine a system with a persistent knowledge base — one that understands the product end to end, across years of evolution. When this intent is provided to an agentic AI system, it can correlate historical data, customer behavior, architecture, and past outcomes to derive business requirements automatically.
From there, the system can act.
It may retrain recommendation models, improve personalization, optimize customer journeys, or adjust offerings based on shopping trends. Each agent learns continuously, understanding customers better over time.
Code evolves rapidly. Models are retrained. Configurations adapt.
And testing no longer waits for coding to finish.
Because requirements are derived upfront from intent, test cases and validation scenarios can be generated in parallel. Validation becomes continuous, not a downstream phase.
A New Measure of Success
In this new SDLC, success is no longer defined by completing phases or producing artifacts.
The primary KPI becomes the validation or assessment score — a measurable indicator of how well the system fulfills the original intent.
The higher the score, the greater the confidence that:
- The intent was correctly interpreted
- The system adapted appropriately
- The business outcome was achieved
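One simple way to picture this KPI is as a weighted mean of independent checks against the original intent. The check names, scores, and weights below are illustrative assumptions, not a prescribed scoring model:

```python
# Sketch of a validation/assessment score: each check contributes a
# 0..1 score with a weight reflecting how much it matters.

def assessment_score(checks: dict) -> float:
    """checks maps check name -> (score between 0 and 1, weight)."""
    total_weight = sum(w for _, w in checks.values())
    return sum(s * w for s, w in checks.values()) / total_weight

score = assessment_score({
    "intent_interpreted": (0.95, 3),  # did we build what was asked?
    "system_adapted":     (0.80, 2),  # did behavior change appropriately?
    "outcome_achieved":   (0.60, 5),  # did sales actually move?
})
print(round(score, 3))
```

Weighting the business outcome most heavily reflects the shift the section describes: fulfilling the intent matters more than completing any individual activity.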
This marks a fundamental shift.
The SDLC is no longer a linear sequence of steps. It becomes a living loop — one that continuously interprets intent, adapts systems, validates outcomes, and improves itself over time.
This is not just an evolution of software development practices. It is a redefinition of how software is conceived, built, governed, and trusted in an autonomous world.
And through all this change, one truth remains constant: while technology accelerates exponentially, human understanding, oversight, and responsibility remain irreplaceable.