Introduction
Each year, Gartner releases its "Top Strategic Technology Trends," offering a high-level view of where enterprise technology is heading. But the real signals, the ones that actually change how security teams operate, tend to sit deeper in its "Predicts" research.
One of those signals is easy to miss, but hard to ignore once you see it: third-party cyber risk management (TPCRM) is quietly breaking under the AI era, rather than evolving to meet it.
The prevailing model, built on vendor questionnaires, periodic assessments, and the assumption that risk can be understood upfront, is being stress-tested by two forces at once:
- increasingly dynamic, interconnected supply chains
- and the rapid adoption of generative AI on both sides of the assessment process
The result is something more dangerous than mere inefficiency; it's a growing gap between how confident organizations feel about third-party risk and how well they actually understand it.
In its report "Predicts 2026: Third-Party Cybersecurity Risk Management Evolves for the AI Era," Gartner points to many of the underlying shifts driving this gap. Taken together, they suggest a more fundamental conclusion:
AI is accelerating the failure of the old model of third-party risk management, not fixing it.
To understand what comes next, we need to look beyond faster questionnaires and incremental improvements; we need to rethink what "managing third-party risk" actually means in an AI-driven environment.
The Core Challenge: Speed Without Insight
Third-party cyber risk management is facing a fundamental crisis. Organizations are experiencing a surge in breaches originating from their vendor ecosystems, yet the tools they rely on haven't evolved to match the threat landscape. According to Gartner, 62% of organizations still place excessive trust in due diligence questionnaires to inform their risk decisions.
This is a problem, because those questionnaires are increasingly generated by AI and evaluated by AI, so the entire process is getting faster while becoming less reliable.
This creates a dangerous illusion of progress. Organizations believe they're becoming more efficient and data-driven, when in reality, they may be making decisions based on increasingly unreliable inputs.
The implication is clear: speed is improving, but insight isn't keeping pace.
The AI Acceleration Paradox
Generative AI is rapidly transforming how third-party risk assessments are completed and reviewed. Gartner predicts that by 2028, 70% of organizations and their vendors will be using GenAI on both sides of the questionnaire process. Vendors will use it to generate responses, and security teams will use it to analyze them.
On the surface, this looks like progress, and in one narrow sense, it is.
Questionnaires that once took weeks to complete can now be turned around in hours. Security teams can process responses from hundreds of vendors without the same operational bottlenecks. Throughput increases, backlogs shrink, metrics improve.
But as noted earlier, the speed of assessment is improving while the quality of insight remains largely unchanged.
Third-party questionnaires remain what they have always been: self-reported, point-in-time representations of a vendor's security posture.
AI doesn't change that underlying limitation. It just makes the process more efficient.
And when both sides rely on AI, vendors generating responses and enterprises analyzing them, the process risks becoming increasingly detached from reality. What looks like a richer signal is often just faster synthesis of the same underlying assumptions.
The real danger is that AI amplifies existing problems while masking their impact.
Organizations see improved cycle times and assume they're making better risk decisions, but in reality, they may just be moving faster through a model that was never designed to capture how third-party risk actually behaves.
Faster onboarding doesn't mean better risk management; it just means you can be wrong at scale.
The Output Degradation Problem
Perhaps the most concerning trend Gartner identifies is what it calls "output degradation": a phenomenon in which AI-generated content is analyzed by AI systems, creating a cycle where errors compound over time.
Think of it like a photocopy of a photocopy. Each generation introduces subtle distortions.
When vendors use AI to generate questionnaire responses, those responses contain patterns, assumptions, and artifacts. When security teams then use AI to analyze those responses, those same patterns can be reinforced, misinterpreted, or amplified.
Over time, this creates a gradual loss of signal, a growing disconnect between what the system reports and what is actually happening in a vendor's environment.
This is where the risk becomes operational.
Decisions about vendor onboarding, risk acceptance, and remediation are increasingly based on outputs that may be internally consistent but externally inaccurate.
In other words, the process becomes not just faster but self-referential, and that's far more dangerous than being slow.
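To make the photocopy dynamic concrete, here is a toy simulation (not Gartner's model; the error rate and pass-answer bias are illustrative assumptions) of questionnaire answers passing through repeated AI hand-offs. Each hand-off is mostly faithful, but its errors skew toward the answer the process rewards: "compliant."

```python
# Toy sketch of "output degradation": each AI hand-off re-encodes the
# previous answers with a small error rate, and errors are biased toward
# reporting a control as met. FLIP_RATE and PASS_BIAS are illustrative
# assumptions, not measured values.
import random

random.seed(7)  # fixed seed so the run is reproducible

NUM_CONTROLS = 1000   # hypothetical controls covered by a questionnaire
FLIP_RATE = 0.05      # chance each hand-off misstates a given control
PASS_BIAS = 0.8       # misstatements skew toward "control is met"

# Ground truth: roughly 70% of controls are actually in place.
ground_truth = [random.random() < 0.7 for _ in range(NUM_CONTROLS)]

def rewrite(answers):
    """One AI hand-off: mostly faithful, but errors favor 'pass'."""
    out = []
    for answer in answers:
        if random.random() < FLIP_RATE:
            out.append(random.random() < PASS_BIAS)  # biased error
        else:
            out.append(answer)                       # copied faithfully
    return out

def accuracy(answers):
    """Fraction of answers that still match the vendor's real posture."""
    return sum(a == t for a, t in zip(answers, ground_truth)) / NUM_CONTROLS

answers = list(ground_truth)
for generation in range(1, 6):
    answers = rewrite(answers)  # vendor GenAI, then reviewer GenAI, ...
    print(f"after hand-off {generation}: "
          f"accuracy {accuracy(answers):.2f}, "
          f"reported pass rate {sum(answers) / NUM_CONTROLS:.2f}")
```

Run it and accuracy against ground truth tends to degrade with each hand-off, while the reported pass rate drifts toward the bias rather than the truth: the answers remain internally consistent while growing externally inaccurate.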
Where AI Actually Adds Value
Despite these risks, the answer is to apply AI more deliberately, not to avoid it.
The highest value of AI in third-party risk management lies in enabling continuous visibility, detection, and response.
Used well, AI can help organizations:
- Scale monitoring across large vendor ecosystems
- Detect control drift after onboarding
- Surface weak signals and emerging patterns
- Prioritize investigation and response efforts
This shifts human effort away from repetitive documentation tasks toward higher-value work: incident response planning, dependency mapping, and real-time decision-making during vendor incidents. AI should be used to improve awareness, not merely accelerate documentation.
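As a concrete illustration of the "detect control drift" item above, the sketch below compares an externally observed snapshot of a vendor's posture against the baseline captured at assessment time. The signal names, values, and thresholds are invented for the example, not taken from any particular product.

```python
# Minimal sketch of post-onboarding drift detection: diff each new
# externally observed snapshot against the onboarding baseline and
# surface findings for prioritized review. All fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Snapshot:
    tls_min_version: str
    open_admin_ports: int
    days_since_patch: int

# Posture recorded when the vendor passed the initial assessment.
BASELINE = Snapshot(tls_min_version="1.2", open_admin_ports=0, days_since_patch=14)

def drift_findings(observed: Snapshot, baseline: Snapshot = BASELINE) -> list[str]:
    """Return human-readable findings where observed posture has drifted."""
    findings = []
    # Lexicographic compare is adequate for these toy version strings.
    if observed.tls_min_version < baseline.tls_min_version:
        findings.append(f"TLS floor lowered to {observed.tls_min_version}")
    if observed.open_admin_ports > baseline.open_admin_ports:
        findings.append(f"{observed.open_admin_ports} admin port(s) newly exposed")
    if observed.days_since_patch > 30:  # illustrative patch-SLA threshold
        findings.append(f"patch cadence slipped ({observed.days_since_patch} days)")
    return findings

# A later observation that would trigger investigation, no reassessment needed.
latest = Snapshot(tls_min_version="1.0", open_admin_ports=2, days_since_patch=45)
for finding in drift_findings(latest):
    print("DRIFT:", finding)
```

The point of the sketch is the shape of the workflow: the questionnaire answer is a one-time baseline, while the observed snapshots keep arriving and the diff, not the document, drives attention.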
This is also where many organizations are starting to rethink their tooling.
Instead of relying solely on vendor-provided information, they're complementing it with externally validated, continuously updated signals: observing how third-party code behaves in real environments, how dependencies change over time, and where new exposures emerge without waiting for a reassessment cycle.
This shift, from self-reported posture to observed behavior, is what makes continuous monitoring meaningful, rather than just more frequent data collection.
The Convergence of Cyber GRC and Third-Party Risk
Another major shift is the breakdown of silos between cyber GRC (governance, risk, and compliance) and third-party risk management.
Historically, these functions have operated separately, with different tools, workflows, and reporting structures. That separation made sense when third-party risk was treated as a discrete process, but it no longer does.
Third-party risk is now inseparable from overall enterprise risk. When a vendor is compromised, the impact spans systems, data, compliance obligations, and business operations.
Maintaining separate systems for these domains creates:
- fragmented visibility
- slower response times
- unclear ownership
Integration changes that. When TPCRM and GRC are aligned, organizations gain:
- a unified view of risk exposure
- faster incident response coordination
- clearer accountability across teams
More importantly, they can understand not just whether a vendor is risky, but how that risk propagates across the enterprise.
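A minimal sketch of what "risk propagation" can mean in practice, assuming a hypothetical internal dependency graph: with a unified view, a vendor compromise can be traced to every downstream system in one traversal, rather than reconciled across separate TPCRM and GRC tools.

```python
# Toy blast-radius query over a dependency graph. The graph contents
# are hypothetical; in a real deployment they would come from an asset
# inventory or CMDB shared between TPCRM and GRC.
from collections import deque

DEPENDS_ON = {  # edges point from a component to what relies on it
    "vendor:PayCo": ["billing-api"],
    "billing-api": ["customer-portal", "finance-reporting"],
    "customer-portal": [],
    "finance-reporting": ["quarterly-close"],
    "quarterly-close": [],
}

def blast_radius(compromised: str) -> set[str]:
    """Everything downstream of a compromised third party (BFS)."""
    seen, queue = set(), deque([compromised])
    while queue:
        node = queue.popleft()
        for downstream in DEPENDS_ON.get(node, []):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return seen

print(sorted(blast_radius("vendor:PayCo")))
```

The same query answers both the security question ("which systems need containment?") and the compliance question ("which obligations are affected?"), which is the practical payoff of keeping the two domains on one data model.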
From Prevention to Resilience
The most important shift is philosophical. Traditional TPCRM is built on a prevention mindset: the idea that thorough due diligence can stop third-party incidents before they happen. That assumption no longer holds.
Modern supply chains are too complex, too dynamic, and too interconnected for upfront assessments to provide lasting assurance. Organizations need to shift to a resilience-based model that accepts that:
- vendors will be compromised
- control environments will change
- new risks will emerge after onboarding
You can no longer expect to prevent every incident, but you can detect issues early, respond effectively, and minimize their impact.
This requires investment in continuous monitoring, clear response workflows, and cross-functional coordination.
Prevention still matters, but it's no longer the center of the strategy.
The Path Forward
Gartner's recommendations point toward a fundamentally different approach:
- Stop automating outdated processes
- Invest in continuous monitoring (particularly approaches that provide independent visibility into third-party behavior, not just refreshed vendor inputs)
- Integrate TPCRM with broader cyber GRC
- Build for resilience, not just prevention
- Apply AI where it improves insight, not just speed
Taken together, these shifts represent a different operating model, one that reflects how third-party risk actually behaves in the real world.
Conclusion
Gartner's message is clear: the traditional, questionnaire-driven model of third-party risk management is under strain, but the deeper implication is more uncomfortable: