A write-up on driver abuse, HVCI, and why signed kernel code remains one of the hardest defensive problems on Windows
Over the last several months, I have been working through a driver-focused security research project, conducted in a controlled lab and in authorized testing environments. The work started as a familiar bring-your-own-vulnerable-driver (BYOVD) problem, but it evolved into something more interesting: a practical case study in how a single signed driver flaw can support multiple exploitation strategies depending on the target system's protection posture.
I am intentionally not naming the driver, publishing hashes, sharing IOCTL values, or releasing operational details in this article. The goal is not to hand a recipe to malicious actors. The goal is to explain the research process, the defensive implications, and why "just enable HVCI" is an important mitigation, but not the end of the story.
The short version is this:
A signed, trusted kernel driver exposed a dangerous kernel memory primitive. Once that primitive existed, the rest of the work became an exercise in adapting to the environment.
On systems without HVCI, kernel memory access could be turned into kernel execution. On systems with HVCI, kernel execution was meaningfully constrained, but data-only tampering remained a serious concern. That distinction matters for defenders, vendors, and anyone responsible for hardening Windows fleets.
Background: BYOVD is not new, but the details matter
Bring-your-own-vulnerable-driver attacks are well known. An attacker with sufficient privileges loads or abuses a signed driver with unsafe behavior, then uses that driver to interact with kernel memory or privileged kernel APIs. This can be used to disable security tools, bypass protected process boundaries, manipulate kernel objects, or create a path to deeper persistence and control.
That description is simple. The reality is not.
Not all BYOVD cases are equal. Some drivers expose narrow bugs that are difficult to weaponize. Others expose broad primitives that are functionally equivalent to arbitrary kernel read/write. The difference between those two categories is massive.
In this research, the driver behaved like the second category. It exposed a direct kernel virtual-address read/write primitive through its user-accessible device interface. That means a user-mode caller with the right level of local privilege could ask the driver to read from or write to arbitrary kernel addresses without the validation that should surround such operations.
From a defensive perspective, that is the critical moment. Once an attacker can reliably read and write kernel memory, the system's trust boundaries change dramatically.
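The missing check is easy to describe. As a platform-neutral sketch (the constant and function names here are illustrative, not taken from the driver in question), the minimum a handler should do before honoring a caller-supplied address and length looks like this. A real Windows driver would additionally rely on kernel-mode helpers such as ProbeForRead/ProbeForWrite inside structured exception handling.

```python
# Illustrative model only -- not driver code. This sketches the bounds check a
# request handler must apply before honoring a caller-supplied address. The
# ceiling below is the documented x64 Windows user-mode address limit.
USER_SPACE_LIMIT = 0x0000_7FFF_FFFF_FFFF  # highest user-mode virtual address (x64)

def request_is_safe(addr: int, length: int) -> bool:
    """Reject any request whose range could touch kernel virtual addresses."""
    if addr < 0 or length <= 0:
        return False
    end = addr + length - 1  # last byte the request would touch
    # In C this sum could wrap around; Python ints cannot, but the guard is
    # kept to mirror the integer-overflow check a native handler needs.
    if end < addr:
        return False
    return end <= USER_SPACE_LIMIT  # entire range must stay in user space
```

A driver that omits this class of validation, or its kernel-mode equivalent, hands any sufficiently privileged local caller exactly the arbitrary kernel read/write primitive described above.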
The research path: three generations of the same idea
This project evolved through three broad phases.
Phase 1: Classic BYOVD chain
The first approach was the most traditional. Use the signed vulnerable driver as a kernel memory primitive, temporarily modify code-integrity state, load a small secondary helper driver, perform the privileged operation from kernel context, then restore the modified state.
This worked in the lab, but it had an obvious weakness: loading an unsigned helper driver is a loud event. Modern security products are very interested in driver-load activity, especially when it intersects with code integrity, protected processes, and endpoint protection components.
This phase proved impact, but it also exposed a telemetry problem. If I were defending against this, driver-load behavior would be one of the first places I would look.
Phase 2: Eliminate the secondary driver
The second approach removed the helper driver entirely.
Instead of using the vulnerable driver to enable a new unsigned driver, I used the vulnerable driver as the execution vehicle. At a high level, the idea was to repurpose the already-loaded signed driver's own execution path, stage a small kernel routine into appropriate executable driver memory, route a controlled request into that routine, and then clean up afterward.
This removed several classic signals:
- no second driver image
- no unsigned driver load
- no dependency on a helper service
- no need to leave a new kernel module behind
It also made the technique much more environment-dependent. Kernel code execution in this style collides directly with HVCI and kernel data protection mechanisms. On systems where those protections are enabled and enforced, this path becomes much harder or simply fails.
That was a useful result by itself. It showed that HVCI materially changes the exploitation landscape.
Phase 3: HVCI-compatible, data-only impact
The third approach asked a different question:
What if I do not need to execute attacker-controlled kernel code at all?
HVCI is very effective at disrupting the jump from "I can write kernel memory" to "I can execute my own kernel code." But arbitrary kernel write is still dangerous even when code execution is blocked. Many security-relevant decisions depend on mutable kernel data. If those fields can be altered safely and predictably, an attacker may still be able to degrade protections without introducing new executable code.
The HVCI-compatible path focused on data-only tampering. At a high level, it demonstrated that certain protection and callback-related kernel data structures could be modified in ways that weakened security controls around protected processes and security software.
This did not make HVCI useless. Far from it. HVCI blocked an entire class of techniques. But it also showed that HVCI should be treated as one layer in a broader defense strategy, not as a complete answer to vulnerable driver risk.
The important lesson: HVCI changes the game, but arbitrary kernel write is still severe
One of the most important takeaways from this research is that HVCI deserves credit.
On non-HVCI systems, a strong kernel write primitive can often be adapted into kernel execution. That opens the door to techniques like loading helper code, repurposing signed driver execution paths, or manipulating dispatch flow.
With HVCI enabled, that transition becomes significantly harder. Because HVCI enforces kernel code integrity from the hypervisor, using second-level address translation beneath the operating system's own page tables, attempts to modify or repurpose executable kernel memory can be blocked even from kernel mode. In practical terms, HVCI can break the cleanest and most direct code-execution paths.
However, this does not mean the original driver flaw becomes harmless.
If the primitive still allows kernel data writes, defenders must consider data-only attacks. These may include tampering with process protection metadata, object callback behavior, access-control decisions, or other mutable kernel structures that security products and Windows itself rely on.
That leads to a more accurate risk statement:
HVCI can prevent some kernel-code-execution chains, but arbitrary kernel read/write remains a critical vulnerability class because data-only kernel tampering can still degrade security boundaries.
Why signed drivers are such a hard problem
Windows has to trust kernel drivers. That is the entire point of the driver model. Drivers need privileged access to hardware, memory, interrupts, DMA, and operating system internals. But that trust becomes dangerous when a signed driver exposes unsafe interfaces to user mode.
The problem is not simply "a bad driver exists." The problem is that signed drivers often carry several defensive blind spots:
- They may be trusted because of their signature.
- They may be obscure enough that defenders do not monitor them closely.
- They may not be present in vulnerable-driver blocklists yet.
- They may come from legitimate software packages.
- They may expose device interfaces that few organizations inventory.
- Their IOCTL surfaces are rarely reviewed at enterprise scale.
In this project, the driver was not widely discussed publicly. That made the research more valuable, but it also made the defensive gap more obvious. If a driver is signed, obscure, and not blocklisted, it can become a powerful post-exploitation tool even if it was never designed for malicious use.
Defensive thinking: how I would try to catch this
My background includes years of defensive security work, and that shaped the research. At each phase, I asked the same question:
If I were on the defensive side, what would I detect?
For the first phase, the answer was straightforward: driver-load activity, code-integrity manipulation, helper driver artifacts, and security-process termination.
For the second phase, the signals changed. There was no helper driver, but there were still meaningful opportunities: unusual driver interaction patterns, kernel memory mutation, dispatch-path anomalies, and security processes terminating in ways that did not match normal administrative behavior.
For the third phase, the detection problem became more subtle. Data-only tampering may not look like code injection or driver loading. Instead, defenders would need to think about:
- security driver callbacks becoming ineffective
- protected processes unexpectedly becoming accessible
- security service heartbeat gaps
- abnormal access to protected process handles
- changes in kernel object or process metadata
- suspicious IOCTL usage against signed but uncommon drivers
That is the broader lesson. If detection strategy focuses only on driver loads or unsigned code, it will miss the more interesting cases.
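One of the signals in that list, security service heartbeat gaps, lends itself to a very small detector. The sketch below assumes the defender already collects per-host agent heartbeat timestamps; the field shapes and thresholds are illustrative assumptions, not any product's schema.

```python
from datetime import datetime, timedelta

def heartbeat_gaps(timestamps, max_interval):
    """Return (previous, current) pairs where consecutive security-agent
    heartbeats are further apart than the allowed interval."""
    ordered = sorted(timestamps)
    return [
        (prev, cur)
        for prev, cur in zip(ordered, ordered[1:])
        if cur - prev > max_interval
    ]

# Hypothetical agent telemetry: heartbeats go quiet between 12:02 and 12:06.
beats = [datetime(2024, 1, 1, 12, m) for m in (0, 1, 2, 6, 7)]
gaps = heartbeat_gaps(beats, timedelta(minutes=2))
```

A gap alone is not proof of tampering (hosts reboot, networks drop), but correlated with protected-process access anomalies it becomes a strong lead.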
Practical defensive recommendations
No single control fully solves this class of problem. The right answer is layered defense.
1. Enable HVCI where possible
HVCI meaningfully raises the bar. It can break common transitions from kernel write to kernel execution and should be a baseline hardening control for systems that can support it.
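For fleet work, it helps to verify enforcement rather than assume it. Windows exposes this state through the documented Win32_DeviceGuard WMI class (namespace root\Microsoft\Windows\DeviceGuard, queryable with Get-CimInstance). The sketch below only decodes its SecurityServicesRunning field per Microsoft's documented values; the query itself is left out, and the sample inputs are hypothetical.

```python
# Decode Win32_DeviceGuard.SecurityServicesRunning, per Microsoft's
# documentation: 1 = Credential Guard running, 2 = HVCI (memory integrity).
CREDENTIAL_GUARD = 1
HVCI = 2

def hvci_running(security_services_running) -> bool:
    """True if HVCI is actively enforcing on the queried host."""
    return HVCI in security_services_running

# Hypothetical query results from two hosts:
hardened = hvci_running([CREDENTIAL_GUARD, HVCI])  # HVCI enforced
legacy = hvci_running([])                          # nothing running
```

Note the distinction between the class's "configured" and "running" fields: a policy can be configured without being enforced, so the running state is the one that matters for risk decisions.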
2. Use WDAC or strict driver allowlisting
The most effective way to reduce BYOVD risk is to prevent unknown or unnecessary drivers from loading in the first place. Hash-based blocklists help, but they are reactive. Allowlisting is stronger.
3. Monitor vulnerable and uncommon driver loads
Even signed drivers deserve scrutiny. Monitor new kernel driver installation, service creation, device-node creation, and uncommon driver usage across the fleet.
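A first-pass version of that monitoring is simply baselining driver image hashes across the fleet and flagging first sightings. The sketch assumes parsed driver-load telemetry (for example, Sysmon Event ID 6 records); the `sha256`/`image` field names are illustrative, not any product's real schema.

```python
def flag_new_driver_loads(events, known_hashes):
    """Flag driver-load events whose image hash has never been seen before,
    then fold each new hash into the baseline so repeats stay quiet."""
    flagged = []
    for event in events:
        if event["sha256"] not in known_hashes:
            flagged.append(event)
            known_hashes.add(event["sha256"])  # first sighting: baseline it
    return flagged
```

Signed-but-rare is exactly the profile a BYOVD payload tends to have, so "new to the fleet" is a more useful trigger than "unsigned".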
4. Inventory device interfaces and permissions
A driver that exposes a world-accessible or overly permissive device interface should be treated as suspicious until reviewed. Access control around device objects matters.
5. Watch for unusual IOCTL patterns
Most organizations do not monitor IOCTL usage deeply, but driver abuse often leaves interaction patterns. A rare driver receiving unusual request sequences from unexpected user-mode processes is worth investigating.
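Even without decoding individual request codes, rarity over (driver, caller) pairs surfaces the pattern described above. This is a sketch under assumed telemetry: the event schema and threshold are hypothetical, and real deployments would window the counts over time.

```python
from collections import Counter

def rare_driver_callers(events, min_count=3):
    """Return (driver, process) pairs observed fewer than `min_count` times.

    An uncommon signed driver suddenly receiving requests from a process
    that has never talked to it is the interaction pattern worth a look.
    """
    counts = Counter((e["driver"], e["process"]) for e in events)
    return sorted(pair for pair, n in counts.items() if n < min_count)
```

The point is not this exact heuristic; it is that driver interaction telemetry, not just driver-load telemetry, has to be part of the model.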
6. Treat protected-process anomalies as high signal
If protected security processes become accessible, suspended, terminated, or stop producing telemetry, that is a major signal. Even if the root cause is not immediately visible, the behavioral outcome matters.
7. Validate EDR self-protection assumptions
Security tools should not assume that "protected process" status alone is enough. Kernel-level tampering can target callback mechanisms, process metadata, or access-control paths.
Responsible handling
Because this research involves a true vulnerable signed driver and practical post-exploitation impact, I am intentionally withholding:
- driver name
- file hash
- IOCTL values
- vulnerable code-path details
- commands
- proof-of-concept code
- target-specific operational steps
That is a deliberate choice. The value of the research is in the lesson, not in publishing a turnkey method.
The right place for the full technical detail is a responsible disclosure process with the affected vendor and relevant platform stakeholders.
Final thoughts
The biggest lesson from this project is that driver trust is still one of the hardest problems in endpoint security.
A signed driver with an unsafe user-mode interface can collapse the boundary between local administrator and kernel authority. HVCI can block important exploitation paths, but it does not erase the risk of arbitrary kernel data access. Vulnerable-driver blocklists are useful, but they are reactive. EDR self-protection helps, but it can itself depend on mutable kernel structures.
Defenders need to think about signed drivers as active attack surface, not passive dependencies.
For offensive security research, this project reinforced something I have come to believe strongly:
The most useful research is not just about proving that something can be exploited. It is about understanding which assumptions failed, which controls helped, which controls did not, and how the defensive model needs to change afterward.
That is the story here.
A signed driver exposed kernel memory access. Multiple exploitation paths became possible. HVCI stopped some of them. Data-only tampering remained concerning. And the ultimate lesson is simple:
Signed does not mean safe.