I really dislike the term "confidential computing."

Not because the goal is wrong. But because it's a conclusion pretending to be a feature. When someone says "confidential computing," they're claiming confidentiality has been solved.

It hasn't.

What you actually get when you buy an AMD SEV-SNP or Intel TDX CPU is the ability to run VMs with hardware-backed memory encryption and launch-time attestation. That's it. And pretending that equals confidentiality is how bad threat models get rubber-stamped.

A SEV-SNP VM Is Not a Confidential VM

When you buy an AMD SEV-SNP CPU, deploy Ubuntu on the host and guest with all the required patches, and boot it up, you don't get a confidential VM.

You get exactly what you paid for.

A SEV-SNP VM with encrypted memory running a Linux distribution designed for trusted hosts. The OS thinks the hypervisor is friendly. The storage stack assumes the host helps. The network stack hands plaintext packets to virtio-net. Everything except RAM still operates in the old trust model.

Somehow, in the narrative, we made "SEV-SNP VM" and "Confidential VM" synonymous.

They're not.

So What Is a SEV-SNP VM, Then?

AMD SEV-SNP encrypts VM memory using ASID-tagged keys. The Reverse Map Table (RMP) prevents the hypervisor from remapping your pages. The Platform Security Processor (PSP) measures and attests the initial VM image loaded into memory. You get cryptographic proof that this specific blob booted on genuine AMD hardware.

Intel TDX encrypts memory via a dedicated crypto engine. The Secure EPT (SEPT) enforces isolation boundaries. The SEAM module measures and attests the initial Trust Domain state. You get cryptographic proof of what loaded.

That's not confidential computing. That's a primitive:

A cryptographic boundary around memory, plus a receipt proving how it was initialized at launch time.

Useful. Powerful. Necessary.

But not sufficient.
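
Reduced to Python, the primitive is small. Everything below is a sketch: the field names are hypothetical, and an HMAC stands in for the asymmetric signature a real PSP report carries (in practice verified against AMD's VCEK certificate chain), purely to keep the example self-contained:

```python
import hashlib
import hmac

def verify_launch_receipt(report: dict, vendor_key: bytes,
                          expected_measurement: bytes) -> bool:
    """Sketch of the launch-attestation primitive.

    Real reports are signed asymmetrically by the hardware vendor;
    HMAC stands in here so the sketch runs without external keys.
    """
    # 1. Is this receipt genuinely from the hardware vendor?
    body = report["measurement"] + report["nonce"]
    sig = hmac.new(vendor_key, body, hashlib.sha384).digest()
    if not hmac.compare_digest(sig, report["signature"]):
        return False  # receipt not from genuine hardware
    # 2. Did the exact blob we expected get loaded at launch?
    return hmac.compare_digest(report["measurement"], expected_measurement)
```

Note what the primitive does *not* say: nothing about the disk, the network, the devices, or anything that happens after the measured blob starts running.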

What's Missing? Almost Everything.

Let's walk through what's actually in a typical "confidential VM" address space today, and observe what's missing.

Attestation that stops at launch

SEV-SNP measures the initially loaded software into the VM's address space. In practice, that means firmware, often UEFI. What about the rest? Maybe you use measured boot with a TPM or DICE to extend measurements to initrd. But what about rootfs? What about everything that loads after?

And while attestation can be invoked at runtime, it reflects launch-time measurements. You can prove what booted. You can't prove nothing bad happened since. You're on your own for actual runtime attestation.
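
The gap is easier to see with a TPM-style "extend" operation sketched out. A launch measurement is a single hash; covering anything that loads later requires chaining measurements, and every unmeasured component is a blind spot. A minimal sketch (not any vendor's actual API):

```python
import hashlib

def extend(register: bytes, event: bytes) -> bytes:
    """PCR-style extend: the register absorbs each event in order.

    Changing, reordering, or omitting any event changes the final
    value -- that's what lets a verifier detect post-launch loads,
    but only if every later component actually gets measured in.
    """
    return hashlib.sha256(register + hashlib.sha256(event).digest()).digest()

# Launch-time attestation covers only the first link:
register = b"\x00" * 32
register = extend(register, b"firmware")       # measured by the PSP
# Everything past this point is invisible unless software keeps extending:
register = extend(register, b"kernel+initrd")  # measured boot, if configured
register = extend(register, b"rootfs")         # usually not measured at all
```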

Firmware you don't control and can't verify

In public cloud, firmware is provided by the cloud provider, the same party we're supposedly not trusting. Who writes it? Who updates it? What's the expected measurement? What does any of this mean for security?

That firmware is dominated by UEFI, a massive attack surface from an era when we trusted the host. Think about it: UEFI in cloud isn't needed except for legacy compatibility. We're dragging decades of complexity into an environment that doesn't require it.

Worse: firmware was written assuming a trusted hypervisor. ACPI tables, for instance, come from the host. Your guest parses them. The "BadAML" attack (CCS 2025) demonstrated that host-supplied AML bytecode can be weaponized to compromise CVMs. The guest is interpreting untrusted code from the host, inside your "confidential" VM.

Disk I/O in plaintext to the host

Virtio-blk doesn't encrypt blocks. Your LUKS keys unseal based on TPM PCRs, not attestation measurements. LUKS was designed for a threat model that assumes hostile disk, trusted host. That threat model is now backwards. Even the TPM itself, in most implementations, is provided by the host that we no longer trust.

Network traffic visible to the hypervisor

Virtio-net is transparent. TLS terminates inside your VM, but the guest-to-host boundary exposes everything. Metadata, timing, packet sizes, all visible.

Virtio devices assuming a trusted host

Every MMIO read, every port I/O, every PCI config access goes through interfaces the hypervisor controls. The "Heckler" attack (USENIX Security 2024) showed that just injecting interrupts breaks both confidentiality and integrity on SEV-SNP and TDX. No memory snooping required, just interrupt injection.

Runtime is a black hole

What about all those agents running in your VM? Monitoring agents, SSH daemons, update services. Do we trust them? Do we redesign them? Do we move to an ephemeral model with attestable, immutable AMIs where nothing persists and nothing mutates?

No way to really interpret measurements

You get a hash. A blob of hex. What does it mean? Is this the right kernel? The right initrd? The right OVMF version with the right patches? Measurement interpretation is a nightmare. Reference values are scattered, undocumented, version-dependent.

In short: yes, memory is encrypted.

Everything else is naked.

And Yet, This Is Exactly What We Needed

This is where I stop complaining and start celebrating.

If AMD and Intel had tried to "solve" confidential computing, we'd be screwed. Proprietary firmware stacks. Vendor-locked boot chains. Architectural decisions made by silicon companies who've never written an init system.

Instead, they made a wise choice. The right choice. They gave us primitives:

  • A cryptographic boundary (encrypted memory)
  • Architectural enforcement (RMP/SEPT)
  • Privilege separation within the guest (VMPLs)
  • A way to bootstrap attestation at launch time

That's the foundation. Clean. Auditable. Extensible.

Now the real work begins.

[Diagram: a "Confidential VM" with only memory protected (green); six components exposed (red): firmware, storage, network, devices, attestation, runtime.]

We Need Confidential-Native Software Stacks

The guest stack must be rebuilt. Not patched. Rebuilt.

We need confidential-native firmware

Not UEFI. Not "hardened UEFI." Firmware that doesn't interpret host-provided data at all.

ACPI tables? The hypervisor supplies them. Your guest parses them. That's the host injecting code into your TCB. Measured direct kernel boot exists — UEFI is optional now. We need to eliminate or sandbox every interface where the guest interprets host-supplied bytecode, configuration, or tables.

In the cloud, UEFI exists for legacy compatibility. We should question whether that compatibility is worth the attack surface.

We need confidential-native boot

Boot chains designed for adversarial environments. Minimal, declarative code paths. No optional features. Every component measured before execution.

We need confidential-native operating systems

Not your average Linux distribution. Actually redesigned distributions.

Something more like Ubuntu Core. Immutable root filesystems. Verified boot chains. Hardened interfaces. Explicit trust boundaries.

We need confidential-native storage

LUKS wasn't designed for this. Keys need to seal to attestation measurements, not just TPM PCRs. Unsealing should require proving you're in a genuine attested environment. Cryptographic binding between disk identity and VM identity. End-to-end I/O encryption, not just at-rest.
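
A sketch of what that binding could look like, assuming a hypothetical key broker that holds a master secret and releases keys only after verifying a full attestation report. The derivation is HKDF-shaped (RFC 5869), so a VM that attests to a different measurement simply derives a different, useless key:

```python
import hashlib
import hmac

def derive_disk_key(master_secret: bytes, measurement: bytes,
                    disk_id: bytes) -> bytes:
    """HKDF-style extract-then-expand, binding the disk key to both
    the attested launch measurement and the disk's identity.

    Wrong measurement -> wrong key, so an unattested or tampered
    guest cannot unlock the volume. (Sketch only: a real design
    would run this inside an attested key broker.)
    """
    # Extract: condense the master secret, salted by the measurement.
    prk = hmac.new(measurement, master_secret, hashlib.sha256).digest()
    # Expand: bind the output to this specific disk.
    return hmac.new(prk, disk_id + b"\x01", hashlib.sha256).digest()
```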

We need confidential-native I/O

Virtio was brilliant for cooperative virtualization. Efficient, low-overhead paravirtualization. But it assumes the host is helping.

That assumption is now explicitly wrong.

TDISP (TEE Device Interface Security Protocol) is the answer taking shape. Devices authenticate themselves. Traffic encrypts on the PCIe fabric. Configurations attest to the guest. AMD's SEV-TIO implements it. Intel's TDX 2.0 supports it. NVIDIA's Blackwell GPUs do TDISP/IDE for encrypted GPU-CPU data paths.

We need attestation beyond boot

We need to extend the initial launch measurement beyond UEFI, all the way up to the root filesystem.

And we need actual runtime attestation, not just invoking launch attestation at runtime, but attestation of the security state of the VM as it runs. Continuous verification. The ability to prove not just what loaded, but what's running now.

We need interpretable measurements

Reference values. Signed manifests. A way to know that hash 0x7a3f... means "Ubuntu 24.04 with kernel 6.8 and these specific packages."
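
A minimal sketch of the idea, with a hypothetical manifest format and an HMAC standing in for a publisher's real asymmetric signature (actual reference-value schemes carry richer metadata and proper signature chains):

```python
import hashlib
import hmac
import json

def interpret(measurement_hex: str, signed_manifest: bytes,
              publisher_key: bytes):
    """Map an opaque measurement hash to human-meaningful metadata,
    trusting the mapping only if the publisher signed it.

    Manifest layout (hypothetical): one line of JSON mapping
    hash -> metadata, then a newline, then the signature in hex.
    """
    payload, sig_hex = signed_manifest.rsplit(b"\n", 1)
    expected = hmac.new(publisher_key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig_hex.decode()):
        return None  # tampered manifest: the hash stays an opaque blob
    return json.loads(payload).get(measurement_hex)
```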

This is the non-glamorous work that makes confidential computing actually usable. The good news? This is all software. And you can build anything with software.

The Confidential Computing Consortium is one place where people who share this goal come together and try to solve these challenges for everyone.

Why Bother With This "Niche"?

The Accessibility Principle

When architects design buildings for wheelchair access, everyone benefits. Parents with strollers. Delivery workers with carts. People with temporary injuries. The elderly. The constraint forces better design.

When web developers build for screen readers, everyone benefits. Better semantic structure. Cleaner navigation. Content that works across devices and contexts.

Security works the same way.

When we design operating systems for the constraint "the hypervisor is trying to attack you," we get systems that are more secure against everyone trying to attack them. Supply chain compromises. Insider threats. Physical attackers. Nation states.

The confidential computing threat model is extreme. It assumes the infrastructure operator is adversarial. Most environments don't face that specific threat.

But the defenses we build for that threat model (minimal TCBs, verified boot, encrypted I/O, runtime integrity, attestation-based provisioning) make sense everywhere.

CVMs aren't a niche. They're the accessibility ramp that upgrades the entire building.

"Confidential computing" is a golden opportunity to fix what should have been fixed a decade or two ago. Let's not miss it.

The Industry Is Noticing

Large hyperscalers, especially in the age of AI, want to make confidential VMs the default.

Last year at the Confidential Computing Summit, Mark Russinovich presented a framework of three levels for Azure Confidential VMs. Why? I'd guess because Azure's team, too, recognized the lack of precision in how we talk about these deployments.

The three-level model makes explicit the security assumptions customers might be making, and often getting wrong. I think this is an excellent posture to take.

https://azure.microsoft.com/en-us/blog/protecting-azure-infrastructure-from-silicon-to-systems/

I am still going to be greedy, though, and say that the default terminology for this new generation of hardware-backed secure virtualization should not have "confidential computing" in the name. We'll see how that goes.

What Should We Call Them?

If "Confidential VM" overpromises, what's the right terminology?

I think we should stop describing VMs and start describing threat models.

"Memory-encrypted VM with boot attestation" is accurate but unwieldy. "Hardware-isolated VM" captures the mechanism. "TEE-backed VM" connects to the broader trusted execution environment concept.

But maybe the real answer is: describe what you're actually defending against.

  • "Hypervisor-isolated workload"
  • "Infrastructure-operator-excluded compute"
  • "Attestable execution environment"

The marketing term "Confidential VM" implies a solved problem. We should use terms that imply a configured security posture.

This approach also gives us the opportunity to discuss the various implementations more explicitly. AWS, notably, doesn't offer memory encryption on most instances; they've bet on Nitro's isolation model instead. Different threat model, different tradeoffs, different conversation.

Precision in language forces precision in thinking.

Disclaimer: The views expressed here are my own and do not necessarily represent those of Canonical. I just happen to work there and have thoughts.

In the next blog post, we will discuss how various open source projects are addressing each of the gaps identified above.