Quantum computing has a way of turning vague discomfort into very pointed questions.

If you're using format-preserving encryption (FPE) today, especially through a commercial or closed-source vendor, someone is going to ask:

"Are we quantum-safe?"

And if the answer is something like:

"We use FF1 and it's NIST approved."

That's not enough.

Because "quantum-safe" isn't a sticker you slap on a cipher name. It's the sum of your design choices: key size, deployment model, determinism, domain size, tweak strategy, and whether your ciphertext domain is enumerable.

The 90-second reality check: FF1 and Quantum

FF1 is standardized by NIST in SP 800–38G as a format-preserving encryption mode: a mode of operation built on an underlying approved symmetric-key block cipher. [1]

On the quantum side, people usually wave around two algorithms:

  • Shor's Algorithm: breaks public-key crypto (RSA/ECC)
  • Grover's Algorithm: gives a square-root speedup for generic brute-force search

You'll often hear "Grover halves security" (n → n/2). It's a useful worst-case mental model, but the real-world cost story is messier [4]: NIST explicitly says Grover may provide little or no advantage in attacking AES, that AES-128 will likely remain secure for decades, and that AES-192/256 should be safe for a very long time. [2]
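If you want that worst-case mental model as a number, here's a back-of-the-envelope sketch (plain arithmetic, nothing vendor-specific):

```python
# Worst-case "Grover halves the exponent" heuristic: an n-bit key costs roughly
# 2^(n/2) quantum search iterations. Real circuit costs are far higher (see [2], [4]).
for key_bits in (128, 192, 256):
    print(f"AES-{key_bits}: ~2^{key_bits // 2} Grover iterations in the naive model")
```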

So yes: key size matters. If you want easy margin, using AES-256 under FF1 is the obvious move.

But in many FPE deployments, key size is only one term in the equation, and it's not the term that dominates risk.

Quick reality check on FF3 and FF3–1

The original FPE standard (2016) approved two methods: FF1 and FF3. [1] After a "total break" attack was found on FF3, NIST tried to fix it with FF3–1.

It wasn't enough. In the February 2025 second public draft of SP 800–38G Rev. 1, NIST dropped both: FF3 and FF3–1 are gone. [3] A weakness in the tweak schedule made them unsuitable for the long term.

As of 2026, FF1 is the only NIST-approved method left standing. If your vendor is still running on FF3–1, they aren't just behind… they're relying on a method NIST is withdrawing. It's time to ask for their migration roadmap.

Why FPE exists at all (and what it was never meant to be)

FPE wasn't adopted because "keeping numbers as numbers" is inherently more secure.

It was adopted because legacy systems break when ciphertext doesn't look like the data it replaced.

Mainframes, fixed-width schemas, brittle validation logic, ancient ETL/reporting jobs, systems that assumed:

  • numeric-only values
  • fixed length
  • "refactoring is impossible"

So FPE became a compatibility encryption tool.
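To make "break" concrete, here's a minimal sketch of the kind of validation that rejects conventional ciphertext but accepts a format-preserved value; the sample strings are illustrative, not real FF1 output:

```python
import re

# Legacy rule baked into the downstream system: the column must be exactly nine digits.
LEGACY_SSN_RULE = re.compile(r"\d{9}")

samples = {
    "plaintext":         "123456789",
    "AES-GCM + base64":  "x3k9Qm2ZpL8vR1aB0cD4eF==",  # wrong length, wrong alphabet
    "format-preserving": "804217356",                  # illustrative output, not real FF1
}

for label, value in samples.items():
    verdict = "passes" if LEGACY_SSN_RULE.fullmatch(value) else "breaks the schema"
    print(f"{label:>18}: {verdict}")
```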

The mistake is treating the most constrained version, strict numeric-only FPE (or FPE whose radix exactly matches the plaintext alphabet), as the "correct" or "most secure" way to do FPE long after those constraints stopped being real. This is an engineering reality I see repeatedly in production deployments.

Strictness is not security. Strictness is usually just constraints, and constraints often shrink your effective domain. Which is exactly what you don't want if your new anxiety is "do we have enough margin?"

The real risk isn't quantum, it's small domains

Most values people encrypt with FPE have tiny domains:

  • SSNs
  • phone numbers
  • short account IDs
  • various "identifiers" that are not truly random

If your domain is small enough, the attacker doesn't need to break AES. They can brute-force the domain and confirm guesses through whatever verification path exists: API access, downstream validation rules, access patterns, partial plaintext knowledge, cross-dataset joins, or leaked samples. With any of those paths available, small domains quickly become economically enumerable.
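A back-of-the-envelope sketch of what "economically enumerable" means in practice; the guess rate is an assumption about the attacker's verification path, not a measurement:

```python
import math

# Rough domain sizes for common FPE targets.
domains = {
    "SSN (9 digits)":         10**9,
    "US phone (10 digits)":   10**10,
    "6-digit account suffix": 10**6,
}

# Assumed verification rate through some oracle (an API, a validation rule, a join).
GUESSES_PER_SECOND = 100_000

for name, size in domains.items():
    bits = math.log2(size)
    seconds = size / GUESSES_PER_SECOND
    print(f"{name:<24} ~{bits:4.1f} bits, full enumeration in ~{seconds:,.0f} seconds "
          f"at {GUESSES_PER_SECOND:,}/s")
```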

This is the part people miss when they fixate on "quantum safe key sizes."

Even if you do everything "right" at the cipher level (AES-256, clean implementation, perfect key storage), a small domain is still a small domain. Quantum doesn't change that fundamental math. It mostly changes how urgently people ask the question.

So this is where the FPE conversation should pivot from cipher trivia to system design.

Tweaks, entropy, and referential integrity: how FF1 is actually used

In crypto-theory world, you want:

  • strong, variable tweaks
  • non-determinism (or strict domain separation)
  • minimal correlation leakage

In production world, people want:

  • joins
  • analytics
  • "the same ID looks the same everywhere"
  • referential integrity across services and warehouses

That pushes teams toward deterministic behavior.

And here's the tension:

  • variable tweaks reduce correlation
  • but variable tweaks also break referential integrity (same input → different output)

So what happens in practice is predictable: many organizations choose determinism because the business wants stable joins, and the tweak becomes fixed/implicit/empty because stability wins.

That choice is often reasonable. But once you choose determinism, you're no longer living in a "pure cryptography" threat model. You're now in an architectural threat model.
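One middle-ground pattern worth knowing about, sketched below with a hypothetical ff1_encrypt call standing in for whatever FF1 implementation you actually use: derive the tweak deterministically from the field's logical identity rather than leaving it empty. The same field gets the same tweak everywhere, so joins on that field survive, but equal values in unrelated fields stop producing identical ciphertexts.

```python
import hashlib

def field_tweak(logical_field: str, length: int = 8) -> bytes:
    """Deterministic, non-secret tweak derived from a logical field identity.

    The same logical field (e.g. "customer_id") gets the same tweak everywhere,
    so joins on that field still work; unrelated fields get different tweaks,
    so equal plaintexts in different fields no longer encrypt identically.
    """
    return hashlib.sha256(b"fpe-tweak-v1:" + logical_field.encode()).digest()[:length]

# Hypothetical FF1 call, shape only -- substitute your library's actual API:
# ciphertext = ff1_encrypt(key, tweak=field_tweak("customer_id"), plaintext="123456789", radix=10)

print(field_tweak("customer_id").hex())
print(field_tweak("ssn").hex())  # different field, different tweak
```

The trade-off is operational: the tweak derivation becomes part of your data contract, and changing it later is effectively a re-keying event for every system that joins on that field.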

The pivot: determinism leaks, so domain design becomes your lever

If you're using FF1 deterministically with a fixed (or empty) tweak, assume you're leaking:

  • equality (same plaintext → same ciphertext)
  • frequency patterns
  • cross-system correlation

And if the plaintext domain is small, you're also increasing pressure from offline guessing/enumeration.

Now here's the real-world part: a lot of organizations choose determinism because they need referential integrity for analytics and joins. In those environments, the question isn't "are tweaks perfect?" It's:

If determinism is required, how do we make guessing and enumeration economically infeasible?

That's where domain design becomes a first-class security control. This is often more important than debating "AES-128 vs AES-256" in isolation.

Most systems don't actually need "digits in → digits out." They need "fits the column-ish" and "doesn't break downstream." Once you separate those constraints, you unlock the practical upgrade: preserve length-ish, expand the radix, increase effective entropy.

The practical move: preserve length-ish, expand the radix, increase entropy

If you keep your output numeric-only, you're voluntarily constraining yourself to radix 10.

That's often a security tax.

A length-9 numeric space has entropy around:

  • 9 digits, radix 10 → about 30 bits of domain size

If your schema can tolerate alphanumeric output, you can preserve length and still massively expand the search space:

  • 9 chars, radix 36 (0–9 + A–Z) → about 46 bits
  • 9 chars, radix 62 (add lowercase) → about 54 bits

Same length. Same referential integrity. Much harder to enumerate.

This does not eliminate equality leakage. It does not replace tweaks cryptographically.

But it does change the economics of attack — especially in the exact scenario most companies are in:

  • deterministic output required for analytics
  • tweaks constrained or fixed for stability
  • domains that would otherwise be enumerable

This is what practical "quantum-safe" thinking looks like in real systems: not just "bigger keys," but more margin in the whole system. Domain size is margin. Radix is margin. Removing unnecessary constraints is margin.

(And yes: still use sane keys. The key matters. It's just not the only term.)

The "For the Nerds" Sidebar

Note on the math: The bit values above are rounded estimates for clarity. If you're bored or building a spreadsheet, the formula for domain entropy is:

H = log2(Radix^Length)

Or, if you want to make it easier on your calculator:

H = Length × log2(Radix)

This is why shifting from Radix 10 to Radix 62 isn't a linear upgrade — it's an exponential leap in the work required for an attacker to enumerate your data.
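The same formula as a few lines of code, if the spreadsheet feels like overkill (the length and radices match the examples above):

```python
import math

def domain_bits(radix: int, length: int) -> float:
    # H = Length * log2(Radix), equivalently log2(Radix ** Length)
    return length * math.log2(radix)

LENGTH = 9
for radix in (10, 36, 62):
    print(f"radix {radix:>2}, length {LENGTH}: ~{domain_bits(radix, LENGTH):.1f} bits "
          f"({radix ** LENGTH:.2e} values)")

# Moving from radix 10 to radix 62 at the same length multiplies the search space
# by (62/10) ** 9, roughly 1.4e7 times more values to enumerate.
print(f"radix 62 vs radix 10 at length {LENGTH}: {(62 / 10) ** LENGTH:.1e}x larger domain")
```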

An important line in the sand

Expanding radix / increasing domain entropy:

  • does not make deterministic encryption "perfect"
  • does not remove correlation leakage
  • does not replace well-designed tweak strategies

What it does do is:

  • raise brute-force cost dramatically
  • reduce practicality of enumeration
  • buy real-world margin where teams actually live

If a vendor sells "strict numeric-only FPE" as inherently better, that's often a product limitation disguised as best practice.

How mature systems balance this

The best deployments don't treat FPE as a religion. They combine controls:

  • deterministic FPE where analytics truly require it
  • expanded radix / domain growth where schema allows it
  • strict access control on decrypt operations
  • logging + anomaly detection around access
  • role-based transformations (not everyone needs the same view)
  • stronger tweaks where referential integrity is not required

This is system design.

And it's also why "quantum-safe" isn't a single knob. You don't buy it by upgrading a key size alone. You get it by designing a system with margin.
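As one way to write that down, here's a hypothetical per-field policy sketch; the field names, roles, and values are illustrative, not any product's real configuration format:

```python
# Hypothetical per-field FPE policy: deterministic FPE only where joins need it,
# expanded radix where the schema tolerates it, and decrypt access scoped by role.
FPE_POLICY = {
    "customer_id": {
        "mode": "deterministic",        # referential integrity required for joins
        "radix": 62, "length": 9,       # expanded alphabet, same column width
        "tweak": "logical-field-name",  # stable tweak, scoped to the field
        "decrypt_roles": ["fraud-ops"],
    },
    "ssn": {
        "mode": "deterministic",
        "radix": 36, "length": 9,       # schema allows uppercase alphanumerics
        "tweak": "logical-field-name",
        "decrypt_roles": ["compliance"],
    },
    "support_ticket_ref": {
        "mode": "randomized",           # no joins needed: variable tweak, non-deterministic
        "radix": 62, "length": 12,
        "tweak": "per-record-random",
        "decrypt_roles": ["support-l2", "compliance"],
    },
}

# Example lookup a service might perform before encrypting a value.
policy = FPE_POLICY["customer_id"]
print(policy["mode"], policy["radix"], policy["decrypt_roles"])
```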

If you're using FPE today, ask your vendor these questions

If your FPE solution is proprietary or closed-source, these are due diligence questions, not nice-to-haves.

  1. Which FPE method are you actually using: FF1, FF3, or FF3–1? NIST's draft revision work discusses weaknesses affecting FF3 and FF3–1, but not FF1, and removes both in the second public draft. If FF3 or FF3–1 is still in use, ask what standard baseline is being tracked and what the migration plan to FF1 looks like. [3]
  2. What block cipher is used underneath, and what key sizes are supported? At the standards level, FPE modes operate over an approved symmetric-key block cipher algorithm, most commonly AES. Ask which cipher is used, which key sizes are supported, and whether any substitutions occur internally. [1]
  3. What key sizes do you recommend for long-lived data? NIST's post-quantum guidance states that AES remains appropriate, that AES-128 is likely secure for decades, and that AES-192 and AES-256 should be safe for a very long time. [2]
  4. How is determinism handled, and what inputs control it? If deterministic output is supported for referential integrity, ask what controls that behavior. Is determinism driven by fixed configuration, implicit defaults, or inputs such as tweaks or context values? What leakage should customers expect as a result?
  5. What minimum domain sizes do you enforce or recommend? If the system allows FPE over very small domains simply because "it works," that is an architectural risk. Ask how domain size guidance is communicated and enforced. [3]
  6. Can I expand the radix while preserving length? If not, customers may be forced into unnecessarily small domains that are easier to brute-force and enumerate.
  7. How do you mitigate domain enumeration in real systems? Ask about rate limiting, access control, monitoring, tenant isolation, auditability, and any assumptions the design makes about attacker verification paths.

Notice how only one of these questions is about key size. The rest are architectural questions, because architecture is where most FPE deployments gain or lose real security margin.

Final takeaway

Stop asking:

"Is FF1 quantum safe?"

Start asking:

"Have we designed our FPE usage so brute-force and correlation attacks are impractical, even when determinism is required for specific use cases?"

If your FPE design isn't accounting for small domains, determinism leakage, and format constraints, you don't need to worry about the "quantum future." The risk is already here today.

Sources

[1] NIST SP 800–38G Update 1, Recommendation for Block Cipher Modes of Operation: Methods for Format-Preserving Encryption. https://csrc.nist.gov/pubs/sp/800/38/g/upd1/final

[2] NIST Post-Quantum Cryptography FAQs, guidance on Grover's algorithm and symmetric-key security. https://csrc.nist.gov/projects/post-quantum-cryptography/faqs

[3] NIST SP 800–38G Rev. 1, second public draft, discussion of FF3 / FF3–1 weaknesses and their removal. https://csrc.nist.gov/pubs/sp/800/38/g/r1/2pd

[4] On the Practical Cost of Grover for AES Key Recovery, Fifth NIST Post-Quantum Cryptography Standardization Conference. https://csrc.nist.gov/csrc/media/Events/2024/fifth-pqc-standardization-conference/documents/papers/on-practical-cost-of-grover.pdf

[5] Comment on the Second Public Draft of SP 800–38G Rev. 1, official NIST announcement of technical changes (February 2025). https://csrc.nist.gov/news/2025/comment-on-the-2nd-draft-of-sp-800-38g-rev-1