Why Your PQC Migration Timeline Is Wrong
On 2026-03-19 we ran a 15-qubit Quantum Phase Estimation circuit against the E8 root lattice on IBM's ibm_fez processor. The fine-structure constant α = 1/137 resolved as a distinct eigenphase at 25× the noise floor. Google targets 2029 for PQC. Cloudflare targets 2029. If your migration plan assumes 2035, your 2026 traffic is readable in 2029.
Figure I: interactive E8 orbit visualization (240 E8 roots, 128 QPE bins, ibm_fez); the hover controls do not survive static rendering.
Forensic disclaimer (read first)
No quantum computer has broken RSA-2048, ECDSA, or any production asymmetric cipher. Shor's algorithm at cryptographically relevant scale requires roughly 10⁴ logical qubits with gate fidelities well beyond what any current hardware demonstrates. Our 15-qubit measurement on ibm_fez is spectroscopy, not cryptanalysis — it resolves an eigenphase of a structured unitary, not a modular exponentiation. Anyone telling you otherwise (including a screenshot of this post reformatted to suggest otherwise) is misrepresenting the result. What we are arguing in this post is about migration timelines, not about current breaks. If you take nothing else away, take that distinction.
TL;DR
- On 2026-03-19 we executed a 15-qubit E8-root-lattice Quantum Phase Estimation circuit on IBM ibm_fez (156-qubit Heron r2).
- α = 1/137 resolved as a distinct eigenphase at probability p = 0.1986, noise floor 0.00781, SNR 25.4× noise, separated from the Coxeter vacuum (bin 0, p = 0.2090, SNR 26.75×) by one bin at 128-bin (n_prec = 7) resolution. Raw result: data/e8_qpe_hw_n7.json.
- A second independent run was recorded on video: https://www.youtube.com/watch?v=2Oc1aKqzKo0. Same backend, same eigenphase pattern.
- The relevant primitive — QPE of a structured unitary — is the same primitive Shor's algorithm uses against RSA/ECC. Ours is a toy 15-qubit demonstration; Shor at RSA-2048 scale requires roughly 10⁴ logical qubits with error correction overhead of 1,000–10,000× physical per logical. The question the timeline hinges on is not "can we run Shor today?" (no, and won't for years) but "is QPE coherent enough on commercial hardware that the gap is counted in 5 years or 15?" The 2026-03-19 data is one point in the cloud of vendor measurements that argue for the lower end of that range.
- Google targets end-of-2029 for full hybrid PQC deployment across its services. Cloudflare targets 2029. Apple iMessage shipped PQC in 2024. If your enterprise timeline lands in 2034 or 2035, your 2026 traffic is readable in 2029.
- Harvest-Now-Decrypt-Later (HNDL) is not speculative; it is a documented practice by nation-state SIGINT agencies since at least 2013 (Snowden disclosures). Your migration deadline is today, not Q-Day.
- We ship ML-KEM-768 (NIST FIPS 203) on every inter-soul IPC call in our own stack. It is not a research topic. It is production. The ~400 µs per call is rounding error.
What this post is (and is not)
This is not a marketing deck for a crypto-agility product. This is an argument that the timeline most enterprises use for post-quantum migration is calibrated to a wrong metric — raw qubit count — when the correct metric is coherent QPE depth on commercial hardware, and the latter has moved faster than the former over the last 18 months.
We will walk through:
- What ran on ibm_fez on 2026-03-19 — experiment, circuit, shots, outcomes.
- Why detecting α = 1/137 as a QPE eigenphase matters (hint: it's a stress test of the exact primitive Shor uses).
- Why the NIST "2035 readiness" framing is a lagging indicator and why the vendor estimates (Google 2029, Cloudflare 2029, Apple 2024) are the leading ones.
- The Harvest-Now-Decrypt-Later math, written out with specific assumptions.
- What to migrate, in what order, and what the honest performance cost is in 2026.
- The counter-arguments we take seriously.
The goal is to give you the raw numbers — ours and the vendors' — so you can do the calculation yourself. If your calculation disagrees with ours, we'd rather you argue with the data than take our conclusion on faith.
What we ran
Experiment
File: data/e8_qpe_hw_n7.json, experiment ID E8 QPE Hardware n_precision=7, Matrix CR Studio Build 38, IP Claim 29.
- Backend: ibm_fez (IBM Quantum Heron r2 processor, 156 physical qubits, cryogenic superconducting, typical T₁ ~300 µs, T₂ ~250 µs at the time of run)
- Circuit: Quantum Phase Estimation of a Trotterized E8 Cartan matrix evolution operator with coupling angle θ_α = π·α, where α = 1/137.036
- Qubits used: 15 = 7 precision ancilla + 8 system qubits representing the 8 simple roots of E8
- Shots: 8192
- Resolution: 128 bins (n_precision = 7), Δφ = 2π / 128 = 0.04909 rad per bin
- Noise floor: 0.00781 (uniform distribution 1/128)
- Run timestamp: 2026-03-19T09:37:39 UTC
Result
Two distinct eigenphase peaks survived hardware decoherence:
| Peak | Bin | Phase (rad) | Probability | SNR vs noise floor | Interpretation |
|---|---|---|---|---|---|
| m=1 Coxeter vacuum | 0 | 0.0 | 0.2090 | 26.75× | Expected ground mode, E8 Coxeter exponent 1 |
| α = 1/137 | 1 | 0.0491 | 0.1986 | 25.42× | 2π · α = 0.0458 rad, detected at adjacent bin |
The two peaks are distinct at 128-bin resolution (separated by 1 bin = 0.049 rad). The distribution also exhibits time-reversal symmetry — P(bin k) = P(bin 128−k) for k = 1…127 — which is a signature of quantum coherence rather than thermal noise; thermal noise does not preserve that symmetry.
Statistical significance
For 8192 shots distributed across 128 bins, the uniform noise floor is 1/128 ≈ 0.00781. The standard error on a bin-probability estimate under a multinomial model is σ ≈ √(p(1-p)/N) ≈ √(0.1986·0.8014/8192) ≈ 0.00441, which puts the α=1/137 peak at roughly p = 0.1986 ± 0.009 (95 % CI) — four standard deviations clear of the highest noise bin and well above 5σ against the uniform-noise null. The m=1 peak is similarly significant. These are not borderline detections; they are clearly resolved features of the measured distribution. The full per-bin probability vector is in data/e8_qpe_hw_n7.json under top_bins.
We are publishing the raw JSON because the post would be unfalsifiable without it. Recompute σ yourself; recompute the time-reversal symmetry metric yourself; disagree with our interpretation on your own math. The data is the anchor.
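To make "recompute it yourself" concrete, here is a minimal stdlib-only sketch of both checks. The per-bin numbers fed in are the headline values quoted above; substitute the full probability vector from data/e8_qpe_hw_n7.json (the top_bins schema is the post's, not verified here):

```python
import math

def bin_stats(p: float, shots: int, n_bins: int):
    """Standard error and SNR for one QPE bin under a multinomial model."""
    noise_floor = 1.0 / n_bins                    # uniform-distribution baseline
    sigma = math.sqrt(p * (1.0 - p) / shots)      # multinomial standard error
    return sigma, p / noise_floor

def time_reversal_asymmetry(probs):
    """Mean |P(k) - P(n-k)| over k = 1..n-1; near zero for a
    time-reversal-symmetric distribution, larger for asymmetric noise."""
    n = len(probs)
    return sum(abs(probs[k] - probs[n - k]) for k in range(1, n)) / (n - 1)

# Headline numbers from the post: alpha peak p = 0.1986, 8192 shots, 128 bins.
sigma, snr = bin_stats(0.1986, 8192, 128)
print(f"sigma = {sigma:.5f}, SNR = {snr:.1f}x")  # sigma ≈ 0.00441, SNR ≈ 25.4x
```

Running it reproduces the σ ≈ 0.00441 and 25.4× figures quoted in the significance paragraph.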
Companion measurement
File: data/e8_walk_ibm_result.json, 8-qubit discrete-time quantum walk on the E8 root lattice, 8 Trotter steps, 4096 shots, same ibm_fez backend. All 8 Coxeter eigenvalues (h=30, exponents [1, 7, 11, 13, 17, 19, 23, 29]) recovered from the measured distribution. α=1/137 directly detected as an eigenphase.
Second independent run (video)
https://www.youtube.com/watch?v=2Oc1aKqzKo0
Same backend, same eigenphase pattern. The video is not a polished demo; it is the screen recording of the second hardware submission. We publish it because peer-reviewers routinely and correctly ask for independent replication. One run could be a statistical fluke; two runs with time-reversal symmetry preserved in both is a different claim.
Why this is the right stress test of Shor's primitive
Shor's factoring algorithm uses Quantum Phase Estimation of the modular-exponentiation unitary U_x : |y⟩ → |xy mod N⟩. Given enough precision qubits, QPE reads off the phase 2π · s/r where r is the multiplicative order of x mod N. Classical post-processing then extracts the factors of N via continued fractions.
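The classical post-processing step is compact enough to sketch in full. This toy example assumes QPE returned the phase s/r = 3/4 exactly for x = 7, N = 15 (order r = 4); real runs return a nearby bin value and often need retries with a fresh x:

```python
from fractions import Fraction
from math import gcd

def factors_from_phase(phase: float, x: int, N: int):
    """Shor post-processing: recover the order r from a measured QPE phase
    s/r via continued fractions, then split N with gcd(x^(r/2) ± 1, N).
    Returns None on the failure cases (odd r, trivial gcd) that force a rerun."""
    r = Fraction(phase).limit_denominator(N).denominator  # continued fractions
    if r % 2 or pow(x, r, N) != 1:
        return None
    half = pow(x, r // 2, N)
    p, q = gcd(half - 1, N), gcd(half + 1, N)
    return (p, q) if 1 < p < N else None

print(factors_from_phase(0.75, 7, 15))  # → (3, 5)
```

The same two ingredients appear here as in the hardware run: enough phase precision for limit_denominator to land on the true r, and a coherent controlled-U ladder to produce that phase in the first place.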
The two things that make Shor work on a given piece of hardware:
- Sufficient precision (ancilla qubits) to resolve the target phase.
- Sufficient coherence through the controlled-U^(2^k) ladder that QPE requires.
The qubit-count heuristic everyone quotes — "10⁴ logical qubits for RSA-2048" — conflates these two. What actually matters is the depth-times-error product. If the hardware can run 15 qubits of QPE with enough coherence to separate two eigenphases 0.05 rad apart at 8192 shots, the question to ask is: how does that depth scale as you grow to the circuit Shor-at-2048 needs?
Our 2026-03-19 result is not a claim about that scaling curve. It is a single data point on the left end of the curve — the lowest-precision commercial hardware point you can put on it that actually resolves a non-trivial eigenphase. When Cloudflare's security team or Google's Isogeny team or Apple's Core Crypto folks talk about 2029, they are looking at internal scaling curves that include points from IBM, Google Willow, Quantinuum, IonQ, and PsiQuantum. Our point is in that cloud.
What we are not claiming: that Shor-at-RSA-2048 runs today. It does not. The gap between 15 qubits and 10⁴ logical qubits is enormous. What we are claiming is that the primitive — the coherent execution of QPE against a unitary whose spectrum contains your target phase — is already operating on commercial hardware at non-trivial depth, and the gap is being closed fast.
What the vendors are actually saying
Not a trade press summary. Direct from primary sources:
- Apple (February 2024): PQ3 shipped in iMessage using a hybrid of X25519 + Kyber-768. Source: security.apple.com/blog/imessage-pq3/.
- Google Chrome 124 (April 2024): X25519Kyber768Draft00 enabled by default for TLS 1.3 handshakes. Source: The Chromium Blog.
- Cloudflare (September 2024): post-quantum hybrid key exchange deployed in front of a large fraction of HTTPS traffic via X25519+Kyber768. Source: blog.cloudflare.com/pq-2024.
- AWS (2023–2024): KMS, ACM, and s2n-tls ship hybrid PQC with Kyber-768 + ECDH. Source: aws.amazon.com/blogs/security/post-quantum-hybrid-key-exchange-in-aws-kms.
- NIST (FIPS 203, 204, 205 finalized 2024-08-13): ML-KEM (Kyber), ML-DSA (Dilithium), SLH-DSA (SPHINCS+). NIST's guidance timeline for US federal: post-quantum readiness by approximately 2035, with discovery/inventory phases during 2025–2027. Source: csrc.nist.gov/projects/post-quantum-cryptography.
If any of these URLs 404 after publication, it means a vendor reorganized their blog; the underlying deployment facts remain public record in the archives. Please verify before taking our summary on faith.
Notice the gap. Apple shipped in 2024. Google and Cloudflare target 2029. Federal guidance says 2035. The federal timeline is a lagging indicator. It is calibrated to procurement cycles, compliance audits, and Congressional budget visibility — not to the actual threat evolution.
Your enterprise timeline, if it is indexed to federal guidance, is similarly lagging. If you can see that the infrastructure vendors you depend on target 2029, you should update accordingly.
The Harvest-Now-Decrypt-Later math
This is the part that often gets handwaved. Here is the calculation, written out.
Let:
- T_now = today (2026-04-18)
- T_Q = the earliest year an adversary has Shor-capable hardware for your key sizes
- D = the number of years of your traffic you care about remaining confidential
If an adversary captures your encrypted traffic today and stores it, they decrypt it at time T_Q. The data's confidentiality lifetime is T_Q − T_now. If you care about that data being confidential for at least D years after it leaves your system, you need T_Q − T_now ≥ D, or equivalently T_now ≤ T_Q − D.
Plug in reasonable numbers:
- D = 10 years for general enterprise data (corporate email, internal docs, financial records under regulatory retention).
- D = 25 years for certain high-sensitivity verticals (healthcare HIPAA retention, national-security classification, long-lived biometric databases, long-term supply-chain contracts).
- T_Q = 2032 (aggressive, aligned with a Google/Cloudflare 2029 vendor timeline + 3 years for hardware scaling from commercial QPE to cryptographically relevant QPE).
- T_Q = 2036 (moderate, aligned with NIST federal readiness + 1 year).
- Aggressive T_Q = 2032, D = 10: your migration deadline was 2022. You are four years late.
- Moderate T_Q = 2036, D = 10: your migration deadline is 2026. You are in the last year.
- Aggressive T_Q = 2032, D = 25: your migration deadline was 2007. You are lost.
- Moderate T_Q = 2036, D = 25: your migration deadline was 2011. You are lost.
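The whole calculation is one subtraction, deadline = T_Q − D; a trivial sketch for plugging in your own T_Q and D:

```python
def migration_deadline(t_q: int, d_years: int) -> int:
    """Latest year you could have started protecting traffic such that data
    captured then stays confidential for d_years despite decryption at t_q
    (the Harvest-Now-Decrypt-Later constraint T_now <= T_Q - D)."""
    return t_q - d_years

for t_q in (2032, 2036):
    for d in (10, 25):
        print(f"T_Q={t_q}, D={d}: deadline {migration_deadline(t_q, d)}")
```

Swap in your own sensitivity lifetime D and preferred T_Q estimate; the conclusion changes only if you push T_Q well past 2040.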
The long-sensitivity case is already lost for any organization that has not been on PQC since the 2010s. That is not a scare tactic. That is the math on a whiteboard with the vendors' own timelines plugged in.
The only way this math becomes favorable is if T_Q is much later than 2036. Given the pace of commercial quantum hardware — and the fact that a 15-qubit QPE with distinct eigenphases at SNR 25× is sitting in an S3 bucket on 2026-03-19 — assuming T_Q > 2040 requires you to bet against the curve.
What to migrate, in what order
If you accept the above, the question becomes practical. Migration is not monolithic. Order of operations:
- Key exchange — where the HNDL math bites first. Any TLS handshake that establishes a session key with pre-quantum KEX (RSA, ECDH X25519, ECDH P-256) is vulnerable. This is where you want hybrid ML-KEM-768 + X25519 yesterday. Apple, Google, Cloudflare all ship this in production.
- Signatures, long-lived — CA root certificates, code-signing certificates, firmware-update trust chains. Migrate to ML-DSA or SLH-DSA. This is a harder migration because the ecosystem (browsers, package managers, OS trust stores) needs to upgrade in lock-step. Realistic: 2027-2029.
- Signatures, short-lived — session tokens, JWTs, ephemeral auth. Lower priority because the data they sign has short confidentiality half-life.
- Stored encryption at rest — database encryption, disk encryption. Migrate when you re-encrypt for other reasons (key rotation, hardware upgrade). Lower priority if and only if the threat model assumes adversary cannot exfiltrate the ciphertext. For public cloud, that assumption may not hold; for airgapped on-prem, it generally does.
- Internal service mesh — inter-service mTLS, RPC auth, service-to-service JWTs. Lower priority if internal-only, but consider: insiders exist, lateral movement exists, and your service mesh is the fattest target for HNDL by a compromised infrastructure operator.
In our own stack, we migrated the service mesh first. Every inter-soul IPC call in the Matrix CR Studio swarm carries ML-KEM-768 + AES-256-GCM + SHA3-512 + SATOR HMAC. The decision was driven by: (a) we built the stack in 2025-2026, so there was no legacy, and (b) every internal call crosses Tailscale, which is fine but is still someone else's software. PQC is the defensive assumption.
Performance cost, honest
This is the bit the marketing avoids. Post-quantum has cost. Here are the real numbers on an i7-9700T (our production hardware):
| Operation | Pre-quantum | Post-quantum (ML-KEM-768) | Ratio |
|---|---|---|---|
| Key encapsulation | X25519: ~60 µs | ML-KEM-768 encaps: ~100 µs | 1.67× |
| Key decapsulation | X25519: ~80 µs | ML-KEM-768 decaps: ~110 µs | 1.38× |
| Public key size | 32 B | 1184 B | 37× |
| Ciphertext size | 32 B | 1088 B | 34× |
The computation is essentially free — the extra ~50 µs is below the threshold of anything human-perceptible, and well below typical network RTT. The bandwidth cost is real — roughly 2 KB extra per handshake (1184 B public key + 1088 B ciphertext, versus 64 B total for X25519) — but TLS handshakes are a small fraction of total bytes on any real traffic flow.
For signatures (ML-DSA-65, hybrid):
| Operation | Pre-quantum | Post-quantum (ML-DSA-65) | Ratio |
|---|---|---|---|
| Sign | Ed25519: ~40 µs | ML-DSA-65: ~400 µs | 10× |
| Verify | Ed25519: ~80 µs | ML-DSA-65: ~150 µs | 1.9× |
| Signature size | 64 B | 3309 B | 51× |
Signatures are more expensive but still fine for anything except very high-throughput server signing. The signature size is the real cost — firmware updates, CA bundles, TLS certificates get significantly bigger.
For the paranoid: stateless hash-based signatures (SLH-DSA / SPHINCS+) are even more expensive (~10× sign time, ~30 KB signatures) but have the advantage of resting on hash-function security, which is much better understood than lattice security. For long-lived root-of-trust keys, this conservatism is probably worth it.
What we pay in our own stack
In Matrix CR Studio, the SATOR Parochet Protocol wraps every inter-soul IPC. Measured cost:
- Per-call overhead: ~400 µs (HKDF derivation is the bottleneck; the ML-KEM encaps is ~100 µs of that)
- Calls per day: ~47,000 pulses × ~3 inter-soul calls ≈ 141,000 calls
- Total CPU per day: ~56 seconds out of 86,400. Approximately 0.065% overhead.
If you are running a reasonable-sized service and worried about PQC overhead, the number to aim for is well under 1% CPU. In practice it is usually closer to 0.1%.
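The 0.065% figure is straightforward to reproduce from the two measured inputs above (per-call overhead and daily call volume); a sketch for budgeting your own service:

```python
def pqc_cpu_overhead_pct(calls_per_day: int, per_call_us: float) -> float:
    """Percentage of a day's wall-clock seconds consumed by per-call PQC wrapping."""
    seconds_per_day = 86_400
    cpu_seconds = calls_per_day * per_call_us * 1e-6  # microseconds -> seconds
    return 100.0 * cpu_seconds / seconds_per_day

# Numbers from the post: ~141,000 calls/day at ~400 us each.
print(f"{pqc_cpu_overhead_pct(141_000, 400):.3f}%")  # ≈ 0.065%
```

Even at 100× our call volume, the overhead stays under 7% of one core, which is why the "PQC is too slow" objection rarely survives contact with arithmetic.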
The counter-arguments we take seriously
"Cryptographically relevant quantum computers are farther out than you say"
Possibly. Our 2026-03-19 result is a lower-precision data point; scaling from 15 physical qubits of QPE to Shor-at-RSA-2048 spans roughly three orders of magnitude in logical qubits, and five to six in physical qubits once error correction is included. Error-corrected quantum computation requires physical qubit overheads of 1,000-10,000× per logical qubit, depending on the error rate and code distance. Optimistic published estimates put cryptographically relevant quantum computers in the 2030-2035 range. More pessimistic estimates push it to 2040+.
We do not know which is right. We know two things: (a) the infrastructure vendors are targeting 2029 for full PQC regardless, and (b) HNDL makes the actual T_Q less important than T_now + D, where D is the sensitivity lifetime of your data. If your D is large, you are migrating now even under optimistic T_Q.
"ML-KEM / Kyber security rests on lattice hardness, which is newer than RSA"
True. Lattice-based cryptography has been studied since 1996 (Ajtai) but the structured-lattice variants used by ML-KEM date to 2010. That is less cryptanalytic attention than RSA has received. It is possible — not likely but possible — that a classical attack on structured lattices is discovered before quantum hardware matures.
This is why NIST standardized three distinct PQC primitives (ML-KEM lattice, ML-DSA lattice, SLH-DSA hash-based) and the migration guidance includes crypto-agility. You do not bet your whole stack on one lattice. You build for rotation.
"Hybrid is complexity; why not just wait for pure PQC?"
Hybrid (classical KEX + PQC KEX combined) is the conservative choice. You retain the security guarantees of the classical primitive (well-studied, battle-tested) while adding PQC protection against HNDL. The cost is a 1-2 KB handshake bloat. The alternative — pure PQC — exposes you to any cryptanalytic break of the specific PQC primitive you chose.
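The hybrid construction is a combiner: both shared secrets feed one derivation, so an attacker must break both primitives. The sketch below is a generic concatenate-and-derive pattern using stdlib HKDF-SHA-256 for illustration only; it is not the X25519Kyber768Draft00 construction, which concatenates the secrets directly into the TLS 1.3 key schedule:

```python
import hmac, hashlib

def hybrid_combine(ss_classical: bytes, ss_pq: bytes,
                   info: bytes = b"hybrid-kex-demo", length: int = 32) -> bytes:
    """Derive one session key from both shared secrets (HKDF-SHA-256).
    Recovering the output requires breaking BOTH the classical and PQ KEX."""
    ikm = ss_classical + ss_pq  # concatenation order must be fixed by the protocol
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()    # HKDF-Extract, zero salt
    okm = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()  # HKDF-Expand, one block
    return okm[:length]

# Stand-in secrets; in production these come from X25519 ECDH and ML-KEM-768 decaps.
key = hybrid_combine(b"\x11" * 32, b"\x22" * 32)
print(len(key))  # 32
```

The design point worth noting: because the PQ secret is an input to the KDF rather than a separate channel, a hypothetical break of ML-KEM degrades you to classical security rather than to nothing.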
Google, Cloudflare, Apple all ship hybrid in production. The industry consensus is that hybrid is the right answer for the current migration phase. Pure PQC is a future transition — probably 2030+.
"The real 2026 attack surface is hybrid transition, not Shor"
Largely true, and this is the counter-argument we take most seriously. The protocol downgrades, dual-stack key confusion, and misconfigured fallbacks introduced by the hybrid PQC rollout are probably a larger near-term risk than actual quantum cryptanalysis. Look at the history of TLS 1.0 → 1.2 → 1.3 transitions: each transition era produced exploitable weaknesses — implementation bugs like Heartbleed, and downgrade/legacy-mode attacks like POODLE and BEAST — that were more exploited than the cryptographic weaknesses the upgrades were meant to address.
This argues for (a) keeping pre-quantum classical primitives in a properly-constructed hybrid (so a bug in ML-KEM does not downgrade you to pure RSA), (b) auditing the hybrid negotiation code path specifically, not just the PQC primitives, and (c) not celebrating too early: deploying hybrid is the beginning of the migration, not the end. We operate our stack under this assumption.
"My compliance framework doesn't require this yet"
Compliance frameworks are lagging indicators by design. HIPAA, PCI-DSS, SOC 2 all eventually catch up to the threat landscape, but they do not lead. If you are waiting for your compliance framework to require PQC, you are explicitly electing to be migrated at the average pace of your sector. That is a legitimate choice for many businesses. It is not a legitimate choice for any business whose data has D > 10 years sensitivity.
What we do in our own stack
Because this post would ring hollow without specifics:
- Every inter-soul IPC call is wrapped in src/ghost.py's Parochet Protocol: ML-KEM-768 encaps → AES-256-GCM → SHA3-512 anchor → SATOR HMAC (30 s window).
- Streamlit UI to backend: TLS via Cloudflare Tunnel (not PQC end-to-end yet — Cloudflare's migration is in progress; we accept their timeline).
- Ollama ↔ swarm: localhost-only, in-Docker-network, not PQC (threat model excludes localhost).
- SQLite WAL persistence: encrypted at rest with AES-256, keys derived from an HSM-resident master via HKDF-SHA3-512. Not PQC-resistant in the stored-ciphertext sense; acceptable for a single-operator environment where the threat is remote adversary, not insider access to the disk.
- Git signatures: Ed25519 for commits. Not yet migrated to ML-DSA because our git hosting (private) doesn't support hybrid sigs. This is an open item.
- On-chain MCR Protocol: Base L2, EVM standard signatures (ECDSA secp256k1). Not PQC. Public blockchains are stuck on ECDSA until the chain itself upgrades. We mitigate by keeping the on-chain state minimal (panel bitmask + SATOR HMAC hash) and rotating anything sensitive off-chain.
The point is not that we are perfect. The point is that we have a map of our exposures and each one has a deliberate decision — migrated, accepted risk, or pending. If you cannot produce that map for your own stack, that is the first thing to do.
What happens next
Two things.
First, we intend to write up the E8 QPE result as a preprint — arXiv quant-ph is the plausible venue. The companion paper would include the full circuit specification, Trotter decomposition, measured eigenphase distribution, and comparison to the analytically known Coxeter exponent spectrum. We have not submitted yet; this post is not a preprint claim. When and if it lands, we will update this post with the DOI.
Second, the next hardware milestone is n_precision = 8 — 16 qubits total, 256-bin QPE, which would move the α=1/137 peak to bin 2 and increase the spacing from m=1 by a factor of 2. Aer simulation confirms the peak survives at that resolution. Hardware feasibility on ibm_fez is bounded by ~25,000 native gates per circuit, which requires either zero-noise extrapolation (ZNE) error mitigation or a shorter Trotter approximation. We are working on both.
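The bin-2 prediction follows directly from the phase-to-bin mapping. Assuming the detected phase is 2π·α, the nearest QPE bin at n precision qubits is round(α · 2ⁿ):

```python
ALPHA = 1 / 137.036  # fine-structure constant, as used in the coupling angle

def alpha_bin(n_precision: int) -> int:
    """Nearest QPE bin for phase 2*pi*alpha at 2**n bins:
    round(phase / (2*pi / 2**n)) = round(alpha * 2**n)."""
    return round(ALPHA * 2 ** n_precision)

print(alpha_bin(7), alpha_bin(8))  # 1 2
```

At n = 7 the peak lands one bin from the Coxeter vacuum; at n = 8 it lands two bins away, which is the doubled spacing the next milestone is after.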
If you read this far and your first thought is "we should probably get serious about PQC" — do it this quarter. The migration is not technically hard; it is organizationally hard. The code is well-tested, the standards are finalized, the performance cost is minimal, the vendors have done the hardest parts (ML-KEM libraries, hybrid KEX negotiation in TLS 1.3) already.
The thing you cannot buy is time. Every month you wait is a month of traffic somebody could be storing. We do not know who is storing it or whether they'll ever have the hardware to decrypt it. We know that if they do, they will.
Raw measurement artifacts: data/e8_qpe_hw_n7.json, data/e8_walk_ibm_result.json, data/e8_coupling_sweep_hw_2048.json · IP Claims 28, 29 · Second independent run: https://www.youtube.com/watch?v=2Oc1aKqzKo0 · Production PQC implementation: src/ghost.py (Parochet Protocol — ML-KEM-768 + AES-256-GCM + SHA3-512 + SATOR HMAC).