Quantum Computing's Time-Travel Problem
When encryption breaks, it breaks backwards – and the collection has been running for years
It has been an extraordinary twelve months in quantum computing. A sequence of papers has compressed the estimated resources needed to break modern encryption by three orders of magnitude. What was thought to require twenty million qubits in 2019 was revised to under a million in May 2025, under 100,000 in February 2026, and – in two papers published within forty-eight hours of each other at the end of March – to as few as 10,000.
These are not fringe claims. They come from Google Quantum AI, Caltech, Stanford, and the Ethereum Foundation. They were published on arXiv, coordinated with the US government, and covered in Nature under the headline “‘It’s a real shock.’” Bas Westerbaan, a mathematician at Cloudflare – the company that routes a significant share of global internet traffic – was quoted saying, “It’s a real shock for us too.”
This would be alarming enough if the threat were purely forward-looking. It is not. Nation-states – principally the United States and China, but not exclusively – have been intercepting and archiving encrypted communications for years, on the reasonable assumption that quantum computers will eventually be able to read them. The strategy is called “harvest now, decrypt later,” and it means the data is already taken. What has not yet happened is the reading. And when it does happen, the readers will not be short of analytical capacity – the same period that compressed the quantum timeline also produced AI systems capable of processing, cross-referencing, and extracting intelligence from datasets at a scale that would have required entire agencies a generation ago. The archive will not be read by humans sitting at desks. It will be read by machines, very quickly.
Post-quantum cryptography can protect what is transmitted from today onwards. It cannot protect what has already been collected.
This is the kind of question that sits across multiple domains – quantum physics, cryptography, geopolitics, financial regulation, intelligence tradecraft – and cannot be answered from inside any one of them. At CanaryIQ, we track this as a cross-cutting risk with cascade impacts across financial services, defence, critical infrastructure, and healthcare. This briefing synthesises what has changed in the past twelve months, why the threat is retrospective rather than prospective, and what the implications are for anyone whose data needs to remain confidential beyond 2030.
1. The Repricing
In 1994, the mathematician Peter Shor published a paper demonstrating that a quantum computer could, in principle, break the encryption systems that protect virtually all modern digital communications. For three decades, this remained a theoretical concern. The quantum hardware required to run Shor’s algorithm at scale was so far beyond existing capability that most practitioners treated it as a problem for a future generation.
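The reduction at the heart of Shor's result is worth seeing concretely. Factoring N is reduced to finding the multiplicative order of a random base a modulo N; the quantum computer's only job is to find that order in polynomial time, while everything around it is classical number theory. A toy sketch in Python – the order-finding here is brute force, which is exactly the part a quantum machine replaces:

```python
from math import gcd

def order(a: int, n: int) -> int:
    """Multiplicative order of a mod n, found by brute force.
    This exponential-time loop is the step Shor's algorithm makes polynomial."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_reduction(n: int, a: int):
    """Classical half of Shor: turn an order r into factors of n."""
    assert gcd(a, n) == 1
    r = order(a, n)
    if r % 2 or pow(a, r // 2, n) == n - 1:
        return None  # unlucky choice of a; Shor simply retries with another
    half = pow(a, r // 2, n)
    return gcd(half - 1, n), gcd(half + 1, n)

print(shor_reduction(15, 7))  # (3, 5) – the factors of 15
```

Everything the results below compete over is the cost of running that order-finding step on real hardware, at key sizes where the brute-force loop would outlast the universe.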
That distance has been collapsing. Here is the sequence.
In May 2025, Craig Gidney at Google Quantum AI published a paper showing that a 2048-bit RSA key – the encryption standard protecting most of the internet’s secure communications – could be factored with fewer than one million physical qubits, running for less than a week. His own previous estimate, published in 2019, was twenty million qubits running for eight hours. The qubit count fell by a factor of twenty, and the key innovations were all in how the computation is organised – better arithmetic, denser storage of idle qubits, cheaper error correction – rather than in the physical hardware. The machines did not change. The mathematics found a shorter path through them.
In February 2026, a Sydney-based startup called Iceberg Quantum pushed further, claiming the number could fall below 100,000 qubits using a more efficient error-correction architecture. Scott Aaronson’s assessment, on his blog, was characteristically measured: “I have no idea by how much this shortens the timeline for breaking RSA-2048 on a quantum computer. A few months? Dunno.” But he added the formulation that cuts through the noise: when you need to scale up qubit count by a thousand times while maintaining quality, “it becomes important to ask, well, how many years? Three? Four? Five?”
Then came the March papers.
The Google/Stanford/Ethereum Foundation paper targeted the elliptic curve cryptography that protects Bitcoin, Ethereum, and most digital signature systems. Their optimised implementation of Shor’s algorithm could break 256-bit ECC with fewer than 1,200 logical qubits and 90 million Toffoli gates – roughly a twenty-fold reduction in resources over the prior best estimate. Translated to hardware: fewer than 500,000 physical qubits, running for approximately nine minutes.
The Caltech/Oratomic paper, published the following day, proposed a new error-correction architecture for neutral-atom quantum computers. Their claim: the same class of computation could be performed with as few as 10,000 physical qubits. The approach exploits the ability of neutral atoms, held in place by laser tweezers, to be physically shuttled across the array and entangled over long distances. This cuts the ratio of physical qubits needed per error-corrected logical qubit from roughly 1,000-to-1 down to approximately 5-to-1. John Preskill – the Caltech physicist who coined the term “quantum supremacy” – is on the founding team. “I’ve been working on fault-tolerant quantum computing longer than some of my coauthors have been alive,” he said. “Now at last we’re getting close.”
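The arithmetic behind that claim is worth a quick sanity check. To first order, the physical qubit budget is the logical qubit count multiplied by the error-correction overhead, and it is the overhead term that the Caltech/Oratomic architecture attacks. A rough illustration in Python, treating both ratios as the order-of-magnitude figures given in the papers above:

```python
# First-order estimate: physical qubits ~= logical qubits x overhead ratio.
# Both ratios are order-of-magnitude figures, not engineering specifications.
LOGICAL_QUBITS_FOR_ECC = 1_200   # Google/Stanford estimate for 256-bit ECC

for architecture, overhead in [("surface code, ~1,000:1", 1_000),
                               ("Oratomic neutral-atom, ~5:1", 5)]:
    print(f"{architecture}: ~{LOGICAL_QUBITS_FOR_ECC * overhead:,} physical qubits")

# surface code, ~1,000:1: ~1,200,000 physical qubits
# Oratomic neutral-atom, ~5:1: ~6,000 physical qubits
```

The Google/Stanford figure of under 500,000 physical qubits already implies a better ratio than the canonical 1,000-to-1; the point of the exercise is that the overhead term, not the logical qubit count, dominates the budget.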
I want to be precise about what these numbers mean. Every one of these results was an algorithmic or architectural improvement, not a hardware breakthrough. Nobody built a new machine. Researchers found more efficient ways to use the machines we already have blueprints for. And because algorithmic improvements compound with hardware improvements, the distance between where we are and a cryptographically relevant quantum computer – what the field calls a CRQC – is shrinking from both ends simultaneously.
The largest demonstrated neutral-atom qubit array is 6,100 atoms, assembled in Manuel Endres’s Caltech lab in September 2025 and published in Nature. The Oratomic paper says 10,000 would suffice for Shor’s algorithm. In raw qubit count, the gap between what exists and what is needed has gone from four orders of magnitude to less than one. The gap in engineering capability – achieving the error rates, gate fidelities, and sustained coherence that fault-tolerant computation requires – is larger, and nobody should confuse a trapped-atom count with a working cryptographic attack. But the direction is clear, and the distance is compressing on both axes.
Nobody knows when a CRQC will be built. The honest range is somewhere between five years and twenty, with the weight shifting towards the shorter end with every paper. But the question of when is the wrong question. The right question is what has already been taken.
2. The Harvest
There is a strategy in intelligence collection with a name that is also its explanation: “harvest now, decrypt later.” Adversaries intercept and store encrypted communications today, on the assumption that quantum computers will be able to decrypt them in the future. The encrypted data sits in archives – government data centres, private cloud environments, tape storage – for years, decades if necessary. It does not need to be read now. It just needs to exist.
This is not a theoretical concern. It is an active, acknowledged, ongoing intelligence operation conducted by multiple nation-states simultaneously.
The mechanics are mundane. Tap undersea fibre-optic cables. Record VPN traffic at network boundaries. Exfiltrate encrypted database backups. Copy email archives and cloud storage snapshots. None of this requires breaking the encryption. The attacker needs bandwidth and disk space, not a quantum computer. Storage costs have fallen roughly 95 per cent since 2010. A petabyte of archived ciphertext costs less than a mid-range car. For a nation-state with a strategic intelligence mandate, the economics are trivial: store everything, wait, decrypt later.
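To make that concrete: assuming archival storage at roughly $15 per terabyte – an illustrative figure, not a quoted price – even recording a fully saturated backbone tap for a year is a rounding error in an intelligence budget:

```python
# Illustrative harvest economics. The cost-per-TB figure is an assumption
# for archival disk; the conclusion survives even large errors in it.
COST_PER_TB_USD = 15

print(f"1 PB of ciphertext: ~${1_000 * COST_PER_TB_USD:,}")  # ~$15,000

SECONDS_PER_YEAR = 365 * 24 * 3600
tb_per_year = (10e9 / 8) * SECONDS_PER_YEAR / 1e12  # a 10 Gb/s tap, in TB/year
print(f"10 Gb/s for a year: ~{tb_per_year:,.0f} TB, "
      f"~${tb_per_year * COST_PER_TB_USD:,.0f}")    # ~39,420 TB, ~$591,000
```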
A clarification worth making: the quantum threat targets public-key cryptography – the RSA and ECC systems used for key exchange and digital signatures. Symmetric encryption like AES-256 is not meaningfully weakened by quantum computers. But almost all encrypted communication relies on public-key cryptography to establish the session in the first place. Break the key exchange and the symmetric layer is irrelevant – you have the key.
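The dependency is easy to see in code. In a TLS-style handshake, the symmetric session key is derived entirely from the elliptic-curve shared secret; recover that secret with Shor's algorithm and the symmetric layer falls with it. A minimal sketch using Python's `cryptography` library, with X25519 standing in for the key exchange and HKDF for the derivation:

```python
# The AES-256 session key protects nothing beyond the ECDH secret it is
# derived from. Requires: pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

client, server = X25519PrivateKey.generate(), X25519PrivateKey.generate()
shared_secret = client.exchange(server.public_key())  # what Shor recovers

session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"tls-style session").derive(shared_secret)
```

An interceptor who archives the handshake and the ciphertext needs only the shared secret – obtainable, eventually, from the public values alone – to rerun the derivation and read everything.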
I want to draw out why this changes the shape of the threat, because most commentary on quantum computing focuses on the wrong date. The question everyone asks is “when will a quantum computer break encryption?” The question they should be asking is: “how long does my data need to remain confidential, and has it already been copied?”
The critical variable is the gap between data sensitivity lifespan and time-to-decryption. Diplomatic cables retain their value for decades. Weapons system designs remain sensitive for the lifetime of the platform. Intelligence source identities are sensitive permanently. Health records, biometric data, trade secrets, negotiating positions – all of these have confidentiality requirements measured in years or decades, not days. If a communication was intercepted in 2024, and a CRQC comes online in 2032, the eight-year gap is irrelevant. The data is just as compromising.
Post-quantum cryptography – the set of encryption algorithms designed to resist quantum attack – cannot retroactively protect data that has already been captured. PQC protects the future. It does nothing for the past. Every day that an organisation continues transmitting sensitive material under classical encryption is another day of plaintext-in-waiting added to an adversary’s archive.
NIST published its first post-quantum cryptographic standards in August 2024. The algorithms exist. They work. The migration path is understood. The typical estimate for PQC migration in a large enterprise is seven or more years. The sooner migration begins, the smaller the window of retroactive exposure – and for most organisations, that window is already wider than they realise.
3. The Geopolitics of the Archive
We track several risk scenarios related to quantum cryptographic disruption, including the cascade effects on financial infrastructure, government communications, and critical systems. The geopolitical dimension is where the risk compounds most dangerously.
The United States and China accuse each other of large-scale data harvesting, and both accusations are almost certainly accurate.
The former FBI director described China’s hacking programme as “bigger than every other major nation combined.” China’s Ministry of State Security responded by accusing the NSA of “systematic” attacks to steal Chinese data. Each side denies the other’s charge; neither denial is credible. The intelligence logic is identical on both sides: if your adversary’s encrypted traffic will eventually become readable, and the cost of collecting and storing it is low, the rational strategy is to collect everything and sort it out later. Both the United States and China are rational.
Russia, Iran, and several other states are running the same playbook at smaller scale. The Five Eyes alliance has the most extensive signals intelligence infrastructure for collection. China has the most aggressive state-backed cyber espionage capability for targeted exfiltration. The result is a global intelligence competition in which the prize is not today’s secrets, but tomorrow’s ability to read today’s secrets.
China’s Dual-Track Strategy
Beijing’s position reveals the strategic logic most clearly, because it is doing both things at once: building quantum computers to attack others’ encryption, and building its own post-quantum defences to protect its own.
In February 2025, China launched the Next-Generation Commercial Cryptographic Algorithms Programme – its own PQC initiative, independent of NIST’s standards. This is not duplication. It is strategic distrust. Beijing is unwilling to rely on American-designed cryptographic standards to protect Chinese state communications, and may be concerned that NIST’s standards contain weaknesses that Western intelligence agencies could exploit. The submission deadline for new algorithms is June 2026. China expects to have its own national PQC standards within three years.
The quantum computing ecosystem backing this is deeper than most Western commentary acknowledges. Origin Quantum’s 72-qubit Wukong processor handled over half a million computing jobs in its first year of commercial operation. China Telecom’s Tianyan platform provides cloud access to superconducting systems totalling 880 qubits across four machines. QuantumCTek, now controlled by a state-owned enterprise, supplies the hardware for China’s national quantum key distribution backbone: a fibre-optic network connecting Beijing to Shanghai that has been operational for years.
US export controls on quantum technology were designed to slow China down. In our assessment, the effect has been the opposite: a crash programme in domestic manufacturing across dilution refrigerators, cryogenic electronics, and photonic components, with funding pouring into every major qubit modality simultaneously. We see the same pattern in our semiconductor export controls analysis – restrictions that hand domestic manufacturers a political mandate and a captive market.
The dual-track logic is clean. Defend your own future communications with quantum-resistant cryptography. Hoard everyone else’s current communications for future decryption. That is not paranoia; it is game theory.
The Asymmetry That Matters
Here is the strategic point that most analysis misses: China does not need to build the first CRQC to benefit from the harvest. It only needs to build one eventually. The archive is patient. Every year of delay in Western PQC adoption is another year of retroactively decryptable material added to the pile.
The inverse is also true. Every year of Western intelligence collection against Chinese targets creates an archive that a future American or allied CRQC could unlock. The race is not merely to build the machine. It is to have accumulated the largest, most strategically valuable archive by the time someone does.
4. The Classification Problem
There is a historical precedent that makes this story more uncomfortable, not less.
Public-key cryptography (the mathematical system that underpins virtually all modern encryption) was invented by Whitfield Diffie and Martin Hellman in 1976, and implemented as RSA by Rivest, Shamir, and Adleman in 1977. This was the accepted history until 1997, when GCHQ declassified documents revealing that British mathematician Clifford Cocks had described an equivalent system in an internal memo in 1973. Three years before the academic world “invented” public-key cryptography, a British intelligence agency already had it and kept it secret.
The gap between what a nation-state knows and what it discloses is determined by strategic advantage, not scientific norms.
Google’s ECDLP paper, published in March 2026, contains a line that deserves more attention than it has received: “It is conceivable that the existence of early CRQCs may first be detected on the blockchain rather than announced.” This is Google Quantum AI stating that a state actor might achieve cryptographic quantum capability and choose not to tell anyone. The first sign would not be a press conference. It would be Bitcoin moving out of wallets whose private keys should be unknowable. There are roughly 6.9 million Bitcoin in addresses with exposed public keys. At current prices, that is a very large incentive.
Scott Aaronson, reading the March 2026 results, reached for the comparison that matters:
When I got an early heads-up about these results—especially the Google team’s choice to “publish” via a zero-knowledge proof—I thought of Frisch and Peierls, calculating how much U-235 was needed for a chain reaction in 1940, but not publishing it, even though the latest results on nuclear fission had been openly published just the year prior. Will we, in quantum computing, also soon cross that threshold?
The threshold he meant was the one Frisch and Peierls faced: the point at which a result becomes too dangerous to publish in full.
The cryptography community pushed back. Their position, grounded in decades of vulnerability disclosure practice: you publish. If publishing causes people still running quantum-vulnerable systems to panic, then perhaps that is exactly what needs to happen right now.
Both sides are probably correct.
Google chose a middle path. It published its ECC results via a zero-knowledge proof – a cryptographic technique that lets independent researchers verify the result without revealing the attack circuit. This is the first time a major mathematical result has been announced this way. The team coordinated with the US government before publication and is proposing this framework – responsible disclosure via zero-knowledge verification – as the norm for future quantum vulnerability research. The cryptography community is, in real time, developing its own classification culture.
That alone should tell you how close the field believes we are.
5. What This Means For You
The leaders have already moved. Chrome, Signal, and iMessage already use post-quantum key exchange. Google has set a 2029 deadline for completing its full internal post-quantum cryptography migration. Cloudflare – which handles over 65 per cent of human web traffic – announced in April 2026 that it is matching that 2029 target, explicitly citing the March papers as the reason for accelerating. NSA’s CNSA 2.0 mandates quantum-resistant cryptography for all new national security systems by January 2027. NIST’s published timeline calls for quantum-vulnerable algorithms to be deprecated after 2030 and disallowed after 2035.
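Those deployments are hybrid: the session key is derived from a classical shared secret and a post-quantum one together, so it stays safe unless both are broken. A minimal sketch of the combining step – the X25519 half is real; the post-quantum half is a placeholder standing in for an ML-KEM shared secret, not any particular library's API:

```python
# Hybrid key exchange, combining step only. pq_ss is a hypothetical stand-in
# for the output of a real ML-KEM encapsulation, not an actual library call.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

classical_ss = X25519PrivateKey.generate().exchange(
    X25519PrivateKey.generate().public_key())
pq_ss = os.urandom(32)  # placeholder: a real ML-KEM-768 shared secret

# Concatenate-and-derive: breaking X25519 alone no longer yields the key.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"hybrid key exchange").derive(classical_ss + pq_ss)
```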
The consumer platforms and infrastructure providers are migrating. The question is whether enterprises, governments, and financial institutions – the organisations holding the data that intelligence agencies actually want – are keeping pace. Most are not. If you are behind these timelines, you are behind the organisations whose threat modelling you should be matching.
The calculation is blunt. Take the number of years your most sensitive data must remain secret. Add the number of years it will take to complete PQC migration. If that sum exceeds the number of years until a CRQC becomes available, you are already exposed – and every month of inaction widens the window.
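This is the inequality usually attributed to Michele Mosca: with x years of required secrecy, y years of migration, and z years until a CRQC, you are exposed whenever x + y > z. As a sketch, with illustrative inputs only:

```python
def retroactive_exposure(secrecy_years: float, migration_years: float,
                         years_to_crqc: float) -> float:
    """Mosca-style test: years of material that will still be sensitive
    when it becomes decryptable. Zero means no exposure."""
    return max(0.0, secrecy_years + migration_years - years_to_crqc)

# Illustrative inputs: 10-year secrets, a 7-year migration, a CRQC in 8 years.
print(retroactive_exposure(10, 7, 8))  # 9.0 years of exposed material
```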
If you hold long-lived secrets – government agencies, defence contractors, financial institutions, healthcare organisations, critical infrastructure operators – begin cryptographic inventory immediately. Identify which systems rely on RSA, ECC, or other quantum-vulnerable algorithms. Prioritise migration based on data sensitivity lifespan. Start with key exchange protocols, where PQC can be deployed in hybrid configurations alongside classical encryption, and work outward.
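The first pass of that inventory is smaller than it sounds. A sketch that flags quantum-vulnerable public keys in a directory of certificates, using Python's `cryptography` library – the path is an assumed example, and a real inventory also covers TLS configurations, SSH keys, code signing, and hardware security modules:

```python
# Flag certificates whose public keys Shor's algorithm can break.
# The directory is an assumed example location, not a universal default.
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

for pem in Path("/etc/ssl/certs").glob("*.pem"):
    try:
        cert = x509.load_pem_x509_certificate(pem.read_bytes())
    except ValueError:
        continue  # not a parseable certificate; skip
    key = cert.public_key()
    if isinstance(key, (rsa.RSAPublicKey, ec.EllipticCurvePublicKey)):
        print(f"QUANTUM-VULNERABLE  {pem.name}  ({type(key).__name__})")
```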
If you are a policymaker, the structural observation is this: the quantum threat has inverted the traditional timeline of cybersecurity risk. Normally, the attack comes first and the defence follows. Here, the collection has come first. The attack will follow whenever the hardware matures. Regulatory frameworks that treat quantum risk as a future concern are mispricing the present.
If you are an investor, the signal is the pace of algorithmic compression. The qubit estimates for breaking RSA-2048 have dropped by roughly twenty times per publication cycle. The estimates for ECC have dropped faster. The companies and sectors most exposed are those holding large quantities of classically encrypted long-lived data – financial services, healthcare, government contracting, critical infrastructure – and those that have not begun PQC migration planning. In our view, the market has not priced this.
Conclusion
Quantum computing has spent three decades as a future problem. In the past twelve months, a series of papers from Google, Caltech, Stanford, and others has repriced the distance between now and the point at which modern encryption breaks. The estimates moved by orders of magnitude, and they moved in one direction.
But the repricing is not the threat. The threat is the archive – the collection of encrypted data that has been building for years, waiting for the moment it becomes readable. Post-quantum cryptography can protect what is transmitted from today onwards. It cannot protect what has already been taken. And every day of continued transmission under classical encryption adds to the pile.
Expert surveys in 2025 put the probability of a cryptographically relevant quantum computer emerging within the next ten years at between 28 and 49 per cent – the highest in the seven years that figure has been tracked – and the recent publications will likely push it higher still. The infrastructure companies have set their deadlines. The standards bodies have published their timelines. Governments have been filling their archives for a decade.
What cannot be undone is already done. What can still be controlled is what happens from here. The standards exist. The migration paths are published. The engineering is understood. The only thing missing – and it is the only thing that matters now – is the decision to start.
CanaryIQ maintains analytical positions on quantum computing, post-quantum cryptography, semiconductor supply chains, and AI infrastructure, along with risk scenarios covering technology export controls, signals intelligence escalation, and cryptographic disruption cascades. This briefing draws on that work.

