<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[CanaryIQ Public Briefings]]></title><description><![CDATA[Public Briefings from CanaryIQ - the emerging technologies platform.]]></description><link>https://briefings.canaryiq.com</link><image><url>https://substackcdn.com/image/fetch/$s_!X37B!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e37932d-787d-4e34-905d-a867f8b61822_2160x2160.png</url><title>CanaryIQ Public Briefings</title><link>https://briefings.canaryiq.com</link></image><generator>Substack</generator><lastBuildDate>Mon, 11 May 2026 15:25:56 GMT</lastBuildDate><atom:link href="https://briefings.canaryiq.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Simon Minton]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[canaryiq@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[canaryiq@substack.com]]></itunes:email><itunes:name><![CDATA[Simon Minton]]></itunes:name></itunes:owner><itunes:author><![CDATA[Simon Minton]]></itunes:author><googleplay:owner><![CDATA[canaryiq@substack.com]]></googleplay:owner><googleplay:email><![CDATA[canaryiq@substack.com]]></googleplay:email><googleplay:author><![CDATA[Simon Minton]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Quantum Computing's time-travel problem]]></title><description><![CDATA[When encryption breaks, it breaks backwards &#8211; and the collection has been running for years]]></description><link>https://briefings.canaryiq.com/p/quantum-computings-time-travel-problem</link><guid 
isPermaLink="false">https://briefings.canaryiq.com/p/quantum-computings-time-travel-problem</guid><dc:creator><![CDATA[Simon Minton]]></dc:creator><pubDate>Mon, 27 Apr 2026 11:05:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!X37B!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e37932d-787d-4e34-905d-a867f8b61822_2160x2160.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It has been an extraordinary twelve months in quantum computing. A sequence of papers has compressed the estimated resources needed to break modern encryption by three orders of magnitude. What was thought to require twenty million qubits in 2019 was revised to under a million in May 2025, under 100,000 in February 2026, and &#8211; in two papers published within forty-eight hours of each other at the end of March &#8211; to as few as 10,000.</p><p>These are not fringe claims. They come from Google Quantum AI, Caltech, Stanford, the Ethereum Foundation. They were published on arXiv, coordinated with the US government, and covered in Nature under the headline <a href="https://www.nature.com/articles/d41586-026-01054-1">&#8220;&#8217;It&#8217;s a real shock.&#8217;&#8221;</a> Bas Westerbaan, a mathematician at Cloudflare &#8211; the company that routes a significant share of global internet traffic &#8211; was quoted saying, &#8220;It&#8217;s a real shock for us too.&#8221;</p><p>This would be alarming enough if the threat were purely forward-looking. It is not. Nation-states &#8211; principally the United States and China, but not exclusively &#8211; have been intercepting and archiving encrypted communications for years, on the reasonable assumption that quantum computers will eventually be able to read them. The strategy is called &#8220;harvest now, decrypt later,&#8221; and it means the data is already taken. What has not yet happened is the reading. And when it does happen, the readers will not be short of analytical capacity &#8211; the same period that compressed the quantum timeline also produced AI systems capable of processing, cross-referencing, and extracting intelligence from datasets at a scale that would have required entire agencies a generation ago. The archive will not be read by humans sitting at desks. It will be read by machines, very quickly.</p><p>Post-quantum cryptography can protect what is transmitted from today onwards. It cannot protect what has already been collected.</p><p>This is the kind of question that sits across multiple domains &#8211; quantum physics, cryptography, geopolitics, financial regulation, intelligence tradecraft &#8211; and cannot be answered from inside any one of them. At CanaryIQ, we track this as a cross-cutting risk with cascade impacts across financial services, defence, critical infrastructure, and healthcare. 
This briefing synthesises what has changed in the past twelve months, why the threat is retrospective rather than prospective, and what the implications are for anyone whose data needs to remain confidential beyond 2030.</p><h2>1. The Repricing</h2><p>In 1994, the mathematician Peter Shor published a paper demonstrating that a quantum computer could, in principle, break the encryption systems that protect virtually all modern digital communications. For three decades, this remained a theoretical concern. The quantum hardware required to run Shor&#8217;s algorithm at scale was so far beyond existing capability that most practitioners treated it as a problem for a future generation.</p><p>That distance has been collapsing. Here is the sequence.</p><p>In May 2025, Craig Gidney at Google Quantum AI <a href="https://arxiv.org/abs/2505.15917">published a paper</a> showing that a 2048-bit RSA key &#8211; the encryption standard protecting most of the internet&#8217;s secure communications &#8211; could be factored with fewer than one million physical qubits, running for less than a week. His own <a href="https://arxiv.org/abs/1905.09749">previous estimate</a>, published in 2019, was twenty million qubits running for eight hours. The qubit count fell by a factor of twenty, and the key innovations were all in how the computation is organised &#8211; better arithmetic, denser storage of idle qubits, cheaper error correction &#8211; rather than in the physical hardware. The machines did not change. The mathematics found a shorter path through them.</p><p>In February 2026, a Sydney-based startup called Iceberg Quantum pushed further, <a href="https://arxiv.org/abs/2602.11457">claiming the number could fall below 100,000 qubits</a> using a more efficient error-correction architecture. 
Scott Aaronson&#8217;s assessment, <a href="https://scottaaronson.blog/?p=9564">on his blog</a>, was characteristically measured: &#8220;I have no idea by how much this shortens the timeline for breaking RSA-2048 on a quantum computer. A few months? Dunno.&#8221; But he added the formulation that cuts through the noise: when you need to scale up qubit count by a thousand times while maintaining quality, &#8220;it becomes important to ask, well, how many years? Three? Four? Five?&#8221;</p><p>Then came the March papers.</p><p>The <a href="https://arxiv.org/abs/2603.28846">Google/Stanford/Ethereum Foundation paper</a> targeted the elliptic curve cryptography that protects Bitcoin, Ethereum, and most digital signature systems. Their optimised implementation of Shor&#8217;s algorithm could break 256-bit ECC with fewer than 1,200 logical qubits and 90 million Toffoli gates &#8211; roughly a twenty-fold reduction in resources over the prior best estimate. Translated to hardware: fewer than 500,000 physical qubits, running for approximately nine minutes.</p><p>The <a href="https://arxiv.org/abs/2603.28627">Caltech/Oratomic paper</a>, published the following day, proposed a new error-correction architecture for neutral-atom quantum computers. Their claim: the same class of computation could be performed with as few as 10,000 physical qubits. The approach exploits the ability of neutral atoms, held in place by laser tweezers, to be physically shuttled across the array and entangled over long distances. This cuts the ratio of physical qubits needed per error-corrected logical qubit from roughly 1,000-to-1 down to approximately 5-to-1. John Preskill &#8211; the Caltech physicist who coined the term &#8220;quantum supremacy&#8221; &#8211; is on the founding team. &#8220;I&#8217;ve been working on fault-tolerant quantum computing longer than some of my coauthors have been alive,&#8221; he said. 
&#8220;Now at last we&#8217;re getting close.&#8221;</p><p>I want to be precise about what these numbers mean. Every one of these results was an algorithmic or architectural improvement, not a hardware breakthrough. Nobody built a new machine. Researchers found more efficient ways to use the machines we already have blueprints for. And because algorithmic improvements compound with hardware improvements, the distance between where we are and a cryptographically relevant quantum computer &#8211; what the field calls a CRQC &#8211; is shrinking from both ends simultaneously.</p><p>The largest demonstrated neutral-atom qubit array is <a href="https://www.nature.com/articles/s41586-025-09641-4">6,100 atoms</a>, assembled in Manuel Endres&#8217;s Caltech lab in September 2025 and published in Nature. The Oratomic paper says 10,000 would suffice for Shor&#8217;s algorithm. In raw qubit count, the gap between what exists and what is needed has gone from four orders of magnitude to less than one. The gap in engineering capability &#8211; achieving the error rates, gate fidelities, and sustained coherence that fault-tolerant computation requires &#8211; is larger, and nobody should confuse a trapped-atom count with a working cryptographic attack. But the direction is clear, and the distance is compressing on both axes.</p><p>Nobody knows when a CRQC will be built. The honest range is somewhere between five years and twenty, with the weight shifting towards the shorter end with every paper. But the question of when is the wrong question. The right question is what has already been taken.</p><h2>2. The Harvest</h2><p>There is a strategy in intelligence collection with a name that is also its explanation: &#8220;harvest now, decrypt later.&#8221; Adversaries intercept and store encrypted communications today, on the assumption that quantum computers will be able to decrypt them in the future. 
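The strategy can be made concrete with a deliberately toy sketch (a tiny prime, brute force standing in for Shor's algorithm, XOR standing in for AES; the message and every number below are invented for illustration, nothing here resembles production cryptography). The eavesdropper records only public values and ciphertext today, and recovers the plaintext whenever the discrete logarithm becomes computable:

```python
import hashlib

# Toy Diffie-Hellman key exchange. Real deployments use 2048-bit groups or
# elliptic curves; the structure of the attack is identical.
p, g = 2039, 7                       # small prime modulus and generator

alice_secret = 1234
bob_secret = 987
A = pow(g, alice_secret, p)          # public values an eavesdropper records
B = pow(g, bob_secret, p)

shared = pow(B, alice_secret, p)     # session secret both parties derive
key = hashlib.sha256(str(shared).encode()).digest()

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy symmetric layer: XOR keystream (stand-in for AES).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = xor_encrypt(b"negotiating position: hold at 4.2%", key)

# --- years later: the archived (A, B, ciphertext) triple is attacked ---
# Shor's algorithm solves the discrete log; brute force stands in here.
recovered_secret = next(x for x in range(p) if pow(g, x, p) == A)
recovered_key = hashlib.sha256(
    str(pow(B, recovered_secret, p)).encode()).digest()
plaintext = xor_encrypt(ciphertext, recovered_key)   # symmetric layer falls too
```

Note that the attacker never touches the symmetric cipher: breaking the key exchange hands over the session key, and everything encrypted under it follows.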
The encrypted data sits in archives &#8211; government data centres, private cloud environments, tape storage &#8211; for years, decades if necessary. It does not need to be read now. It just needs to exist.</p><p>This is not a theoretical concern. It is an active, acknowledged, ongoing intelligence operation conducted by multiple nation-states simultaneously.</p><p>The mechanics are mundane. Tap undersea fibre-optic cables. Record VPN traffic at network boundaries. Exfiltrate encrypted database backups. Copy email archives and cloud storage snapshots. None of this requires breaking the encryption. The attacker needs bandwidth and disk space, not a quantum computer. Storage costs have fallen roughly 95 per cent since 2010. A petabyte of archived ciphertext costs less than a mid-range car. For a nation-state with a strategic intelligence mandate, the economics are trivial: store everything, wait, decrypt later.</p><p>A clarification worth making: the quantum threat targets public-key cryptography &#8211; the RSA and ECC systems used for key exchange and digital signatures. Symmetric encryption like AES-256 is not meaningfully weakened by quantum computers. But almost all encrypted communication relies on public-key cryptography to establish the session in the first place. Break the key exchange and the symmetric layer is irrelevant &#8211; you have the key.</p><p>I want to draw out why this changes the shape of the threat, because most commentary on quantum computing focuses on the wrong date. The question everyone asks is &#8220;when will a quantum computer break encryption?&#8221; The question they should be asking is: &#8220;how long does my data need to remain confidential, and has it already been copied?&#8221;</p><p>The critical variable is the gap between data sensitivity lifespan and time-to-decryption. Diplomatic cables retain their value for decades. Weapons system designs remain sensitive for the lifetime of the platform. 
Intelligence source identities are sensitive permanently. Health records, biometric data, trade secrets, negotiating positions &#8211; all of these have confidentiality requirements measured in years or decades, not days. If a communication was intercepted in 2024, and a CRQC comes online in 2032, the eight-year gap is irrelevant. The data is just as compromising.</p><p>Post-quantum cryptography &#8211; the set of encryption algorithms designed to resist quantum attack &#8211; cannot retroactively protect data that has already been captured. PQC protects the future. It does nothing for the past. Every day that an organisation continues transmitting sensitive material under classical encryption is another day of plaintext-in-waiting added to an adversary&#8217;s archive.</p><p>NIST <a href="https://csrc.nist.gov/projects/post-quantum-cryptography">published its first post-quantum cryptographic standards</a> in August 2024. The algorithms exist. They work. The migration path is understood. The typical estimate for PQC migration in a large enterprise is seven or more years. The sooner migration begins, the smaller the window of retroactive exposure &#8211; and for most organisations, that window is already wider than they realise.</p><h2>3. The Geopolitics of the Archive</h2><p>We track several risk scenarios related to quantum cryptographic disruption, including the cascade effects on financial infrastructure, government communications, and critical systems. The geopolitical dimension is where the risk compounds most dangerously.</p><p>The United States and China accuse each other of large-scale data harvesting, and both accusations are almost certainly accurate.</p><p>The former FBI director described China&#8217;s hacking programme as &#8220;bigger than every other major nation combined.&#8221; China&#8217;s Ministry of State Security responded by accusing the NSA of &#8220;systematic&#8221; attacks to steal Chinese data. Neither denial is credible. 
The intelligence logic is identical on both sides: if your adversary&#8217;s encrypted traffic will eventually become readable, and the cost of collecting and storing it is low, the rational strategy is to collect everything and sort it out later. Both the United States and China are rational.</p><p>Russia, Iran, and several other states are running the same playbook at smaller scale. The Five Eyes alliance has the most extensive signals intelligence infrastructure for collection. China has the most aggressive state-backed cyber espionage capability for targeted exfiltration. The result is a global intelligence competition in which the prize is not today&#8217;s secrets, but tomorrow&#8217;s ability to read today&#8217;s secrets.</p><h3>China&#8217;s Dual-Track Strategy</h3><p>Beijing&#8217;s position reveals the strategic logic most clearly, because it is doing both things at once: building quantum computers to attack others&#8217; encryption, and building its own post-quantum defences to protect its own.</p><p>In February 2025, China <a href="https://thequantuminsider.com/2025/02/18/china-launches-its-own-quantum-resistant-encryption-standard-bypassing-us-efforts/">launched the Next-Generation Commercial Cryptographic Algorithms Programme</a> &#8211; its own PQC initiative, independent of NIST&#8217;s standards. This is not duplication. It is strategic distrust. Beijing is unwilling to rely on American-designed cryptographic standards to protect Chinese state communications, and may be concerned that NIST&#8217;s standards contain weaknesses that Western intelligence agencies could exploit. The <a href="https://www.niccs.org.cn/niccs/Notice/pc/content/content_1975896137741635584.html">submission deadline for new algorithms</a> is June 2026. China expects to have its own national PQC standards within three years.</p><p>The quantum computing ecosystem backing this is deeper than most Western commentary acknowledges. 
Origin Quantum&#8217;s 72-qubit Wukong processor handled over half a million computing jobs in its first year of commercial operation. China Telecom&#8217;s Tianyan platform provides cloud access to superconducting systems totalling 880 qubits across four machines. QuantumCTek, now controlled by a state-owned enterprise, supplies the hardware for China&#8217;s national quantum key distribution backbone: a fibre-optic network connecting Beijing to Shanghai that has been operational for years.</p><p>US export controls on quantum technology were designed to slow China down. In our assessment, the effect has been the opposite: a crash programme in domestic manufacturing across dilution refrigerators, cryogenic electronics, and photonic components, with funding pouring into every major qubit modality simultaneously. We see the same pattern in our semiconductor export controls analysis &#8211; restrictions that hand domestic manufacturers a political mandate and a captive market.</p><p>The dual-track logic is clean. Defend your own future communications with quantum-resistant cryptography. Hoard everyone else&#8217;s current communications for future decryption. That is not paranoia, it is game theory.</p><h3>The Asymmetry That Matters</h3><p>Here is the strategic point that most analysis misses: China does not need to build the first CRQC to benefit from the harvest. It only needs to build one eventually. The archive is patient. Every year of delay in Western PQC adoption is another year of retroactively decryptable material added to the pile.</p><p>The inverse is also true. Every year of Western intelligence collection against Chinese targets creates an archive that a future American or allied CRQC could unlock. The race is not merely to build the machine. It is to have accumulated the largest, most strategically valuable archive by the time someone does.</p><h2>4. 
The Classification Problem</h2><p>There is a historical precedent that makes this story more uncomfortable, not less.</p><p>Public-key cryptography (the mathematical system that underpins virtually all modern encryption) was invented by Whitfield Diffie and Martin Hellman in 1976, and implemented as RSA by Rivest, Shamir, and Adleman in 1977. This was the accepted history until 1997, when GCHQ declassified documents revealing that British mathematician Clifford Cocks had described an equivalent system in an internal memo in 1973. Three years before the academic world &#8220;invented&#8221; public-key cryptography, a British intelligence agency already had it <em>and kept it secret</em>.</p><p>The gap between what a nation-state knows and what it discloses is determined by strategic advantage, not scientific norms.</p><p>Google&#8217;s ECDLP paper, <a href="https://arxiv.org/abs/2603.28846">published in March 2026</a>, contains a line that deserves more attention than it has received: <em>&#8220;It is conceivable that the existence of early CRQCs may first be detected on the blockchain rather than announced.&#8221;</em> This is Google Quantum AI stating that a state actor might achieve cryptographic quantum capability and choose not to tell anyone. The first sign would not be a press conference. It would be Bitcoin moving out of wallets whose private keys should be unknowable. There are roughly 6.9 million Bitcoin in addresses with exposed public keys. 
At current prices, that is a very large incentive.</p><p>Scott Aaronson, <a href="https://scottaaronson.blog/?p=9665">reading the March 2026 results</a>, reached for the comparison that matters: </p><blockquote><p><em>When I got an early heads-up about these results&#8212;especially the Google team&#8217;s choice to &#8220;publish&#8221; via a zero-knowledge proof&#8212;I thought of Frisch and Peierls, calculating how much U-235 was needed for a chain reaction in 1940, but not publishing it, even though the latest results on nuclear fission had been openly published just the year prior. Will we, in quantum computing, also soon cross that threshold?</em></p></blockquote><p>He was asking whether quantum computing was approaching that threshold &#8211; the point at which a result is too dangerous to publish in full.</p><p>The cryptography community pushed back. Their position, grounded in decades of vulnerability disclosure practice: you publish. If publishing causes people still running quantum-vulnerable systems to panic, then perhaps that is exactly what needs to happen right now.</p><p>Both sides are probably correct.</p><p>Google chose a middle path. It published its ECC results via a zero-knowledge proof &#8211; a cryptographic technique that lets independent researchers verify the result without revealing the attack circuit. This is the first time a major mathematical result has been announced this way. The team coordinated with the US government before publication and is proposing this framework &#8211; responsible disclosure via zero-knowledge verification &#8211; as the norm for future quantum vulnerability research. The cryptography community is, in real time, developing its own classification culture.</p><p>That alone should tell you how close the field believes we are.</p><h2>5. What This Means For You</h2><p>The leaders have already moved. Chrome, Signal, and iMessage already use post-quantum key exchange. 
Google has <a href="https://blog.google/innovation-and-ai/technology/safety-security/cryptography-migration-timeline/">set a 2029 deadline</a> for completing its full internal post-quantum cryptography migration. Cloudflare &#8211; which handles over 20 per cent of human web traffic &#8211; <a href="https://blog.cloudflare.com/post-quantum-roadmap/">announced in April 2026</a> that it is matching that 2029 target, explicitly citing the March papers as the reason for accelerating. NSA&#8217;s <a href="https://media.defense.gov/2025/May/30/2003728741/-1/-1/0/CSA_CNSA_2.0_ALGORITHMS.PDF">CNSA 2.0</a> mandates quantum-resistant cryptography for all new national security systems by January 2027. NIST&#8217;s <a href="https://csrc.nist.gov/pubs/ir/8547/ipd">published timeline</a> calls for quantum-vulnerable algorithms to be deprecated after 2030 and disallowed after 2035.</p><p>The consumer platforms and infrastructure providers are migrating. The question is whether enterprises, governments, and financial institutions &#8211; the organisations holding the data that intelligence agencies actually want &#8211; are keeping pace. Most are not. If you are behind these timelines, you are behind the organisations whose threat modelling you should be matching.</p><p>The calculation is blunt. Take the number of years your most sensitive data must remain secret. Add the number of years it will take to complete PQC migration. If that sum exceeds the number of years until a CRQC becomes available, you are already exposed &#8211; and every month of inaction widens the window.</p><p><strong>If you hold long-lived secrets</strong> &#8211; government agencies, defence contractors, financial institutions, healthcare organisations, critical infrastructure operators &#8211; begin cryptographic inventory immediately. Identify which systems rely on RSA, ECC, or other quantum-vulnerable algorithms. Prioritise migration based on data sensitivity lifespan. 
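The blunt calculation above (often framed as Mosca's inequality) can be sketched in a few lines of Python. This is a minimal illustration; the function name and the example inputs are hypothetical, not figures from this briefing:

```python
def retroactive_exposure_years(secrecy_years: float,
                               migration_years: float,
                               years_to_crqc: float) -> float:
    """If the years data must stay secret plus the years needed to complete
    PQC migration exceed the years until a cryptographically relevant
    quantum computer (CRQC) arrives, traffic sent today is already exposed.
    Returns the width of that exposure window in years (0.0 means none)."""
    return max(0.0, secrecy_years + migration_years - years_to_crqc)

# Records that must stay confidential for 10 years, a 7-year migration,
# and a CRQC assumed 12 years out: a 5-year window of retroactive exposure.
print(retroactive_exposure_years(10.0, 7.0, 12.0))  # -> 5.0
```

Each month of delayed migration adds directly to the first two terms, which is why the window only widens with inaction.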
Start with key exchange protocols, where PQC can be deployed in hybrid configurations alongside classical encryption, and work outward.</p><p><strong>If you are a policymaker</strong>, the structural observation is this: the quantum threat has inverted the traditional timeline of cybersecurity risk. Normally, the attack comes first and the defence follows. Here, the collection has come first. The attack will follow whenever the hardware matures. Regulatory frameworks that treat quantum risk as a future concern are mispricing the present.</p><p><strong>If you are an investor</strong>, the signal is the pace of algorithmic compression. The qubit estimates for breaking RSA-2048 have dropped by roughly twenty times per publication cycle. The estimates for ECC have dropped faster. The companies and sectors most exposed are those holding large quantities of classically encrypted long-lived data &#8211; financial services, healthcare, government contracting, critical infrastructure &#8211; and those that have not begun PQC migration planning. In our view, the market has not priced this.</p><h2>Conclusion</h2><p>Quantum computing has spent three decades as a future problem. In the past twelve months, a series of papers from Google, Caltech, Stanford, and others have repriced the distance between now and the point at which modern encryption breaks. The estimates moved by orders of magnitude, and they moved in one direction.</p><p>But the repricing is not the threat. The threat is the archive &#8211; the collection of encrypted data that has been building for years, waiting for the moment it becomes readable. Post-quantum cryptography can protect what is transmitted from today onwards. It cannot protect what has already been taken. 
And every day of continued transmission under classical encryption adds to the pile.</p><p>The <a href="https://globalriskinstitute.org/publication/quantum-threat-timeline-report-2025b/">averaged expert estimate</a> of a cryptographically relevant quantum computer emerging within the next ten years was between 28 and 49 per cent in 2025 &#8211; the highest in the seven years that figure has been tracked &#8211; and will likely be higher still this year as a result of the recent publications. The infrastructure companies have set their deadlines. The standards bodies have published their timelines. Governments have been filling their archives for a decade.</p><p>What cannot be undone is already done. What can still be controlled is what happens from here. The standards exist. The migration paths are published. The engineering is understood. The only thing missing &#8211; and it is the only thing that matters now &#8211; is the decision to start.</p><div><hr></div><p><em>CanaryIQ maintains analytical positions on <a href="https://app.canaryiq.com/tech/quantum">quantum computing</a>, <a href="https://app.canaryiq.com/tech/cybersecurity">post-quantum cryptography</a>, <a href="https://app.canaryiq.com/tech/semiconductors">semiconductor supply chains</a>, and <a href="https://app.canaryiq.com/tech/artificial-intelligence">AI infrastructure</a>, along with risk scenarios covering technology export controls, signals intelligence escalation, and cryptographic disruption cascades. This briefing draws on that work.</em></p>]]></content:encoded></item><item><title><![CDATA[The Invisible Governor]]></title><description><![CDATA[How the collapse of effort as a constraint changes everything]]></description><link>https://briefings.canaryiq.com/p/the-invisible-governor</link><guid isPermaLink="false">https://briefings.canaryiq.com/p/the-invisible-governor</guid><dc:creator><![CDATA[Simon Minton]]></dc:creator><pubDate>Sat, 07 Feb 2026 15:28:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!X37B!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e37932d-787d-4e34-905d-a867f8b61822_2160x2160.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>Executive Summary</strong></h2><p>For as long as organisations and governments have existed, effort has been the invisible governor on what gets built, what gets reformed, and what gets attempted. Ideas die not because they are bad, but because the cost of testing them is too high. Backlogs grow not because nobody cares, but because nobody has the bandwidth. This constraint has been so universal, so deeply embedded in how we plan and prioritise, that we have stopped noticing it. It is simply the water we swim in.</p><p>That constraint is now collapsing. The emergence of autonomous AI agents &#8211; systems that can be given a goal and trusted to deliver it continuously, at high quality, with minimal human intervention &#8211; has fundamentally altered the economics of knowledge work. This is not a projection about what might happen in five years. 
It is a description of what is already happening, today, at the leading edge of software engineering. And what begins in software will not remain in software.</p><p>This briefing sets out what has changed, why it matters, and what decision-makers in both the private and public sectors need to understand now &#8211; before the window for coherent response narrows further.</p><h2><strong>1. What Has Changed</strong></h2><p>In late 2025, a series of capability upgrades to the leading AI coding agents crossed a threshold that practitioners <a href="https://x.com/karpathy/status/2015883857489522876">immediately</a> <a href="https://x.com/rough__sea/status/2013280952370573666">recognised</a> as qualitatively different from what came before. The most prominent of these tools is Claude Code, Anthropic&#8217;s autonomous coding agent, though similar capabilities are emerging across the industry.</p><p>AI coding assistants have existed for several years. The good ones were genuinely useful: tools that might make a competent engineer two to five times more productive. But the early-2026 generation is not a better assistant. It is, for most practical purposes, a replacement for the engineer. The system can be given a well-defined goal &#8211; build this feature, fix this system, refactor this architecture &#8211; and trusted to deliver it autonomously, at production quality, correcting its own errors along the way.</p><p>The numbers bear this out. Claude Code crossed one billion dollars in revenue by November 2025. Word-of-mouth exposure surged thirteen percentage points between late December and January 2026, according to data from Caliber. In mid-January, a Google principal engineer <a href="https://x.com/rakyll/status/2007659740126761033">publicly stated that Claude had reproduced a year of architectural work</a> in a single hour. 
Microsoft &#8211; which sells its own competing product, GitHub Copilot &#8211; has reportedly adopted Claude Code internally across major engineering teams. The creator of Claude Code, Boris Cherny, tweeted that he and his team now complete nearly 100 per cent of their work using Claude<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p><p>I can speak to this from direct experience. Over the past six weeks, I have built six internal tools that have materially improved how I work. Each of these would previously have been a multi-week project requiring sustained coding effort. Each now took between two and six hours, with Claude Code doing the vast majority of the implementation work. In aggregate, I am completing what would previously have constituted roughly a year of professional development work every two to three weeks.</p><p>I want to be precise about what I mean by this, because the claim sounds extraordinary. I do not mean that I am doing rough, prototype-quality work at high speed. I mean that the finished output &#8211; the code, the architecture, the test coverage, the documentation &#8211; is at or above the standard I would previously have produced myself, and it is being delivered at a rate that no human team could match.</p><h2><strong>2. Why This Extends Beyond Software</strong></h2><p>The natural objection is that this is a story about software engineering, and software engineering is a special case. Code is structured, testable, and unambiguous. Surely the messier domains of law, finance, medicine, and policy are different.</p><p>They are different &#8211; for now. But the gap is closing faster than most observers appreciate, and understanding why requires seeing what code actually is. Code is not a technical curiosity. It is a well-defined way of expressing process: sequences of reasoning, transformation, verification, and execution, arranged to achieve some outcome. 
All knowledge work, when you examine it closely enough, reduces to process of some form. The difference between software engineering and, say, contract review or financial analysis or policy drafting is not a difference in kind. It is a difference in how explicitly the process has been formalised.</p><p>The evidence that AI agents are already moving into adjacent domains is substantial. In law, firms are deploying AI for contract analysis, due diligence, and compliance review. A survey of nearly 2,300 knowledge workers published in late 2025 found that 81 per cent had used AI-powered tools to start or edit their work at least once. In accounting and audit, agentic AI systems are now performing invoice processing, reconciliation, anomaly detection, and compliance workflows with decreasing human oversight. In finance, 68 per cent of hedge funds now use AI for market analysis and trading strategies, and AI-managed robo-advisory assets exceed 1.2 trillion dollars globally.</p><p>These are not speculative applications. They are in production today. The pattern is consistent: AI enters a domain, initially handling routine and well-structured tasks, and then &#8211; as the models improve and as practitioners learn how to direct them &#8211; moves progressively up the complexity curve. In accountancy, Deloitte and others are already describing a future in which the mid-tier of knowledge work is hollowed out, leaving an &#8220;hourglass&#8221; workforce concentrated at junior (AI-supervisory) and senior (strategic) levels. The same structural pressure will apply across every knowledge profession.</p><h2><strong>3. Three Structural Shifts</strong></h2><h3><strong>3.1 The Repricing of Labour</strong></h3><p>Claude Code costs approximately two hundred dollars per month at the individual tier. A competent senior software engineer in a major market commands a salary well north of $250,000 per annum. 
For roughly one per cent of that cost, an organisation can now access something that produces not ten times the output, but orders of magnitude more.</p><p>This is not a productivity gain in any conventional sense. It is a repricing of labour. The historical parallel is not the introduction of better tools to an existing workforce &#8211; it is the introduction of the power loom, which did not make weavers more productive but rendered the economics of hand-weaving untenable.</p><p>The comparison is not exact, of course. The power loom took decades to diffuse through the textile industry, constrained by the capital required to build factories and install machinery. AI agents face no such constraint, which brings us to the second shift.</p><h3><strong>3.2 Near-Zero Adoption Costs</strong></h3><p>Every previous industrial revolution placed its capital burden squarely on the balance sheets of individual firms. Factories had to be built. Machines had to be purchased. Infrastructure had to be installed and maintained. The result was that adoption was slow, uneven, and gated by access to capital.</p><p>This time, the capital costs are borne upstream. A small number of companies &#8211; Anthropic, OpenAI, Google, Meta, and xAI among them &#8211; have invested tens of billions of dollars in model training, infrastructure, and compute. Their customers need invest almost nothing. Capability is rented instantly, scaled elastically, and priced at a level that is functionally trivial for any organisation of meaningful size.</p><p>This collapses the time between intent and execution. When a factory had to be built, there was a multi-year lag between a firm deciding to adopt a new technology and actually deploying it. When capability can be rented by the hour, the lag is measured in days or weeks. The traditional buffers that gave firms, workers, and governments time to adapt &#8211; the lead time of capital investment &#8211; are largely absent. 
Gartner&#8217;s own hype cycle analysis now places <a href="https://www.gartner.com/en/newsroom/press-releases/2025-08-05-gartner-hype-cycle-identifies-top-ai-innovations-in-2025">AI agents at the Peak of Inflated Expectations</a>, but with the critical caveat that unlike most technologies at that point, the underlying capability is already production-grade.</p><h3><strong>3.3 Expertise as a Depleting Asset</strong></h3><p>Deep, specific expertise still matters. There are moments when an AI agent needs to be pointed toward exactly the right documentation, the right regulatory nuance, the right domain-specific edge case. A skilled practitioner directing an AI agent will outperform an unskilled one.</p><p>But expertise is no longer a durable moat. It is a depreciating asset. Each time an expert&#8217;s knowledge is used to direct an AI system, that interaction becomes training data, documentation, or a codified workflow. The knowledge transfers from the individual to the system. What was once a career&#8217;s worth of accumulated insight becomes, progressively, a set of instructions that anyone can invoke.</p><p>Consider what has already happened in software engineering. Two years ago, deep expertise in a specific programming framework or infrastructure stack was a genuinely scarce and valuable commodity. Today, Claude Code can navigate those frameworks with a fluency that matches or exceeds most human practitioners, because the collective expertise of the field has been absorbed into the model. The experts who remain most valuable are those who can identify <em>which problems to solve</em> &#8211; not those who know <em>how to solve them.</em> The distinction between strategic judgement and technical execution has never been starker.</p><h2><strong>4. The Core Implication: Effort Is No Longer a Constraint</strong></h2><p>The three shifts above are important, but none of them is the most important thing. 
The most important thing is this: effort itself has ceased to be a binding constraint on what can be attempted.</p><p>This is the highest-impact change in the entire transition, and it is the one that decision-makers most consistently fail to internalise. For decades, the limiting factor on what got built, reformed, or investigated was not imagination or even resources in the abstract. It was the sheer human effort required to move from idea to execution. The cost-benefit calculation killed ideas before they were ever tested. The internal tool that would save three hundred hours a year was never built because it would take eight hundred hours to create. The policy analysis that might have revealed a better approach was never conducted because no team had the capacity. The legacy system that everyone knew was failing was tolerated because replacing it was a two-year project.</p><p>That calculus has changed, decisively. When the effort required to build, test, and deploy a solution drops by one or two orders of magnitude, the entire landscape of what is &#8220;worth doing&#8221; is redrawn.</p><p>For businesses, the implications are immediate. Every firm of any age has a backlog of &#8220;too hard&#8221; work: internal tools never built, legacy systems never replaced, process inefficiencies tolerated for years because the effort to fix them exceeded the pain of living with them. That backlog is now liquidatable. The firms that move first will not merely gain efficiency. They will gain the compounding advantage of having fixed problems that their competitors are still living with.</p><p>For governments, the implications are more profound and more uncomfortable. State capacity has always been constrained by administrative effort: the drafting, coordination, review, consultation, and enforcement that any policy action requires. This is not a failure of intent. 
It is a structural feature of governance, one that shapes what policies are feasible and what reforms are attempted. When the effort required for these activities collapses, governments face a stark choice. Those that harness the change will find that policy interventions previously dismissed as &#8220;too complex&#8221; or &#8220;too resource-intensive&#8221; become achievable. They could, for example, conduct regulatory impact assessments in days rather than months, maintain living legislative codifications that update automatically as new case law emerges, or run continuous compliance monitoring across entire sectors. Those that do not adapt will find themselves outpaced not just by other states, but by private actors whose execution velocity has accelerated by orders of magnitude.</p><h2><strong>5. The Timeline</strong></h2><p>The lag between the leading edge &#8211; where I am writing from now &#8211; and mainstream adoption is twelve to twenty-four months. This estimate is grounded in three observations.</p><p>First, the adoption curve for AI is compressing dramatically relative to historical precedent. <a href="https://www.deloitte.com/us/en/insights/topics/technology-management/tech-trends.html">Deloitte&#8217;s 2026 Tech Trends</a> report notes that a leading generative AI tool reached one hundred million users in two months, compared to fifty years for the telephone and seven years for the internet. More pointedly, AI agent adoption does not require hardware procurement, physical infrastructure, or long implementation cycles. It requires a subscription and a willingness to experiment. The adoption friction is near zero.</p><p>Second, the &#8220;holiday effect&#8221; of December 2025 demonstrated how fast adoption can move when practitioners have time to experiment. Claude Code&#8217;s viral adoption over the winter break showed that the barrier to adoption is not capability or cost. It is attention and habit. 
As awareness spreads through professional networks, adoption follows rapidly.</p><p>Third, competitive pressure will force the hand of laggards. Once early adopters demonstrate order-of-magnitude productivity gains, organisations that fail to follow will face an existential cost disadvantage. This is not a technology that firms can afford to &#8220;wait and see&#8221; on, because by the time the results are visible, the gap may already be insurmountable. Microsoft&#8217;s data shows global generative AI adoption reached 16.3 per cent of the world&#8217;s population in the second half of 2025, up from 15.1 per cent in the first half &#8211; and that figure understates enterprise adoption, which is moving considerably faster.</p><p>Twelve to twenty-four months is not a comfortable window. It is barely enough time to understand the problem, let alone respond to it in any coordinated way.</p><h2><strong>6. What This Means For You</strong></h2><p>If you run a business, the question is not whether to adopt AI agents but how fast you can integrate them into your operations without destabilising what already works. Begin with the backlog. Every organisation has one: the list of things that everyone agrees should be done but nobody has the resources to do. That list is now your highest-return investment. A single competent generalist, working with current AI tools, can liquidate years of accumulated technical and operational debt in months.</p><p>If you manage or advise on policy, the imperative is to understand that the administrative effort constraint &#8211; the single biggest structural limitation on state capacity &#8211; is dissolving. This creates opportunity and risk in equal measure. The opportunity is a step-change in what government can accomplish. 
The risk is that private-sector actors, unconstrained by the deliberative processes of democratic governance, will move so much faster that the regulatory environment becomes permanently reactive.</p><p>If you are a knowledge worker, the honest assessment is this: your value as a pure executor of well-defined tasks is declining rapidly. The value that remains &#8211; and it is real and substantial &#8211; lies in judgement, in the ability to identify which problems matter, in the capacity to navigate ambiguity and politics and human relationships, and in the wisdom to know when the machine&#8217;s output is wrong. These are not skills that most organisations currently hire for, train for, or reward. That will need to change.</p><h2><strong>Conclusion</strong></h2><p>Effort, until now, has been the invisible governor on everything we build, everything we govern, and everything we choose not to attempt. It has shaped our institutions, our strategies, our career structures, and our sense of what is possible.</p><p>The governor has been removed.</p><p>What follows from that fact &#8211; in business, in government, and in the structure of professional life &#8211; will be determined by how quickly and how honestly we reckon with it. The temptation will be to wait for certainty, for the technology to mature, for best practices to emerge, for someone else to go first. That instinct is understandable. 
It is also, in this case, a serious strategic error.</p><div><hr></div><p><em><strong>It is always worth remembering that silicon &#8211; as in rocks, albeit very complicated rocks &#8211; is what we are teaching to do all of this.</strong></em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://briefings.canaryiq.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading CanaryIQ Public Briefings. To receive these in your inbox, subscribe below.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Meaning that Claude Code is building itself. Without wishing to invoke a firestorm, we consider a key milestone for AGI to be the ability of AI to recursively improve itself.</p></div></div>]]></content:encoded></item></channel></rss>