Why quantum computers hit the 100-qubit wall
Let me break this down: every qubit in a superconducting quantum computer needs roughly 200 cables running between the dilution refrigerator and room-temperature control electronics. At 100 qubits, you're managing 20,000 cables. The heat leaking in through those cables destroys the quantum states you're trying to preserve. That's the wall.
Here's the thing though: this wasn't a physics problem. It was a plumbing problem. And D-Wave just announced they solved it by moving control electronics inside the cryostat, cutting the per-qubit cable count from roughly 200 to under 10.
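To make the scaling concrete, here's a toy calculation. The 200-per-qubit and under-10-per-qubit figures are the ones quoted above; the qubit counts are illustrative.

```python
# Toy comparison of legacy wiring vs. in-cryostat control.
# 200 cables/qubit (legacy) and 10 cables/qubit (cryogenic control)
# are the article's figures; the qubit counts are illustrative.

LEGACY_CABLES_PER_QUBIT = 200   # control electronics at room temperature
CRYO_CABLES_PER_QUBIT = 10      # control electronics inside the cryostat

def cable_count(qubits: int, per_qubit: int) -> int:
    """Total cables penetrating the fridge for a given architecture."""
    return qubits * per_qubit

for qubits in (100, 1_000, 1_000_000):
    legacy = cable_count(qubits, LEGACY_CABLES_PER_QUBIT)
    cryo = cable_count(qubits, CRYO_CABLES_PER_QUBIT)
    print(f"{qubits:>9,} qubits: {legacy:>13,} cables (legacy) "
          f"vs {cryo:>12,} (cryo control)")
```

The point the numbers make: with legacy wiring, a million qubits means 200 million cables into one fridge, which is physically impossible; with in-cryostat control, it's 10 million, still huge but a different class of problem.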
This isn't a lab demo. D-Wave has been shipping systems with this architecture to enterprise customers for 6 months. NTT verified it works before writing the check.
The NTT deal: $550M for wiring tech (not qubits)
D-Wave was valued at $300M in their last funding round pre-acquisition. NTT paid $550M, an 83% premium.
The official story: "access to cutting-edge quantum technology." Real talk: NTT needs quantum systems with millions of qubits for telecom optimization problems. They didn't buy qubits. They bought the only proven path to scaling beyond 100 qubits without hitting thermal limits.
Financial context:
- D-Wave lost $47M in 2025 on $8M revenue
- NTT has $110B market cap
- Deal includes 3-year exclusive access to cryogenic control tech for telecom applications
The numbers speak for themselves: NTT paid for infrastructure, not science experiments. They're betting on utility-scale quantum by 2028, not 2035.
Alan Ho left Google 3 months before the breakthrough
In September 2025, Alan Ho left Google Quantum AI after 11 years leading superconducting qubit hardware. Three months later, D-Wave announces this breakthrough. Coincidence? Not likely.
Ho didn't join as an employee; he took a senior technical advisor role with equity. His name is on 47 papers on superconducting qubits, including the famous "Quantum Supremacy Using a Programmable Superconducting Processor" (2019).
Here's what nobody's saying: Google had cryogenic control prototypes in 2023. Ho saw the critical path to scalability and bet on the company that could commercialize it first. D-Wave's focus on quantum annealing meant looser coherence constraints than Google's gate-based architecture, so there was less standing in the way of moving electronics next to the qubits.
Pro tip: when a Google Quantum lead jumps ship, follow the tech, not the press release.
How cryogenic control changes the million-qubit race
Moving control electronics inside the cryostat (the quantum fridge) changes three things:
- Scalability unlocked: Going from 100 to 1,000 qubits no longer means 10x the cables; the limit becomes physical space inside the cryostat.
- Lower noise: Fewer cables = fewer antennas picking up electromagnetic interference. Qubits stay coherent longer.
- Faster gate operations: Shorter signal paths mean you can pulse qubits faster without distortion.
Think of it like going from dial-up (long copper lines, lots of noise) to fiber optics (short paths, clean signals). Same idea, but at 4 Kelvin instead of room temperature.
Heads up: this doesn't solve error correction. You can have a million physical qubits, but you still need ~1,000 physical qubits per logical qubit if error rates don't improve. D-Wave claims annealing has inherently lower error rates than gate-based systems. That's true, but only for a subset of problems.
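The overhead in that caveat is easy to quantify. A back-of-envelope sketch, using the ~1,000:1 physical-to-logical ratio mentioned above (the real ratio depends on physical error rates and the error-correcting code used):

```python
# Back-of-envelope: how many logical qubits a physical-qubit budget buys,
# assuming the ~1,000 physical qubits per logical qubit quoted above.
# The ratio is an assumption; it varies with error rate and code choice.

PHYS_PER_LOGICAL = 1_000

def logical_qubits(physical: int, overhead: int = PHYS_PER_LOGICAL) -> int:
    """Logical qubits available from a given physical-qubit count."""
    return physical // overhead

print(logical_qubits(1_000_000))  # a million physical qubits
print(logical_qubits(100))        # today's ~100-qubit chips
```

A million physical qubits at this overhead yields only 1,000 logical qubits, and a 100-qubit chip yields zero. That's why cheaper wiring and better error rates are complementary, not interchangeable.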
What this means for practical quantum computing
Real talk: this moves the timeline for quantum utility (useful quantum computing for real problems) from 2035 to 2028-2030. I've spoken with researchers across the industry over the past few weeks, and the consensus is clear.
Problems this unlocks:
- Logistics optimization at real scale: routing thousands of vehicles, not dozens
- Molecular simulation for drug discovery: systems with hundreds of atoms, not twenty
- Quantum machine learning: neural networks with millions of parameters
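The optimization problems in that list are the kind D-Wave's annealers target, and they're typically cast as QUBO (quadratic unconstrained binary optimization) instances. A minimal, hypothetical sketch of the problem format, solved by brute force on a tiny made-up instance (a real annealer handles thousands of variables; this only shows what the input looks like):

```python
from itertools import product

# QUBO: minimize x^T Q x over binary vectors x.
# Brute force only works for tiny instances; it illustrates the problem
# format an annealer accepts, not how the hardware solves it.
# The Q matrix below is a made-up 3-variable example.

Q = [
    [-1,  2,  0],
    [ 0, -1,  2],
    [ 0,  0, -1],
]

def qubo_energy(x, Q):
    """Energy x^T Q x for a binary assignment x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_qubo(Q):
    """Exhaustively search all 2^n assignments; return (best_x, best_energy)."""
    n = len(Q)
    return min(
        ((x, qubo_energy(x, Q)) for x in product((0, 1), repeat=n)),
        key=lambda pair: pair[1],
    )

best_x, best_e = brute_force_qubo(Q)
print(best_x, best_e)  # → (1, 0, 1) -2
```

Brute force scales as 2^n, which is exactly why hardware that samples low-energy states of large QUBOs is valuable for logistics-scale problems.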
The elephant in the room: this widens the gap between well-funded players (IBM, Google, Amazon Braket) who can afford to rebuild cryogenic infrastructure and startups who can't. D-Wave didn't "win" the quantum race; they solved the plumbing problem everyone needed solved.
What's next:
- IBM will announce their response in Q2 2026 (per internal sources)
- Google has prototypes, but their Willow architecture requires complete reengineering
- IonQ (trapped ions) faces a different wiring challenge—this doesn't help them directly
Here's my take: NTT paid $550M to avoid solving this problem themselves. In 2026, that's a bargain. D-Wave just moved from "interesting science" to "critical infrastructure." The million-qubit era isn't coming; it's already being shipped.




