At its Quantum Summit today, IBM announced the successful development of its ‘Osprey’ QPU (Quantum Processing Unit) — its 433-qubit 2022 roadmap target. The new QPU significantly increases the number of working qubits within a single chip; the previous-generation Eagle carried only 127 of them.
The new launch is another confident step for IBM, which aims to deliver QPUs with tens of thousands (perhaps even hundreds of thousands) of qubits by 2030.
“The new 433 qubit ‘Osprey’ processor brings us a step closer to the point where quantum computers will be used to tackle previously unsolvable problems,” said Dr. Darío Gil, Senior Vice President, IBM and Director of Research.
“We are continuously scaling up and advancing our quantum technology across hardware, software and classical integration to meet the biggest challenges of our time, in conjunction with our partners and clients worldwide. This work will prove foundational for the coming era of quantum-centric supercomputing.”
Osprey’s launch is a significant one for IBM: smack in the middle of the company’s roadmap, it carries the biggest boost in qubit count within a single chip. Compared to Eagle, Osprey increases the qubit count by 3.4 times; that’s a larger generational increase than the company expects to achieve in three years’ time, when it plans to introduce the 4,158-qubit Kookaburra QPU. It’s also a higher jump than any since the introduction of Falcon and its 27 qubits back in 2019.
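The generational jumps are easy to sanity-check with quick arithmetic; the qubit counts below are the roadmap figures cited in this article:

```python
# Qubit counts cited in the article (IBM roadmap figures).
eagle = 127     # Eagle (2021)
osprey = 433    # Osprey (2022)
condor = 1_121  # Condor (planned for 2023)

print(f"Eagle  -> Osprey: {osprey / eagle:.2f}x")   # ~3.41x, the 3.4x cited above
print(f"Osprey -> Condor: {condor / osprey:.2f}x")  # ~2.59x
```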
Due to Osprey’s positioning within IBM’s roadmap — right before the company starts exploring quantum scaling by interconnecting multiple QPUs next year with Heron and its p-couplings — the increase in qubit count without a compromise in quality is exceptionally relevant. But perhaps more impressive is that this jump in qubit count was engineered at the same time as IBM laid most of the groundwork for its future modular products.
The company is looking to 2023 to introduce its 133-qubit, scalable Heron QPUs, which will leverage p-couplings to interconnect several Heron chips. The idea is that it’s easier to scale qubits within a given package and link separate packages than it is to create a monolithic QPU.
It does bring about challenges regarding workload distribution — there are a number of ways to cut up a higher-volume quantum problem so that it fits the chip (or chips) you have available to run the quantum circuits on, and the way this is done severely impacts performance. But multi-chip scaling is a necessity, and adopting this approach meant re-engineering the entire control electronics subsystem — the bridge between classical and quantum computing.
According to Dr. Oliver Dial, Chief Hardware Architect at IBM Quantum, a significant improvement came from changing the qubit control mechanism inside the company’s dilution refrigerators — the hardware responsible for cooling the superconducting qubits to near absolute zero (−273.15 °C).
Before Osprey, IBM employed coaxial cables to transmit microwave control information towards the operating qubits. Now, the coaxial cables have given way to flexible ribbon cables (the same sort that’s used wherever there are electronics and hinges, such as in your laptop). These ribbon cables themselves occupy much less space and offer much higher throughput than the previous solution while costing less time and resources to deploy. Dr. Dial says they allowed IBM to increase control density by 70% while reducing costs fivefold.
Another important element to this new quantum generation from IBM was increased FPGA (Field-Programmable Gate Array) performance within the control subsystem.
While the future of IBM’s qubit control runs through quantum-specific ASICs (Application-Specific Integrated Circuits), FPGAs have so far done the grunt work thanks to their flexibility: IBM can prototype different control schemes within the FPGA’s programmable fabric. This allows for quick experimentation and iteration until the company is confident enough to go the full ASIC route. Dr. Dial says that move will deliver another monumental improvement in power efficiency, cutting the power required to control a single qubit from around 100 W down to just 10 milliwatts.
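Taking those two per-qubit figures at face value, the projected savings compound quickly at Osprey's scale. A quick back-of-the-envelope check:

```python
# Per-qubit control power figures quoted by Dr. Dial.
fpga_watts_per_qubit = 100.0   # today's FPGA-based control: ~100 W per qubit
asic_watts_per_qubit = 0.010   # projected ASIC-based control: 10 mW per qubit

qubits = 433  # Osprey
print(f"Per-qubit reduction: {fpga_watts_per_qubit / asic_watts_per_qubit:,.0f}x")
print(f"Control power for {qubits} qubits: "
      f"{qubits * fpga_watts_per_qubit / 1000:.1f} kW -> "
      f"{qubits * asic_watts_per_qubit:.2f} W")
```

That works out to roughly 43 kW of control power shrinking to under 5 W for a full Osprey-class chip, if the quoted figures hold in practice.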
Crucially, Dr. Dial says the superconducting qubits in Osprey have shown coherence times comparable to the company’s best (despite the tremendous increase in qubit count), meaning that pure quantum volume is bound to increase in line with qubit counts.
According to IBM, the number and quality of qubits in Osprey are such that a classical system attempting to describe its qubits’ computational state would require more bits than there are atoms in the universe. It would seem that we’ve already entered the quantum advantage stage.
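The arithmetic behind that claim is straightforward: a full classical state-vector description of n qubits holds 2^n complex amplitudes, and 2^433 dwarfs the commonly cited ~10^80 atoms in the observable universe:

```python
import math

qubits = 433
amplitudes = 2 ** qubits      # complex amplitudes in a full state-vector description
atoms_in_universe = 10 ** 80  # common order-of-magnitude estimate

print(f"2^{qubits} is roughly 10^{math.log10(2) * qubits:.0f}")  # ~10^130
print(amplitudes > atoms_in_universe)                            # True
```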
Of course, qubits may improve in both count and quality, but raw quantum hardware has little to offer the average user on its own. Dr. Dial was quick to point out that anyone — truly anyone — can now spin up IBM’s quantum tech via the company’s cloud offering.
Broad access requires a serious abstraction effort, one that lets non-specialists interact with these systems, and IBM has worked to make that easier: trading runtime for accuracy is as easy as changing a software setting.
Improvements to its drivers, the Qiskit Runtime, and parametrized circuits throughout 2022 took IBM from a 1,400 CLOPS (Circuit Layer Operations Per Second) score up to around 15,000 CLOPS. If AMD’s driver-based performance improvements on its GPUs have earned it the “fine wine” moniker, I wonder what metaphor would be appropriate for this almost 11x performance increase.
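That multiplier checks out against the two scores quoted above:

```python
clops_before = 1_400   # CLOPS score at the start of 2022
clops_after = 15_000   # score after the driver, Qiskit Runtime, and circuit work
print(f"{clops_after / clops_before:.1f}x")  # ~10.7x, the "almost 11x" above
```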
Hardware improvements are a necessary half of developing any new technological system, but the other half is actually putting that hardware to use. To that end, IBM also announced its 100×100 Challenge at the Quantum Summit, an initiative that aims to put circuits of 100 qubits and 100-gate depth in the hands of users by 2024. By leveraging the company’s next-generation modular quantum architecture, Heron, IBM aims to challenge users with a “what fits here?” question: what sort of quantum computational problem can be processed within these constraints?
For all we know about quantum computing, something really special could come out of this challenge. It’s not a matter of whether humanity will see significant gains once it starts, merely a matter of how and when — and this challenge gives IBM a surefire way to keep that conversation going. Which is, after all, one of the company’s goals for its Quantum Summit 2022.
“The IBM Quantum Summit 2022 marks a pivotal moment in the evolution of the global quantum computing sector, as we advance along our quantum roadmap. As we continue to increase the scale of quantum systems and make them simpler to use, we will continue to see adoption and growth of the quantum industry,” said Jay Gambetta, IBM Fellow and VP of IBM Quantum. “Our breakthroughs define the next wave in quantum, which we call quantum-centric supercomputing, where modularity, communication, and middleware will contribute to enhanced scaling computation capacity, and integration of quantum and classical workflows.”
On the back of those remarks (and perfect execution of its roadmap so far), it seems the company’s bet on superconducting qubits is paying off. As IBM has showcased, there’s much life yet in optimizing the company’s quantum systems, beyond the increases in qubit density and coherence times that enabled the jump from the 127-qubit Eagle to the 433-qubit Osprey.
Osprey has now taken flight, but IBM has never stopped looking towards the future of quantum — and that continues with 2023’s 1,121-qubit Condor.