IBM’s Newest Quantum Computing Roadmap Unveils Four New Quantum Processors And Future Plans For A Quantum Supercomputer



Last week IBM updated its quantum computing roadmap for the third time since the first one was published in 2020. In this roadmap, IBM has effectively introduced new and essential technologies at every layer of the stack. It has also provided new tools for kernel developers, algorithm developers, and model developers. These developments all require new hardware, new software, and a new architecture.

This roadmap suggests that IBM will accelerate quantum computing's trajectory by developing quantum processors with the potential to scale to hundreds of thousands of qubits several years earlier than previously expected.

If IBM’s roadmap is implemented, it will change the paradigm of quantum computing. A decade ago, CPU-centric supercomputing was the exclusive domain of government and researchers for solving large and complex scientific problems. Since then, it has been democratized and transformed into various types of AI-centric supercomputing used in almost every industry today.

This roadmap is IBM's plan to create a new family of quantum processors, software, and services that will lead to the realization of the next generation of supercomputer: the quantum-centric supercomputer. The combined resources of quantum processors, CPUs, and GPUs are expected to solve some of the world's most challenging problems.

The big picture

I had the opportunity to discuss IBM’s new roadmap and its long-term impact on quantum computing with Dr. Blake Johnson, IBM Quantum Platform Lead. Dr. Johnson has an extensive quantum background. Before his current role at IBM, Dr. Johnson was Vice President of Quantum Engineering at Rigetti Computing, and preceding Rigetti, he was Senior Scientist at Raytheon’s BBN Technologies. Dr. Johnson received his undergraduate degree in physics from Harvard University and his PhD in physics from Yale University.

Dr. Johnson explained that IBM Research is developing four new quantum processors scheduled for release in 2023, 2024, and 2025. IBM Quantum System Two will provide the infrastructure needed to support the new processor architecture. IBM is planning for a prototype of System Two to be running in 2023.

Even though IBM has scheduled the release of its new quantum processors, it still plans to release the single-chip QPUs shown on the previous roadmap. These include the 433-qubit Osprey processor, scheduled for release later this year, and the 1,121-qubit Condor processor, expected to be released in 2023.

On the previous roadmap, IBM launched Qiskit Runtime, a runtime environment of co-located classical systems and quantum systems built to support containerized execution of quantum circuits at speed and scale. Earlier this month, IBM announced updates to Qiskit Runtime, equipping it with two new primitives. Primitives are predefined programs that make it easy to create quantum-classical workloads needed to build and customize applications.

The new primitives, Sampler and Estimator, optimize how code is sent to a quantum computer. Sampler generates outputs that help determine a solution to the computation by sampling quantum circuits. Estimator is an interface for estimating the expectation values of quantum operators, giving users the operator values that many algorithms require.
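
To make the division of labor concrete, here is a minimal sketch of the two primitives using Qiskit's local reference implementations in qiskit.primitives; the versions in qiskit-ibm-runtime follow the same pattern but execute on IBM systems, and exact constructor arguments vary by release.

    from qiskit import QuantumCircuit
    from qiskit.primitives import Sampler, Estimator   # local reference primitives
    from qiskit.quantum_info import SparsePauliOp

    # A simple two-qubit Bell-state circuit.
    bell = QuantumCircuit(2)
    bell.h(0)
    bell.cx(0, 1)

    # Sampler: sample the circuit and return quasi-probabilities over output bitstrings.
    measured = bell.copy()
    measured.measure_all()
    print(Sampler().run(measured).result().quasi_dists[0])            # ~{0: 0.5, 3: 0.5}

    # Estimator: estimate the expectation value of an operator for the same circuit.
    print(Estimator().run(bell, SparsePauliOp("ZZ")).result().values[0])   # ~1.0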

In 2023, IBM will provide additional primitives that run on parallelized quantum processors to obtain application speedup. At a high level, Quantum Serverless allows for flexible combinations of elastic classical computing with quantum, while Primitives serve as the quantum-classical interface.

A new modular architecture

IBM's latest roadmap introduces an entirely new modular architecture, very different from the architecture used by its existing family of quantum processors. The new architecture connects quantum processors to a common control infrastructure so that data can flow classically and in real time between the QPU and other chips in a multi-chip environment.

It also employs an entirely new multi-qubit gate scheme that is both faster and higher in fidelity.

In 2023, a new 133-qubit QPU called Heron will be the first IBM processor to use the new architecture. Multiple Heron processors can be linked together using classical couplers to permit classical parallelization.

Dr. Johnson said that the multi-chip Heron configuration would be extensible based on demand and application requirements. He said, “We believe this is an extensible architecture that is scalable to whatever size we want by using classical parallelization of quantum hardware."

Modular design, classical coupling, and parallelization of quantum hardware are all essential elements in designing a quantum-centric supercomputer.

Scaling with quantum couplers

In 2023, IBM's roadmap begins laying the foundation for its long-term goals by introducing short- and long-range quantum coupling technologies. Couplers allow qubits to be scaled logically without fabricating larger chips, which accommodates the increased input-output density that would otherwise be needed to get more signals into and out of the system.

The coupling scheme requires the same number of wires per qubit, but the couplers stretch out the footprint so that more wires are not crammed into the same physical space.

  • Short-range couplers use chip-to-chip parallelization to extend IBM's heavy-hex lattice across multiple chips, effectively scaling qubits by creating a larger but logical chip. Performance is not affected because the gate speed and gate fidelity of the expanded logical chip are about the same as those of the individual chips.
  • Long-range couplers use cables to connect multiple independent modules so that quantum information can be shared between quantum processors. IBM estimates this link will be somewhat slower and lower in fidelity than short-range chip-to-chip coupling; however, programming accommodations can be made to adjust for the differences. One of the benefits of long-range coupling is that it allows modules to be separated for additional input-output space.

Transitioning from single-chip QPUs to multi-chip QPUs

Existing IBM quantum processors are single-chip devices. In 2024, IBM will introduce its first multi-chip processor, Crossbill, a 408-qubit processor that demonstrates the first application of short-range coupling.

Concurrent with Crossbill's development in 2024, IBM will develop a 1,386+ qubit quantum processor called Flamingo, the first QPU to use long-range coupling, and will demonstrate parallel quantum processors using three link-connected Flamingos.

Quantum technologies developed in 2024 will pave the way for the next generation of quantum processors and enable them to scale to hundreds of thousands of qubits using multiple chips.

2025 – Kookaburra, the big bird

In 2025, IBM will use technologies developed in prior years to create a 4,158+ qubit quantum processor called Kookaburra. It will be the first processor to use a combination of short-range chip-to-chip couplers and long-range couplers.

Looking past 2025, these coupling technologies will begin to solve most near-term scaling problems. Systems can be linked together with classical parallelism, as shown with Heron; chip-to-chip couplers can extend the size of individual units; and long-range coupling can connect multiple modules.

Dynamic circuits

Like previous roadmaps, this year's roadmap shows software layers associated with their respective hardware targets. Dynamic circuits were first announced in 2021; after further development, IBM will selectively deploy the technology on exploratory systems later this year.

Dynamic circuits are a powerful and important technology that can:

  • Extend hardware capabilities, allowing reduced circuit depth
  • Allow consideration of alternative models of quantum computation, such as the measurement-based model, in contrast to the standard gate-array model
  • Play a fundamental role in quantum error correction codes that use parity checks dependent on real-time classical data

Dynamic circuits essentially create a much broader family of circuits that combine measurement with real-time classical computation, allowing future operations to be changed or controlled by the outcomes of mid-circuit measurements made during circuit execution.
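
As a minimal sketch of that feed-forward pattern, the snippet below builds a Qiskit circuit in which a mid-circuit measurement conditions a later gate; the exact control-flow API and backend support vary by Qiskit version and system, so treat it as illustrative rather than IBM's deployed feature set.

    from qiskit import QuantumCircuit

    qc = QuantumCircuit(2, 2)
    qc.h(0)
    qc.measure(0, 0)                        # mid-circuit measurement on qubit 0
    with qc.if_test((qc.clbits[0], 1)):     # real-time classical condition on the result
        qc.x(1)                             # flip qubit 1 only if the measurement returned 1
    qc.measure(1, 1)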

Quantum Serverless

In 2023, IBM will begin developing enhanced applications of elastic computing and parallelization of Qiskit Runtime.

It is much easier for algorithm developers to create and run many small quantum and classical programs than one large program. IBM is integrating Quantum Serverless into its core software stack to enable circuit knitting, allowing large quantum circuits to be solved by splitting them into smaller circuits and distributing them across quantum resources. Knitted circuits can be recombined by using an orchestrated solution of classical CPUs and GPUs.
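
As a rough illustration of the split-run-recombine idea, the toy example below cuts a trivially separable four-qubit problem into two independent two-qubit circuits and recombines the expectation value classically. Real circuit knitting also handles cuts through entangling gates, which requires quasi-probability bookkeeping not shown here; this is a simplified sketch, not IBM's implementation.

    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector, SparsePauliOp

    def bell_pair():
        qc = QuantumCircuit(2)
        qc.h(0)
        qc.cx(0, 1)
        return qc

    # Two small sub-circuits that together stand in for one larger four-qubit circuit.
    left, right = bell_pair(), bell_pair()

    # Run each small piece independently (here with exact statevector simulation).
    exp_left = Statevector(left).expectation_value(SparsePauliOp("ZZ")).real
    exp_right = Statevector(right).expectation_value(SparsePauliOp("ZZ")).real

    # Classical recombination: for a product observable on a separable circuit,
    # <ZZZZ> over four qubits is just the product of the two two-qubit results.
    combined = exp_left * exp_right

    # Sanity check against running the full four-qubit circuit in one piece.
    full = left.tensor(right)
    assert abs(combined - Statevector(full).expectation_value(SparsePauliOp("ZZZZ")).real) < 1e-9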

By 2023, the necessary hardware and software should be in place for model developers to begin prototyping software applications for specific use cases. According to the roadmap, machine learning will be the first use case. Jumping forward to 2025, IBM plans to expand applications to include optimization, natural sciences, and others.

Roadmap challenges

There are several challenges IBM must address in its roadmap if it is to reach its end goal of building a quantum-centric supercomputer:

  1. Expanding the use of dynamic circuits will be difficult because the technology requires multiple innovations within the stack. Dynamic circuits need control systems that allow data to be moved around with latency low enough that it can be processed in real time. IBM must design a third-generation control system to meet the low-latency requirements created by the new roadmap.
  2. According to Dr. Johnson, a new language is also needed that allows users to describe combinations of real-time classical computation with quantum gates. He said that IBM is leading an effort, with the assistance of the broader quantum community, to develop OpenQASM 3, a circuit description language for these new circuits (a brief sketch follows this list).
  3. IBM also needs new compiler technology to convert OpenQASM 3 circuits into a form that can run on a control system. IBM is already working to eliminate this problem by building a new compiler from the ground up.
  4. A body of research demonstrates that superconducting qubits are on a viable path to fault-tolerant quantum computing. However, it will take several years before fault tolerance is possible. In the meantime, as shown on the roadmap, quantum errors will be handled with error suppression and error mitigation. In the years beyond 2026, if IBM is to succeed in scaling to hundreds of thousands of qubits, it must have quantum error correction to build a quantum machine that large. Failure to do so will jeopardize the long-range plan.
  5. Can IBM execute its ambitious roadmap and build a quantum supercomputer? Skepticism is always appropriate; however, history shows that IBM has met all its past roadmap development targets on time. That includes one of the most essential 2021 hardware targets, the Eagle 127-qubit processor, the first to put quantum computation beyond the simulated reach of classical computers. Additionally, Qiskit Runtime was implemented in 2021 with a demonstrated 120x speedup in quantum runtimes. In my experience, if it is on the roadmap, IBM either has a very high degree of confidence that it's doable or it's already done but not yet announced.
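
To give a feel for what such a circuit description looks like, the sketch below uses Qiskit's OpenQASM 3 exporter (qiskit.qasm3.dumps) to serialize a small feed-forward circuit; it assumes a recent Qiskit version with control-flow export support, and the emitted text is illustrative rather than IBM's compiler output.

    from qiskit import QuantumCircuit, qasm3

    qc = QuantumCircuit(2, 2)
    qc.h(0)
    qc.measure(0, 0)
    with qc.if_test((qc.clbits[0], 1)):   # real-time classical condition on a quantum gate
        qc.x(1)
    qc.measure(1, 1)

    # The conditioned gate appears in the OpenQASM 3 output as an if (...) { ... } block.
    print(qasm3.dumps(qc))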

Wrapping it up

IBM's future efforts will continue to focus on scaling qubits, increasing quality, and maximizing the speed of quantum circuits. Each block of technology in its roadmap is a measured and critical step in an overall orchestrated evolution that, if properly executed, will allow IBM to achieve its end goal of building quantum processors with hundreds of thousands of qubits. Qiskit Runtime and its primitives will continue to play an essential role in IBM's future plans, with speedups expected to grow from today's 120x to 200,000x sometime in the future.

In 2023, IBM will deploy the last of its single-chip quantum processors, the 1121-qubit Condor. That year will also see the deployment of three key technologies that form the foundation of its overall plan. These consist of a completely new quantum computing architecture and two key scaling elements: a short-range chip-to-chip coupler and a long-range coupler.

The first multi-chip processor, the 408-qubit Crossbill, will be introduced in 2024. This step explores the path to increasing the size of quantum processors beyond the area limits of a single chip.

A year later, in 2025, almost every part of IBM's technology plan comes together in the form of a 4,158+ qubit quantum processor called Kookaburra (a plus sign after the qubit count means IBM believes it can scale the processor to whatever number of qubits an application requires). While each new quantum processor is important to the overall roadmap, Kookaburra appears to be the cornerstone of IBM's future generations of quantum processors. In previous roadmaps, it seemed that Condor would be the future architecture.

By 2026, quantum computers with large numbers of qubits should finally be able to solve a select number of useful problems far beyond classical computers' capability.

Analyst notes:

  1. IBM is conducting a significant amount of research into error correction and how to bridge noise mitigation and error correction. Rather than an abrupt jump between the two concepts, IBM believes a smooth transition is possible. For that reason, it is borrowing concepts and ideas from the mitigation space as well as from error correction. IBM recently published a research paper outlining more accurate results in this area than it has ever achieved before. A viable error correction solution would benefit not only IBM but the entire ecosystem as well.
  2. When reading the roadmap, it is important to understand that systems are slotted into the year when the technology will first be demonstrated, not when it will be available.
  3. The roadmap isn't a precise representation of how much overlap will exist between new and old systems. Today, most IBM systems are Falcons, with some Hummingbirds and Eagles in the mix. My guess is that IBM will eventually transition most of its processors to Eagles. At some point, Crossbill and others, such as Osprey, will also be in the mix.
  4. The innovations in this roadmap open many doors. The combination of quantum processors, classical processors, classical communication, and quantum communication makes it possible that a single quantum system could be scaled to 10,000 to 100,000 qubits earlier than expected.
  5. In our current cloud-based environment of Quantum-as-a-Service, the size of a quantum computer or the need for cryogenics makes little difference to most users working from a laptop or terminal. However, there is a growing number of applications that will need room-temperature performance and a compact size.

