Data center operators have learned to scrutinize their buildings for wasted energy. They understand that the way to efficiency is to follow the basic lessons of thermodynamics: minimize the effort spent removing heat - and if possible avoid creating it in the first place.

They know it’s cool to be “adiabatic”: to avoid transferring heat to one’s surroundings. And they compete to get PUE figures less than 1.1 - which is more than 90 percent efficiency in getting power to the computers.

But the servers in the racks are letting the side down. Awesome as their processors are, their CMOS circuitry is actually way less efficient than the CRAC units and power transformers they share a home with. And the technology is reaching its limits.

The IT industry has triumphantly ridden on the back of Moore’s Law: the phenomenal fact that every two years, the number of transistors on a chip doubled… for more than forty years. This delivered continuous improvements in computing power and energy efficiency, thanks to the related Koomey’s Law, which observed that the energy needed per computation halved at a comparable rate.

But everyone knows this is coming to an end. Chip makers are struggling to make a 5nm process work, because the gates are so small that electrons quantum-tunnel straight through them.

Looking ahead, we might be able to get down to around 100 attojoules (100 × 10⁻¹⁸ J) per operation, according to Professor Jon Summers, who leads data center research at RISE, the Research Institutes of Sweden. But there’s an even lower theoretical limit to computing energy, which derives from work by Rolf Landauer of IBM in the 1960s.

Reversible computing

Landauer observed that, even if everything else in a computer is done completely efficiently, there’s an apparently inescapable energy cost. Whenever there’s an irreversible loss of information - such as erasing a bit - entropy increases, and energy is turned into heat. There’s a fundamental minimum energy required to erase a bit, and it is tiny: just under 3 zeptojoules (3 × 10⁻²¹ J), which is kB T ln 2, where kB is the Boltzmann constant and T is the temperature in kelvins. Physicists like to use electron volts (eV) to measure small amounts of energy, but even on that scale it’s tiny: 0.0175 eV.
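As a back-of-the-envelope check (assuming a room temperature of about 293 K, which reproduces the figures above), the limit can be computed directly:

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
T = 293.0             # assumed room temperature, kelvins
eV = 1.602176634e-19  # joules per electron volt

E_min = k_B * T * math.log(2)   # minimum energy to erase one bit
print(f"{E_min:.3g} J")         # ~2.8e-21 J: just under 3 zeptojoules
print(f"{E_min / eV:.4f} eV")   # ~0.0175 eV
```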

“On that basis, computing is only 0.03 percent efficient,” said Summers, a shocking comparison with efficiencies above 95 percent claimed for some of the mechanical and electrical equipment these computers share a building with.

Can computers catch up? In their glory days, Moore’s and Koomey’s laws projected that we might reach the Landauer limit by 2050, but Summers thinks that’s never going to happen: “You can’t get down to three zeptojoules because of thermal fluctuations.”
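As a rough sanity check on that projection (the baseline energy per operation and the doubling period below are illustrative assumptions, not figures from Koomey’s data):

```python
import math

E_landauer = 2.8e-21  # J: the room-temperature Landauer limit, from above
E_today = 1e-14       # J per operation - an assumed illustrative baseline (~10 fJ)
doubling_years = 1.6  # assumed efficiency-doubling period, in the spirit of Koomey's law

halvings = math.log2(E_today / E_landauer)
print(f"{halvings:.1f} halvings ≈ {halvings * doubling_years:.0f} years")
# About 22 halvings, or roughly 35 years - which is how projections
# made in the 2010s arrived at a mid-century horizon.
```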

But if you can’t reduce the energy required, your technology hits limits: when you miniaturize it, the power density goes through the roof. Bipolar transistor-transistor logic (TTL) was used in early computers, and remained in use up to the IBM 3081 of 1980 - but as it was miniaturized, it generated so much heat that it needed water cooling.

The newer CMOS technology rapidly replaced TTL in the 1980s because it used 100,000 times less energy, said Summers: “The heat flux went down, the need for liquid cooling disappeared, and they could stick with air cooling.”

Now, as CMOS has shrunk, the heat density has increased: “We’ve gone up that curve again, three times higher than in TTL.” Water cooling is coming back into fashion. As before, people are looking for alternative technologies. And, as it happens, there is an often-overlooked line of research which could drastically reduce the heat emission - sidestepping the Landauer limit by questioning its assumption that computing involves overwriting data.

Landauer assumed that at some point data had to be erased. But what if that were not true? That’s a question which Michael Frank of the US Sandia National Laboratories has been asking for more than 25 years: “A conventional computer loses information all the time. Every logic gate, on every clock cycle, destructively overwrites old output with new output. Landauer’s principle tells you, no matter how you do those operations, any operation that overwrites memory has to dissipate some energy. That’s just because of the connection between information and entropy.”
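A minimal sketch of what “destructively overwrites” means: a conventional two-input gate squeezes four input states into two output states, so distinct inputs collapse onto the same output, and the lost distinction becomes entropy.

```python
# A conventional NAND gate is irreversible: four input states map onto
# just two output states, so the inputs cannot be recovered from the output.
from collections import defaultdict

def nand(a, b):
    return int(not (a and b))

preimages = defaultdict(list)
for a in (0, 1):
    for b in (0, 1):
        preimages[nand(a, b)].append((a, b))

print(dict(preimages))
# {1: [(0, 0), (0, 1), (1, 0)], 0: [(1, 1)]}
# Three different inputs all yield output 1; the lost distinction is
# exactly the information whose erasure Landauer's principle charges for.
```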

Yves Lecerf in 1963, and Charles Bennett in 1973, both pointed out that Landauer’s assumption could be dropped: in theory, a computer never needs to erase a bit, because erasure isn’t mathematically required for computation.

Back in 1936, Alan Turing had proved that any computation could be done by a device writing and erasing marks on a paper tape, leading to the stored-program model followed by all computers since. Turing’s universal machine was not reversible, as it both read and erased marks (Turing was thinking about other things than entropy). Lecerf and Bennett proved that any universal Turing machine could be made reversible.

In the 1970s, Richard Feynman followed this up, noting that there is no lower limit to the energy required for a reversible process, so in principle, a reversible computer could give us all the computing we need while consuming almost no energy!

However, Feynman pointed out a big drawback that followed from the physics of these systems. To be reversible or “adiabatic,” an operation must take place in thermal equilibrium - and to stay in thermal equilibrium, it must proceed really, really slowly. So the system could be infinitely efficient, at the cost of being infinitely slow.
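The electronic version of that tradeoff is easy to quantify. Abruptly charging a capacitance C to voltage V through resistance R dissipates ½CV² as heat, however small R is; ramping the voltage gently over a time τ much longer than RC dissipates only about (RC/τ)·CV², which falls toward zero as the operation slows down. A sketch, with purely illustrative component values:

```python
# Heat dissipated charging a gate capacitance: abrupt vs adiabatic.
# R, C and V are illustrative values, not real process parameters.
R = 1e3      # ohms
C = 1e-15    # farads (1 fF)
V = 1.0      # volts
RC = R * C   # time constant: 1 picosecond

E_abrupt = 0.5 * C * V**2            # conventional switching: fixed loss
for tau in (10 * RC, 100 * RC, 1000 * RC):
    E_slow = (RC / tau) * C * V**2   # gentle ramp over time tau >> RC
    print(f"ramp over {tau:.0e} s: {E_slow:.1e} J  (abrupt: {E_abrupt:.1e} J)")
# Ten times slower means ten times less heat - Feynman's "infinitely
# efficient, infinitely slow" tradeoff in miniature.
```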

A small number of physicists and computer scientists have been brainstorming for years, looking for reversible technologies which might operate adiabatically or nearly adiabatically - but not take an infinite time over it.

Theory meets reality

In 1982, Edward Fredkin and Tommaso Toffoli at MIT designed reversible logic gates… but this is theory, so they based them on billiard balls. Physicists like to use classical mechanics as a model, and the pair considered a switch where hard spheres entered a physical box and underwent elastic collisions. Where they bounced to constituted the output - and in theory the switch could work adiabatically in real time.
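Stripped of the billiard balls, the logic of Fredkin’s gate is a controlled swap: if the control bit is 1, the other two bits exchange places. It maps every input state to a distinct output state, so no information is lost, and it is its own inverse. A quick sketch:

```python
def fredkin(c, a, b):
    """Fredkin gate (controlled swap): if c is 1, exchange a and b."""
    return (c, b, a) if c else (c, a, b)

states = [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]

# All 8 input states map to 8 distinct outputs: a bijection, so reversible.
assert len({fredkin(*s) for s in states}) == 8
# Applying the gate twice restores the input: the gate undoes itself.
assert all(fredkin(*fredkin(*s)) == s for s in states)
```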

The trouble is, you can’t get a system aligned infinitely precisely with zero friction, any more than you can eliminate the thermal noise and tunneling in an electronic system. Fredkin and Toffoli offered an electronic alternative, based on capacitors and inductors, but that needed zero resistance to work.

Researchers began working towards reversible circuits in CMOS; the first fully reversible circuits came from Saed Younis, working in Tom Knight’s group at MIT.

At the same time, other groups designed mechanical systems, based on rods and levers. Ralph Merkle at the Institute for Molecular Manufacturing, Palo Alto, designed completely reversible nano-scale machines, based around moving tiny physical bars. Others worked on quantum dots - systems that use single electrons to handle information, at very low temperatures.

All these approaches involve trade-offs. They can be completely reversible, but they would require utterly new manufacturing methods and might take up a lot more physical space. Low-temperature systems have an associated energy cost, and - as Feynman pointed out - some reversible systems work very slowly.

As things stand, the IT industry has chosen a path that prioritizes results over efficiency. That’s nothing new. Canals are fantastically efficient, allowing goods to be floated with very little friction, pulled by a horse. Yet, in the 19th century, the industrial revolution in Britain chose railways, powered by burning coal, because railways could move goods to almost any destination, quickly.

Classical reversible computing can potentially save energy while doing conventional, general-purpose work: “A good analogy is the difference between throwing trash away and recycling,” Frank told DCD. “It’s easier to put it in the landfill than to transform it, but in principle, you could get a saving.” When we overwrite bits, he said, we let the information turn into entropy, which generates heat that has to be moved out of the machine.

“If you have some information that you have generated, and you don’t need it anymore, the reversible way to deal with it is to ‘decompute’ it or undo the physical operations that computed it. In general you can have an extra tape, where you temporarily record information that you would have erased, and then decompute it later.” Decomputing is a bit like regenerative braking in a vehicle - getting back the energy that has been put in.
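Python isn’t a reversible language, but the control pattern of Bennett’s trick can be sketched: run the computation forward while logging every intermediate value to a scratch “tape,” copy out the answer, then undo the forward steps in reverse order until the tape is blank again. (The function and variable names here are illustrative.)

```python
def compute_with_decompute(x):
    tape = []                # scratch tape for intermediate results

    # Forward pass: compute, logging each intermediate onto the tape.
    a = x + 3;  tape.append(("a", a))
    b = a * a;  tape.append(("b", b))
    answer = b - 1           # copy out the result we actually want

    # Decompute: undo the forward steps in reverse order. In a truly
    # reversible machine each step would be the exact inverse of the
    # operation that produced it, returning the tape to blank without
    # ever irreversibly erasing a bit.
    tape.pop()               # b decomputed (b = a*a is undone, given a)
    tape.pop()               # a decomputed (a = x+3 is undone, given x)
    assert tape == []        # tape is blank again - "regenerative braking"

    return answer

print(compute_with_decompute(4))  # 48
```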

Computing chose CMOS because it works, and since that choice was made reversible computing has been ignored by mainstream research. As one of the MIT group in the 1990s, Frank helped create some of the prototypes and demonstrations of the concepts. He’s stayed true to its promise since then, patiently advocating its potential, while watching investment and media focus go to more exciting prospects like quantum computing.

Since 2015, as a senior scientist at the Sandia National Laboratories, he’s established reversible computing as a possible future direction for IT. Through the IEEE, he’s contributed to the International Roadmap for Devices and Systems (IRDS), a series of documents which chart the likely developments in semiconductors - and reversible computing is now on that map.

There’s lots of excitement about other technologies on the list, such as quantum computing, which offers potentially vast increases in speed using superpositions of quantum states.

Google and others are racing towards “quantum supremacy,” but quantum may have limits, said Frank. “Quantum computing offers algorithm speed ups. It boils down to specific kinds of problem. It has so much overhead, with very low temperatures and massive systems. The real world cost per operation is huge - and it will never run a spreadsheet.”

Frank is clear about the tradeoffs in reversible computing. Saving bits takes up space. “It’s a tradeoff between cost and energy efficiency,” he told us - but as chips approach their current limitations, that space could become available.

Because today’s chips are now generating so much heat - the problem Summers noted earlier - they are often riddled with “dark silicon.”

Today’s chips have more gates on them than they can use at any one time. They are built as systems-on-chip (SoCs) that combine different modules. Said Summers: “A multicore chip might have a budget of 100W. In order to not exceed that you can only switch on so many transistors.”
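The arithmetic behind that budget is simple to sketch (the per-switch energy, clock rate and transistor count below are illustrative assumptions, not Summers’ figures):

```python
# How many transistors can actually toggle under a fixed power budget?
# All device parameters here are illustrative assumptions.
budget_W = 100.0   # the chip power budget from the example above
E_switch = 1e-16   # assumed energy per switching event, J (~100 aJ)
f_clock = 3e9      # assumed clock frequency, Hz

watts_per_transistor = E_switch * f_clock   # if it toggles every cycle
active = budget_W / watts_per_transistor
print(f"{active:.1e} transistors can switch at once")
# ~3e8 - a tiny fraction of the tens of billions of transistors
# fabricated on a modern SoC; the rest must stay dark.
```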

Silicon providers are giving us gates we can’t use all at once. “We’ve been duped by the manufacturers,” said Summers.

[Image: an IBM quantum computer - Sebastian Moss]

One step at a time

Frank thinks this dark silicon could be used to introduce some aspects of reversible computing. “We could harness more of the transistors that can be fabricated on a given chip, if we have them turned on and doing things adiabatically.” Space on chips that would otherwise be under-utilized could be given over to adiabatic switches - because they won’t draw significant power or cause heating.

Adiabatic circuits made in CMOS are still in their infancy and at present have some limitations: “Adiabatic CMOS gets better at low temperatures, and is especially good at cryogenic temperatures,” he said.

Perhaps ironically, a field which needs electronics that work at low temperatures is the well-funded area of quantum computing. The quantum bits or “qubits” need to be kept at cryogenic temperatures, and any heat generated in the surrounding control circuits must be removed.

It could be that early adiabatic silicon will be developed for supporting circuitry in quantum computing projects, said Frank: “That may end up being one of the first commercial applications of adiabatic logic - you gain some benefits from operating at low temperature.”

Getting it to work well at room temperature will require more work and new kinds of component, including resonators which absorb energy and recover it, providing the “regenerative brakes” that decomputing needs.

“The energy required for data processing is never going to be zero,” said Frank. “No system is perfectly efficient. We don’t yet know the fundamental limits to how small the losses can become. More generally, we don’t know the fundamental limits to reversible technologies.”

Frank’s group is starting to get funding, and access to fabs for custom chips, but it’s slow: “We need more engineers working on these kinds of ideas. We have the funding to make test chips. It’s possible to make these through commercial foundries, too. That’s another avenue we can pursue.”

However it happens, we know that something like this is needed, as classical CMOS approaches the end of its useful development. “We’re getting closer to the point where you can’t go further,” said Summers. “Will the semiconductor industry stagnate, and people lose interest in it? Or will it do other things?”

Frank and Summers agree that in the next ten years or so there will be a gap between the end of CMOS and whatever the next technology is. “No one knows what will fill that gap at this point,” said Frank. “A wide variety of things are being investigated. There’s no clear winner. I would not be surprised if there’s a period of stagnation. It takes quite a while - about ten years - for a technology to go from working in the lab to mass production.”