Nice, these ideas have been around for a long time but never commercialized to my knowledge. I've done some experiments in this area with simulations and am currently designing some test circuitry to be fabbed via Tiny Tapeout.
Reversibility isn't actually necessary for most of the energy savings. It buys you maybe an extra 20% beyond what adiabatic techniques can do on their own. The reason is that the energy of the information itself pales in comparison to the resistive losses that dominate in adiabatic circuits, and it's actually a (device-dependent) portion of those resistive losses, not the energy of the information itself, that the reversible aspect helps to recover.
I'm curious why Frank chose to go with a resonance-based power-clock, instead of a switched-capacitor design. In my experience the latter are nearly as efficient (losses are still dominated by resistive losses in the powered circuit itself), and are more flexible as they don't need to be tuned to the resonance of the device. (Not to mention they don't need an inductor.) My guess would be that, despite requiring an on-die inductor, the overall chip area required is much less than that of a switched-capacitor design. (You only need one circuit's worth of capacitance, vs. 3 or more for a switched design, which quadruples your die size....)
I'm actually somewhat skeptical of the 4000x claim though. Adiabatic circuits can typically only provide about a single order of magnitude power savings over traditional CMOS -- they still have resistive losses, they just follow a slightly different equation (f²RC²V², vs. fCV²). But RC and C are figures of merit for a given silicon process, and fRC (a dimensionless figure) is constrained by the operational principles of digital logic to the order of 0.1, which in turn constrains the power savings to that order of magnitude regardless of process. Where you can find excess savings though is simply by reducing operating frequency. Adiabatic circuits benefit more from this than traditional CMOS. Which is great if you're building something like a GPU which can trade clock frequency for core count.
Hi, someone pointed me at your comment, so I thought I'd reply.
First, the circuit techniques that aren't reversible aren't truly, fully adiabatic either -- they're only quasi-adiabatic. In fact, if you strictly follow the switching rules required for fully adiabatic operation, then (ignoring leakage) you cannot erase information -- none of the allowed operations achieve that.
Second, to say reversible operation "only saves an extra 20%" over quasi-adiabatic techniques is misleading. Suppose a given quasi-adiabatic technique saves 79% of the energy, and a fully adiabatic, reversible version saves "an extra 20%" -- well, now that's 99%. But if you're dissipating 1% of the energy of a conventional circuit, and the quasi-adiabatic technique is dissipating 21%, that's 21x more energy efficient! And so you can achieve 21x greater performance within a given power budget.
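To spell out that arithmetic (same numbers as above, nothing new assumed):

```python
# What matters is the energy still *dissipated*, not the fraction saved.
conventional = 1.0                            # normalized energy per op
quasi_adiabatic = conventional * (1 - 0.79)   # saves 79% -> dissipates 0.21
fully_adiabatic = conventional * (1 - 0.99)   # saves 99% -> dissipates 0.01

improvement = quasi_adiabatic / fully_adiabatic
print(round(improvement))  # 21x lower energy per operation
```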
Next, to say "resistive losses dominate the losses" is also misleading. The resistive losses scale down arbitrarily as the transition time is increased. We can actually operate adiabatic circuits all the way down to the regime where resistive losses are about as low as the losses due to leakage. The max energy savings factor is on the order of the square root of the on/off ratio of the devices.
Regarding "adiabatic circuits can typically only provide an order of magnitude power savings" -- this isn't true for reversible CMOS! Also, "power" is not even the right number to look at -- you want to look at power per unit performance, or in other words energy per operation. Reducing operating frequency reduces the power of conventional CMOS, but does not directly reduce energy per operation or improve energy efficiency. (It can allow you to indirectly reduce it though, by using a lower switching voltage.)
You are correct that adiabatic circuits can benefit from frequency scaling more than traditional CMOS -- since lowering the frequency actually directly lowers energy dissipation per operation in adiabatic circuits. The specific 4000x number (which includes some benefits from scaling) comes from the analysis outlined in this talk -- see links below - but we have also confirmed energy savings of about this magnitude in detailed (Cadence/Spectre) simulations of test circuits in various processes. Of course, in practice the energy savings is limited by the resonator Q value. And a switched-capacitor design (like a stepped voltage supply) would do much worse, due to the energy required to control the switches.
Thanks for the reply, was actually hoping you'd pop over here.
I don't think we actually disagree on anything. Yes, without reverse circuits you are limited to quasi-adiabatic operation. But, at least in the architectures I'm familiar with (mainly PFAL), most of the losses are unarguably resistive. As I understand PFAL, it's only when the operating voltage of a given gate drops below Vth that the (macro) information gets lost and reversibility provides benefit, which is only a fraction of the switching cycle. At least for PFAL the figure is somewhere in the 20% range, IIRC. (I say "macro" because of course the true energy of information is much smaller than the amounts we're talking about.)
The "20%" in my comment I meant in the multiplicative sense, not additive. I.e. going from 79% savings to 83.2%, not 99%. (I realize that wasn't clear.)
What I find interesting is that reversibility isn't actually necessary for true adiabatic operation. All that matters is that the information about where charge needs to be recovered from can be derived somehow. This could come from information available elsewhere in the circuit, not necessarily from the subsequent computations run in reverse. (Thankfully, quantum no-cloning does not apply here!)
I agree that energy per operation is often more meaningful, BUT one must not lose sight of the lower bounds on clock speed imposed by a particular workload.
Ah, thanks for the insight into the resonator/switched-cap tradeoff. Yes, I know that switched-capacitor designs which are themselves adiabatic are a bit of a research topic. In my experience their switching losses aren't comparable to the resistive losses of the adiabatic circuitry itself though. (I've done SPICE simulations using the sky130 process.)
Do these reversible techniques help or hinder in applications where hardened electronics are required, like satellites or space probes? I can see a case for both.
Can one characterize the process an adiabatic circuit goes through the way one does for the Carnot engine? The idea being to come up with a theoretical ceiling for the efficiency of such a circuit in terms of circuit parameters?
Yes, a similar analysis is where the above expression f²RC²V² comes from.
Essentially -- (and I'm probably missing a factor of 2 or 3 somewhere as I'm on my phone and don't have reference materials) -- in an adiabatic circuit the unavoidable power loss for any individual transistor stems from current (I) flowing through that transistor's channel (a resistor R) on its way to and from another transistor's gate (a capacitor C). So that's I²R unavoidable power dissipation.
The current I must be sufficient to charge and then discharge the capacitor to/from the operating voltage (V) in the time of one cycle (1/f). So I=2fCV. Substituting this gives 4f²RC²V².
Compare to traditional CMOS, wherein the gate capacitance C is charged through R from a voltage source V. It can be shown that this dissipates ½CV² of energy through the resistor in the process, and the capacitor is filled with an equal amount of energy. Discharging then dissipates this energy through the same resistor. Repeat this every cycle for a total power usage of fCV².
Divide these two figures and we find that adiabatic circuits use 4fRC times as much energy as traditional CMOS. However, f must be less than about 1/(5RC) for a CMOS circuit to function at all (else the capacitors don't charge sufficiently during a cycle), so this always works out to a power savings in favor of adiabatics. And notably, decreasing f of an adiabatic circuit below the maximum permissible for CMOS on the same process increases the efficiency gain proportionally.
(N.B., I feel like I missed a factor of 2 somewhere as this analysis differs slightly from my memory. I'll return with corrections if I find an error.)
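For anyone who wants to play with the numbers: here's the derivation above plugged into made-up but plausible process values (R, C, V below are placeholders for illustration, not figures from any real process):

```python
R = 10e3    # effective channel resistance, ohms (assumed)
C = 1e-15   # gate capacitance, farads (assumed)
V = 1.0     # operating voltage, volts (assumed)

f_max = 1 / (5 * R * C)  # rough max usable CMOS frequency, per f < 1/(5RC)

def p_cmos(f):
    return f * C * V**2                    # traditional CMOS: fCV^2

def p_adiabatic(f):
    return 4 * f**2 * R * C**2 * V**2      # adiabatic: 4 f^2 R C^2 V^2

# The ratio is 4fRC: 0.8 right at the CMOS speed limit, and it improves
# proportionally as the clock is slowed further.
for f in (f_max, f_max / 10, f_max / 100):
    print(f"f = {f:.2e} Hz: adiabatic/CMOS power ratio = "
          f"{p_adiabatic(f) / p_cmos(f):.3f}")
```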
There indeed has been research on reversible adiabatic logic in superconducting electronics. But superconducting electronics has a whole host of issues of its own, such as low density and a requirement for ultra-low temperatures.
When I was at Sandia we also had a project exploring ballistic reversible computation (as opposed to adiabatic) in superconducting electronics. We got as far as confirming to our satisfaction that it is possible, but this line of work is a lot farther from major commercial applications than the adiabatic CMOS work.
Possibly, that's an interesting thought. The main benefit of adiabatics as I see them is that, all else being equal, a process improvement of the RC figure can be used to enable either an increase in operating frequency or a decrease in power usage (this is reflected in the additional factor of fRC in the power equation). With traditional CMOS, this can only benefit operating frequency -- power usage is independent of the RC product per se. Superconduction (or near-superconduction) is essentially a huge improvement in RC which couldn't be realized as an increase in operating frequency due to speed-of-light limitations, so adiabatics would see an outsize benefit in that case.
It doesn't necessarily take any energy at all to process information, but it does take on the order of kT (at least kT ln 2) of energy to erase a bit of information. It's related to Landauer's principle.
No, and yes, so long as you don't delete information.
Think of a marble-based computer, whose inner workings are frictionless and massless. The marbles roll freely without losing energy unless they are forced to stop somehow, but computation is nonetheless performed.
Henry G. Baker wrote this paper titled "The Thermodynamics of Garbage Collection" in the 90s about linear logic, stack machines, reversibility and the cost of erasing information:
A subset of FRACTRAN programs are reversible, and I would love to see rewriting computers as a potential avenue for reversible circuit building (similar to the STARAN CPU):
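For the curious, a FRACTRAN machine fits in a few lines; here's a minimal stepper sketch (the (3, 2) "adder" program is just an illustration):

```python
# A FRACTRAN program is a list of fractions (num, den). Each step
# multiplies n by the first fraction that yields an integer; the
# machine halts when no fraction applies.
def fractran(n, program, max_steps=1000):
    trace = [n]
    for _ in range(max_steps):
        for num, den in program:
            if (n * num) % den == 0:
                n = n * num // den
                trace.append(n)
                break
        else:
            return trace  # halted
    return trace

# Toy "adder": starting from 2^a * 3^b, the program [3/2] halts at
# 3^(a+b). Each step is multiplication by a fixed fraction, so any
# single step can be undone by the reciprocal -- whole programs are
# reversible only when the step to undo is unambiguous, hence "a subset".
print(fractran(2**3 * 3**2, [(3, 2)]))  # [72, 108, 162, 243]
```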
This is really cool, I never expected to see reversible computation made in electrical systems. I learned about it in undergrad, taking a course by Bruce MacLennan*, though it was more applied to "billiard ball" or quantum computing. It was such a cool class.
>it is producing a chip that, for the first time, recovers energy used in an arithmetic circuit. The next chip, projected to hit the market in 2027, will be an energy-saving processor specialized for AI inference. The 4,000x energy-efficiency improvement is on Vaire’s road map but probably 10 or 15 years out.
How I wish I could place bets on this never happening.
>In the following years, Vaire plans to design the first reversible chip specialized for AI inference.
These guys are cashing in on AI hype. Watch them raise VC money, give themselves 6 figures salaries and file for bankruptcy in 3 years.
You are exactly correct - the combination of deep belief in the ability to obtain quick riches by investing in the "next big thing" aligned with a large gap of knowledge between reality and hype, all mixed into a milieu of in-group speak and customs always leads to the proliferation of the con. It's the next "Long Blockchain Corp!"
4000x cost saving would bring operation costs for compute down close to zero, meaning marginal costs for data centers would go down as well, meaning data centers would buy a shit ton of these chips. Think the valuation of Nvidia x 1000
I still think the technical challenge is too big, but it's high reward for early investors.
The ideas are neat, and both Landauer and Bennett did some great work and left a powerful legacy. But the energetic limits we are talking about are not yet relevant in modern computers. The amount of excess thermal energy for performing 10^26 erasures associated with some computation (of, say, an LLM that would be too powerful for the current presidential orders) would only be about 0.1 kWh, so 10 minutes of a single modern GPU. There are other advantages to reversibility, of course, and maybe one day even that tiny amount of energy savings will matter.
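That 0.1 kWh figure checks out against Landauer's bound (kT ln 2 per erased bit, at room temperature):

```python
import math

k = 1.380649e-23        # Boltzmann constant, J/K
T = 300                 # room temperature, K
erasures = 1e26

joules = erasures * k * T * math.log(2)
print(f"{joules / 3.6e6:.2f} kWh")  # 0.08 kWh -- i.e. the ~0.1 kWh above
```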
Wow. This whole logic sounds like something really harebrained from a Dr Who episode: "It takes energy to destroy information. Therefore if you don't destroy information, it doesn't take energy!" - sounds completely illogical.
I honestly don't understand from the article how you "recover energy". Yet I have no reason to disbelieve it.
Someone else here compared it to regenerative braking in cars, which is what made it click for me. If you spend energy to accelerate, then recapture that energy while decelerating, then you can manage to transport yourself while your net energy expenditure is zero (other than all that pesky friction). On the other hand, if you spend energy to accelerate, then shed all that energy via heat from your brake pads, then you need to expend new energy to accelerate next time.
>Abstract. The theory of reversible computing is based on invertible primitives and composition rules that preserve invertibility. With these constraints, one can still satisfactorily deal with both functional and structural aspects of computing processes; at the same time, one attains a closer correspondence between the behavior of abstract computing systems and the microscopic physical laws (which are presumed to be strictly reversible) that underlie any concrete implementation of such systems. According to a physical interpretation, the central result of this paper is that it is ideally possible to build sequential circuits with zero internal power dissipation.
>In the 1970s, Ed Fredkin, Tommaso Toffoli, and others at MIT formed the Information Mechanics group to study the physics of information. As we will see, Fredkin and Toffoli described computation with idealized, perfectly elastic balls reflecting off barriers. The balls have minimum dissipation and are propelled by (conserved) momentum. The model is unrealistic but illustrates many ideas of reversible computing. Later we will look at it briefly (Sec. C.7).
>They also suggested a more realistic implementation involving “charge packets bouncing around along inductive paths between capacitors.” Richard Feynman (Caltech) had been interacting with the Information Mechanics group, and developed “a full quantum model of a serial reversible computer” (Feynman, 1986).
>Charles Bennett (1973) (IBM) first showed how any computation could be embedded in an equivalent reversible computation. Rather than discarding information (and hence dissipating energy), it keeps it around so it can later “decompute” it back to its initial state. This was a theoretical proof based on Turing machines, and did not address the issue of physical implementation. [...]
>How universal is the Toffoli gate for classical reversible computing:
All of quantum computing is reversible by nature (until you measure the state, of course). Yet there's some research in the field focusing on irreversible ("non-unitary") quantum algorithms, and it appears there is some advantage in throwing away, algorithmically speaking, the reversibility. See https://arxiv.org/abs/2309.16596
It's interesting that classical and quantum computing researchers are each looking in the direction of the other field.
> The main way to reduce unnecessary heat generation in transistor use—to operate them adiabatically—is to ramp the control voltage slowly instead of jumping it up or down abruptly.
But if you change the gate voltage slowly, then the transistor will be for a longer period in the resistive region where it dissipates energy. Shouldn't you go between the OFF and ON states as quickly as possible?
The trick is not to have a voltage across the channel while it's transitioning states. For this reason, adiabatic circuits are typically "phased" such that any given adiabatic logic gate is either having its gates charged or discharged (by the previous logic gate), or current is passing through its channels to charge/discharge the next logic gate.
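A crude way to see the "ramp slowly" effect numerically (all component values here are made up; this is a sketch of the principle, not a model of any real gate):

```python
# Charge C through R, either stepping the supply to V instantly or
# ramping it linearly. Energy burned in R is the integral of i^2*R dt.
R, C, V = 1e3, 1e-12, 1.0   # illustrative values; RC = 1 ns

def dissipated(ramp_time, t_total, steps=200_000):
    dt = t_total / steps
    vc = heat = 0.0
    for i in range(steps):
        t = i * dt
        vs = V if ramp_time == 0 else V * min(t / ramp_time, 1.0)
        cur = (vs - vc) / R        # current through the channel
        heat += cur * cur * R * dt
        vc += cur * dt / C         # dV = i*dt/C on the gate capacitor
    return heat

abrupt = dissipated(0, 20 * R * C)           # step input
slow = dissipated(100 * R * C, 120 * R * C)  # ramp over 100 RC
half_cv2 = 0.5 * C * V**2
print(abrupt / half_cv2)  # ~1.0: the classic CV^2/2 switching loss
print(slow / half_cv2)    # ~0.02: roughly (2RC/T) of the abrupt loss
```

The slow ramp keeps the voltage across the channel small at every instant, which is exactly the phasing trick described above.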
Calling the addition of an energy storage device into a transistor "reverse computing" is like calling a hybrid car using regenerative braking "reverse driving".
It's a very interesting concept - best discussed over pints at the pub on a Sunday afternoon along with over unity devices and the sad lack of adoption of bubble memory.
Well actually, "reversible driving" is perfectly apt in the sense of acceleration being a reversible process. It means that in theory the net energy needed to drive anywhere is zero because all the energy spent on acceleration is gained back on braking. Yes I know in practice there's always friction loss, but the point is there isn't a theoretical minimum amount of friction that has to be there. In principle a car with reversible driving can get anywhere with asymptotically close to zero energy spent.
Put another way, there is no way around the fact that a "non-reversible car" has to have friction loss because the brakes work on friction. But there is no theoretical limit to how far you can reduce friction in reversible driving.
Cars specifically dissipate energy on deformation of the tires; this loss is irreversible at any speed, even if all the bearings have effectively zero losses (e.g. using magnetic levitation).
A train spends much less on that because the rails and the wheels are very firm. A maglev train likely recuperates nearly 100% of its kinetic energy during deceleration, less the aerodynamic losses; it's like a superconducting reversible circuit.
Actually, a non-reversible car also has no lower energy limit, as long as you drive on a flat surface (same for a reversible one) and can get to the answer arbitrarily slowly.
An ideal reversible computer also works arbitrarily slowly. To make it go faster, you need to put energy in. You can make it go arbitrarily slowly with arbitrarily little energy, just like a non-reversible car.
(I once read a fiction story about someone who, instead of having perfect pitch, had perfect winding number: he couldn't get to sleep before returning to zero, so it took him some time to realise that when other people talked about "unwinding" at the end of the day, they didn't mean it literally)
This is probably a dumb question, but where does this fail: what happens if I run something like the SHA256 algorithm or a sudoku solver backwards using these techniques? I assume that wouldn’t actually work, but why?
I'm speaking out of my depth, but as I understand it you'd need the extra information that was accumulated along the way (as shown in the XOR gate example). If you had that, you certainly could run SHA-256 in reverse, but the starting hash plus that extra info amounts to at least as many bits as the original input that was hashed.
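That XOR-gate point in miniature: plain XOR destroys a bit, but keeping one input alongside the output makes it invertible (this is the classical CNOT, a.k.a. Feynman gate):

```python
# Plain XOR erases a bit: given only a^b you can't tell (0,1) from
# (1,0). Keeping 'a' alongside the result makes the map invertible --
# and the inverse is the very same operation.
def cnot(a, b):
    return a, a ^ b

for a in (0, 1):
    for b in (0, 1):
        assert cnot(*cnot(a, b)) == (a, b)  # applying it twice undoes it
print("CNOT round-trips on all inputs")
```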
Billiard Ball cellular automata, proposed and studied by Edward Fredkin and Tommaso Toffoli, are one interesting type of reversible computer. The Ising spin model of ferromagnetism is another reversible cellular automata technique.
https://en.wikipedia.org/wiki/Billiard-ball_computer
Yukio-Pegio Gunji, Yuta Nishiyama. Department of Earth and Planetary Sciences, Kobe University, Kobe 657-8501, Japan.
Andrew Adamatzky. Unconventional Computing Centre. University of the West of England, Bristol, United Kingdom.
Abstract
Soldier crabs Mictyris guinotae exhibit pronounced swarming behavior. Swarms of the crabs are tolerant of perturbations. In computer models and laboratory experiments we demonstrate that swarms of soldier crabs can implement logical gates when placed in a geometrically constrained environment.
I doubt it would use DRAM. Maybe some sort of MRAM/FeRAM would be a better fit. Or maybe a tiny amount of memory (e.g. Josephson junction) in a quantum circuit at some point in the future.
Quantum computations have to be reversible, because you have to collapse the wave function and take a measurement to throw away any bits of data. You can accumulate junk bits as long as they remain in superposition. But at some point you have to take a measurement. So, very much related.
On a basic level, with the gates, it seems that if you put in up to two inputs' worth of work and get at most one output's worth back out, then storing the otherwise-lost work for later reuse makes sense.
>The Omega Point is a term Tipler uses to describe a cosmological state in the distant proper-time future of the universe.[6] He claims that this point is required to exist due to the laws of physics. According to him, it is required, for the known laws of physics to be consistent, that intelligent life take over all matter in the universe and eventually force its collapse. During that collapse, the computational capacity of the universe diverges to infinity, and environments emulated with that computational capacity last for an infinite duration as the universe attains a cosmological singularity. This singularity is Tipler's Omega Point.[7] With computational resources diverging to infinity, Tipler states that a society in the far future would be able to resurrect the dead by emulating alternative universes.[8] Tipler identifies the Omega Point with God, since, in his view, the Omega Point has all the properties of God claimed by most traditional religions.[8][9]
>Tipler's argument of the omega point being required by the laws of physics is a more recent development that arose after the publication of his 1994 book The Physics of Immortality. In that book (and in papers he had published up to that time), Tipler had offered the Omega Point cosmology as a hypothesis, while still claiming to confine the analysis to the known laws of physics.[10]
>Tipler, along with co-author physicist John D. Barrow, defined the "final anthropic principle" (FAP) in their 1986 book The Anthropic Cosmological Principle as a generalization of the anthropic principle:
>Intelligent information-processing must come into existence in the Universe, and, once it comes into existence, will never die out.[11]
>One paraphrasing of Tipler's argument for FAP runs as follows: For the universe to physically exist, it must contain living observers. Our universe obviously exists. There must be an "Omega Point" that sustains life forever.[12]
>Tipler purportedly used Dyson's eternal intelligence hypothesis to back up his arguments.
Cellular Automata Machines: A New Environment for Modeling:
Theory of Self-Reproducing Automata: John von Neumann's Quantum Mechanical Universal Constructors:
[...] Third, the probabilistic quantum mechanical kind, which could mutate and model evolutionary processes, and rip holes in the space-time continuum, which he unfortunately (or fortunately, for the sake of humanity) didn't have time to fully explore before his tragic death.
>p. 99 of "Theory of Self-Reproducing Automata":
>Von Neumann had been interested in the applications of probability theory throughout his career; his work on the foundations of quantum mechanics and his theory of games are examples. When he became interested in automata, it was natural for him to apply probability theory here also. The Third Lecture of Part I of the present work is devoted to this subject. His "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components" is the first work on probabilistic automata, that is, automata in which the transitions between states are probabilistic rather than deterministic. Whenever he discussed self-reproduction, he mentioned mutations, which are random changes of elements (cf. p. 86 above and Sec. 1.7.4.2 below). In Section 1.1.2.1 above and Section 1.8 below he posed the problems of modeling evolutionary processes in the framework of automata theory, of quantizing natural selection, and of explaining how highly efficient, complex, powerful automata can evolve from inefficient, simple, weak automata. A complete solution to these problems would give us a probabilistic model of self-reproduction and evolution. [9]
[9] For some related work, see J. H. Holland, "Outline for a Logical Theory of Adaptive Systems", and "Concerning Efficient Adaptive Systems".
perl4ever on Dec 26, 2017:
Tipler's Omega Point prediction doesn't seem like it would be compatible with the expanding universe, would it? Eventually everything will disappear over the speed-of-light horizon, and then it can't be integrated into one mind.
DonHopkins on Dec 26, 2017:
It also wishfully assumes that the one mind can't think of better things to do with its infinite amount of cloud computing power than to simulate one particular stone age mythology.
Then again, maybe it's something like the 1996 LucasArts game Afterlife, where you simulate every different religion's version of heaven and hell at once.
The primary goal of the game is to provide divine and infernal services for the inhabitants of the afterlife. This afterlife caters to one particular planet, known simply as the Planet. The creatures living on the Planet are called EMBOs, or Ethically Mature Biological Organisms. When an EMBO dies, its soul travels to the afterlife where it attempts to find an appropriate "fate structure". Fate structures are places where souls are rewarded or punished, as appropriate, for the virtues or sins that they practiced while they were alive.
I'm curious, how did the book change your life? What kind of problems did the authors model using their approach? I'm new to the topic, thanks for any input.
DonHopkins on March 22, 2022:
It really helped me get my head around how to understand and program cellular automata rules, which is a kind of massively parallel distributed "Think Globally, Act Locally" approach that also applies to so many other aspects of life.
But by "life" I don't mean just the cellular automata rule "life"! Not to be all depressing like Marvin the Paranoid Android, but I happen to think "life" is overrated. ;) There are so many billions of other extremely interesting cellular automata rules besides "life" too, so don't stop once you get bored with life! ;)
It's also very useful for understanding other massively distributed locally interacting parallel systems, epidemiology, economics, morphogenesis (reaction-diffusion systems, like how a fertilized egg divides and specializes into an organism), GPU programming and optimization, neural networks and machine learning, information and chaos theory, and physics itself.
I've discussed the book and the code I wrote based on it with Norm Margolus, one of the authors, and he mentioned that he really likes rules that are based on simulating physics, and also thinks reversible cellular automata rules are extremely important (and energy efficient in a big way, in how they relate to physics and thermodynamics).
The book has interesting sections about physical simulations like spin glasses (Ising Spin model of the magnetic state of atoms of solid matter), and reversible billiard ball simulations (like deterministic reversible "smoke and mirrors" with clouds of moving particles bouncing off of pinball bumpers and each other).
>In condensed matter physics, a spin glass is a magnetic state characterized by randomness, besides cooperative behavior in freezing of spins at a temperature called 'freezing temperature' Tf. Magnetic spins are, roughly speaking, the orientation of the north and south magnetic poles in three-dimensional space. In ferromagnetic solids, component atoms' magnetic spins all align in the same direction. Spin glass when contrasted with a ferromagnet is defined as "disordered" magnetic state in which spins are aligned randomly or not with a regular pattern and the couplings too are random.
>A billiard-ball computer, a type of conservative logic circuit, is an idealized model of a reversible mechanical computer based on Newtonian dynamics, proposed in 1982 by Edward Fredkin and Tommaso Toffoli. Instead of using electronic signals like a conventional computer, it relies on the motion of spherical billiard balls in a friction-free environment made of buffers against which the balls bounce perfectly. It was devised to investigate the relation between computation and reversible processes in physics.
>A reversible cellular automaton is a cellular automaton in which every configuration has a unique predecessor. That is, it is a regular grid of cells, each containing a state drawn from a finite set of states, with a rule for updating all cells simultaneously based on the states of their neighbors, such that the previous state of any cell before an update can be determined uniquely from the updated states of all the cells. The time-reversed dynamics of a reversible cellular automaton can always be described by another cellular automaton rule, possibly on a much larger neighborhood.
>[...] Reversible cellular automata form a natural model of reversible computing, a technology that could lead to ultra-low-power computing devices. Quantum cellular automata, one way of performing computations using the principles of quantum mechanics, are often required to be reversible. Additionally, many problems in physical modeling, such as the motion of particles in an ideal gas or the Ising model of alignment of magnetic charges, are naturally reversible and can be simulated by reversible cellular automata.
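(Aside: the trick behind many reversible CA rules is easy to demo. Below is a minimal sketch of Fredkin's second-order construction, where any update rule becomes reversible by XORing with the previous generation -- illustrative Python, not code from the book:)

```python
# Second-order reversible cellular automaton (Fredkin's construction):
# next = rule(current) XOR previous. Any rule becomes reversible this way,
# because previous = rule(current) XOR next recovers the past exactly.

def rule90(state):
    """Elementary rule 90 on a ring: each cell is the XOR of its neighbors."""
    n = len(state)
    return [state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n)]

def step(prev, cur):
    nxt = [r ^ p for r, p in zip(rule90(cur), prev)]
    return cur, nxt

def step_back(cur, nxt):
    prev = [r ^ x for r, x in zip(rule90(cur), nxt)]
    return prev, cur

# Run forward 100 steps, then backward 100 steps: we recover the start.
a = [0] * 16; a[8] = 1
b = [0] * 16; b[7] = 1
p, c = a, b
for _ in range(100):
    p, c = step(p, c)
for _ in range(100):
    p, c = step_back(p, c)
assert (p, c) == (a, b)
```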
Also I've frequently written on HN about Dave Ackley's great work on Robust-First Computing and the Moveable Feast Machine, which I think is brilliant, and quite important in the extremely long term (which is coming sooner than we think).
The simplest, dumbest alternative to reversible computing is to install datacenters in the ex-USSR, where there is still (slowly disappearing) rich infrastructure for central hot water. Instead of charging only people, utilities can charge both people and datacenters and yet lower the carbon footprint.
Energy-aware computing isn't about environmentalism and saving energy. It's sometimes framed as such in the name of greenwashing but it really isn't, the consumption was negligible before the AI/crypto craze. It's about "longer-lasting battery" and "getting more stuff on the chip without melting it".
I believe it would be more efficient to use a heat pump for the district heating even if the datacenter heat is just dumped. Heat pumps can get up to 400% efficiency.
The heat emitted by the electronics will always be emitted and needs to go somewhere. If 1MWh of that heat is dumped into district heating how would that be less efficient than the 1MWh being dumped in the atmosphere to (hopefully) be reclaimed by a heat pump elsewhere?
Or, alternatively, that 1MWh could be absorbed by the already existing datacenter AC coils which could ultimately still be used to heat up district water as it cools the refrigerant. (People actually do this with swimming pools, using the coils from their AC to heat the pool).
1. The reality of the ex-USSR is that no one will ever convince governments to invest in very costly infrastructure modification for efficiency; datacenters are far, far easier to integrate into the existing boiler-based systems.
2. The point was not to replace district heating with datacenters - that is not possible, for a variety of reasons - but to augment the existing huge gas boilers with datacenter waste heat, rendering the datacenter carbon neutral.
3. Even with 400% efficiency, you will still gain if the heat pumps are augmented with waste heat, as you would need far less heat pumping. You'd still need your datacenters anyway, wouldn't you?
Wait, you mean there is no central hot water infrastructure in the rest of the world? Poland is not ex-USSR but it is commonplace here, and I always assumed this was a normal thing everywhere.
Norway, no such thing here (at least not in smaller cities, not sure about Oslo). The NTNU campus in Trondheim is warmed by waste heat from a supercomputer, exactly as GP suggested.
Nice, these ideas have been around for a long time but never commercialized to my knowledge. I've done some experiments in this area with simulations and am currently designing some test circuitry to be fabbed via Tiny Tapeout.
Reversibility isn't actually necessary for most of the energy savings. It saves you an extra maybe 20% beyond what adiabatic techniques can do on their own. Reason being, the energy of the information itself pales in comparison to the resistive losses which dominate the losses in adiabatic circuits, and it's actually a (device-dependent) portion of these resistive losses which the reversible aspect helps to recover, not the energy of information itself.
I'm curious why Frank chose to go with a resonance-based power-clock, instead of a switched-capacitor design. In my experience the latter are nearly as efficient (losses are still dominated by resistive losses in the powered circuit itself), and are more flexible as they don't need to be tuned to the resonance of the device. (Not to mention they don't need an inductor.) My guess would be that, despite requiring an on-die inductor, the overall chip area required is much less than that of a switched-capacitor design. (You only need one circuit's worth of capacitance, vs. 3 or more for a switched design, which quadruples your die size....)
I'm actually somewhat skeptical of the 4000x claim though. Adiabatic circuits can typically only provide about a single order of magnitude power savings over traditional CMOS -- they still have resistive losses, they just follow a slightly different equation (f²RC²V², vs. fCV²). But RC and C are figures of merit for a given silicon process, and fRC (a dimensionless figure) is constrained by the operational principles of digital logic to the order of 0.1, which in turn constrains the power savings to that order of magnitude regardless of process. Where you can find excess savings though is simply by reducing operating frequency. Adiabatic circuits benefit more from this than traditional CMOS. Which is great if you're building something like a GPU which can trade clock frequency for core count.
Hi, someone pointed me at your comment, so I thought I'd reply.
First, the circuit techniques that aren't reversible aren't truly, fully adiabatic either -- they're only quasi-adiabatic. In fact, if you strictly follow the switching rules required for fully adiabatic operation, then (ignoring leakage) you cannot erase information -- none of the allowed operations achieve that.
Second, to say reversible operation "only saves an extra 20%" over quasi-adiabatic techniques is misleading. Suppose a given quasi-adiabatic technique saves 79% of the energy, and a fully adiabatic, reversible version saves you "an extra 20%" -- well, then now that's 99%. But, if you're dissipating 1% of the energy of a conventional circuit, and the quasi-adiabatic technique is dissipating 21%, that's 21x more energy efficient! And so you can achieve 21x greater performance within a given power budget.
Next, to say "resistive losses dominate the losses" is also misleading. The resistive losses scale down arbitrarily as the transition time is increased. We can actually operate adiabatic circuits all the way down to the regime where resistive losses are about as low as the losses due to leakage. The max energy savings factor is on the order of the square root of the on/off ratio of the devices.
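(A back-of-the-envelope model makes that square-root scaling visible. The component values below are made up for illustration: per cycle, resistive loss shrinks as the transition time T grows while leakage loss grows with T, and minimizing the sum gives a savings factor proportional to the square root of the on/off ratio.)

```python
# Toy model of the leakage-limited optimum (a sketch, not Frank's actual
# analysis): resistive loss ~ R_on*C^2*V^2/T falls with transition time T,
# leakage loss ~ V^2*T/R_off rises with it.
import math

R_on, R_off = 1e4, 1e10   # hypothetical on/off channel resistances (ohms)
C, V = 1e-15, 1.0         # hypothetical node capacitance (F) and swing (V)

def loss(T):
    return R_on * C**2 * V**2 / T + V**2 * T / R_off

# Calculus gives the optimum T* = C*sqrt(R_on*R_off), where both terms match.
T_star = C * math.sqrt(R_on * R_off)
E_min = loss(T_star)                 # = 2*C*V^2*sqrt(R_on/R_off)
E_cmos = 0.5 * C * V**2              # conventional CV^2/2 per transition

# Savings factor scales as sqrt(R_off/R_on), the sqrt of the on/off ratio.
print(E_cmos / E_min)
```

With an on/off ratio of 10^6 this prints a savings factor of 250, i.e. on the order of sqrt(10^6)/4.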
Regarding "adiabatic circuits can typically only provide an order of magnitude power savings" -- this isn't true for reversible CMOS! Also, "power" is not even the right number to look at -- you want to look at power per unit performance, or in other words energy per operation. Reducing operating frequency reduces the power of conventional CMOS, but does not directly reduce energy per operation or improve energy efficiency. (It can allow you to indirectly reduce it though, by using a lower switching voltage.)
You are correct that adiabatic circuits can benefit from frequency scaling more than traditional CMOS -- since lowering the frequency actually directly lowers energy dissipation per operation in adiabatic circuits. The specific 4000x number (which includes some benefits from scaling) comes from the analysis outlined in this talk -- see links below - but we have also confirmed energy savings of about this magnitude in detailed (Cadence/Spectre) simulations of test circuits in various processes. Of course, in practice the energy savings is limited by the resonator Q value. And a switched-capacitor design (like a stepped voltage supply) would do much worse, due to the energy required to control the switches.
https://www.sandia.gov/app/uploads/sites/210/2023/11/Comet23... https://www.youtube.com/watch?v=vALCJJs9Dtw
Happy to answer any questions.
Thanks for the reply, was actually hoping you'd pop over here.
I don't think we actually disagree on anything. Yes, without reversible circuits you are limited to quasi-adiabatic operation. But, at least in the architectures I'm familiar with (mainly PFAL), most of the losses are unarguably resistive. As I understand PFAL, it's only when the operating voltage of a given gate drops below Vth that the (macro) information gets lost and reversibility provides benefit, which is only a fraction of the switching cycle. At least for PFAL the figure is somewhere in the 20% range IIRC. (I say "macro" because of course the true energy of information is much smaller than the amounts we're talking about.)
The "20%" in my comment I meant in the multiplicative sense, not additive. I.e. going from 79% savings to 83.2%, not 99%. (I realize that wasn't clear.)
What I find interesting is that reversibility isn't actually necessary for true adiabatic operation. All that matters is that the information about where charge needs to be recovered from can be derived somehow. This could come from information available elsewhere in the circuit, not necessarily from the subsequent computations reversed. (Thankfully, the quantum no-cloning theorem does not apply here!)
I agree that energy per operation is often more meaningful, BUT one must not lose sight of the lower bounds on clock speed imposed by a particular workload.
Ah, thanks for the insight into the resonator/switched-cap tradeoff. Yes, I know that switched-capacitor designs which are themselves adiabatic are a bit of a research topic. In my experience their losses aren't comparable to the resistive losses of the adiabatic circuitry itself, though. (I've done SPICE simulations using the sky130 process.)
Do these reversible techniques help or hinder in applications where hardened electronics are required, like satellites or space probes? I can see a case for both.
Can one define the process an adiabatic circuit goes through analogously to the Carnot engine? The idea being to come up with a theoretical ceiling for the efficiency of such a circuit in terms of circuit parameters?
Yes, a similar analysis is where the above expression f²RC²V² comes from.
Essentially -- (and I'm probably missing a factor of 2 or 3 somewhere as I'm on my phone and don't have reference materials) -- in an adiabatic circuit the unavoidable power loss for any individual transistor stems from current (I) flowing through that transistor's channel (a resistor R) on its way to and from another transistor's gate (a capacitor C). So that's I²R unavoidable power dissipation.
The current I must be sufficient to fill and then discharge the capacitor to/from the operating voltage (V) in the time of one cycle (1/f), so the charge CV must be delivered in half a cycle. That gives I=2fCV. Substituting this gives 4f²RC²V².
Compare to traditional CMOS, wherein the gate capacitance C is charged through R from a voltage source V. It can be shown that this dissipates ½CV² of energy through the resistor in the process, and the capacitor is filled with an equal amount of energy. Discharging then dissipates this energy through the same resistor. Repeat this every cycle for a total power usage of fCV².
Divide these two figures and we find that adiabatic circuits use 4fRC times as much energy as traditional CMOS. However, f must be less than about 1/(5RC) for a CMOS circuit to function at all (else the capacitors don't charge sufficiently during a cycle), so this always works out to a power savings in favor of adiabatics. And notably, decreasing f of an adiabatic circuit from the maximum permissible for CMOS on the same process increases the efficiency gain proportionally.
(N.B., I feel like I missed a factor of 2 somewhere as this analysis differs slightly from my memory. I'll return with corrections if I find an error.)
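(For concreteness, the two power equations above can be compared numerically. The process numbers below are invented round figures, and the factor-of-2 caveat applies to them too:)

```python
# Sketch of the comparison above, with hypothetical process parameters.
R = 1e4      # channel resistance, ohms (assumed)
C = 1e-15    # gate capacitance, farads (assumed)
V = 1.2      # operating voltage, volts (assumed)

def p_cmos(f):
    return f * C * V**2                  # conventional: fCV^2

def p_adiabatic(f):
    return 4 * f**2 * R * C**2 * V**2    # adiabatic: 4*f^2*R*C^2*V^2

f_max = 1 / (5 * R * C)  # roughly the fastest usable CMOS clock, per above

# At f_max the ratio 4fRC is 0.8 (a modest 20% savings); slowing the clock
# 10x drops it to 0.08, while per-op CMOS energy doesn't improve at all.
for f in (f_max, f_max / 10):
    print(f, p_adiabatic(f) / p_cmos(f))
```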
Maybe this would work better with superconducting electronics?
There indeed has been research on reversible adiabatic logic in superconducting electronics. But superconducting electronics has a whole host of issues of its own, such as low density and a requirement for ultra-low temperatures.
When I was at Sandia we also had a project exploring ballistic reversible computation (as opposed to adiabatic) in superconducting electronics. We got as far as confirming to our satisfaction that it is possible, but this line of work is a lot farther from major commercial applications than the adiabatic CMOS work.
Possibly, that's an interesting thought. The main benefit of adiabatics as I see them is that, all else being equal, a process improvement of the RC figure can be used to enable either an increase in operating frequency or a decrease in power usage (this is reflected as the additional factor of fRC in the power equation). With traditional CMOS, this only can benefit operating frequency -- power usage is independent of the RC product per se. Superconduction (or near-superconduction) is essentially a huge improvement in RC which wouldn't be able to be realized as an increase in operating frequency due to speed of light limitations, so adiabatics would see an outsize benefit in that case.
Notably the physical limit is
https://en.wikipedia.org/wiki/Landauer%27s_principle
it doesn't necessarily take any energy at all to process information, but it does take roughly kT ln 2 of energy to erase a bit of information. It's related to
https://en.wikipedia.org/wiki/Maxwell%27s_demon
as, to complete cycles, the demon has to clear its memory.
Does it not take energy to process information? Can any computable function be computed with arbitrarily low energy input/entropy increase?
No, and yes, so long as you don't delete information.
Think of a marble-based computer, whose inner workings are frictionless and massless. The marbles roll freely without losing energy unless they are forced to stop somehow, but computation is nonetheless performed.
Henry G. Baker wrote this paper titled "The Thermodynamics of Garbage Collection" in the 90s about linear logic, stack machines, reversibility and the cost of erasing information:
https://wiki.xxiivv.com/docs/baker_thermodynamics.html
A subset of FRACTRAN programs are reversible, and I would love to see rewriting computers as a potential avenue for reversible circuit building (similar to the STARAN CPU):
https://wiki.xxiivv.com/site/fractran.html#reversibility
This is really cool, I never expected to see reversible computation made in electrical systems. I learned about it in undergrad taking a course by Bruce MacLennan*, though it was more applied to "billiard ball" or quantum computing. It was such a cool class though.
*Seems like he finally published the textbook he was working on when teaching the class: https://www.amazon.com/dp/B0BYR86GP7?ref_=pe_3052080_3975148...
>it is producing a chip that, for the first time, recovers energy used in an arithmetic circuit. The next chip, projected to hit the market in 2027, will be an energy-saving processor specialized for AI inference. The 4,000x energy-efficiency improvement is on Vaire’s road map but probably 10 or 15 years out.
How I wish I could place bets on this never happening.
>In the following years, Vaire plans to design the first reversible chip specialized for AI inference.
These guys are cashing in on AI hype. Watch them raise VC money, give themselves 6 figures salaries and file for bankruptcy in 3 years.
You are exactly correct - the combination of deep belief in the ability to obtain quick riches by investing in the "next big thing" aligned with a large gap of knowledge between reality and hype, all mixed into a milieu of in-group speak and customs always leads to the proliferation of the con. It's the next "Long Blockchain Corp!"
It's also how proliferation of advances happen. Nothing advances unless someone tries it, and trying it costs money.
4000x cost saving would bring operation costs for compute down close to zero, meaning marginal costs for data centers would go down as well, meaning data centers would buy a shit ton of these chips. Think the valuation of Nvidia x 1000. I still think the technical challenge is too big, but it's high reward for early investors.
This is insanity - "compute" is a tiny fraction of energy usage compared to memory, data storage, and data retrieval.
The ideas are neat and both Landauer and Bennett did some great work and left a powerful legacy. The energetic limits we are talking about are not yet relevant in modern computers. The amount of excess thermal energy for performing 10^26 erasures associated with some computation (of say an LLM that would be too powerful for the current presidential orders) would only be about 0.1kWh, so 10 minutes of a single modern GPU. There are other advantages to reversibility, of course, and maybe one day even that tiny amount of energy savings will matter.
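(For anyone who wants to check that figure, the Landauer bound works out as follows, assuming room temperature:)

```python
# Landauer limit for 1e26 bit erasures at room temperature.
import math

k = 1.380649e-23    # Boltzmann constant, J/K
T = 300             # temperature, kelvin
N = 1e26            # number of bit erasures

E = N * k * T * math.log(2)   # minimum dissipated energy, joules
print(E / 3.6e6)              # in kWh: ~0.08, i.e. roughly the 0.1 quoted
```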
Wow. This whole logic sounds like something really harebrained from a Dr Who episode: "It takes energy to destroy information. Therefore if you don't destroy information, it doesn't take energy!" - sounds completely illogical.
I honestly don't understand from the article how you "recover energy". Yet I have no reason to disbelieve it.
Someone else here compared it to regenerative braking in cars, which is what made it click for me. If you spend energy to accelerate, then recapture that energy while decelerating, then you can manage to transport yourself while your net energy expenditure is zero (other than all that pesky friction). On the other hand, if you spend energy to accelerate, then shed all that energy via heat from your brake pads, then you need to expend new energy to accelerate next time.
If the concept has existed for 60 years and no one has capitalized on it yet, you can bet it's more Dr. Who than reality.
Also an Edward Fredkin https://en.wikipedia.org/wiki/Edward_Fredkin interest https://en.wikipedia.org/wiki/Fredkin_gate .
As well as Tommaso Toffoli, Norman Margolus, Tom Knight, Richard Feynman, and Charles Bennett:
Reversible Computing, Tommaso Toffoli:
https://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TM-1...
>Abstract. The theory of reversible computing is based on invertible primitives and composition rules that preserve invertibility. With these constraints, one can still satisfactorily deal with both functional and structural aspects of computing processes; at the same time, one attains a closer correspondence between the behavior of abstract computing systems and the microscopic physical laws (which are presumed to be strictly reversible) that underly any concrete implementation of such systems. According to a physical interpretation, the central result of this paper is that it is ideally possible to build sequential circuits with zero internal power dissipation.
A Scalable Reversible Computer in Silicon:
https://www.researchgate.net/publication/2507539_A_Scalable_...
Reversible computing:
https://web.eecs.utk.edu/~bmaclenn/Classes/494-594-UC-F17/ha...
>In the 1970s, Ed Fredkin, Tommaso Toffoli, and others at MIT formed the Information Mechanics group to study the physics of information. As we will see, Fredkin and Toffoli described computation with idealized, perfectly elastic balls reflecting off barriers. The balls have minimum dissipation and are propelled by (conserved) momentum. The model is unrealistic but illustrates many ideas of reversible computing. Later we will look at it briefly (Sec. C.7).
>They also suggested a more realistic implementation involving “charge packets bouncing around along inductive paths between capacitors.” Richard Feynman (Caltech) had been interacting with Information Mechanics group, and developed “a full quantum model of a serial reversible computer” (Feynman, 1986).
>Charles Bennett (1973) (IBM) first showed how any computation could be embedded in an equivalent reversible computation. Rather than discarding information (and hence dissipating energy), it keeps it around so it can later “decompute” it back to its initial state. This was a theoretical proof based on Turing machines, and did not address the issue of physical implementation. [...]
>How universal is the Toffoli gate for classical reversible computing:
https://quantumcomputing.stackexchange.com/questions/21064/h...
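(A quick sketch of why the Toffoli gate is both reversible and universal: it is its own inverse, and with the target bit preset to 1 it computes NAND. Illustrative Python, not code from the linked references:)

```python
# The Toffoli (CCNOT) gate: flips the target bit c only when both controls
# a and b are 1. Applying it twice is the identity, so it's its own inverse.
def toffoli(a, b, c):
    return a, b, c ^ (a & b)

# NAND from Toffoli: with the target preset to 1, the output is NOT(a AND b).
# Since NAND is universal for classical logic, so is the Toffoli gate.
def nand(a, b):
    return toffoli(a, b, 1)[2]

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert toffoli(*toffoli(a, b, c)) == (a, b, c)  # self-inverse
assert [nand(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))] == [1, 1, 1, 0]
```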
All of quantum computing is reversible by nature (until you measure the state, of course). Yet there's some research in the field focusing on irreversible ("non-unitary") quantum algorithms, and it appears there is some advantage in throwing away, algorithmically speaking, the reversibility. See https://arxiv.org/abs/2309.16596
It's interesting that classical and quantum computing researchers are each looking in the direction of the other field.
> The main way to reduce unnecessary heat generation in transistor use—to operate them adiabatically—is to ramp the control voltage slowly instead of jumping it up or down abruptly.
But if you change the gate voltage slowly, then the transistor will be for a longer period in the resistive region where it dissipates energy. Shouldn't you go between the OFF and ON states as quickly as possible?
The trick is not to have a voltage across the channel while it's transitioning states. For this reason, adiabatic circuits are typically "phased" such that any given adiabatic logic gate is either having its gates charged or discharged (by the previous logic gate), or current is passing through its channels to charge/discharge the next logic gate.
Interesting, thanks!
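(For the curious, the benefit of slow ramping discussed above can be seen by numerically integrating a simple RC charging model. Component values are made up; this is a toy sketch, not a real process model:)

```python
# A step input dissipates ~CV^2/2 in the resistor no matter what R is,
# while a slow linear ramp of duration T >> RC dissipates only ~(RC/T)*CV^2.
R, C, V = 1e4, 1e-15, 1.0   # hypothetical resistance, capacitance, voltage
dt = 1e-12                  # integration time step (RC/10)

def dissipated(T_ramp, T_total):
    """Euler-integrate resistor loss while charging C through R."""
    q, E, t = 0.0, 0.0, 0.0
    while t < T_total:
        vin = V * min(t / T_ramp, 1.0) if T_ramp > 0 else V
        i = (vin - q / C) / R   # current through the resistor
        E += i * i * R * dt     # accumulate I^2*R loss
        q += i * dt
        t += dt
    return E

E_step = dissipated(0, 50 * R * C)             # abrupt switch: ~CV^2/2
E_ramp = dissipated(100 * R * C, 150 * R * C)  # slow ramp: ~CV^2/100
print(E_step, E_ramp)
```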
Calling the addition of an energy storage device into a transistor "reverse computing" is like calling a hybrid car using regenerative braking "reverse driving".
It's a very interesting concept - best discussed over pints at the pub on a Sunday afternoon along with over unity devices and the sad lack of adoption of bubble memory.
Well actually, "reversible driving" is perfectly apt in the sense of acceleration being a reversible process. It means that in theory the net energy needed to drive anywhere is zero because all the energy spent on acceleration is gained back on braking. Yes I know in practice there's always friction loss, but the point is there isn't a theoretical minimum amount of friction that has to be there. In principle a car with reversible driving can get anywhere with asymptotically close to zero energy spent.
Put another way, there is no way around the fact that a "non-reversible car" has to have friction loss because the brakes work on friction. But there is no theoretical limit to how far you can reduce friction in reversible driving.
Cars specifically dissipate energy on deformation of the tires; this loss is irreversible at any speed, even if all the bearings have effectively zero losses (e.g. using magnetic levitation).
A train spends much less on that because the rails and the wheels are very firm. A maglev train likely recuperates nearly 100% of its kinetic energy during deceleration, less the aerodynamic losses; it's like a superconducting reversible circuit.
Actually, a non-reversible car also has no lower energy limit, as long as you drive on a flat surface (same for a reversible one) and can get to the answer arbitrarily slowly.
An ideal reversible computer also works arbitrarily slowly. To make it go faster, you need to put energy in. You can make it go arbitrarily slowly with arbitrarily little energy, just like a non-reversible car.
This is glorious.
The reverse computing is independent of the energy storage mechanism. It's used to "remember" how to route the energy for recovery.
A pub in Cambridge, perhaps! I doubt you'd overhear such talk in some Aldershot dive.
The Falling Edge, maybe? The Doped Wafer?
The Flipped Bit? The Reversed Desrevereht?
(I once read a fiction story about someone who, instead of having perfect pitch, had perfect winding number: he couldn't get to sleep before returning to zero, so it took him some time to realise that when other people talked about "unwinding" at the end of the day, they didn't mean it literally)
Sounds like a good time :)
This is probably a dumb question, but where does this fail: what happens if I run something like the SHA256 algorithm or a sudoku solver backwards using these techniques? I assume that wouldn’t actually work, but why?
I'm speaking out of my depth, but as I understand it you'd need the extra information that was accumulated along the way (as shown in the XOR gate example). If you had that, you certainly could run SHA-256 in reverse, but the starting hash plus that extra info would contain at least as many bits as the original input that was hashed.
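(A toy version of that idea -- Bennett's embedding, illustrated here with plain XOR rather than actual SHA-256: an irreversible function becomes reversible if you carry the extra bits along.)

```python
# (a, b) -> (a, a XOR b) loses nothing: keeping input a alongside the
# "hash" a^b makes the map invertible, because XOR is its own inverse.
def fwd(a, b):
    return a, a ^ b

def back(a, h):
    return a, a ^ h   # same map run again recovers b

a, b = 0b1011, 0b0110
a2, h = fwd(a, b)
assert back(a2, h) == (a, b)
# By contrast, h alone can't be inverted: many (a, b) pairs produce the
# same h. That's why reversing a real hash would require keeping all the
# junk bits -- at least as many bits as the original input.
```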
Reversible Computing (2016) [video] (youtube.com)
https://news.ycombinator.com/item?id=16007128
https://www.youtube.com/watch?v=rVmZTGeIwnc
DonHopkins on Dec 26, 2017:
Billiard Ball cellular automata, proposed and studied by Edward Fredkin and Tommaso Toffoli, are one interesting type of reversible computer. The Ising spin model of ferromagnetism is another reversible cellular automaton technique. https://en.wikipedia.org/wiki/Billiard-ball_computer
https://en.wikipedia.org/wiki/Reversible_cellular_automaton
https://en.wikipedia.org/wiki/Ising_model
If billiard balls aren't creepy enough for you, live soldier crabs of the species Mictyris guinotae can be used in place of the billiard balls.
https://www.newscientist.com/blogs/onepercent/2012/04/resear...
https://www.wired.com/2012/04/soldier-crabs/
http://www.complex-systems.com/abstracts/v20_i02_a02.html
Robust Soldier Crab Ball Gate
Yukio-Pegio Gunji, Yuta Nishiyama. Department of Earth and Planetary Sciences, Kobe University, Kobe 657-8501, Japan.
Andrew Adamatzky. Unconventional Computing Centre. University of the West of England, Bristol, United Kingdom.
Abstract
Soldier crabs Mictyris guinotae exhibit pronounced swarming behavior. Swarms of the crabs are tolerant of perturbations. In computer models and laboratory experiments we demonstrate that swarms of soldier crabs can implement logical gates when placed in a geometrically constrained environment.
The minuscule amount of energy retained from the "reverse computation" will be absolutely demolished by the first DRAM refresh.
I doubt it would use DRAM. Maybe some sort of MRAM/FeRAM would be a better fit. Or maybe a tiny amount of memory (e.g. Josephson junction) in a quantum circuit at some point in the future.
SRAM is actually very architecturally similar to some adiabatic circuit topologies.
The concept completely flummoxed me, but how does this play with quantum computers? That's the direction we are going, aren't we?
Quantum computations have to be reversible, because you have to collapse the wave function and take a measurement to throw away any bits of data. You can accumulate junk bits as long as they remain in a superposition. But at some point you have to take a measurement. So, very much related.
Interesting.
On a basic level, with the gates, it seems that if you put in at most two inputs' worth of work and get at most one out, then storing the otherwise-lost work for later reuse makes sense.
https://news.ycombinator.com/item?id=35366971
Tipler's Omega Point cosmology:
https://en.wikipedia.org/wiki/Frank_J._Tipler#The_Omega_Poin...
>The Omega Point cosmology
>The Omega Point is a term Tipler uses to describe a cosmological state in the distant proper-time future of the universe.[6] He claims that this point is required to exist due to the laws of physics. According to him, it is required, for the known laws of physics to be consistent, that intelligent life take over all matter in the universe and eventually force its collapse. During that collapse, the computational capacity of the universe diverges to infinity, and environments emulated with that computational capacity last for an infinite duration as the universe attains a cosmological singularity. This singularity is Tipler's Omega Point.[7] With computational resources diverging to infinity, Tipler states that a society in the far future would be able to resurrect the dead by emulating alternative universes.[8] Tipler identifies the Omega Point with God, since, in his view, the Omega Point has all the properties of God claimed by most traditional religions.[8][9]
>Tipler's argument of the omega point being required by the laws of physics is a more recent development that arose after the publication of his 1994 book The Physics of Immortality. In that book (and in papers he had published up to that time), Tipler had offered the Omega Point cosmology as a hypothesis, while still claiming to confine the analysis to the known laws of physics.[10]
>Tipler, along with co-author physicist John D. Barrow, defined the "final anthropic principle" (FAP) in their 1986 book The Anthropic Cosmological Principle as a generalization of the anthropic principle:
>Intelligent information-processing must come into existence in the Universe, and, once it comes into existence, will never die out.[11]
>One paraphrasing of Tipler's argument for FAP runs as follows: For the universe to physically exist, it must contain living observers. Our universe obviously exists. There must be an "Omega Point" that sustains life forever.[12]
>Tipler purportedly used Dyson's eternal intelligence hypothesis to back up his arguments.
Cellular Automata Machines: A New Environment for Modeling:
https://news.ycombinator.com/item?id=30735397
>It's also very useful for understanding other massively distributed locally interacting parallel systems, epidemiology, economics, morphogenesis (reaction-diffusion systems, like how a fertilized egg divides and specializes into an organism), GPU programming and optimization, neural networks and machine learning, information and chaos theory, and physics itself.
>I've discussed the book and the code I wrote based on it with Norm Margolus, one of the authors, and he mentioned that he really likes rules that are based on simulating physics, and also thinks reversible cellular automata rules are extremely important (and energy efficient in a big way, in how they relate to physics and thermodynamics).
>The book has interesting sections about physical simulations like spin glasses (Ising Spin model of the magnetic state of atoms of solid matter), and reversible billiard ball simulations (like deterministic reversible "smoke and mirrors" with clouds of moving particles bouncing off of pinball bumpers and each other).
Spin Glass:
https://en.wikipedia.org/wiki/Spin_glass
>In condensed matter physics, a spin glass is a magnetic state characterized by randomness, besides cooperative behavior in freezing of spins at a temperature called 'freezing temperature' Tf. Magnetic spins are, roughly speaking, the orientation of the north and south magnetic poles in three-dimensional space. In ferromagnetic solids, component atoms' magnetic spins all align in the same direction. Spin glass when contrasted with a ferromagnet is defined as "disordered" magnetic state in which spins are aligned randomly or not with a regular pattern and the couplings too are random.
Billiard Ball Computer:
https://en.wikipedia.org/wiki/Billiard-ball_computer
>A billiard-ball computer, a type of conservative logic circuit, is an idealized model of a reversible mechanical computer based on Newtonian dynamics, proposed in 1982 by Edward Fredkin and Tommaso Toffoli. Instead of using electronic signals like a conventional computer, it relies on the motion of spherical billiard balls in a friction-free environment made of buffers against which the balls bounce perfectly. It was devised to investigate the relation between computation and reversible processes in physics.
Reversible Cellular Automata:
https://en.wikipedia.org/wiki/Reversible_cellular_automaton
>A reversible cellular automaton is a cellular automaton in which every configuration has a unique predecessor. That is, it is a regular grid of cells, each containing a state drawn from a finite set of states, with a rule for updating all cells simultaneously based on the states of their neighbors, such that the previous state of any cell before an update can be determined uniquely from the updated states of all the cells. The time-reversed dynamics of a reversible cellular automaton can always be described by another cellular automaton rule, possibly on a much larger neighborhood.
>[...] Reversible cellular automata form a natural model of reversible computing, a technology that could lead to ultra-low-power computing devices. Quantum cellular automata, one way of performing computations using the principles of quantum mechanics, are often required to be reversible. Additionally, many problems in physical modeling, such as the motion of particles in an ideal gas or the Ising model of alignment of magnetic charges, are naturally reversible and can be simulated by reversible cellular automata.
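The "unique predecessor" property can be had by construction using the second-order technique associated with Fredkin: keep two consecutive time steps and XOR the local rule's output into the older one. Here's a short sketch (the particular rule `f` is an arbitrary choice for illustration) that runs forwards, then runs backwards and recovers the initial configuration exactly:

```python
def step(prev, cur, f):
    """One update of a second-order reversible CA on a ring:
    next[i] = f(neighbourhood of cur at i) XOR prev[i].
    Since prev[i] = f(...) XOR next[i], swapping the roles of the two
    stored configurations runs time backwards with the same function."""
    n = len(cur)
    return [f(cur[(i - 1) % n], cur[i], cur[(i + 1) % n]) ^ prev[i]
            for i in range(n)]

f = lambda l, c, r: l ^ r   # any local rule works; this is parity of neighbours

prev = [0, 0, 0, 0, 0, 0, 0, 0]
cur  = [0, 0, 0, 1, 0, 0, 0, 0]
history = [prev, cur]
for _ in range(5):                                    # run forwards
    history.append(step(history[-2], history[-1], f))

a, b = history[-1], history[-2]                       # run backwards
for _ in range(5):
    a, b = b, step(a, b, f)
assert [a, b] == [cur, prev]                          # initial state recovered
```

Notice that `step` itself can be any function at all; reversibility comes from the XOR bookkeeping, not from the rule.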
Theory of Self-Reproducing Automata: John von Neumann's Quantum Mechanical Universal Constructors:
https://news.ycombinator.com/item?id=22738268
[...] Third, the probabilistic quantum mechanical kind, which could mutate and model evolutionary processes, and rip holes in the space-time continuum, which he unfortunately (or fortunately, for the sake of humanity) didn't have time to fully explore before his tragic death.
>p. 99 of "Theory of Self-Reproducing Automata":
>Von Neumann had been interested in the applications of probability theory throughout his career; his work on the foundations of quantum mechanics and his theory of games are examples. When he became interested in automata, it was natural for him to apply probability theory here also. The Third Lecture of Part I of the present work is devoted to this subject. His "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components" is the first work on probabilistic automata, that is, automata in which the transitions between states are probabilistic rather than deterministic. Whenever he discussed self-reproduction, he mentioned mutations, which are random changes of elements (cf. p. 86 above and Sec. 1.7.4.2 below). In Section 1.1.2.1 above and Section 1.8 below he posed the problems of modeling evolutionary processes in the framework of automata theory, of quantizing natural selection, and of explaining how highly efficient, complex, powerful automata can evolve from inefficient, simple, weak automata. A complete solution to these problems would give us a probabilistic model of self-reproduction and evolution. [9]
[9] For some related work, see J. H. Holland, "Outline for a Logical Theory of Adaptive Systems", and "Concerning Efficient Adaptive Systems".
https://www.deepdyve.com/lp/association-for-computing-machin...
https://deepblue.lib.umich.edu/bitstream/handle/2027.42/5578...
https://www.worldscientific.com/worldscibooks/10.1142/10841
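A probabilistic automaton of the kind described in that passage is simple to sketch: the transition out of each state is drawn from a distribution rather than fixed. The two-state "mutation" machine below is purely my illustration, not anything from von Neumann's text:

```python
import random

def run(transitions, state, steps, rng):
    """Probabilistic automaton: from each state, the next state is drawn
    from a probability distribution instead of being determined."""
    for _ in range(steps):
        r, acc = rng.random(), 0.0
        for nxt, p in transitions[state]:
            acc += p
            if r < acc:          # probabilities sum to 1, so this always fires
                state = nxt
                break
    return state

# Two states with rare random flips, a toy model of mutation.
transitions = {"A": [("A", 0.9), ("B", 0.1)],
               "B": [("B", 0.9), ("A", 0.1)]}
print(run(transitions, "A", 100, random.Random(42)))
```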
perl4ever on Dec 26, 2017 | root | parent | next
Tipler's Omega Point prediction doesn't seem like it would be compatible with the expanding universe, does it? Eventually everything will disappear over the speed-of-light horizon, and then it can't be integrated into one mind.
DonHopkins on Dec 26, 2017 | root | parent | next
It also wishfully assumes that the one mind can't think of better things to do with its infinite amount of cloud computing power than to simulate one particular stone age mythology.
Then again, maybe it's something like the 1996 LucasArts game Afterlife, where you simulate every different religion's version of heaven and hell at once.
https://en.wikipedia.org/wiki/Afterlife_(video_game)
The primary goal of the game is to provide divine and infernal services for the inhabitants of the afterlife. This afterlife caters to one particular planet, known simply as the Planet. The creatures living on the Planet are called EMBOs, or Ethically Mature Biological Organisms. When an EMBO dies, its soul travels to the afterlife where it attempts to find an appropriate "fate structure". Fate structures are places where souls are rewarded or punished, as appropriate, for the virtues or sins that they practiced while they were alive.
https://news.ycombinator.com/item?id=30735397
DonHopkins on March 19, 2022 | parent | on: Ask HN: What book changed your life?
Cellular Automata Machines: A New Environment for Modeling. Published April 1987 by MIT Press. ISBN: 9780262200608.
http://mitpress.mit.edu/books/cellular-automata-machines
http://www.researchgate.net/publication/44522568_Cellular_au...
https://donhopkins.com/home/cam-book.pdf
https://github.com/SimHacker/CAM6/blob/master/javascript/CAM...
themodelplumber on March 20, 2022 | prev
I'm curious, how did the book change your life? What kind of problems did the authors model using their approach? I'm new to the topic, thanks for any input.
DonHopkins on March 22, 2022 | parent
It really helped me get my head around how to understand and program cellular automata rules, which is a kind of massively parallel distributed "Think Globally, Act Locally" approach that also applies to so many other aspects of life.
But by "life" I don't mean just the cellular automata rule "life"! Not to be all depressing like Marvin the Paranoid Android, but I happen to think "life" is overrated. ;) There are so many billions of other extremely interesting cellular automata rules besides "life" too, so don't stop once you get bored with life! ;)
https://www.youtube.com/watch?v=CAA67a2-Klk
For example, it's kind of like how the world wide web works: "Link Globally, Interact Locally":
https://donhopkins.medium.com/scriptx-and-the-world-wide-web...
It's also very useful for understanding other massively distributed locally interacting parallel systems, epidemiology, economics, morphogenesis (reaction-diffusion systems, like how a fertilized egg divides and specializes into an organism), GPU programming and optimization, neural networks and machine learning, information and chaos theory, and physics itself.
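As a concrete taste of that "Think Globally, Act Locally" flavor, here's a minimal sketch of a synchronous CA update in Python (using the over-familiar "life" rule only because everyone knows what a blinker looks like): every cell applies the same purely local rule, and the global behavior emerges from that.

```python
def life_step(grid):
    """One synchronous update of Conway's Life on a toroidal grid.
    Each cell looks only at its 8 neighbours ("act locally");
    all cells update at once ("think globally")."""
    h, w = len(grid), len(grid[0])
    def live_neighbours(y, x):
        return sum(grid[(y + dy) % h][(x + dx) % w]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0))
    return [[1 if live_neighbours(y, x) == 3
                  or (grid[y][x] and live_neighbours(y, x) == 2) else 0
             for x in range(w)] for y in range(h)]

# A blinker oscillates between vertical and horizontal with period 2.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 0, 0, 0]]
assert life_step(life_step(blinker)) == blinker
```

Swap out the rule expression and the same loop runs any of those billions of other rules.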
I've discussed the book and the code I wrote based on it with Norm Margolus, one of the authors, and he mentioned that he really likes rules that are based on simulating physics, and also thinks reversible cellular automata rules are extremely important (and energy efficient in a big way, in how they relate to physics and thermodynamics).
Also I've frequently written on HN about Dave Ackley's great work on Robust-First Computing and the Moveable Feast Machine, which I think is brilliant, and quite important in the extremely long term (which is coming sooner than we think).
https://news.ycombinator.com/item?id=22304110
https://news.ycombinator.com/item?id=22300376
https://news.ycombinator.com/item?id=22303313
The simplest, dumbest alternative to reversible computing is to install datacenters in the ex-USSR, where there is still rich (though slowly disappearing) infrastructure for central hot water. Instead of charging people alone, utilities can charge both people and datacenters and still lower the carbon footprint.
Energy-aware computing isn't about environmentalism and saving energy. It's sometimes framed that way in the name of greenwashing, but it really isn't: consumption was negligible before the AI/crypto craze. It's about "longer-lasting battery" and "getting more stuff on the chip without melting it".
I believe it would be more efficient to use a heat pump for the district heating, even if the datacenter heat is just dumped. Heat pumps can reach up to 400% efficiency (a coefficient of performance of about 4).
What do you mean by efficient?
The heat emitted by the electronics will always be emitted and needs to go somewhere. If 1 MWh of that heat is dumped into district heating, how would that be less efficient than the 1 MWh being dumped into the atmosphere to (hopefully) be reclaimed by a heat pump elsewhere?
Or, alternatively, that 1MWh could be absorbed by the already existing datacenter AC coils which could ultimately still be used to heat up district water as it cools the refrigerant. (People actually do this with swimming pools, using the coils from their AC to heat the pool).
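The trade-off is easy to put in back-of-the-envelope numbers (illustrative figures, not measured data). The point is that the datacenter's electricity is spent on computing either way, so its waste heat arrives at zero marginal cost, whereas a COP-4 heat pump must buy a quarter of every delivered MWh as electricity:

```python
# Back-of-the-envelope comparison of two ways to supply 1 MWh of district heat.
compute_electricity_mwh = 1.0   # electricity the datacenter burns regardless
heat_pump_cop = 4.0             # "400% efficiency" = coefficient of performance 4

# Option A: route the waste heat into district heating directly.
heat_delivered = compute_electricity_mwh   # essentially all of it ends up as heat
extra_electricity_a = 0.0                  # no additional purchase needed

# Option B: vent the waste heat and deliver the same heat with a heat pump.
extra_electricity_b = heat_delivered / heat_pump_cop   # MWh of extra electricity

print(extra_electricity_a, extra_electricity_b)   # 0.0 0.25
```

So the two aren't really alternatives: every MWh of waste heat recovered offsets roughly 0.25 MWh of electricity the heat pump would otherwise consume.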
1. The reality of the ex-USSR is that no one will ever convince governments to invest in very costly infrastructure modifications for efficiency; datacenters are far, far easier to integrate into the existing boiler-based system.
2. The point was not to replace district heating with datacenters (that is not possible, for a variety of reasons) but to augment the existing huge gas boilers with the datacenter's collected waste heat, rendering the datacenter carbon neutral.
3. Even at 400% efficiency, you still gain if the heat pumps are augmented with waste heat, as you need far less heat pumping. You'd still need your datacenters anyway, wouldn't you?
Wait, do you mean there is no central hot water infrastructure elsewhere in the world? Poland is not ex-USSR, but it is commonplace there, and I always assumed this was a normal thing everywhere.
Norway: no such thing here (at least not in smaller cities; not sure about Oslo). The NTNU campus in Trondheim is warmed by waste heat from a supercomputer, exactly as the GP suggested.
In the US it's pretty much limited to a few university campuses, I believe. I think a couple of cities in Canada have it, too.