So the trick is to do the computation forwards, but take care to only use reversible operations, store the result outside of the auxiliary "full" memory, and then run the computation backwards, reversing all instructions and thus undoing their effect on the auxiliary space.
This is called catalytic because the machine couldn't do the computation in the amount of clean space it has alone, but it can do it by temporarily mutating the auxiliary space and then restoring it.
What I haven't yet figured out is how to do reversible instructions on auxiliary space. You can mutate a value depending on your input, but how do you then use that value? You can't assume anything about the contents of the auxiliary space, and just overwriting it with a constant (e.g. 0) is not reversible.
Maybe there is some XOR-like trick where you can store two values in the same space and restore them, as long as you know one of the values.
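Something like this toy sketch is the kind of trick I mean (my own illustration, not taken from the paper): add the answer into the junk-filled cell, copy the sum out to clean space, uncompute, then subtract, so the junk cancels and only the answer remains.

    import random

    aux = random.getrandbits(32)   # arbitrary "full" memory we must not destroy
    check = aux                    # kept only to verify the restoration at the end

    x, y = 6, 7
    out = 0                        # the little bit of clean space we actually own

    aux += x * y                   # forward: reversible no matter what aux held
    out += aux                     # copy (junk + answer) into clean space
    aux -= x * y                   # backward: undo the forward instruction
    out -= aux                     # junk cancels, leaving only x * y

    assert aux == check            # auxiliary memory restored bit for bit
    assert out == 42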
Edit: After delving into the paper linked in another comment, which is rather mathy (or computer-sciency in the original meaning of the phrase), I'd like to see a simple example of a program that cannot run in its amount of free space and actually needs to utilize the auxiliary space.
Just run it twice and XOR the relevant bits each time, so the second pass flips them back?
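Roughly like this, I mean (toy illustration of my own, not from the paper):

    aux = 0b10110111           # whatever happened to be in the borrowed cell
    mask = 0b01100001          # bits the computation wants to toggle, derived from the input

    aux ^= mask                # first pass: flip them and use the cell
    aux ^= mask                # second pass: the same XOR flips them back
    assert aux == 0b10110111   # original contents restored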
For me, the perfect introduction to catalytic computing was this lecture: https://www.youtube.com/watch?v=_KEORzRpxY8
Is this something like pointer-reversing garbage collection or the XOR trick to interchange two integer values?
That sounds similar to this in QC:
From "Reversible computing escapes the lab" (2025) https://news.ycombinator.com/item?id=42660606#42705562 :
> FWIU from "Quantum knowledge cools computers", if the deleted data is still known, deleting bits can effectively thermally cool, bypassing the Landauer limit of electronic computers? Is that reversible or reversibly-knotted or?
> "The thermodynamic meaning of negative entropy" (2011) https://www.nature.com/articles/nature10123
Though Landauer's limit presumably only applies to electrons, not photons or phonons or gravitational waves.
This is very similar to an old approach to LISP garbage collection. All of memory is a tree of small cells. You need to walk the tree and don't have space for a stack of the links that got you to where you are. So you store the backlinks in the forward links by XORing the forward link with the backlink. As the tree-walker backs down the tree, the links are XORed again, restoring the forward link.
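A stripped-down sketch of what that looks like (my own toy version that only follows one chain of links, with array indices standing in for cell addresses and 0 for nil):

    left = [0, 2, 3, 4, 0]    # left[i] = "address" of node i's left child; slot 0 is nil

    def walk_and_return(root):
        prev, cur = 0, root
        while cur != 0:                  # descend, storing the backlink in the forward link
            nxt = left[cur]
            left[cur] = nxt ^ prev       # the cell now holds forward XOR back
            prev, cur = cur, nxt
        while prev != 0:                 # back down the tree, XORing to restore
            back = left[prev] ^ cur      # the forward link is known: it's the node below us
            left[prev] = cur
            prev, cur = back, prev

    walk_and_return(1)
    assert left == [0, 2, 3, 4, 0]       # every link is back to what it was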
That's brilliant
Wikipedia has a note on XOR linked lists.[1] That's a more general form of this concept. This technique is decades older than the references there indicate. I've been trying to find a reference to it in early garbage collection papers, so far without success. It's the sort of thing that mattered around when LISP was being invented in the 1950s and 1960s, and memory cost millions of dollars per megabyte.
[1] https://en.wikipedia.org/wiki/XOR_linked_list
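The list version is tiny when simulated the same way (my own sketch, with array indices standing in for addresses and 0 playing the role of NULL):

    data = [None, "a", "b", "c", "d"]          # data[i] lives at "address" i
    link = [None, 0 ^ 2, 1 ^ 3, 2 ^ 4, 3 ^ 0]  # link[i] = prev XOR next

    def traverse(start):
        prev, cur = 0, start
        while cur != 0:
            print(data[cur])
            prev, cur = cur, link[cur] ^ prev  # recover next from (prev XOR next)

    traverse(1)   # a b c d; traverse(4) walks the same list backwards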
This was brought up in another thread here yesterday titled "XOR", as well!
Found the original paper outside of the paywall:
https://iuuk.mff.cuni.cz/~koucky/papers/catalytic.pdf
The ~koucky/papers/ root is also a goldmine of papers, including another catalytic one.
If I give you a hard drive full of photos, you could compress them, use the excess hard drive space for your algorithm, then uncompress the photos and give a full hard drive back to me. Is that what's going on here?
Basically it’s exploiting unused channel capacity of the memory if the Shannon entropy isn’t maxed out?
They are explicitly not assuming anything about the content of the auxiliary space (full hard drive).
So the data might be incompressible and thus compressing it and restoring it afterwards would not work.
Edit: From the paper:
> One natural approach is to compress the data on the hard disk as much as possible, use the freed-up space for your computation and finally uncompress the data, restoring it to its original setting. But suppose that the data is not compressible. In other words, your scheme has to always work no matter the contents of the hard drive. Can you still make good use of this additional space?
Why wouldn't decompressing work? The whole point is that the hard drive is restored to its (arbitrary) original state upon completion.
Some data is just incompressible, like random numbers, encrypted data or already compressed data.
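For instance (quick Python check, nothing to do with the paper itself):

    import os, zlib

    junk = os.urandom(1_000_000)                      # stand-in for encrypted or random data
    print(len(zlib.compress(junk, 9)))                # typically slightly *more* than 1,000,000
    print(len(zlib.compress(b"\0" * 1_000_000, 9)))   # compressible data, by contrast: ~1 KB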
The compression from GP's comment is irrelevant.
Put arbitrary data in, get the same arbitrary data out.
It is indeed one of the important techniques in catalytic computing that either 1) you can compress the catalyst, giving you space, or 2) an incompressible catalyst gives you a source of randomness.
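For 1), the shape of the idea is roughly this (my own rough sketch, with zlib standing in for whatever compressor a real construction would use):

    import zlib

    catalyst = b"holiday photo bytes " * 50_000   # the borrowed memory, compressible this time
    packed = zlib.compress(catalyst, 9)
    freed = len(catalyst) - len(packed)            # workspace gained while `packed` is parked
    # ... run the space-hungry part of the computation in the freed bytes ...
    assert zlib.decompress(packed) == catalyst     # hand the memory back exactly as it was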