> Serious systems built in memory-unsafe languages yield continual streams of exploitable vulnerabilities
I'm not saying that Go is as unsafe as C. But it definitely is NOT completely safe. I've seen memory corruptions from improper data sync in my own code.
Go ahead and demonstrate it. Obviously, I'm saying this because nobody has managed to do this in a real Go program. You can contrive vulnerabilities in any language.
It's not like this is a small track record. There is a lot of Go code, a fair bit of it important, and memory corruption exploits in non-FFI Go code are... not a thing. Like, at all.
Personally, I’m more interested in what a process can do to protect a small amount of secret material longer-term, such as using wired memory and trust zones. I was hoping this would be an abstraction for that.
Consumer-grade hardware generally lacks real confidentiality assurance features. Such a software feature implemented in user-space is moot without the ability to control context switching, rendering it mostly security theater. Security-critical bits should be done in a dedicated crypto processor that has tamper self-zeroing and self-contained RAM, or at the very least in the kernel, outside the reach of user-space processes. No matter how much marketing or blog hype is offered, it's lipstick on a pig. They've essentially implemented a soft, insecure HSM.
Kind of stupid it didn’t have something like this to begin with tbh. It really is an incredible oversight when one steps back. I am fully ready to be downvoted to hell for this, but rust ftw.
It doesn’t, but the problem space is more constrained since you are at least in control of heap vs stack storage. Register clearing is not natively available, though. To put it more simply: yes, you can write this in Rust - you can’t write it in Go today.
You can try to write it in Rust, doesn't mean you'll succeed. Rust targets the abstract machine, i.e. the wonderful land of optimizing compilers, which can copy your data anywhere they want and optimize out any attempts to scramble the bytes. What we'd need for this in Rust would be an integration with LLVM, and likely a number of modifications to LLVM passes, so that temporarily moved data can be tracked and erased. The only reason Go can even begin to do this is they have their own compiler suite.
I'm pretty sure you could do it with inline assembly, which targets the actual machine.
You could definitely zero registers that way, and an allocator that zeroes on drop should be easy. The only tricky thing would be zeroing the stack - how do you know how deep to go? I wonder what Go's solution to that is...
I meeeeean... plenty of functions allocate internally and don't let the user pass in an allocator. So it's not clear to me how to do this at least somewhat universally. You could try to integrate it into the global allocator, I suppose, but then how do you know which allocations to wipe? Should anything allocated in the secret mode be zeroed on free? Or should anything be zeroed if the deallocation happens while in secret mode? Or are both of these necessary conditions? It seems tricky to define rigidly.
And stack's the main problem, yeah. It's kind of the main reason why zeroing registers is not enough. That and inter-procedural optimizations.
So you’re correct that covering the broadest general case is problematic. You have to block code from doing IO of any form to be safe.
In general, though, getting to a fairly predictable place is possible, and the typical case of key material shouldn’t involve highly arbitrary stacks; if it does, you’re losing (see the IO comment above).
https://docs.rs/zeroize/1.8.1/zeroize/ has been effective for some users, it’s helped black box tests searching for key material no longer find it. There are also some docs there on how to avoid common pitfalls and links to ongoing language level discussions on the remaining and more complex register level issues.
It's not clear to me how true your comment is. I think that if things were as unpredictable as you are saying, there would be insane memory leaks all over the place in Rust (let alone C++) that would be the fault of compilers as opposed to programs, which does not align with my understanding of the world.
"Memory leaks" would be a mischaracterisation. "Memory leak" typically refers to not freeing heap-allocated data, while I'm talking about data being copied to temporary locations, most commonly on the stack or in registers.
In a nutshell, if you have a function like
fn secret_func() -> LargeType {
    /* do some secret calculations */
    LargeType::init_with_safe_Data()
}
...then even if you sanitize heap allocations and whatnot, there is still a possibility that those "secret calculations" will use the space allocated for the return value as a temporary location, and then you'll have secret data leaked in that type's padding.
More realistically, I'm assuming you're aware that optimizing compilers often simplify `memset(p, 0, size); free(p);` to `free(p);`. A compiler frontend can use things like `memset_s` to force rewrites, but this will only affect the locals created by the frontend. It's entirely possible that the LLVM backend notices that the IR wants to erase some variable, and then decides to just copy the data to another location on the stack and work with that, say to utilize instruction-level parallelism.
I'm partially talking out of my ass here, I don't actually know if LLVM utilizes this. I'm sure it does for small types, but maybe not with aggregates? Either way, this is something that can break very easily as optimizing compilers improve, similarly to how cryptography library authors have found that their "constant-time" hacks are now optimized to conditional jumps.
Of course, this ignores the overall issue that Rust does not have a runtime. If you enter the secret mode, the stack frames of all nested invoked functions need to be erased, but no information about the size of that stack is accessible. For all you know, memcpy might save some dangerous data to the stack (say, spill the vector registers or something), but since it's implemented in libc and linked dynamically, there is simply no information available on the size of the stack allocation.
This is a long yap, but personally, I've found that trying to harden general-purpose languages simply doesn't work well enough. Hopefully everyone realizes by now that a borrow checker is a must if you want to prevent memory unsoundness issues in a low-level language; similarly, I believe an entirely novel concept is needed for cryptographic applications. I don't buy that you can just bolt it onto an existing language.
Okay, fair point, sort of. Rust does not have a built-in feature to zero data. Rust does automatically drop references to data on the heap. Zeroing data is fairly trivial, whereas in go, the issue is non-trivial (afaiu).
use std::ptr;

struct SecretData {
    data: Vec<u8>,
}

impl Drop for SecretData {
    fn drop(&mut self) {
        // Zero out the data
        unsafe {
            ptr::write_bytes(self.data.as_mut_ptr(), 0, self.data.len());
        }
    }
}
FTA: “Heap allocations made by the function are erased as soon as the garbage collector decides they are no longer reachable”
I think that means this proposal adds a very specific form of finalisers to go.
How is that implemented efficiently? I can think of doing something akin to NSAutoReleasePool (https://developer.apple.com/documentation/foundation/nsautor...), with all allocations inside a `secret.Do` block going into a separate section of the heap (like a new generation), and, on exit of the block, the runtime doing a GC cycle, collecting and clearing every now inaccessible object in that section of the heap.
It can’t do that, though, because the article also says:
“Heap allocations are only erased if the program drops all references to them, and then the garbage collector notices that those references are gone. The program controls the first part, but the second part depends on when the runtime decides to act”
and I think what I am thinking of will guarantee that the garbage collector will eagerly erase any heap allocations that can be freed.
Also, the requirement “the program drops all references to them” means this is not a 100% free lunch. You can’t simply wrap your code in a `secret.Do` and expect it to be free of leaked secrets.
My guess is that it just uses the existing finalizers and ensures the memory is overwritten.
https://pkg.go.dev/runtime#SetFinalizer
SetFinalizer is deprecated by AddCleanup: https://pkg.go.dev/runtime#AddCleanup
AddCleanup might be too heavy; it is cheaper to just set a bit in the header/info zone of memory blocks.
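For what it's worth, a rough sketch of what a cleanup-based variant could look like (purely illustrative - not how runtime/secret is actually implemented, and key/newKey are made-up names):

type key struct {
    buf []byte
}

func newKey(n int) *key {
    k := &key{buf: make([]byte, n)}
    // The cleanup must not capture k itself, or k stays reachable forever;
    // passing only the slice is fine, since its backing array is a separate allocation.
    runtime.AddCleanup(k, func(b []byte) {
        for i := range b {
            b[i] = 0
        }
    }, k.buf)
    return k
}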
Related: https://pkg.go.dev/crypto/subtle#WithDataIndependentTiming (added in 1.25)
And an in-progress proposal to make these various "bubble" functions have consistent semantics: https://github.com/golang/go/issues/76477
(As an aside, the linked blog series is great, but if you're interested in new Go features, I've found it really helpful to also subscribe to https://go.dev/issue/33502 to get the weekly proposal updates straight from the source. Reading the debates on some of these proposals provides a huge level of insight into the evolution of Go.)
I have to wonder if we need, say, a special "secret data" type (or modifier) that has the semantics of both crypto/subtle and runtime/secret. That is to say, comparison operators are always constant-time, functions holding the data zero it out immediately, GC immediately zeroes and deallocs secret heap allocations, etc.
I mean, if you're worried about ensuring data gets zeroed out, you probably also don't want to leak it via side channels, either.
One thing that makes me unsure about this proposal is the silent downgrading on unsupported platforms. People might think they're safe when they're not.
Go has the best support for cryptography of any language
I'm not sure there's a realistic alternative. If you need to generate a key, then it has to happen somehow on unsupported platforms. You can check Enabled() if you need to know and intend to do something different, but I assume most of the time you run the same function either way - you'd just prefer to opt into secret mode if it's available.
This is not what secret.Enabled() means. But it probably illustrates that the function needs to be renamed already. Here's what the doc comment says:

// Enabled reports whether Do appears anywhere on the call stack.

In other words, it is just a way of checking that you are indeed running inside the context of some secret.Do call; it doesn't guarantee that secret.Do is actually offering the protection you may desire.

That's not how it's implemented (it returns false if you're inside a Do() on an unsupported platform), although I agree the wording should be clearer.
Why not just panic and make it obvious?
One of the goals here is to make it easy to identify existing code which would benefit from this protection and separate that code from the rest. That code is going to run anyway, it already does so today.
Does it? I'm not disputing you, I'm curious why you think so.
Not OP, but Go has some major advantages in cryptography:
1. Well-supported standard libraries generally written by Google
2. Major projects like Vault and K8s that use those implementations and publish new stuff
3. Primary client language for many blockchains, bringing cryptography contributions from the likes of Ethereum Foundation, Tendermint, Algorand, ZK rollups, etc
Do you mean “best support for cryptography in the standard library”?
Because there is tremendous support for cryptography in, say, the C/C++ ecosystem, which has traditionally been the default language of cryptographers.
Yeah, the standard library crypto package is really good and so is the tls package. There's also golang.org/x/crypto, which is separate because it doesn't fall under the Go compatibility guarantee. You can do all kinds of hashes and generate certs and check signatures and do AES encryption, all built in and accessible. There are even lower-level constant-time compare functions and everything.
I'm a big fan of the Go standard library + /x/ packages.
And since any language can call those C/C++ libraries, all languages are equally good at cryptography! Thanks for the "insight".
4. The community seems to have realized that untangling the mess that is building C/C++ stuff is a fool's errand and seems to mostly prefer to reimplement it in Go
"The best" is still a strong claim. How does it stack up against Java or C#, for example?
I'd probably want some way to understand whether secret.Do is launched within a secret-supporting environment so that I'm able to show some user warning / force a user confirmation or generate_secrets_on_unsupported_platforms flag.
But, this is probably a net improvement over the current situation, and this is still experimental, so, changes can happen before it gets to GA.
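A minimal sketch of the kind of guard that comment is asking for, assuming Enabled really does report false inside Do on unsupported platforms (as claimed a few comments up); allowDowngrade is a made-up application flag:

secret.Do(func() {
    if !secret.Enabled() {
        if !allowDowngrade {
            panic("runtime/secret offers no protection on this platform")
        }
        // otherwise: warn the user / require confirmation, then continue
    }
    // ... generate and use key material ...
})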
> Go has the best support for cryptography of any language
This isn't true at all.
Writing cryptography code in Go is incredibly annoying and cumbersome due to the lack of operator overloading, forcing you to do method calls like `foo.Add(bar.Mul(baz).Mod(modulus)).Mod(modulus)`. These also often end up having to be bignums instead of using generic fixed-size field arithmetic types. Rust has incredibly extensive cryptographic libraries, the low-level ones taking advantage of operator overloading so the code ends up following the notation in the literature more closely. The elliptic_curve crate in particular is very nice to work with.
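For the flavor of the complaint, here is (a + b*c) mod m written with the standard library's math/big - just a sketch:

// Every operation is a method call on a destination receiver, so nothing
// reads like the underlying formula.
func addMulMod(a, b, c, m *big.Int) *big.Int {
    t := new(big.Int).Mul(b, c)
    t.Add(t, a)
    return t.Mod(t, m)
}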
Meh, this is a defence in depth measure anyway
Edit: also, the supported platforms are ARM and x86. If your code isn’t running on one of those platforms, you probably know what you’re doing.
Linux
Windows and MacOS?
Go is supposed to be cross-platform. I guess it's cross-platform until it isn't, and will silently change the semantics of security-critical operations (yes, every library builder will definitely remember to check if it's enabled.)
If you need this for Windows so desperately why aren’t you offering to add support for that platform? It’s open source.
Many advanced Go features start on certain platforms and then expand to others once the kinks are worked out. It’s a common pattern and has many benefits. Why port before it’s stable?
I look forward to your PR.
Absolutely not the right take unless the OP is a security researcher
> Meh, this is a defence in depth measure
Which is exactly why it should fail explicitly on unsupported platforms unless the developer says otherwise. I'm not sure how Go developers make things obvious, but presumably you have an ugly method or configuration option like:
dangerousAllowSecretsToLeak()

...for when a developer understands the risk and doesn't want to panic.

This is a sharp-edged tool guarded behind an experimental flag. You are not meant to use it unless you want to participate in the experiment. Objections like this and the other one ("check if it's enabled" -- you can't, that's not what secret.Enabled() means) illustrate that this API may still need further evolution, which it won't get if it's never available to experiment with.
Alternatively:
> If an offset in an array is itself secret (you have a data array and the secret key always starts at data[100]), don't create a pointer to that location (don't create a pointer p to &data[100]). Otherwise, the garbage collector might store this pointer, since it needs to know about all active pointers to do its job. If someone launches an attack to access the GC's memory, your secret offset could be exposed.
That doesn't make sense to me. How can the "offset in an array itself" be "secret" if it's "always" 100? 100 isn't secret.
I think it may be about the absolute memory address at which the secret is stored, which may itself be exploitable (i.e. you’re thinking about the offset value, rather than the pointer value). It’s about leaking even indirect information that could be exploited in different ways. From my understanding, this type of cryptography goes to extreme lengths to basically hide everything.
That’s my hunch at least, but I’m not a security expert.
The example could probably have been better phrased.
Ok, I kinda get the idea, and with some modification it might be quite handy - but I wonder why it's deemed an "unsolvable" issue right now.
It may sound naive, but for packages which hold data like the session-related material mentioned, or anything else that should not persist (until the next global GC) - why don't you just scramble the values before ending your current action?
And don't get me wrong - yes, that implies extra computation yada yada - but until a solution is practical and built in, I'd just recommend scrambling such variables with new data, so no matter how long they persist, a dump would just return your "random" scramble and nothing actually relevant.
It is fundamentally not possible to be in complete control of where the data you are working with is stored in Go. The compiler is free to put things on the heap or on the stack however it wants. Relatedly, it may make whatever copies it likes in between the actions defined in the memory model, which could leak arbitrary temporaries.
Yeah, .NET tried to provide a specific type related to this concept (SecureString) in the past, and AFAIK there were two main problems that caused it to fall into disuse:
First one being, it was -very- tricky to use properly for most cases, APIs to the outside world typically would give a byte[] or string or char[] and then you fall into the problem space you mention. That is, if you used a byte[] or char[] array, and GC does a relocation of the data, it may still be present in the old spot.
(Worth noting, the type itself doesn't do that, whatever you pass in gets copied to a non-gc buffer.)
The second issue is that there's no unified Unix memory protection system like there is on Windows; the Windows implementation is able to use Crypt32 such that only the current process can read the memory it used for the buffer.
In case you’re interested in a potential successor: https://github.com/dotnet/designs/pull/147
Without language-level support, it makes the code look like a mess.
Imagine 3 levels of nested calls where each function calls another 3 methods - we are talking about dozens of functions, each with a couple of variables. Of course you can still clean them up, but imagine how much cleaner the code will look if you don't have to.
Just like with garbage collection: you can free up memory yourself, but someone forgets something and we end up with either a memory leak or a security issue.
With good helpers, it could become something as simple as
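Presumably something along these lines - a minimal sketch, where scramble is a hypothetical helper (and, as the reply below explains, this alone doesn't close the hole):

// Hypothetical helper: overwrite a byte slice in place before the surrounding
// function returns, falling back to zeroing if crypto/rand fails.
func scramble(b []byte) {
    if _, err := rand.Read(b); err != nil {
        for i := range b {
            b[i] = 0
        }
    }
}

func handleSession(key []byte) {
    defer scramble(key)
    // ... use key ...
}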
Unless I don't understand the problem correctly.

There are two main reasons why this approach isn't sufficient at a technical level, which are brought up by comments on the original proposal: https://github.com/golang/go/issues/21865
1) You are almost certainly going to be passing that key material to some other functions, and those functions may allocate and copy your data around; while core crypto operations could probably be identified and given special protection in their own right, this still creates a hole for "helper" functions that sit in the middle
2) The compiler can always keep some data in registers, and most Go code can be interrupted at any time, with the registers of the running goroutine copied to somewhere in memory temporarily; this is beyond your control and cannot be patched up after the fact by you even once control returns to your goroutine
So, even with your approach, (2) is a pretty serious and fundamental issue, and (1) is a pretty serious but mostly ergonomic issue. The two APIs also illustrate a basic difference in posture: secret.Do wipes everything except what you intentionally preserve beyond its scope, while scramble wipes only what you think it is important to wipe.
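To make that difference in posture concrete, a hedged sketch (derivePrivateKey and computePublic are made-up stand-ins):

var pub []byte
secret.Do(func() {
    priv := derivePrivateKey() // hypothetical; lives only inside the region
    pub = computePublic(priv)  // only what you deliberately copy out survives
})
// After Do returns, the registers and stack it used are wiped, and heap
// allocations made inside are erased once unreferenced; pub remains because
// we chose to let it escape.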
Thanks, you brought up good points.
In my case I had a program in which I created an instance of such a secret, "used it", and then scrambled the variable; it never left, so it worked.
Though I didn't think of (2), which is especially problematic.
I'd probably still scramble in the places where it's viable, trying to reduce the attack surface even if I cannot fully remove it.
As I understand it, this is too brittle. I think this is trivially defeated if someone adds an append to your code:
func do_another_important_thing(key []byte) []byte {
    newKey := append(key, 0x0, 0x1) // this might make a copy!
    return newKey
}

key := make([]byte, 32)
defer scramble(&key)
do_another_important_thing(key)
// do all the secret stuff

Because of the copy that append might do, you now have 2 copies of the key in memory, but you only scramble one. There are many functions that might make a copy of the data, given that you don't manually manage memory in Go. And if you are writing an open source library that might have dozens of authors, it's better for the language to provide a guarantee, rather than hope that a developer who probably isn't born yet will remember not to call an "insecure" function.

Yep, that's what I had in mind.
This proposal is worse because all the valuable regions of code will be clearly annotated for static analysis, either explicitly via a library/function call, or heuristically using the same boilerplate or fences.
Makes sense - basically creating an easy-to-spot pattern for static analysis to find everything security related.
As another response pointed out, it's also possible that said secret data is still sitting in a register, which could survive no matter what we do to the current value.
Thanks for pointing it out!
> Makes sense - basically creating an easy-to-spot pattern for static analysis to find everything security related.
This is essentially already the case whenever you use encryption, because there are tell-tale signs you can detect (e.g., AES S-box tables). But this will make it even easier, and also tip you off to critical sections that are sensitive yet don't involve encryption (e.g., secure strings).
I could imagine code that did something like this for primitives:
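Roughly this sort of thing, perhaps (secretStash and its layout are invented for illustration):

// A stash that tracks pointers to primitive secrets so they can all be
// overwritten ("thrashed") in one place, without waiting for the GC.
type secretStash struct {
    ints []*int64
    bufs [][]byte
    // strings are deliberately absent: they're immutable, so overwriting them
    // needs unsafe (see the follow-up comment below)
}

func (s *secretStash) Thrash() {
    for _, p := range s.ints {
        *p = 0
    }
    for _, b := range s.bufs {
        for i := range b {
            b[i] = 0
        }
    }
}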
yes, you now have to deal in pointers, but that's not too ugly, and everything is stored in secretStash, so you can iterate over all the types it supports and thrash them to make them unusable, even without the gc running.

I used to see this in bash scripts all the time. It’s somewhat gone out of favor (along with using long bash scripts).
If you had to prompt a user for a password, you’d read it in, use it, then thrash the value.
It’s not pretty, but a similar concept. (I also don't know how helpful it actually is, but that's another question...)

That's even better than what I had in mind, but agreed - also a good way to just scramble stuff into being unusable ++
I'm now wondering whether, with a bit of unsafe, reflection and generics magic, one could make it work with any struct as well (use reflection to instantiate a generic type and use unsafe to just overwrite the bytes).
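Something like this might work for the struct case - a sketch, assuming "unsafe" is imported; note that it only wipes the struct's own bytes (not anything it points to) and it also clobbers any pointers inside, so it has to be the last thing you do with the value:

// Overwrite the in-place bytes of an arbitrary struct value.
func thrash[T any](p *T) {
    b := unsafe.Slice((*byte)(unsafe.Pointer(p)), unsafe.Sizeof(*p))
    for i := range b {
        b[i] = 0
    }
}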
Hard to understand what you’re asking. This is the solution that will be practical and built in. This is a summary of a new feature coming to Go’s runtime in 1.26.
Aka .NET SecureString - which is barely used because everything accepts String.
How much control does a language runtime have over whether the memory controller actually zeros out physical memory? My guess is very little on consumer hardware but happy to be wrong
If it's no longer readable by software that's at least far better than no protection, I imagine
awnumar/memguard[1] exists and does even more
1) allocations via memguard bypass gc entirely
2) they are encrypted at all times when not unsealed
3) pages are mprotected to prevent leakage via swap
4) and so on...
Not as ergonomic as OP's proposal, of course.
[1] https://github.com/awnumar/memguard
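For a feel of the ergonomics, usage is roughly along these lines (signatures quoted from memory, so treat them as approximate and check the project docs):

// Move key material into a guarded, non-GC allocation; NewBufferFromBytes
// wipes the source slice, and Destroy wipes and frees the guarded copy.
buf := memguard.NewBufferFromBytes(keyBytes)
defer buf.Destroy()
useKey(buf.Bytes()) // useKey is a placeholder for whatever consumes the key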
This is interesting, but how do you bootstrap it? How does this little software enclave get key material in that doesn't transit untrusted memory? From a file? I guess the attacker this is guarding against can read parts of memory remotely but doesn't have RCE. Seems like a better approach would be an explicitly separate allocator and message passing boundaries. Maybe a new way to launch an isolated go routine with limited copying channels.
> How does this little software enclave get key material in that doesn't transit untrusted memory?
Linux has memfd_secret ( https://man7.org/linux/man-pages/man2/memfd_secret.2.html ), which lets you create a memory region that only the owning process can map - it's even removed from the kernel's direct map.
I find this example mildly infuriating/amusing:
As that suggests that somehow for PFS it is critical that the ephemeral key (not the long-term one) is zeroed out, while the plaintext message - i.e. the thing that in the example we allegedly want secrecy for - is totally fine to be outside of the whole `secret` machinery, and remain in memory potentially "forever".

I get that the example is simplified (because what it should actually do is protect the long-term key, not the ephemeral one)... so, yeah, it's just a bad example.
PFS is just one of many desirable properties, and getting access to plaintext is just one of many kinds of threat. Getting access to ephemeral keys and other sensitive state can enable session hijacking. It's still not a great example, though, because it doesn't illustrate that threat model either.
This seems like it might be expensive (though plausibly complete), so I wonder if it’ll actually benchmark with a low enough overhead to be practical. We already struggle with a lack of optimization in some of the named target use cases - that said this also means there’s space to make up.
> The new runtime/secret package lets you run a function in secret mode. After the function finishes, it immediately erases (zeroes out) the registers and stack it used.
I don't understand. Why do you need it in a garbage-collected language?
My impression was that you are not able to access any register in these languages; that is handled by the compiler instead.
This is about minimizing attack surface. Not only could secrets be leaked by hacking the OS process somehow to perform arbitrary reads on the memory space and send keys somewhere, they could also be leaked with root access to the machine running the process, root access to the virtualization layer, via other things like rowhammering potentially from an untrusted process in an entirely different virtual context running on the same machine, and at the really high end, attacks where government agents seizing your machine physically freeze your RAM (that is, reduce the physical temperature of your RAM to very low temperatures) when they confiscate it and read it out later. (I don't know if that is still possible with modern RAM, but even if it isn't, I wouldn't care to bet much on the proposition that they don't have some other way to read RAM contents out if they really, really want to.) This isn't even intended as a complete list of the possibilities, just more than enough to justify the idea that in very high security environments there's a variety of threats that come from leaving things in RAM longer than you absolutely need to. You can't avoid having things in RAM to operate on them, but you can ensure they are as transient as possible to minimize the attack window.
If you are concerned about getting secrets zeroed out, then in almost any language you need some sort of support for it. Non-GC'd languages are prone to optimize away zeroing of memory before deallocation, because under normal circumstances a write to a value just before deallocation that is never effectfully read can be dropped without visible consequence to the rest of the program. And as compilers get smarter it gets harder to fool them: simply reading the memory afterwards with no further visible effect might have been enough to fool 20th-century compilers, but nowadays I wouldn't count on my compiler being that stupid.
There are also plenty of languages where you may want to use values that are immutable within the context of the language, so there isn't even a way to express "let's zero out this RAM".
Basically, if you don't build this in as a language feature, you have a whole lot of pressures constantly pushing you in the other direction, because why wouldn't you want to avoid the cost of zeroing memory if you can? All kinds of reasons to try to avoid that.
In theory it prevents failures of the allocator that would allow reading uninitialized memory, which isn't really a thing in Go.
In practice it provides a straightforward path to complying with government crypto certification requirements like FIPS 140 that were written with languages in mind where this is an issue.
Go has both assembly language and unsafe pointer operations available. While any uses of these more advanced techniques should be vetted before going to production, they are obviously able to break out of any sandboxing that you might otherwise think a garbage collector provides.
And any language which can call C code that is resident in the same virtual memory space can have its own restrictions bypassed by said C code. This even applies to more restrictive runtimes like the JVM or Python.
The Go runtime may not be the only thing reading your process’ memory.
This would potentially protect against another process reading memory via some system compromise - it would be able to get new secrets but not old ones.
Go is not a memory safe language. Even in memory safe languages, memory safety vulnerabilities can exist. Such vulnerabilities can be used to hijack your process into running untrusted code. Or, as others point out, sibling processes could attack yours. The underlying principle is defense in depth - you add another layer of protection that has to be bypassed to achieve an exploit. All the layers combined raise the expense of hacking a system.
Respectfully, this has become a message board canard. Go is absolutely a memory safe language. The problem is that "memory safe", in its most common usage, is a term of art, meaning "resilient against memory corruption exploits stemming from bounds checking, pointer provenance, uninitialized variables, type confusion and memory lifecycle issues". To say that Go isn't memory safe under that definition is a "big if true" claim, as it implies that many other mainstream languages commonly regarded as memory safe aren't.
Since "safety" is an encompassing term, it's easy to find more rigorous definitions of the term that Go would flunk; for instance, it relies on explicit synchronization for shared memory variables. People aren't wrong for calling out that other languages have stronger correctness stories, especially regarding concurrency. But they are wrong for extending those claims to "Go isn't memory safe".
https://www.memorysafety.org/docs/memory-safety/
I’m not aware of any definition of memory safety that allows for segfaults - by definition those are an indication of not being memory safe.
It is true that Go is only memory unsafe in a specific scenario, but such things aren’t possible in true memory safe languages like C# or Java. That it only occurs in multithreaded scenarios matters little, especially since concurrency is a huge selling point of the language and baked in.
Java can have data races, but those data races cannot be directly exploited into memory safety issues like you can with Go. I’m tired of Go fans treating memory safety as some continuum just because there are many specific classes of how memory safety can be violated and Go protecting against most is somehow the same as protecting against all (which is what being a memory safe language means whether you like it or not).
I’m not aware of any other major language claiming memory safety that is susceptible to segfaults.
https://www.ralfj.de/blog/2025/07/24/memory-safety.html
Safety is a continuum. It's a simple fact. Feel free to use a term other than memory safety to describe what you're talking about, but so long as you use safety, there's going to be a continuum.
Also, by your definition, e.g. Rust is not memory safe. And "It is true that Rust is only memory unsafe in a specific scenario, but [...]". I hope you agree.
Another canard, unfortunately. "Segfault" is simply Go's reporting convention for things like nil pointer hits. "Segfaults" are not, in fact, part of the definition for memory safety or a threshold condition for it. All due respect to Ralf's Ramblings, but I'm going to rest my case with the Prossimo page on memorysafety.org that I just posted. This isn't a real debate.
> "Segfault" is simply Go's reporting convention for things like nil pointer hits.
Blatantly false. From Ralf’s post:
> panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x2a pc=0x468863]
The faulting address is 42, a value the program is mutating, not a nil pointer. You could easily imagine that value landing on a legal but unintended memory address, resulting in a read or write of unintended memory.
No, you can't, and the reason you know you can't is that it's never happened. That looks like a struct offset dereference from a nil pointer, for what it's worth.
https://go.dev/play/p/0fUzmP0cLEa
> That looks like a struct offset dereference from a nil pointer, for what it's worth.
The 42 is an explicit value in the example code. From what I understand, the code repeatedly reassigns an interface variable, alternating between an object containing a pointer and an object containing an integer. An interface variable stores the type of the assigned value alongside the value itself, but the two are not updated atomically, so a different thread can end up interpreting whatever integer you put into it as a valid pointer. Putting a large enough value into the integer should avoid the protected memory page around 0 and allow for some old-fashioned memory corruption.
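To make the mechanism concrete, here is a rough sketch in the spirit of the example under discussion (the type and method names are mine, not the exact playground code, and a data race is never guaranteed to reproduce). An interface value is two machine words, a type word and a data word, and an unsynchronized writer can leave them mismatched:

```go
package main

type value interface{ get() int }

type Int struct{ val int }
type Ptr struct{ val *int }

func (i Int) get() int { return i.val }  // reads val as an integer
func (p Ptr) get() int { return *p.val } // dereferences val as a pointer

func main() {
	x := 1
	var v value = Int{val: 42}

	// Writer: flips v between the two concrete types with no synchronization,
	// so the interface's type word and data word can be observed mismatched.
	go func() {
		for i := 0; ; i++ {
			if i%2 == 0 {
				v = Int{val: 42}
			} else {
				v = Ptr{val: &x}
			}
		}
	}()

	// Reader: on a torn read (Ptr in the type word, an Int behind the data
	// word), Ptr.get dereferences 42 as an address and the program faults
	// at 0x2a, matching the panic quoted above.
	for {
		_ = v.get()
	}
}
```

Running it with `go run -race` flags the race immediately; without the detector it will usually crash with the SIGSEGV quoted above, though nothing about a data race is guaranteed.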
You’d be wrong. I recommend you reread the blog post and grok what’s happening in the example.
> When that happens, we will run the Ptr version of get, which will dereference the Int’s val field as a pointer – and hence the program accesses address 42, and crashes.
If you don’t see an exploit gadget there based on a violation of memory safety I don’t know how to have a productive conversation.
Please do explain the exploit gadget you're talking about.
Rust is susceptible to segfaults when overflowing the stack. Is Rust not memory safe then?
Of course, Go allows more than that, with data races it's possible to reach use after free or other kinds of memory unsafety, but just segfaults don't mark a language memory unsafe.
Go is most emphatically NOT memory-safe. It's trivially easy to corrupt memory in Go when using goroutines. You don't even have to try hard.
This stems from the fact that Go uses fat pointers for interfaces, so they can't be atomically assigned. Built-in maps and slices are also not corruption-safe.
In contrast, Java does provide this guarantee. You can mutate structures across threads, and you will NOT get data corruption. It can result in null pointer exceptions or infinite loops, but not in corruption.
This is just wrong. Not the part about blowing up from a data race; you certainly can. What's wrong is the claim that any of these properties admits exploitable vulnerabilities, and exploitability is the point of the term as it is used today. When you expand the definition the way you are here, you impair the utility of the term.
Serious systems built in memory-unsafe languages yield continual streams of exploitable vulnerabilities; that remains true even when those systems are maintained by the best-resourced security teams in the world. Functionally no Go projects have this property. The empirics are hard to get around.
There were CVEs caused by concurrent map access. Definitely denials of service, and I'm pretty sure it can be used for exploitation.
> Serious systems built in memory-unsafe languages yield continual streams of exploitable vulnerabilities
I'm not saying that Go is as unsafe as C. But it definitely is NOT completely safe. I've seen memory corruptions from improper data sync in my own code.
Go ahead, talk through how this would be used for exploitation.
I would try to cause the map to reallocate at the same moment I'm writing to it, leading to corrupted memory-allocator structures.
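As a starting point, here is what the naive version of that attempt looks like (my own sketch; the runtime's concurrent-write detector is best-effort, so none of this is guaranteed). On current Go runtimes it usually dies almost immediately with an unrecoverable `fatal error: concurrent map writes` rather than quietly corrupting allocator state, which is the denial-of-service outcome mentioned above:

```go
package main

// Several goroutines write to the same map with no synchronization.
// The runtime's best-effort check usually catches this and aborts the
// whole process with "fatal error: concurrent map writes", which cannot
// be recovered from.
func main() {
	m := make(map[int]int)
	for g := 0; g < 4; g++ {
		go func() {
			for i := 0; ; i++ {
				m[i] = i // unsynchronized writes force repeated map growth
			}
		}()
	}
	select {} // block forever; the expectation is a crash, not a clean exit
}
```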
Go ahead and demonstrate it. Obviously, I'm saying this because nobody has managed to do this in a real Go program. You can contrive vulnerabilities in any language.
It's not like this is a small track record. There is a lot of Go code, a fair bit of it important, and memory corruption exploits in non-FFI Go code is... not a thing. Like at all.
Yeah, I can hardly disagree with that sentiment myself.
Personally, I’m more interested in what a process can do to protect a small amount of secret material longer-term, such as using wired memory and trust zones. I was hoping this would be an abstraction for that.
I looked into this a bit for a Rust project I'm working on; it's slightly difficult to be confident once you get all the way down to the CPU.
https://github.com/rust-lang/rust/issues/17046
https://github.com/conradkleinespel/rpassword/issues/100#iss...
> Protection does not cover any global variables that f writes
Seems like this should raise a compiler error, or panic at runtime.
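For illustration, assuming the proposed API is a plain `secret.Do(func())` wrapper as the post describes (the package is not in any released Go, and the helper names below are made up), the escape the quoted sentence warns about can look like this:

```go
package main

import "runtime/secret" // proposed package; hypothetical import for this sketch

var lastKey []byte // package-level variable, outside anything Do can erase

func deriveKey(password []byte) []byte {
	out := make([]byte, 32)
	secret.Do(func() {
		key := stretch(password) // sensitive intermediate allocated inside Do
		copy(out, key)           // the intended output handed back to the caller
		lastKey = key            // escape: a reference now lives in a global, so the
		// allocation stays reachable and the runtime never erases it
	})
	return out
}

// stretch stands in for a real KDF; it is a placeholder, not part of any API.
func stretch(p []byte) []byte {
	b := make([]byte, 32)
	copy(b, p)
	return b
}

func main() { _ = deriveKey([]byte("correct horse battery staple")) }
```

Catching this automatically would mean tracking every write reachable from f, including through third-party code and pointer indirection, which is presumably why the proposal documents it as a caveat rather than enforcing it.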
I wonder if people will start using this as magic security sauce.
More likely they'll use it and end up with a false sense of security.
Seems neat, anything similar in Java?
Consumer-grade hardware generally lacks real confidentiality-assurance features. A software feature like this, implemented in user space without the ability to control context switching, is moot, rendering it mostly security theater. Security-critical bits should be handled in a dedicated crypto processor with self-contained RAM that zeroes itself on tampering, or at the very least in the kernel, outside the reach of user-space processes. No matter how much marketing or blog hype is offered, it's lipstick on a pig. They've essentially implemented a soft, insecure HSM.
Big thumbs down from me.
Wow, this is so neat. I spent some time thinking about this problem years ago, and never thought of such an elegant solution.
Kind of stupid it didn’t have something like this to begin with tbh. It really is an incredible oversight when one steps back. I am fully ready to be downvoted to hell for this, but rust ftw.
Rust doesn't have anything like this either. I think you misunderstood what it is.
It doesn't, but the problem space is more constrained, since you are at least in control of heap vs. stack storage. Register clearing is not natively available, though. To put it more simply: yes, but you can write this in Rust; you can't write it in Go today.
You can try to write it in Rust, doesn't mean you'll succeed. Rust targets the abstract machine, i.e. the wonderful land of optimizing compilers, which can copy your data anywhere they want and optimize out any attempts to scramble the bytes. What we'd need for this in Rust would be an integration with LLVM, and likely a number of modifications to LLVM passes, so that temporarily moved data can be tracked and erased. The only reason Go can even begin to do this is they have their own compiler suite.
I'm pretty sure you could do it with inline assembly, which targets the actual machine.
You could definitely zero registers that way, and an allocator that zeroes on drop should be easy. The only tricky thing would be zeroing the stack - how do you know how deep to go? I wonder what Go's solution to that is...
I meeeeean... plenty of functions allocate internally and don't let the user pass in an allocator. So it's not clear to me how to do this at least somewhat universally. You could try to integrate it into the global allocator, I suppose, but then how do you know which allocations to wipe? Should anything allocated in the secret mode be zeroed on free? Or should anything be zeroed if the deallocation happens while in secret mode? Or are both of these necessary conditions? It seems tricky to define rigidly.
And stack's the main problem, yeah. It's kind of the main reason why zeroing registers is not enough. That and inter-procedural optimizations.
So you’re correct that covering the broadest general case is problematic. You have to block code from doing IO of any form to be safe.
In general, though, getting to a fairly predictable place is possible, and the typical case of key material shouldn't involve highly arbitrary call stacks; if yours does, you're losing (see the IO comment above).
https://docs.rs/zeroize/1.8.1/zeroize/ has been effective for some users; it has helped black-box tests that search for key material stop finding it. There are also docs there on how to avoid common pitfalls, and links to ongoing language-level discussions on the remaining, more complex register-level issues.
Yeah I meant a global allocator. You would wipe anything that was allocated while executing a "secure" function.
It's not clear to me how true your comment is. I think that if things were as unpredictable as you are saying, there would be insane memory leaks all over the place in Rust (let alone C++) that would be the fault of compilers as opposed to programs, which does not align with my understanding of the world.
"Memory leaks" would be a mischaracterisation. "Memory leak" typically refers to not freeing heap-allocated data, while I'm talking about data being copied to temporary locations, most commonly on the stack or in registers.
In a nutshell, take a function that does some secret calculation and returns its result by value. Even if you sanitize heap allocations and whatnot, there is still a possibility that those "secret calculations" will use the space allocated for the return value as a temporary location, and then you'll have secret data leaked in that type's padding.

More realistically, I'm assuming you're aware that optimizing compilers often simplify `memset(p, 0, size); free(p);` to just `free(p);`. A compiler frontend can use things like `memset_s` to force the write to happen, but this will only affect the locals created by the frontend. It's entirely possible that the LLVM backend notices that the IR wants to erase some variable, and then decides to just copy the data to another location on the stack and work with that, say to utilize instruction-level parallelism.
I'm partially talking out of my ass here, I don't actually know if LLVM utilizes this. I'm sure it does for small types, but maybe not with aggregates? Either way, this is something that can break very easily as optimizing compilers improve, similarly to how cryptography library authors have found that their "constant-time" hacks are now optimized to conditional jumps.
Of course, this ignores the overall issue that Rust does not have a runtime. If you enter the secret mode, the stack frames of all nested function invocations need to be erased, but no information about the size of that stack is accessible. For all you know, memcpy might save some dangerous data to the stack (say, spill the vector registers or something), but since it's implemented in libc and linked dynamically, there is simply no information available on the size of its stack allocation.
This is a long yap, but personally, I've found that trying to harden general-purpose languages simply doesn't work well enough. Hopefully everyone realizes by now that a borrow checker is a must if you want to prevent memory-unsoundness issues in a low-level language; similarly, I believe an entirely novel concept is needed for cryptographic applications. I don't buy that you can just bolt it onto an existing language.
This is basically my point, in addition to the fact that the time at which data is freed from the heap is far more predictable.
Okay, fair point, sort of. Rust does not have a built-in feature to zero data, but it does automatically drop references to heap data. Zeroing data is fairly trivial there, whereas in Go the issue is non-trivial (afaiu).
Zeroing memory is trickier than that, if you want to do it in Rust you should use https://crates.io/crates/zeroize
He was pretty close tbf - you just need to use `write_volatile` instead of `write_bytes`.