I'm kind of confused by AMD's and Intel's response. I thought both companies were building technology that allows datacenter operators to prove to their customers that they do not have access to data processed on the machines, despite having physical access to them. If that's out of scope, what is the purpose of these technologies?
Remote attestation of our personal devices, including computers, the apps we run and the media we play on them.
The server side also has to be secure for the lock-in to be effective.
I've always assumed it's a long-term goal for total DRM.
TEEs don't work, period.
FHE does (ok, it's much slower for now).
Security theater, mostly.
> No, our interposer only works on DDR4
Not surprising: even having two DDR5 DIMMs on the same channel compromises signal integrity enough to force a ~30-40% frequency drop, so perhaps the best mitigation at the moment is to ensure the host uses the fastest DDR5 available.
So: is the host's DRAM/DIMM technology and frequency included in the remote attestation report for the VM?
All of that info can be faked. You should never trust a cloud VM. That is why it is called a "public cloud".
The attestation report is signed by a key held in the PSP hardware, not accessible to any OS or software, and the report can then be validated against the vendor's certificate/public key. If that can be faked, are you saying that those private keys are compromised?
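Roughly, that flow looks like the sketch below; this is a minimal model only, with Ed25519 standing in for the real PSP signature scheme and certificate chain:

    # Toy model of the attestation trust chain: the PSP holds a signing key
    # that never leaves the hardware; a remote verifier needs only the
    # vendor's public key. Ed25519 is a stand-in for the real scheme.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    psp_key = Ed25519PrivateKey.generate()  # lives inside the PSP, never exported
    vendor_pubkey = psp_key.public_key()    # published via the vendor's cert chain

    report = b"measurement=...;policy=...;nonce=..."  # attestation report payload
    signature = psp_key.sign(report)                  # signed inside the PSP

    # Verifier side: raises InvalidSignature if report or signature was tampered with.
    vendor_pubkey.verify(signature, report)
    print("report authentic")

Faking a report without the PSP's private key means forging that signature, which is why the grandparent's claim amounts to saying the keys are compromised.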
I'm willing to bet if you ran terrorism-as-a-service.com on a protected VM, it wouldn't be secure for long, and if it really came down to it, the keys would be coughed up.
This seems pretty trivial to fix (or at least work around) by adding an enclave generation number to the key-initialization inputs, as sketched below. (They mention that the key is based only on the physical address, but surely it has to include CPUID or something similar as well?) Understood that this is likely hardware key generation, so it won't be fixed without a silicon change, and that persistent generation counters are a bit of a pain… but what else am I missing?
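Something like the following, assuming an HMAC-style derivation (the names, widths, and inputs are illustrative, not the actual SEV/SGX scheme):

    # Toy key derivation for a memory-encryption engine. If today's scheme is
    # roughly key = f(device_secret, physical_address), adding a persistent
    # enclave-generation counter means ciphertext captured under an older
    # generation no longer decrypts under the current key.
    import hashlib
    import hmac

    def memory_key(device_secret: bytes, phys_addr: int, generation: int) -> bytes:
        msg = phys_addr.to_bytes(8, "little") + generation.to_bytes(8, "little")
        return hmac.new(device_secret, msg, hashlib.sha256).digest()

    secret = b"fused-in-device-secret-32-bytes!"
    k_gen1 = memory_key(secret, 0x7F000000, generation=1)
    k_gen2 = memory_key(secret, 0x7F000000, generation=2)
    assert k_gen1 != k_gen2  # bumping the counter revokes all older ciphertext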
Need to go Apple-style, where the AES engine is on die. Only the AES engine and the Secure Enclave know the decryption keys; the CPU doesn't. Nothing is sent in cleartext over the bus.
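A toy model of that split, using AES-XTS tweaked by physical address; the class boundary stands in for the die boundary, and everything here is illustrative rather than Apple's actual design:

    # Toy on-die memory-encryption engine: the key lives only inside the
    # engine object (standing in for the die), so only ciphertext ever
    # crosses the "bus".
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    class MemoryEncryptionEngine:
        def __init__(self):
            self._key = os.urandom(64)  # AES-256-XTS key; never leaves the engine

        def _cipher(self, phys_addr: int) -> Cipher:
            tweak = phys_addr.to_bytes(16, "little")  # per-address tweak
            return Cipher(algorithms.AES(self._key), modes.XTS(tweak))

        def write(self, phys_addr: int, plaintext: bytes) -> bytes:
            enc = self._cipher(phys_addr).encryptor()
            return enc.update(plaintext) + enc.finalize()  # ciphertext on the bus

        def read(self, phys_addr: int, ciphertext: bytes) -> bytes:
            dec = self._cipher(phys_addr).decryptor()
            return dec.update(ciphertext) + dec.finalize()

    engine = MemoryEncryptionEngine()
    line = engine.write(0x1000, b"secret cache line, 32 bytes long")
    assert engine.read(0x1000, line) == b"secret cache line, 32 bytes long"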
I hope people don't give up on TEEs; see AWS Nitro.
The AWS business is built on isolating compute, so IMO AWS is the best choice.
I've built up a stack for doing AWS Nitro dev:
https://lock.host/
https://github.com/rhodey/lock.host
With Intel and AMD, you need the attestation flow to prove not only that you are using the tech, but also to attest to who is hosting the CPU.
With Amazon Nitro, it is always Amazon hosting the CPU.
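On the verifier side, that means you can pin AWS as the root of trust. A rough sketch, where the pinned hash and document fields are placeholders rather than the real Nitro attestation format:

    # Sketch of the "who hosts the CPU" check for a Nitro-style attestation:
    # pin the expected root certificate and refuse anything that doesn't
    # chain to it; PCR values then answer "what is running" on top of that.
    import hashlib

    PINNED_ROOT_SHA256 = "0" * 64  # placeholder: the published root-cert hash

    def attestation_ok(root_cert_der: bytes,
                       pcrs: dict[int, bytes],
                       expected_pcrs: dict[int, bytes]) -> bool:
        root_ok = hashlib.sha256(root_cert_der).hexdigest() == PINNED_ROOT_SHA256
        pcrs_ok = all(pcrs.get(i) == v for i, v in expected_pcrs.items())
        return root_ok and pcrs_ok

    # usage with dummy data; a real check parses the signed attestation document
    print(attestation_ok(b"\x30\x82dummy", {0: b"\xaa"}, {0: b"\xaa"}))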
I find it reassuring that you can still get access to the data running on your own device, despite all the tens of thousands of engineering hours being poured into preventing just that.
I doubt you own hardware capable of any of the confidential computing technology mentioned
My 2017 bottom-shelf Lenovo has SGX whether I like it or not.
These days you can't really buy new hardware without secure enclaves[0], be it a phone, a laptop, or a server. The best you can do is refuse to run software that requires them, but even that will become tough when governments roll out mandatory software that depends on them.
[0]: unless you fancy buying nerd-vanity hardware like a Talos POWER workstation, with all the ups and downs that come with it.
Intel killed SGX on consumer CPUs a while ago
https://news.ycombinator.com/item?id=31047888
Intel TXT is another related trusted-execution/attestation/secure-enclave feature; not sure how prevalent that one is, though.
Pretty sure you can turn off SGX in the BIOS?
Well, microcontrollers with DRM and secure enclaves can prevent you from repairing your own device.
I think I talked about this possibility with Bunnie Huang about 15 years ago. As I recall, he said it was conceptually achievable. I guess it's also practically achievable!
Dupe of: https://news.ycombinator.com/item?id=45439286
11 points by mici 4 days ago
Is this making confidential computing obsolete?
In their current form, AMD's and Intel's proposals never fulfilled the promises of Confidential Computing. One can hope they will do better in the next iteration of SGX/TDX/SEV, but they were always broken, by design.
That's like saying a security vulnerability in OpenSSL/SSH makes SSL/SSH obsolete.
It's a bit more fundamental, in my opinion. Cryptographic techniques are supported by strong mathematics, while I believe hardware-based techniques will always be vulnerable to a sufficiently advanced hardware-based attack. In theory there exists an unbreakable version of OpenSSL ("under standard cryptographic assumptions"), but it is not evident that there is even a way to implement the kind of guarantees confidential computing is trying to offer using hardware-based protection alone.
A proof of existence does exist: some Xbox variant has now remained unbroken (not jailbroken) for more than 10 years, and not for lack of trying.
Credit/debit cards with chips (EMV) are another existence proof that hardware-based protection can work.
> It is not evident that there even is a way to implement the kind of guarantees confidential computing is trying to offer using hardware-based protection only.
Not in the absolute, but in the sense that it takes more than $10 million to break it (atomic-force microscopes to extract keys from CPU gates, ...), and that breaks a single specific device, not the whole class.
I like how the FAQ doesn't actually answer the questions (feels like AI slop, but giving them the benefit of the doubt), so I will answer on their behalf, without even reading the paper:
Am I impacted by this vulnerability?
For all intents and purposes, no.
Battering RAM needs physical access; is this a realistic attack vector?
For all intents and purposes, no.
You're twisting their words. For the second question, they clearly answer yes.
It depends on the threat model you have in mind. If you are a nation state that is hosting data in a US cloud, and you want to protect yourself from the NSA, I would say this is a realistic attack vector.
I haven't twisted their words; they didn't actually answer the question, so I gave my own commentary. For all intents and purposes, as in practically speaking, this isn't going to affect anyone*. The nation-state threat is atypical even for customers of confidential computing; I guess the biggest pool of users is those using Apple Intelligence (which wouldn't be vulnerable to this attack, since Apple uses soldered memory in its servers and a different TEE).
Happy to revisit this in 20 years and see if this attack is found in the wild and is representative. (I note it has been about 20 years since cold boot / evil maid attacks were published, and we still haven't seen or heard of them being used in the wild, though the world has largely moved on to soldered RAM for portable devices.)
* They went to great lengths to provide a logo, a fancy website, a domain, etc. to publicise the issue, so they should at least give an accurate impression of its severity.
They answer the second question quite clearly in my opinion:
It requires only brief one-time physical access, which is realistic in cloud environments, considering, for instance:

* Rogue cloud employees;
* Datacenter technicians or cleaning personnel;
* Coercive local law enforcement agencies;
* Supply chain tampering during shipping or manufacturing of the memory modules.

This reads as "yes". (You may disagree, but _their_ answer is "yes.")

Consider also "Room 641A" [1]: the NSA has asked big companies to install special hardware on their premises for wiretapping. This work is at least proof that a similar request could be made to intercept confidential compute environments.
[1] https://en.wikipedia.org/wiki/Room_641A
There is clearly a market for this and it is relevant to those customers. The host has physical access to the hardware and can therefore perform this kind of attack. Whether they have actually done so is irrelevant. I think the point of paying for confidential computing is knowing they cannot. Why do you consider physical access not a realistic attack vector?

Physical access owns. If the computer can't trust its own components, what can it do?