DJB has been complaining about this NSA position since 2022 (I guess long before it was an issue at the TLS WG):
https://blog.cr.yp.to/20220805-nsa.html
I'm actually quite surprised that anyone is advocating the non-hybrid PQ key exchange for real applications. If it isn't some sort of gimmick to allow NSA to break these, it's sure showing a huge amount of confidence in relatively recently developed mechanisms.
It feels kind of like saying "oh, now that we can detect viruses in sewage, hospitals should stop bothering to report possible epidemic outbreaks, because that's redundant with the sewage monitoring capability". (Except worse, because it involves some people who may secretly be pursuing goals that are the opposite of everyone else's.)
Edit: DJB said in that 2022 post
> Publicly, NSA justifies this by
>
> - pointing to a fringe case where a careless effort to add an extra security layer damaged security, and
> - expressing "confidence in the NIST PQC process".
> I'm actually quite surprised that anyone is advocating the non-hybrid PQ key exchange for real applications.
Why is that so surprising? Adopting new cryptography by running it in a hybrid mode with the cryptography it's replacing is generally not standard practice and multi-algorithm schemes are pretty niche at best (TrueCrypt/VeraCrypt are the only non-PQ cases that come to mind, although I'm sure there are others). Now you could certainly argue that PQ algorithms are untested and risky in a way that was not true of any other new algorithm and thus a hybrid scheme makes the most sense, but it's not such an obviously correct argument that anyone arguing otherwise must be either stupid or malicious.
There are probably other periods of time when I might have advocated running hybrids of different families of primitives, although I'm not sure that I was ever following the details closely enough to have actually advocated for that.
The cool thing is the dramatic security improvements against certain unknown unknowns for approximately linear additional work and space. Seems like a pretty great advantage for the defender, although seriously arguing that quantitatively requires some way to reason about the unknown unknowns (the reductio ad absurdum being that we would need to use every relevant primitive ever published in every protocol¹).
I see PQC as somehow very discontinuous with existing cryptography, both in terms of the attacks it tries to mitigate and the methods it uses to resist them. This might be wrong. Maybe it's fair to consider it an evolutionary advance in cryptographic primitive design.
The casual argument from ignorance is that lattices are apparently either somewhat harder to understand, or just less-studied overall, than other structures that public-key primitives have been built on, to the extent that we would probably currently not use them at all in practical cryptography if it weren't for the distinctive requirements of resistance to quantum algorithms. I understand that this isn't quantitative or even particularly qualitative (for instance, I don't have any idea of what about lattices is actually harder to understand).
Essentially, in this view, we're being forced into using weird esoteric stuff much earlier than we'd like because it offers some hope of defending against other weird esoteric stuff. Perhaps this is reinforced by, for example, another LWE submission having been called "NewHope", connoting to me that LWE was thought even by many of its advocates to offer urgently-needed "hope", but maybe not "confidence".
I'd like not to have to have that argument only in terms of vibes (and DJB does have some more concrete arguments that the security of SIKE was radically overestimated, while the security of LWE methods was moderately overestimated, so we need to figure out how to model how much of the problem was identified by the competition process and how much may remain to be discovered). I guess I just need to learn more math!
¹ I think I remember someone at CCC saying with respect to the general risk of cryptographic backdoors that we should use hybrids of mechanisms that were created by geopolitical rivals, either to increase the chance that at least one party did honest engineering, or to decrease the chance that any party knows a flaw in the overall system! This is so bizarre and annoying as a pure matter of math or engineering, but it's not like DJB is just imagining the idea that spy agencies sometimes want to sabotage cryptography, or have budgets and staff dedicated to doing so.
Expand on "recently-developed mechanisms".
I don't have a good sense of what to point to as the "mechanism".
https://en.wikipedia.org/wiki/Lattice-based_cryptography#His...
2005 (LWE), 2012 (LWE for key exchange), earlier (1990s for lattice math in general), 2017 (Kyber submission), later (competition modifications to Kyber)?
I can see how one could view the mathematics as moderately mature (comparable in age to ECC, but maybe less intensively studied?). As above, I don't know quite how to think about whether the "thing" here is properly "lattices", "LWE", "LWE-KEX", "Kyber", or "the parameters and instantiation of Kyber from the NIST PQ competition". Depending where we focus our attention there, I suppose this gives us some timeframe from the 1980s (published studies of computational complexity of lattice-related algorithms) to "August 2024" (adoptions of NIST PQ FIPS documents).
Edit: The other contextual thing that freaks out DJB, for those who might not be familiar, is that one of the proposed standards NIST was considering, SIKE, made it all the way through to the final (fourth) round of consideration, whereupon it was completely broken by a couple of researchers bringing to bear mathematical insight. Now SIKE had a very different architecture than the other proposals in the fourth round, so it seems like a portion of the debate is whether the undetected mathematical problems in SIKE are symptomatic of "the NIST competition came extraordinarily close to approving something that was totally broken, so maybe it wasn't actually that great at evaluating candidate algorithms, or at least maybe the mathematics community's understanding of post-quantum key exchange algorithms is still immature" or more symptomatic of "SIKE had such a weird and distinctive architecture that it was hard to understand or analyze, or hard to motivate relevant experts to understand or analyze it, unlike other candidate algorithms that were and are much better understood". It seems like DJB is saying the former and you're saying the latter.
There is so much here to debate about. A) Never trust the cyber feds. B) The NSA is not the place anyone thinks, it’s a Wild West in the most bizarre of places, trust me from experience. C) Cryptology concerns more than security and exchanging messages or packets, sometimes you don’t even know what kind of thing (living) can and has been decrypted. D) The NSA plays very, very, very dirty. It is like a digital CIA, they are in everything (i.e. cyber spies in various roles at tech/telecom/manufacturer company xyz). E) NEVER LISTEN TO THE DAMN NSA / DRIVEN BY A CULTURE OF EXPLOITATION
> sometimes you don’t even know what kind of thing (living) can and has been decrypted.
?
Ok, aside from not trusting the NSA, could you expand on why someone should trust you?
> expand on why someone should trust you
The point is to trust no one and no thing that we cannot examine freely, closely, and transparently. And to maintain healthy skepticism of any entity that claims to have a virtuous process to do its business.
GGP stated: “trust me from experience”.
And the point is “trust me: trust no one, but especially not them” is the meaning you are ignoring.
No it's not. The NSA has been the Federal Govt's designated expert on cryptography since the end of WW2. You are pretending that the current set of NIST standards and every previous NIST standard has not had incredibly intimate contact with the NSA.
Your lived experience tells you to trust the NSA, at least as it relates to NIST standards.
It's a touch odd to make a big deal of the fact that you've filed a complaint and fail to mention that it was formally rejected three days before you published the post: https://datatracker.ietf.org/group/iesg/appeals/artifact/146
Lots of respect to both you and the author, but the rejection gives no real response to any of the issues I see raised in the document.
It failed to raise my confidence at all.
> The IESG has concluded that there were no process failures by the SEC ADs. The IESG declines to directly address the complaint on the TLS WG document adoption matter. Instead, the appellant should refile their complaint with the SEC ADs in a manner which conforms to specified process.
I feel like if your argument is that the rules weren't followed, you have a pretty strong obligation to follow the rules in submitting your complaint.
Having served on boards, I find that rejections on procedural grounds which fail to address the engineering concerns that have been raised stink of a cop-out.
Have you read the complaint? It's not about engineering concerns, it's about whether procedures were followed correctly.
> Have you read the complaint? It's not about engineering concerns, it's about whether procedures were followed correctly.
This complaint? https://cr.yp.to/2025/20250812-non-hybrid.pdf
Engineering concerns start in section 2 and continue through section 4.
It seems you haven't read it.
There's a bunch of content that's not actually the complaint, and then there's section 4 which is the actual complaint and is overwhelmingly about procedure.
> and then there's section 4 which is the actual complaint and is overwhelmingly about procedure.
Ah, yes, procedural complaints such as "The draft creates security risks.", "There are no principles supporting the adoption decision.", and "The draft increases software complexity."
I don't know what complaint you're reading, but you're working awful hard to ignore the engineering concerns presented in the one I've read and linked to.
As is made clear from the fact that those issues all link to the mailing list, these are not novel issues. They were raised during discussion, taken into account, and the draft authors concluded they were answered adequately. Complaining about them at this point is fundamentally a complaint that the process failed to take these issues into account appropriately, and the issues should be revisited. Given that this was raised to the IESG, who are not the point of contact for engineering issues, the response is focused on that. There's a mechanism Dan can use to push for engineering decisions to be reviewed - he didn't do that.
> There's a mechanism Dan can use to push for engineering decisions to be reviewed - he didn't do that.
This is the retort of every bureaucracy which fails to do the right thing, and signals to observers that procedure is being used to overrule engineering best practices. FYI.
I'm thankful for the work djb has put into these complaints, as well as his attempts to work through process, successful or not, as otherwise I wouldn't be aware of these dangerous developments.
Excuses of any kind ring hollow in the presence of historical context around NSA and encryption standardization, and the engineering realities.
Hey, look, you're free to read the mailing list archives and observe that every issue Dan raised was discussed at the time, he just disagreed with the conclusions reached. He made a complaint to the ADs, who observed that he was using an email address with an autoresponder that asserted people may have to pay him $250 for him to read their email, and they (entirely justifiably) decided not to do that. Dan raised the issue to the next level up, who concluded that the ADs had behaved entirely reasonably in this respect and didn't comment on the engineering issues because it's not their job to in this context.
It's not a board's job to handle every engineering complaint themselves, simply because they are rarely the best suited people to handle engineering complaints. When something is raised to them it's a matter of determining whether the people whose job it is to make those decisions did so appropriately, and to facilitate review if necessary. In this case the entire procedural issue is clear - Dan didn't raise a complaint in the appropriate manner, there's still time for him to do so, there's no problem, and all the other complaints he made about the behaviour of the ADs were invalid.
> you're free to read the mailing list archives and observe that every issue Dan raised was discussed at the time
As was https://en.wikipedia.org/wiki/Dual_EC_DRBG which was ratified over similar objections.
That made it no less of a backdoor.
> it's not their job
As I said about excuses.
They're adhering to their charter. If you show up to my manager demanding to know why I made a specific engineering decision, he's not going to tell you - that's not the process, that's not his job, he's going to trust me to make good decisions unless presented with evidence I've misbehaved.
But as has been pointed out elsewhere, the distinction between the Dual EC DRBG objections and here is massive. The former had an obvious technical weakness that provided a clear mechanism for a back door, no technical justification for it was ever meaningfully presented, and it wasn't an IETF discussion. The counterpoints to Dan's engineering complaints (such as they are) are easily accessible to everyone, Dan just chose not to mention them.
> unless presented with evidence
The complaint seems well referenced with evidence of poor engineering decisions to me.
> Dual EC DRBG ... had an obvious technical weakness that provided a clear mechanism for a back door
Removing an entire layer of well tested encryption qualifies as an obvious technical weakness to me. And as I've mentioned elsewhere in these comments, it opens users up to a downgrade attack (https://en.wikipedia.org/wiki/Downgrade_attack) should flaws in the new cipher be found. There is a long history of such flaws being discovered, even after deployment, several examples of which DJB references.
I see no cogent reason for such recklessness, and many reasons to avoid it.
Continued pointing toward "procedure" seems to cede the case.
Why don't we hybridise all crypto? We'd get more security if we required RSA+ECDSA+ED25519 at all times, right? Or is the answer that the benefits are small compared to the drawbacks? I am unqualified to provide an answer, but I suspect you are also, and the answer we have from a whole bunch of people who are qualified is that they think the benefits aren't worth it. So why is it fundamentally and obviously true for PQC? This isn't actually an engineering hill I'd die on, if more people I trust made clear arguments for why this is dangerous I'd take it very seriously, but right now we basically have djb against the entire world writing a blogpost that makes ludicrous insinuations and fails to actually engage with any of the counterarguments, and look just no.
FWIW, https://blog.cr.yp.to/20240102-hybrid.html reads to me like a more direct attempt to engage with the counterarguments.
I am curious what the costs are seen to be here. djb seems to make a decent argument that the code complexity and resource usage costs are less of an issue here, because PQ algorithms are already much more expensive/hard to implement than elliptic curve crypto. (So instead of the question being "why don't we triple our costs to implement three algorithms based on pretty much the same ideas", it's "why don't we take a 10% efficiency hit to supplement the new shiny algorithm with an established well-understood one".)
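For a rough sense of the wire-format side of that cost, here's a back-of-envelope comparison in Python using the published sizes (ML-KEM-768 from FIPS 203, X25519 from RFC 7748); the percentage at the end is my arithmetic, not a benchmark of any real TLS stack:

```python
# Back-of-envelope: bytes added to a key exchange by keeping the
# classical layer alongside ML-KEM-768.
MLKEM768_PUBKEY = 1184   # client -> server (encapsulation key), per FIPS 203
MLKEM768_CT     = 1088   # server -> client (ciphertext), per FIPS 203
X25519_PUBKEY   = 32     # each direction, per RFC 7748

pq_only = MLKEM768_PUBKEY + MLKEM768_CT
hybrid  = pq_only + 2 * X25519_PUBKEY

print(f"PQ-only key-exchange bytes:  {pq_only}")   # 2272
print(f"Hybrid key-exchange bytes:   {hybrid}")    # 2336
print(f"Overhead of keeping X25519:  {100 * (hybrid - pq_only) / pq_only:.1f}%")  # ~2.8%
```

By that measure the classical layer adds a few percent to the key-exchange bytes, and one X25519 operation per handshake is similarly small next to everything else TLS does.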
On the other hand, it seems pretty bad if personal or career cost was a factor here. The US government is, for better or worse, a pretty major stakeholder in a lot of companies. Like realistically most of the people qualified to opine on this have a fed in their reporting chain and/or are working at a company that cares about getting federal contracts. For whatever reason the US government is strongly anti-hybrid, so the cost of going against the grain on this might not feel worth it to them.
Which insinuations do you think are ludicrous? Is it not a matter of public record at this point that the NSA and NIST have lied to weaken cryptography standards?
The entirely unsupported insinuation that the customer Cisco is describing is the NSA. What's even supposed to be the motivation there? The NSA want weak crypto so they're going to buy a big pile of Ciscos that they'll never use but which will make people think it's secure? There are others, but on its own that should already be a red flag.
The article links a statement from an NSA official that explicitly says the NSA has been asking vendors for this, which seems like fairly strong support to me.
>So why is it fundamentally and obviously true for PQC? This isn't actually an engineering hill I'd die on, if more people I trust made clear arguments for why this is dangerous I'd take it very seriously, but right now we basically have djb against the entire world writing a blogpost that makes ludicrous insinuations and fails to actually engage with any of the counterarguments, and look just no.
As a response to this only, while djb's recent blog posts have adopted a slightly crackpotish writing style, PQC hybridization is not a fringe idea, and is not deployed because of djb's rants.
Over in Europe, German BSI and French ANSSI both strongly recommend hybrid schemes. As noted in the blog, previous Google and Cloudflare experiments have deployed hybrids. This was at an earlier stage in the process, but the long history of lattices that is sometimes being used as a (reasonable) argument against hybrids applied equally when those experiments were deployed, so here I'm arguing that the choice made at the time is still reasonable today, since the history hasn't changed.
Yes, there is also a more general "lots of PQC fell quite dramatically" sentiment at play that doesn't attempt to separate SIKE and MLKEM. That part I'm happy to see criticized, but I think the broader point stands. Hybrids are a reasonable position, actually. It's fine.
> Why don't we hybridise all crypto?
So you've constructed a strawman. Another indication of ceding the argument.
> and the answer we have from a whole bunch of people who are qualified
The ultimate job of a manager or a board is to take responsibility for the decisions of the organization. All of your comments in this thread center around abdicating that responsibility to others.
> This isn't actually an engineering hill I'd die on
Could have fooled me.
> we basically have djb against the entire world
Many of your comments indicate to me that clashing personalities may be interfering with making the right engineering decision.
If the argument is "Why adopt a protocol that may rely on a weak algorithm without any additional protection" then I think it's up to you to demonstrate why that argument doesn't apply to any other scenario as well.
"Why adopt a protocol that may rely on a weak algorithm without any additional protection"
Does not accurately represent the situation at hand. And that seems intentional.
"Why weaken an existing protocol in ways we know may be exploitable?" is a more accurate representation. And I believe the burden of evidence lies on those arguing to do so.
Another strawman. No one in this thread said Kyber was known to be weaker. Just that elliptic curve cryptography is well tested, better understood as a consequence of having been used in production longer, and that removing it exposes transmissions protected only by the newer, less widely used algorithm to attacks which would not otherwise succeed.
It really seems like you're trying not to hear what's been said.
As a friendly reminder, you're arguing with an apologist for the security-flawed approach that the NSA advocates for and wants.
There are absolutely NSA technical and psychological operations personnel who are on HN not just while at work, but for work, and this site is entirely in-scope for them to use rhetoric to try to advance their agenda, even in bad faith.
I'm not saying mjg59 is an NSA propagandist / covert influencer / astroturf / sockpuppet account, but they sure pass the duck test for sounding and acting like one.
Appreciated. I'll only note that if this is the kind of resistance DJB encountered when raising his objections it goes a long way toward explaining why he might choose to publish his complaints publicly and lends additional credibility to his position.
It has certainly affected my perception of the individuals involved.
People can reasonably disagree with the djb position. His blog posts are notoriously divisive, and that doesn't make everyone on the other side a secret NSA influencer.
Please assume good faith, or discussions turn into personal attacks and wild accusations.
I fully agree Matthew Garrett is not a secret NSA propagandist. There is a much simpler explanation.
In 2016, Isis Lovecruft was romantically involved with Jacob Appelbaum. Isis lost a coveted PhD student spot studying under Bernstein to… Jacob Appelbaum. Isis broke up with Jacob and accused him of sexual abuse in a spectacularly public manner.
Isis became romantically involved with Henry de Valence, another Bernstein PhD student. Valence became acquainted with Appelbaum. Later, under Isis’ direction, Valence published a wild screed full of bizarre accusations trying to get Appelbaum expelled and Bernstein fired. When this failed, Isis dumped Valence and publicly accused him of sexual abuse.
Isis Lovecruft is now married to Matthew Garrett. Obviously Matthew is going to work to discredit Bernstein, because if he fails, he knows what the next two steps are.
I didn't claim mjg59 is NSA. I said their arguments function like NSA advocacy. Whether that's by design or coincidence doesn't change the effect. When someone consistently advances positions that serve surveillance state interests using procedural deflection to avoid security substance, noting that pattern isn't a personal attack - it's public, transparent, community-led threat assessment. Pointing out behavior that is functionally indistinguishable from NSA discourse manipulation in a community technical forum - in a conversation about NSA discourse manipulation in community technical forums, no less - isn't a personal attack, it's a social IDS system firing off an alert for a known-bad signature detection.
The effect of claiming that people act like NSA propagandists is indistinguishable from claiming they are an NSA propagandist, except that the wording allows you to weasel out of it.
This turns a thread about cryptography into a thread about attacking someone's particular posting style. This is not going to advance the discussion in any sort of useful direction, the only thing this can do is divide people further while cementing existing positions.
If your IDS thinks well-known free software people are NSA agents because they disagree in a style you don't like, the problem is with the IDS.
If someone doesn't want to be characterized as sounding like an NSA advocate, perhaps they should consider not advocating for NSA objectives.
Anyway, sounds like I'm being dismissed for being "divisive" despite raising substantive security concerns, just like djb. Readers: form your own conclusions about the repetitive patterns here; don't listen to the people telling you not to trust your own eyes.
Note the hallmarks: zero engagement with the substance of the critique (functional equivalence), ad-hom strawman attacks against my character as a response to a misrepresentation of my position, emotional manipulation techniques: demanding focus on tone / civility, maligning moral character of opponent (accusations of divisiveness), still trying to reframe a critique about behavior into an attack against identity that it isn't.
It is uncouth to accuse a person of being an X without evidence.
It is dishonest to state categorically that a person is not an X unless a person is in the position to know.
A pattern of behavior is a kind of evidence and the observed pattern of behavior does not seem to be in dispute.
There is no evidence presented that the person making a categorical statement is in a position to know about anyone's role or lack of a role in the NSA's clandestine activities.
> If you show up to my manager demanding to know why I made a specific engineering decision, he's not going to tell you
Well if you're working in a standards development organisation then your manager probably should.
It looks like (in the US at least) standards development organisations have to have (and follow) very robust transparency processes to not be default-liable for individual decisions.
(Unlike most organisations, such as wherever you and your manager from your scenario come from)
This is just a bureaucracy making up fake excuses. qsecretary, the autoresponder, is way less annoying than having to create a new account everywhere on each SaaS platform. At least you know your mail arrived.
Everyone has no issues forcing other people to use 2FA, which preferably requires a smartphone, but a simple reply to qsecretary is something heinous.
The $250 is for spam, and everyone apart from bureaucrats who want to smear someone as a group knows that this is 1990s bravado and hyperbole.
It's still nice that it was put up for completeness. And as we know this stuff has a long sordid history of people who are proponents of weakening encryption not giving up easily.
This is quite concerning, and respect to DJB for fighting against it. However, I have to wonder...who would this actually compromise that matters to NSA?
* Targets with sufficient technical understanding would use hybrids anyway.
* Average users and unsophisticated targets can already be monitored through PRISM which makes cryptography moot.
The vast majority of organisations just use whatever default security settings their Cisco router or web browser comes with.
The NSA starts by requiring some insecure protocols be supported, and then when support is widespread they start requiring it be made a default by requiring compliance testing be done with default config.
They also historically have extremely deep access to networks, and even if a given corp doesn't allow them to put a box inside the corp's own network, they control / have access to many or all of the links between most corps' datacenters.
From this privileged network position, if both sides support weaker crypto that NSA lobbied for, they can MitM the initial connection and omit the hybrid methods from the client's TLS ClientHello, and then client/server proceed to negotiate into a cipher that NSA prefers.
Pretty sure this isn't possible?? There must be some way to use a hash of the clientHello later in the key exchange process to make sure the connection fails if the hello is tampered with...?
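Right: in TLS 1.3 the Finished messages MAC a hash of the whole handshake transcript, ClientHello included, so silently dropping an offered group is detected. A minimal sketch of that binding, with made-up byte strings standing in for real handshake messages and the whole key schedule reduced to one shared HMAC key:

```python
import hashlib, hmac

def finished_mac(finished_key: bytes, *handshake_msgs: bytes) -> bytes:
    # TLS 1.3-style transcript binding: MAC over a hash of every
    # handshake message seen so far, ClientHello included.
    transcript_hash = hashlib.sha256(b"".join(handshake_msgs)).digest()
    return hmac.new(finished_key, transcript_hash, hashlib.sha256).digest()

# Stand-in: real handshakes derive this key from the shared secret,
# which a man-in-the-middle who merely forwards traffic doesn't know.
key = b"finished-key-from-the-key-schedule"

client_hello   = b"ClientHello: groups=[hybrid, pq-only]"
stripped_hello = b"ClientHello: groups=[pq-only]"  # what the MitM forwarded
server_hello   = b"ServerHello: group=pq-only"

server_view = finished_mac(key, stripped_hello, server_hello)  # what the server saw
client_view = finished_mac(key, client_hello, server_hello)    # what the client sent
assert not hmac.compare_digest(server_view, client_view)       # mismatch: handshake aborts
```

The standard caveat is that this binding is only as strong as the key exchange actually negotiated: an attacker who can break the downgraded group in real time can also forge the Finished MAC, which is why what's in the supported list matters.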
I think the point is... even if you can't get everyone to adopt your backdoored technology, you are much better off if 30% of the market adopts it than if only 1% does...
Intelligence is a numbers game, they never get everything, but if your net is wide enough and you don't give up, you'll catch a lot of fish over time
It is concerning that the IETF is moving forward with a proposal that weakens security and is of questionable technical merit, with the most reasonable explanation being that this is the result of efforts by government surveillance agencies to enable or potentially enable monitoring of encrypted communications supposedly protected by this standard; additionally, it is concerning that disagreeing with this decision is being met with censure and outright hostility by leaders of the IETF.
1. Adopt hybrid/dual encryption. This is safe against a break of the PQC layer, which seems entirely plausible given that the algorithms are young, the implementations are younger, and there has been significant weakening of the algorithms in the past decade. (See the sketch just after this list for how the two layers combine.)
2. Adopt PQC without a backup layer. This approach is ~5% faster (PQC algorithms are pretty slow), with the cost of breaking encryption for everyone on the internet if any flaw in the PQC algorithms or implementations is found.
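To make option 1 concrete: a hybrid exchange typically runs both key agreements independently and feeds both shared secrets through a KDF, so the session key survives unless both layers are broken. A minimal sketch under those assumptions, using X25519 from the pyca/cryptography package and a placeholder for the ML-KEM half (I'm deliberately not asserting any particular PQ library's API):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def mlkem_shared_secret() -> bytes:
    # Placeholder for ML-KEM encapsulation with its own fresh randomness;
    # a real deployment would call whatever PQ library it trusts here.
    return os.urandom(32)

# Classical layer: an independent ephemeral X25519 exchange.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
ecdh_secret = client_priv.exchange(server_priv.public_key())

# Combine: the session key depends on *both* secrets, so recovering it
# requires breaking the ECC layer and the PQ layer.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-kex-sketch",
).derive(ecdh_secret + mlkem_shared_secret())
print(len(session_key), "byte session key")
```

Concatenate-then-KDF is, as I understand it, roughly the shape of the deployed TLS hybrids; the classical layer costs one extra exchange and a few dozen bytes on the wire, not a second parallel protocol.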
Poor quality analogy: should ed25519 only have been incorporated into protocols in conjunction with another cryptographic primitive? Surely requiring a hybrid with ecdsa would be more secure? Why did djb not argue for everyone using ed25519 to use a hybrid? Was he trying to reduce security?
The reason this is a poor quality analogy is that fundamentally ecdsa and ed25519 are sufficiently similar that people had a high degree of confidence that there was no fundamental weakness in ed25519, and so it's fine - whereas for PQC the newer algorithms are meaningfully mathematically distinct, and the fact that SIKE turned out to be broken is evidence that we may not have enough experience and tooling to be confident that any of them are sufficiently secure in themselves and so a protocol using PQC should use a hybrid algorithm with something we have more confidence in. And the counter to that is that SIKE was meaningfully different in terms of what it is and does and cryptographers apparently have much more confidence in the security of Kyber, and hybrid algorithms are going to be more complicated to implement correctly, have worse performance, and so on.
And the short answer seems to be that a lot of experts, including several I know well and would absolutely attest are not under the control of the NSA, seem to feel that the security benefits of a hybrid approach don't justify the drawbacks. This is a decision where entirely reasonable people could disagree, and there are people other than djb who do disagree with it. But only djb has engaged in a campaign of insinuating that the NSA has been controlling the process with the goal of undermining security.
> seem to feel that the security benefits of a hybrid approach don't justify the drawbacks.
The problem with this statement to me is that we know at least 1 of the 4 finalists in the post-quantum cryptography challenge was broken, so it's very hard to assign a high probability that the rest of the algorithms will be secure from another decade of advancement (this is not helped by the fact that since the beginning of the contest, the lattice based methods have lost a significant number of bits as better attacks have been discovered).
Make sure you absolutely have fresh entropy for all ten of your encryption layers. Re-using secrets and randomness between different encryption algorithms can leak a lot of data!
> Nothing is as cheap as hardware-accelerated AES.
Yes, and at the same time all of modern crypto is incredibly cheap and can be added as desired to almost every application without any visible extra cost.
So the answer to the GP is not that trivial. The actual answer is about software complexity making errors more likely, and similar encryption schemes not really adding any resiliency.
It sounds like there is probably some ongoing drama here, but aside from that: this post has convinced me that standards this important need to be decided on by organizations that aren't a government.
I wonder who else could reasonably host a standardization process?
Maybe the Linux Foundation?
All the cryptography talent seems to be working on ZK proofs at the moment in the Ethereum ecosystem; I think if Vitalik organized a contest like NIST people would pay attention.
The most important thing is to incentivize attackers to break the cryptography on dummy examples instead of in the wild.
Ideally: before the algorithm is standardized.
The Ethereum folks are well setup to offer bounties for this.
If a cryptographer can make FU money through responsible disclosure, then there is less incentive to sell the exploit to dishonest parties.
The same applies to the raving push to replace RSA with ECC. A long-trusted algorithm suddenly became ill-trusted, too complex to implement right, too slow, too unfashionable, and the influx of these accusations was too synchronized and templated to look like something organic.
I used to be such a fan of this guy. But he's turned into Ed Zitron, the same long rambling rants, except about cryptography, and except that he knows what he's talking about, and he knows that you have to know literally nothing at all about the field he's commenting on to associate Dual EC with anything happening in PQ. And if you know anything about the field, trying to compare MLKEM with SIKE is the same deal. It's really sad.
Dual EC wasn't a shockingly clever, CS-boundary-pushing hack (and NSA has apparently deployed at least one of those in the last 20 years). It was an RNG (not a key agreement protocol) based on asymmetric public key cryptography, a system where you could look at it and just ask "where's the private key?" There wasn't a ton of academic research trying to pick apart flaws in Dual EC because why would there be? Who would ever use it?
(It turns out: a big chunk of the industry, which all ran on ultra-closed source code and was much less cryptographically literate than most people thought. I was loudly wrong about this at the time!)
MLKEM is a standard realization of CRYSTALS-Kyber, an algorithm submitted to the NIST PQ contest by a team of some of the biggest names in academic PQ cryptography, including Peter Schwabe, a prior collaborator of Bernstein. Nobody is looking at MLKEM and wondering "huh, where's the private key?".
MLKEM is based on cryptographic ideas that go back to the 1990s, and were intensively studied in the 2000s. It's not oddball weird cryptography. It is to the lineage of lattice cryptography roughly what Ed25519 was to elliptic curve cryptography at the time of Ed25519's adoption.
Utterly unlike SIKE, which isn't a lattice algorithm at all, but rather a supersingular isogeny algorithm, a cryptographic primitive based on an entirely new problem class, and an extremely abstruse one at that. The field had been studying lattice cryptography intensively for decades by the time MLKEM came to pass. That's not remotely true of isogeny cryptography. Isogenies were taken seriously not because of confidence in the hardness of isogenies, but because of ergonomics: they were a drop-in replacement for Diffie Hellman in a way MLKEM isn't.
These are all things Bernstein is counting on you not knowing when you read this piece.
The SIKE comparison is not particularly inconsistent since Bernstein has been banging the drum that structured lattices may not be as secure as thought for years now.
Currently the best attacks on NTRU, Kyber, etc, are essentially the same generic attacks that work for something like Frodo, which works on unstructured lattices. And while resistance to those generic attacks is pretty well studied at this point, it is not unreasonable to suspect that the algebraic structure in the more efficient lattice schemes can lead to more efficient attacks. How efficient? Who knows.
As I read it, the point of mentioning Dual EC is to show a previous case where NSA have acted in a way that reduces security for hand-wavy reasons, in addition to the DES case where they did the same.
And now, in a world where QR + pre-QR algos are typically being introduced in a layered fashion, they're saying "let's add another option, to reduce the number of options" which at least looks very suspicious
Practical quantum computers are probably not very close, but you can certainly use the fear of them as a chance to introduce a new back-door. If you did, you'd have to behave exactly as the NSA is doing right now.
Bernstein wasn't the only objector. There were 7 objectors and 20 proponents.
Dual EC isn't the only comparison he's making. He's also making a comparison to DES, which had an obvious weakness: its 56-bit key, similar to the obvious weakness of non-hybrid. In neither case is there a secret backdoor. At the time of DES, the NSA publicly said they used it, to make others confident in it. Similarly, the NSA is saying "we do not anticipate supporting hybrid in NSS", which will make people confident in non-hybrid. But in the background, NSA actually uses something more secure (using 2 layers of encryption themselves).
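For scale (my numbers, chosen only to show the order of magnitude): a 56-bit keyspace is within reach of a brute-force sweep by dedicated hardware, which is what made the limitation "obvious":

```python
# Rough scale of a DES key search. The per-chip rate and chip count are
# hypothetical round numbers, in the ballpark of late-1990s dedicated
# hardware like the EFF's DES cracker.
keys = 2**56                  # total DES keyspace
rate = 10**9 * 1000           # 10^9 trials/sec/chip * 1000 chips
print(keys / rate / 3600, "hours for a full sweep")  # ~20 hours
```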
Thanks, I'll use Bernstein's recommendations. His article is not rambling: Mailing list discussions are just tedious to recap.
I wonder what your strategy here is. Muddying the waters and depicting Bernstein as a renegade? You have made too many big-state and big-money apologist posts for that to work.
ML-KEM and SIKE were both candidates in the PQ competition which ML-KEM won. SIKE was considered such a strong contender that it was used in production TLS experiments at scale by Google and Cloudflare. (I guess you didn’t read past the second paragraph?)
You find it offensive now to compare ML-KEM and SIKE because SIKE was so thoroughly broken and demonstrated to be worse than pre-quantum crypto. But ML-KEM may already be broken this thoroughly by NSA and friends, and they’re keeping it secret because shipping bad crypto to billions of people enables SIGINT. The idea that your professional crypto acquaintances might be on the NSA’s payroll clearly disturbs you enough that you dismiss it out of hand.
Bernstein is proposing more transparency because that is what was promised after the Dual-EC debacle. Do you disagree with Bernstein because he advocates for transparency (which could prevent bad crypto shipping), or because of his rhetorical style?
I find the comparison risible because SIKE is based on an entirely different and novel problem class, and the vibe I get from Bernstein is that he thinks lattice cryptography is alien enough to people who don't work in this space that they'll miss the fact that cryptosystems based on ring-LWE hardness have been worked on by giants in the field since the mid-1990s.
You seem blind to the obvious corollary to that fact, which is if cryptosystems based on ring-LWE hardness have been worked on by giants for 30 years, then those same cryptosystems have been cryptanalyzed for 30 years, and a significant chunk of cryptanalytic research stays in NSA’s Classified Mathematics Library.
You’ve admitted you were “loudly wrong” when you announced Dual-EC couldn’t be an NSA cryptography backdoor. Snowden let us all know the NSA spends $250 million every year secretly convincing/bribing the private sector to use bad cryptography. Despite that history, you are still convinced there’s no way ML-KEM is an NSA cryptographic backdoor and that all the bizarre procedural errors in the PQ crypto contest are mere coincidences.
[checks my text messages] Lucy just texted me, Thomas. She’s outside waiting for you to kick her football.
For the same reason your Toyota Camry doesn't have a roll cage.
I'd use a hybrid if I was designing a system; I am deeply suspicious of all cryptography, and while I don't think Kyber is going to collapse, I wouldn't bet against 10-15 years of periodic new implementation bugs nobody knew to look for.
But I'm cynical about cryptography. It's really clear why people would want a non-hybrid code point.
Let me just say this once as clearly as I can: I sort of don't give a shit about any of this. A pox on all their houses. I think official cryptographic standards are a force for evil. More good is going to be done for the world by systems that implement well enough to become de facto standards. More WireGuards, fewer RFCs. Certainly, I can't possibly give even a millifuck about what NIST wants.
But I also can't be chill about these blog posts Bernstein writes where it's super clear his audience is not his colleagues in cryptography research, but rather a lay audience that just assumes anything he writes must be true and important. It's gross, because you can see the wires he's using to hold these arguments together (yes, even I can see them), and I don't like it when people insult their audiences this way.
> For the same reason your Toyota Camry doesn't have a roll cage.
It does though. It's just been engineered integral to the unibody. And there are crumple zones, airbags, seat belts, ABS, emergency braking systems, collision sensors, and more layered defenses in addition.
No sane engineer would argue that removing these layers of defense would make the car safer.
If you remove enough of them, then the car is much lighter: in some scenarios (such as when the car has no occupants), that makes it much safer. Of course, these scenarios are relatively rare – but a "sane engineer" could easily make an argument along these lines.
Strong disagree. Best practice is to evaluate the likelihood of scenarios along with potential negative impacts and your contrived scenario fails on both counts. It would not survive review. If you are lucky, your senior engineer may consider it a joke. Unlucky and you might be the joke for suggesting it. Either way you'd be liable for such a decision, were it to make it into a production vehicle and result in a death.
> It's really clear why people would want a non-hybrid code point.
To me it really isn't. TLS has no need for it. But let's focus on the context: some US government organisations want this for the FIPS maturity level they're aiming for. Why would these organisations want a weaker algorithm for TLS than what is standardised; more importantly, how does it benefit deployment except to save a tiny bit of computation and eliminate some ECC code? I'm not going to jump the shark and say it is nefarious, but I will throw in my 2 cents and say it doesn't help security and is unnecessary.
And this gets back to the Dual-EC argument, right? Dual-EC was standardized as this weird government thing that maybe you technically need for FIPS, but obviously if you're seriously designing a cryptosystem you wouldn't choose it. And that seems to be GP's position on non-hybrid PQ as well -- just that the reason for not choosing it is "it introduces risk for very little benefit" instead of "it is obviously a bumbling attempt at introducing a backdoor".
> Dual-EC was standardized as this weird government thing that maybe you technically need for FIPS, but obviously if you're seriously designing a cryptosystem you wouldn't choose it.
Unless NSA pays you $10 million, as they did to RSA, to make said obviously bumbling attempt the default in their security products.
Yeah. Like, the argument here is that the reason government agencies push stuff into standards is because they do in fact want people to use it. "Well government purchasing is just Like That, surely no one will actually use this option in the real world" is an even weaker counter-argument if the option is not obviously backdoored.
I skimmed the article, but it doesn't make too much sense. It says:
>Surveillance agency NSA and its partner GCHQ are trying to have standards-development organizations endorse weakening ECC+PQ down to just PQ.
The NSA spends about half of its resources attempting to hack the FBI and erase its evidence against them in the matter of keeping my wife and me from communicating. The other half of the staff are busy commenting online about how unfair this is, and attempting to get justice.
There are no NSA resources left for actions like the one I quoted. I don't think NSA is involved in it.
According to the New York Times in 2013, based on Snowden documents, the NSA allocates $250 million every year for the actions you quoted. They call it the “SIGINT Enabling Project”.
DJB has been complaining about this NSA position since 2022 (I guess long before it was an issue at the TLS WG):
https://blog.cr.yp.to/20220805-nsa.html
I'm actually quite surprised that anyone is advocating the non-hybrid PQ key exchange for real applications. If it isn't some sort of gimmick to allow NSA to break these, it's sure showing a huge amount of confidence in relatively recently developed mechanisms.
It feels kind of like saying "oh, now that we can detect viruses in sewage, hospitals should stop bothering to report possible epidemic outbreaks, because that's redundant with the sewage monitoring capability". (Except worse, because it involves some people who may secretly be pursuing goals that are the opposite of everyone else's.)
Edit: DJB said in that 2022 post
> I'm actually quite surprised that anyone is advocating the non-hybrid PQ key exchange for real applications.
Why is that so surprising? Adopting new cryptography by running it in a hybrid mode with the cryptography it's replacing is generally not standard practice and multi-algorithm schemes are pretty niche at best (TrueCrypt/VeraCrypt are the only non-PQ cases that come to mind, although I'm sure there are others). Now you could certainly argue that PQ algorithms are untested and risky in a way that was not true of any other new algorithm and thus a hybrid scheme makes the most sense, but it's not such an obviously correct argument that anyone arguing otherwise must be either stupid or malicious.
There are probably other periods of time when I might have advocated running hybrids of different families of primitives, although I'm not sure that I was ever following the details closely enough to have actually advocated for that.
The cool thing is the dramatic security improvements against certain unknown unknowns for approximately linear additional work and space. Seems like a pretty great advantage for the defender, although seriously arguing that quantitatively requires some way to reason about the unknown unknowns (the reductio ad absurdum being that we would need to use every relevant primitive ever published in every protocol¹).
I see PQC as somehow very discontinuous with existing cryptography, both in terms of the attacks it tries to mitigate and the methods it uses to resist them. This might be wrong. Maybe it's fair to consider it an evolutionary advance in cryptographic primitive design.
The casual argument from ignorance is that lattices are apparently either somewhat harder to understand, or just less-studied overall, than other structures that public-key primitives have been built on, to the extent that we would probably currently not use them at all in practical cryptography if it weren't for the distinctive requirements of resistance to quantum algorithms. I understand that this isn't quantitative or even particularly qualitative (for instance, I don't have any idea of what about lattices is actually harder to understand).
Essentially, in this view, we're being forced into using weird esoteric stuff much earlier than we'd like because it offers some hope of defending against other weird esoteric stuff. Perhaps this is reinforced by, for example, another LWE submission having been called "NewHope", connoting to me that LWE was thought even by many of its advocates to offer urgently-needed "hope", but maybe not "confidence".
I'd like not to have to have that argument only in terms of vibes (and DJB does have some more concrete arguments that the security of SIKE was radically overestimated, while the security of LWE methods was moderately overestimated, so we need to figure out how to model how much of the problem was identified by the competition process and how much may remain to be discovered). I guess I just need to learn more math!
¹ I think I remember someone at CCC saying with respect to the general risk of cryptographic backdoors that we should use hybrids of mechanisms that were created by geopolitical rivals, either to increase the chance that at least one party did honest engineering, or to decrease the chance that any party knows a flaw in the overall system! This is so bizarre and annoying as a pure matter of math or engineering, but it's not like DJB is just imagining the idea that spy agencies sometimes want to sabotage cryptography, or have budgets and staff dedicated to doing so.
Expand on "recently-developed mechanisms".
I don't have a good sense of what to point to as the "mechanism".
https://en.wikipedia.org/wiki/Lattice-based_cryptography#His...
2005 (LWE), 2012 (LWE for key exchange), earlier (1990s for lattice math in general), 2017 (Kyber submission), later (competition modifications to Kyber)?
I can see where one could see the mathematics as moderately mature (comparable in age to ECC, but maybe less intensively studied?). As above, I don't know quite how to think about whether the "thing" here is properly "lattices", "LWE", "LWE-KEX", "Kyber", or "the parameters and instantiation of Kyber from the NIST PQ competition". Depending where we focus our attention there, I suppose this gives us some timeframe from the 1980s (published studies of computational complexity of lattice-related algorithms) to "August 2024" (adoptions of NIST PQ FIPS documents).
Edit: The other contextual thing that freaks out DJB, for those who might not be familiar, is that one of the proposed standards NIST was considering, SIKE, made it all the way through to the final (fourth) round of consideration, whereupon it was completely broken by a couple of researchers bringing to bear mathematical insight. Now SIKE had a very different architecture than the other proposals in the fourth round, so it seems like a portion of the debate is whether the undetected mathematical problems in SIKE are symptomatic of "the NIST competition came extraordinarily close to approving something that was totally broken, so maybe it wasn't actually that great at evaluating candidate algorithms, or at least maybe the mathematics community's understanding of post-quantum key exchange algorithms is still immature" or more symptomatic of "SIKE had such a weird and distinctive architecture that it was hard to understand or analyze, or hard to motivate relevant experts to understand or analyze it, unlike other candidate algorithms that were and are much better understood". It seems like DJB is saying the former and you're saying the latter.
There is so much here to debate about. A) Never trust the cyber feds. B) The NSA is not the place anyone thinks, it’s a Wild West in the most bizarre of places, trust me from experience. C) Cryptology concerns more of than security and exchanging messages or packets, sometimes you don’t even know what kind of thing (living) can and has been decrypted. D) The NSA plays very, very, very dirty. It is like a digital CIA, they are in everything (i.e. cyber spies in various roles at tech/telecom/manufacturer company xyz). E) NEVER LISTEN TO THE DAMN NSA / DRIVEN BY A CULTURE OF EXPLOITATION
>sometimes you don’t even know what kind of thing (living) can and has been decrypted.
?
Ok, aside from not trusting the NSA, could you expand on why someone should trust you?
> expand on why someone should trust you
The point is to trust no one and no thing that we cannot examine freely, closely, and transparently. And to maintain healthy skepticism of any entity that claims to have a virtuous process to do its business.
GGP stated:
And the point is “trust me: trust no one, but especially not them” is the meaning you are ignoring.
No it's not. The NSA has been the Federal Govt's designated expert on cryptography since the end of WW2. You are pretending that the current set of NIST standards and every previous NIST standard has not had incredibly intimate contact with the NSA.
You're lived experience tells you to trust the NSA, at least as it relates to NIST standards.
It's a touch odd to make a big deal of the fact that you've filed a complaint and fail to mention that it was formally rejected three days before you published the post: https://datatracker.ietf.org/group/iesg/appeals/artifact/146
Lots of respect to both you and the author, but the rejection gives no real response to any of the issues I see raised in the document.
It failed to raise my confidence at all.
> The IESG has concluded that there were no process failures by the SEC ADs. The IESG declines to directly address the complaint on the TLS WG document adoption matter. Instead, the appellant should refile their complaint with the SEC ADs in a manner which conforms to specified process.
I feel like if your argument is that the rules weren't followed, you have a pretty strong obligation to follow the rules in submitting your complaint.
Having served on boards, rejections on procedural grounds which fail to address engineering concerns which have been raised stink of a cop-out.
Have you read the complaint? It's not about engineering concerns, it's about whether procedures were followed correctly.
> Have you read the complaint? It's not about engineering concerns, it's about whether procedures were followed correctly.
This complaint? https://cr.yp.to/2025/20250812-non-hybrid.pdf
Engineering concerns start in section 2 and continue through section 4.
It seems you haven't read it.
There's a bunch of content that's not actually the complaint, and then there's section 4 which is the actual complaint and is overwhelmingly about procedure.
> and then there's section 4 which is the actual complaint and is overwhelmingly about procedure.
Ah, yes, procedural complaints such as "The draft creates security risks." and "There are no principles supporting the adoption decision.", and "The draft increases software complexity."
I don't know what complaint you're reading, but you're working awful hard to ignore the engineering concerns presented in the one I've read and linked to.
As is made clear from the fact that those issues all link to the mailing list, these are not novel issues. They were raised during discussion, taken into account, and the draft authors concluded they were answered adequately. Complaining about them at this point is fundamentally a complaint that the process failed to take these issues into account appropriately, and the issues should be revisited. Given that this was raised to the IESG, who are not the point of contact for engineering issues, the response is focused on that. There's a mechanism Dan can use to push for engineering decisions to be reviewed - he didn't do that.
> There's a mechanism Dan can use to push for engineering decisions to be reviewed - he didn't do that.
This is the retort of every bureaucracy which fails to do the right thing, and signals to observers that procedure is being used to overrule engineering best practices. FYI.
I'm thankful for the work djb has put in to these complaints, as well as his attempts to work through process, successful or not, as otherwise I wouldn't be aware of these dangerous developments.
Excuses of any kind ring hollow in the presence of historical context around NSA and encryption standardization, and the engineering realities.
Hey, look, you're free to read the mailing list archives and observe that every issue Dan raised was discussed at the time, he just disagreed with the conclusions reached. He made a complaint to the ADs, who observed that he was using an email address with an autoresponder that asserted people may have to pay him $250 for him to read their email, and they (entirely justifiably) decided not to do that. Dan raised the issue to the next level up, who concluded that the ADs had behaved entirely reasonably in this respect and didn't comment on the engineering issues because it's not their job to in this context.
It's not a board's job to handle every engineering complaint themselves, simply because they are rarely the best suited people to handle engineering complaints. When something is raised to them it's a matter of determining whether the people whose job it is to make those decisions did so appropriately, and to facilitate review if necessary. In this case the entire procedural issue is clear - Dan didn't raise a complaint in the appropriate manner, there's still time for him to do so, there's no problem, and all the other complaints he made about the behaviour of the ADs were invalid.
> you're free to read the mailing list archives and observe that every issue Dan raised was discussed at the time
As was https://en.wikipedia.org/wiki/Dual_EC_DRBG which was ratified over similar objections.
That made it no less of a backdoor.
> it's not their job
As I said about excuses.
They're adhering to their charter. If you show up to my manager demanding to know why I made a specific engineering decision, he's not going to tell you - that's not the process, that's not his job, he's going to trust me to make good decisions unless presented with evidence I've misbehaved.
But as has been pointed out elsewhere, the distinction between the Dual EC DRBG objections and here are massive. The former had an obvious technical weakness that provided a clear mechanism for a back door, and no technical justification for this was ever meaningfully presented, and also it wasn't an IETF discussion. The counterpoints to Dan's engineering complaints (such as they are) are easily accessible to everyone, Dan just chose not to mention them.
> unless presented with evidence
The complaint seems well referenced with evidence of poor engineering decisions to me.
> Dual EC DRBG ... had an obvious technical weakness that provided a clear mechanism for a back door
Removing an entire layer of well tested encryption qualifies as an obvious technical weakness to me. And as I've mentioned elsewhere in these comments, opens users up to a https://en.wikipedia.org/wiki/Downgrade_attack should flaws in the new cipher be found. There is a long history of such flaws being discovered, even after deployment. Several examples of which DJB references.
I see no cogent reason for such recklessness, and many reasons to avoid it.
Continued pointing toward "procedure" seems to cede the case.
Why don't we hybridise all crypto? We'd get more security if we required RSA+ECDSA+ED25519 at all times, right? Or is the answer that the benefits are small compared to the drawbacks? I am unqualified to provide an answer, but I suspect you are also, and the answer we have from a whole bunch of people who are qualified is that they think the benefits aren't worth it. So why is it fundamentally and obviously true for PQC? This isn't actually an engineering hill I'd die on, if more people I trust made clear arguments for why this is dangerous I'd take it very seriously, but right now we basically have djb against the entire world writing a blogpost that makes ludicrous insinuations and fails to actually engage with any of the counterarguments, and look just no.
FWIW, https://blog.cr.yp.to/20240102-hybrid.html reads to me like a more direct attempt to engage with the counterarguments.
I am curious what the costs are seen to be here. djb seems to make a decent argument that the code complexity and resource usage costs are less of an issue here, because PQ algorithms are already much more expensive/hard to implement then elliptic curve crypto. (So instead of the question being "why don't we triple our costs to implement three algorithms based on pretty much the same ideas", it's "why don't we take a 10% efficiency hit to supplement the new shiny algorithm with an established well-understood one".)
On the other hand, it seems pretty bad if personal or career cost was a factor here. The US government is, for better or worse, a pretty major stakeholder in a lot of companies. Like realistically most of the people qualified to opine on this have a fed in their reporting chain and/or are working at a company that cares about getting federal contracts. For whatever reason the US government is strongly anti-hybrid, so the cost of going against the grain on this might not feel worth it to them.
Which insinuations do you think are ludicrous? Is it not a matter of public record at this point that the NSA and NIST have lied to weaken cryptography standards?
The entirely unsupported insinuation that the customer Cisco is describing is the NSA. What's even supposed to be the motivation there? The NSA want weak crypto so they're going to buy a big pile of Ciscos that they'll never use but which will make people think it's secure? There are others, but on its own that should already be a red flag.
The article links a statement from an NSA official that explicitly says the NSA has been asking vendors for this, which seems like fairly strong support to me.
>So why is it fundamentally and obviously true for PQC? This isn't actually an engineering hill I'd die on, if more people I trust made clear arguments for why this is dangerous I'd take it very seriously, but right now we basically have djb against the entire world writing a blogpost that makes ludicrous insinuations and fails to actually engage with any of the counterarguments, and look just no.
As a response to this only: while djb's recent blog posts have adopted a slightly crackpot-ish writing style, PQC hybridization is not a fringe idea, and it is not being deployed because of djb's rants.
Over in Europe, the German BSI and the French ANSSI both strongly recommend hybrid schemes. As noted in the blog, previous Google and Cloudflare experiments have deployed hybrids. That was at an earlier stage in the process, but the long history of lattices sometimes used as a (reasonable) argument against hybrids applied equally when those experiments were deployed, so I'm arguing that the choice made at the time is still reasonable today, since the history hasn't changed.
Yes, there is also a more general "lots of PQC fell quite dramatically" sentiment at play that doesn't attempt to separate SIKE and MLKEM. That part I'm happy to see criticized, but I think the broader point stands. Hybrids are a reasonable position, actually. It's fine.
> Why don't we hybridise all crypto?
So you've constructed a strawman. Another indication of ceding the argument.
> and the answer we have from a whole bunch of people who are qualified
The ultimate job of a manager or a board is to take responsibility for the decisions of the organization. All of your comments in this thread center around abdicating that responsibility to others.
> This isn't actually an engineering hill I'd die on
Could have fooled me.
> we basically have djb against the entire world
Many of your comments indicate to me that clashing personalities may be interfering with making the right engineering decision.
If the argument is "Why adopt a protocol that may rely on a weak algorithm without any additional protection" then I think it's up to you to demonstrate why that argument doesn't apply to any other scenario as well.
Again with the strawmen.
"Why adopt a protocol that may rely on a weak algorithm without any additional protection"
Does not accurately represent the situation at hand. And that seems intentional.
"Why weaken an existing protocol in ways we know may be exploitable?" is a more accurate representation. And I believe the burden of evidence lies on those arguing to do so.
Kyber is not known to be weaker than any other widely used algorithm.
Another strawman. No one in this thread said Kyber was known to be weaker. Just that elliptic curve cryptography is well tested and better understood as a consequence of having been used in production longer, and that removing it exposes transmissions protected by the newer algorithm alone to attacks that would not otherwise succeed.
It really seems like you're trying not to hear what's been said.
ML-KEM as standardized by NIST is weaker than Kyber.
As a friendly reminder, you're arguing with an apologist for the security-flawed approach that the NSA advocates for and wants.
There are absolutely NSA technical and psychological operations personnel who are on HN not just while at work, but for work, and this site is entirely in-scope for them to use rhetoric to try to advance their agenda, even in bad faith.
I'm not saying mjg59 is an NSA propagandist / covert influencer / astroturf / sockpuppet account, but they sure pass the duck test for sounding and acting like one.
Appreciated. I'll only note that if this is the kind of resistance DJB encountered when raising his objections it goes a long way toward explaining why he might choose to publish his complaints publicly and lends additional credibility to his position.
It has certainly affected my perception of the individuals involved.
Matthew Garrett is not a secret NSA propagandist.
People can reasonably disagree with the djb position. His blog posts are notoriously divisive, and that doesn't make everyone on the other side a secret NSA influencer.
Please assume good faith, or discussions turn into personal attacks and wild accusations.
I fully agree Matthew Garrett is not a secret NSA propagandist. There is a much simpler explanation.
In 2016, Isis Lovecruft was romantically involved with Jacob Appelbaum. Isis lost a coveted PhD student spot studying under Bernstein to… Jacob Appelbaum. Isis broke up with Jacob and accused him of sexual abuse in a spectacularly public manner.
Isis became romantically involved with Henry de Valence, another Bernstein PhD student. Valence became acquainted with Appelbaum. Later, under Isis’ direction, Valence published a wild screed full of bizarre accusations trying to get Appelbaum expelled and Bernstein fired. When this failed, Isis dumped Valence and publicly accused him of sexual abuse.
Isis Lovecruft is now married to Matthew Garrett. Obviously Matthew is going to work to discredit Bernstein, because if he fails, he knows what the next two steps are.
I didn't claim mjg59 is NSA. I said their arguments function like NSA advocacy. Whether that's by design or coincidence doesn't change the effect. When someone consistently advances positions that serve surveillance state interests using procedural deflection to avoid security substance, noting that pattern isn't a personal attack - it's public, transparent, community-led threat assessment. Pointing out behavior that is functionally indistinguishable from NSA discourse manipulation in a community technical forum - in a conversation about NSA discourse manipulation in community technical forums, no less - isn't a personal attack, it's a social IDS system firing off an alert for a known-bad signature detection.
The effect of claiming that people act like NSA propagandists is indistinguishable from claiming they are an NSA propagandist, except that the wording allows you to weasel out of it.
This turns a thread about cryptography into a thread about attacking someone's particular posting style. This is not going to advance the discussion in any sort of useful direction, the only thing this can do is divide people further while cementing existing positions.
If your IDS thinks well-known free software people are NSA agents because they disagree in a style you don't like, the problem is with the IDS.
If someone doesn't want to be characterized as sounding like an NSA advocate, perhaps they should consider not advocating for NSA objectives.
Anyway, sounds like I'm being dismissed for being "divisive" despite raising substantive security concerns, just like djb. Readers: form your own conclusions about the repetitive patterns here; don't listen to the people telling you not to trust your own eyes.
Note the hallmarks: zero engagement with the substance of the critique (functional equivalence); ad-hom strawman attacks on my character in response to a misrepresentation of my position; emotional manipulation techniques such as demanding a focus on tone and civility; maligning the moral character of an opponent (accusations of divisiveness); and still trying to reframe a critique of behavior into the attack on identity that it isn't.
This quacking of theirs just gives them away as the duck many of us suspected them to be.
It is uncouth to accuse a person of being an X without evidence.
It is dishonest to state categorically that a person is not an X unless a person is in the position to know.
A pattern of behavior is a kind of evidence and the observed pattern of behavior does not seem to be in dispute.
There is no evidence presented that the person making a categorical statement is in a position to know about anyone's role or lack of a role in the NSA's clandestine activities.
> If you show up to my manager demanding to know why I made a specific engineering decision, he's not going to tell you
Well, if you're working in a standards development organisation, then your manager probably should.
It looks like (in the US at least) standards development organisations have to have (and follow) very robust transparency processes to not be default-liable for individual decisions.
(Unlike most organisations, such as the one you and your manager in your scenario come from.)
This is just a bureaucracy making up fake excuses. qsecretary, the autoresponder, is way less annoying than having to create a new account everywhere on each SaaS platform. At least you know your mail arrived.
Everyone is fine forcing other people to use 2FA, which preferably requires a smartphone, but a simple reply to qsecretary is somehow heinous.
The $250 is for spam, and everyone apart from bureaucrats who want to smear someone knows that this is 1990s bravado and hyperbole.
If you have nothing to hide, feel free to mail me your unlocked phone.
It's still nice that it was put up for completeness. And as we know, this stuff has a long, sordid history; proponents of weakening encryption don't give up easily.
This is quite concerning, and respect to DJB for fighting against it. However, I have to wonder...who would this actually compromise that matters to NSA?
* Targets with sufficient technical understanding would use hybrids anyway.
* Average users and unsophisticated targets can already be monitored through PRISM which makes cryptography moot.
So...what's their actual end game here?
The vast majority of organisations just use whatever default security settings their Cisco router or web browser comes with.
The NSA starts by requiring some insecure protocols be supported, and then when support is widespread they start requiring it be made a default by requiring compliance testing be done with default config.
They also historically have extremely deep access to networks, and even if a given corp doesn't allow them to put a box inside the corp's own network, they control / have access to many or all of the links between most corps' datacenters.
From this privileged network position, if both sides support weaker crypto that NSA lobbied for, they can MitM the initial connection and omit the hybrid methods from the client's TLS ClientHello, and then client/server proceed to negotiate into a cipher that NSA prefers.
Pretty sure this isn't possible? There must be some way to use a hash of the ClientHello later in the key exchange process to make sure the connection fails if the hello is tampered with...?
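That is roughly what TLS 1.3 does: the Finished MAC covers a running hash of the whole handshake transcript, so a rewritten ClientHello yields mismatched transcripts on the two sides. A minimal sketch of the idea (simplified; this is not the real TLS 1.3 key schedule, and the names are illustrative):

```python
import hashlib
import hmac

def transcript_hash(*messages: bytes) -> bytes:
    # Running hash over every handshake message seen so far.
    h = hashlib.sha256()
    for m in messages:
        h.update(m)
    return h.digest()

def finished_mac(finished_key: bytes, transcript: bytes) -> bytes:
    return hmac.new(finished_key, transcript, hashlib.sha256).digest()

finished_key = b"\x00" * 32  # stand-in for the derived finished key

client_hello = b"ClientHello: groups=[X25519MLKEM768, X25519]"
stripped_hello = b"ClientHello: groups=[X25519]"  # hybrid removed by MitM
server_hello = b"ServerHello: group=X25519"

# The server MACs the transcript it saw; the client checks its own.
server_mac = finished_mac(finished_key,
                          transcript_hash(stripped_hello, server_hello))
client_mac = finished_mac(finished_key,
                          transcript_hash(client_hello, server_hello))
assert server_mac != client_mac  # client aborts: transcripts disagree
```

The caveat, and the reason downgrade worries persist: this binding only holds while the attacker cannot compute the handshake keys. If the one group that survives stripping is itself breakable in real time, the MitM can derive the keys and forge the Finished MAC.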
I think the point is... even if you can't get everyone to adopt your backdoored technology, you are much better off if 30% of the market adopts it than if only 1% does...
Intelligence is a numbers game, they never get everything, but if your net is wide enough and you don't give up, you'll catch a lot of fish over time
Coupled with QUANTUMINSERT, it would enable a https://en.wikipedia.org/wiki/Downgrade_attack even for folks who might otherwise be using stronger encryption methods.
99% of global TLS traffic?
What's quite concerning? Be specific.
It is concerning that the IETF is moving forward with a proposal that weakens security and is of questionable technical merit. The most reasonable explanation is that this is the result of efforts by government surveillance agencies to enable, or potentially enable, monitoring of encrypted communications supposedly protected by this standard. Additionally, it is concerning that disagreeing with this decision is being met with censure and outright hostility by leaders of the IETF.
I'm asking how, specifically, it "weakens security" and is of "questionable technical merits". The IETF isn't a government body.
they have 2 options:
1. adopt hybrid/dual encryption. This is safe against a break of the PQC layer which seems entirely plausible given that the algorithms are young, the implementations are younger, and there has been significant weakening of the algorithms in the past decade.
2. Adopt PQC without a backup layer. This approach is ~5% faster (PQC algorithms are pretty slow), with the cost of breaking encryption for everyone on the internet if any flaw in the PQC algorithms or implementations is found. (A minimal sketch of how option 1 combines the two layers follows below.)
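For concreteness, a minimal combiner sketch with assumed names and a simplified KDF, loosely modeled on how deployed hybrids such as X25519MLKEM768 concatenate both shared secrets before key derivation; not a drop-in for the real TLS key schedule:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract is just HMAC(salt, input_keying_material).
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def combine(ss_classical: bytes, ss_pq: bytes) -> bytes:
    # Recovering the output requires BOTH inputs: a break of the PQ
    # layer alone (option 2's risk) reveals nothing about the key.
    return hkdf_extract(b"hybrid-kex", ss_classical + ss_pq)

ss_x25519 = b"\x11" * 32  # stand-in for the ECDH shared secret
ss_mlkem = b"\x22" * 32   # stand-in for the ML-KEM shared secret
session_key = combine(ss_x25519, ss_mlkem)
```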
It's hard to answer your question without repeating the arguments made in the post itself.
Are you implying that djb blew the matter out of proportion?
Poor quality analogy: should ed25519 only have been incorporated into protocols in conjunction with another cryptographic primitive? Surely requiring a hybrid with ecdsa would be more secure? Why did djb not argue for everyone using ed25519 to use a hybrid? Was he trying to reduce security?
The reason this is a poor quality analogy is that fundamentally ecdsa and ed25519 are sufficiently similar that people had a high degree of confidence that there was no fundamental weakness in ed25519, and so it's fine - whereas for PQC the newer algorithms are meaningfully mathematically distinct, and the fact that SIKE turned out to be broken is evidence that we may not have enough experience and tooling to be confident that any of them are sufficiently secure in themselves and so a protocol using PQC should use a hybrid algorithm with something we have more confidence in. And the counter to that is that SIKE was meaningfully different in terms of what it is and does and cryptographers apparently have much more confidence in the security of Kyber, and hybrid algorithms are going to be more complicated to implement correctly, have worse performance, and so on.
And the short answer seems to be that a lot of experts, including several I know well and would absolutely attest are not under the control of the NSA, seem to feel that the security benefits of a hybrid approach don't justify the drawbacks. This is a decision where entirely reasonable people could disagree, and there are people other than djb who do disagree with it. But only djb has engaged in a campaign of insinuating that the NSA has been controlling the process with the goal of undermining security.
> seem to feel that the security benefits of a hybrid approach don't justify the drawbacks.
The problem with this statement, to me, is that we know at least one of the four finalists in the post-quantum cryptography competition is broken, so it's very hard to assign a high probability that the rest of the algorithms will be secure from another decade of advancement (this is not helped by the fact that since the beginning of the contest, the lattice-based methods have lost a significant number of bits of security as better attacks have been discovered).
The mere idea that that they want to do this makes me want a 3rd layer of encryption on top of the other 2.
Encryption layers are actually pretty cheap for the vast majority of ciphers and applications.
Seems dumb not to have like 10.
Make sure you absolutely have fresh entropy for all ten of your encryption layers. Re-using secrets and randomness between different encryption algorithms can leak a lot of data!
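A hedged sketch of that point: derive an independent key for each layer from one master secret rather than re-using keys or nonces across algorithms (simplified HKDF-Expand-style labels; illustrative only):

```python
import hashlib
import hmac

def layer_key(master: bytes, label: bytes) -> bytes:
    # Distinct labels give computationally independent keys, so a
    # break of one layer's cipher doesn't hand over the other keys.
    return hmac.new(master, label + b"\x01", hashlib.sha256).digest()

master_secret = b"\x42" * 32  # stand-in; take this from a real KDF
keys = [layer_key(master_secret, b"layer-%d" % i) for i in range(10)]
assert len(set(keys)) == 10  # all ten layer keys are distinct
```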
Nothing is as cheap (and secure at the same time) as hardware-accelerated AES. That's why it's often the only encryption layer used.
> Nothing is as cheap as hardware-accelerated AES.
Yes, and at the same time all of modern crypto is incredibly cheap and can be added to almost any application without any visible extra cost.
So the answer to the GP is not that trivial. The actual answer is that software complexity makes errors more likely, and that similar encryption schemes don't really add any resiliency.
It sounds like there is probably some ongoing drama here, but aside from that: this post has convinced me that standards this important need to be decided on by organizations that aren't a government.
I wonder who else could reasonably host a standardization process? Maybe the Linux Foundation? All the cryptography talent seems to be working on ZK proofs at the moment in the Ethereum ecosystem; I think if Vitalik organized a contest like NIST's, people would pay attention.
The most important thing is to incentivize attackers to break the cryptography on dummy examples instead of in the wild. Ideally: before the algorithm is standardized. The Ethereum folks are well set up to offer bounties for this. If a cryptographer can make FU money through responsible disclosure, then there is less incentive to sell the exploit to dishonest parties.
The same applies to the raving push to replace RSA with ECC. A long-trusted algorithm suddenly became ill-trusted, too complex to implement right, too slow, too unfashionable, and the influx of these accusations was too synchronized and templated to look organic.
Ah FIPS, the bastion of security standards.
I misread that as "the bassoon of security standards" and it sent my brain in a whole other direction
> The post-quantum algorithm might turn out to be breakable even with today's computers
This implies that what is actually being offered is Security Through Ignorance.
Is this encryption sound? Maybe, who knows! Let's wait and find out!
The chilling part is that a guy named Wouters threatens to ban Bernstein in a characteristically rude CoC message:
https://mailarchive.ietf.org/arch/msg/tls/RK1HQB7Y-WFBxQaAve...
Trust the process!
I used to be such a fan of this guy. But he's turned into Ed Zitron: the same long rambling rants, except about cryptography, and except that he knows what he's talking about. He knows that you have to know literally nothing at all about the field he's commenting on to associate Dual EC with anything happening in PQ. And if you know anything about the field, trying to compare MLKEM with SIKE is the same deal. It's really sad.
Dual EC wasn't a shockingly clever, CS-boundary-pushing hack (and NSA has apparently deployed at least one of those in the last 20 years). It was an RNG (not a key agreement protocol) based on asymmetric public key cryptography, a system where you could look at it and just ask "where's the private key?" There wasn't a ton of academic research trying to pick apart flaws in Dual EC because why would there be? Who would ever use it?
(It turns out: a big chunk of the industry, which all ran on ultra-closed source code and was much less cryptographically literate than most people thought. I was loudly wrong about this at the time!)
MLKEM is a standard realization of CRYSTALS-Kyber, an algorithm submitted to the NIST PQ contest by a team of some of the biggest names in academic PQ cryptography, including Peter Schwabe, a prior collaborator of Bernstein. Nobody is looking at MLKEM and wondering "huh, where's the private key?".
MLKEM is based on cryptographic ideas that go back to the 1990s, and were intensively studied in the 2000s. It's not oddball weird cryptography. It is to the lineage of lattice cryptography roughly what Ed25519 was to elliptic curve cryptography at the time of Ed25519's adoption.
Utterly unlike SIKE, which isn't a lattice algorithm at all, but rather a supersingular isogeny algorithm, a cryptographic primitive based on an entirely new problem class, and an extremely abstruse one at that. The field had been studying lattice cryptography intensively for decades by the time MLKEM came to pass. That's not remotely true of isogeny cryptography. Isogenies were taken seriously not because of confidence in the hardness of isogenies, but because of ergonomics: they were a drop-in replacement for Diffie Hellman in a way MLKEM isn't.
These are all things Bernstein is counting on you not knowing when you read this piece.
The SIKE comparison is not particularly inconsistent since Bernstein has been banging the drum that structured lattices may not be as secure as thought for years now.
Currently the best attacks on NTRU, Kyber, etc. are essentially the same generic attacks that work for something like Frodo, which works on unstructured lattices. And while the resistance of unstructured lattices to attack is pretty well studied at this point, it is not unreasonable to suspect that the algebraic structure in the more efficient lattice schemes could lead to more efficient attacks. How much more efficient? Who knows.
Without wanting to engage much more deeply on this topic let me just say I concede any cryptography point 'pbsd makes.
As I read it, the point of mentioning Dual EC is to show a previous case where NSA have acted in a way that reduces security for hand-wavy reasons, in addition to the DES case where they did the same.
And now, in a world where QR + pre-QR algos are typically being introduced in a layered fashion, they're saying "let's add another option, to reduce the number of options", which at least looks very suspicious.
Practical quantum computers are probably not very close, but you can certainly use the fear of them as a chance to introduce a new back-door. If you did, you'd have to behave exactly as the NSA is doing right now.
Bernstein wasn't the only objector. There were 7 objectors and 20 proponents.
Dual EC isn't the only comparison he's making. He's also making a comparison to DES, which had an obvious weakness: the 56-bit key limitation, similar to the obvious weakness of non-hybrid. In neither case is there a secret backdoor. At the time of DES, the NSA publicly said they used it, to make others confident in it. Similarly, the NSA is saying "we do not anticipate supporting hybrid in NSS", which will make people confident in non-hybrid. But in the background, NSA actually uses something more secure (two layers of encryption themselves).
Thanks, I'll use Bernstein's recommendations. His article is not rambling: Mailing list discussions are just tedious to recap.
I wonder what your strategy here is. Muddying the waters and depicting Bernstein as a renegade? You have made too many big-state and big-money apologist posts for that to work.
ML-KEM and SIKE were both candidates in the PQ competition which ML-KEM won. SIKE was considered such a strong contender that it was used in production TLS experiments at scale by Google and Cloudflare. (I guess you didn’t read past the second paragraph?)
You find it offensive now to compare ML-KEM and SIKE because SIKE was so thoroughly broken and demonstrated to be worse than pre-quantum crypto. But ML-KEM may already be broken this thoroughly by NSA and friends, and they’re keeping it secret because shipping bad crypto to billions of people enables SIGINT. The idea that your professional crypto acquaintances might be on the NSA’s payroll clearly disturbs you enough that you dismiss it out of hand.
Bernstein is proposing more transparency because that is what was promised after the Dual-EC debacle. Do you disagree with Bernstein because he advocates for transparency (which could prevent bad crypto shipping), or because of his rhetorical style?
I find the comparison risible because SIKE is based on an entirely different and novel problem class, and the vibe I get from Bernstein is that he thinks lattice cryptography is alien enough to people who don't work in this space that they'll miss the fact that cryptosystems based on ring-LWE hardness have been worked on by giants in the field since the mid-1990s.
You seem blind to the obvious corollary to that fact, which is if cryptosystems based on ring-LWE hardness have been worked on by giants for 30 years, then those same cryptosystems have been cryptanalyzed for 30 years, and a significant chunk of cryptanalytic research stays in NSA’s Classified Mathematics Library.
You’ve admitted you were “loudly wrong” when you announced Dual-EC couldn’t be an NSA cryptography backdoor. Snowden let us all know the NSA spends $250 million every year secretly convincing/bribing the private sector to use bad cryptography. Despite that history, you are still convinced there’s no way ML-KEM is an NSA cryptographic backdoor and that all the bizarre procedural errors in the PQ crypto contest are mere coincidences.
[checks my text messages] Lucy just texted me, Thomas. She’s outside waiting for you to kick her football.
See, this is what I mean; this is the kind of logic Bernstein knows he's engaging with when he writes these things.
To use his analogy though, why remove seatbelts? It's like saying we have IPv6 now, why do we need IPv4 support.
For the same reason your Toyota Camry doesn't have a roll cage.
I'd use a hybrid if I was designing a system; I am deeply suspicious of all cryptography, and while I don't think Kyber is going to collapse, I wouldn't bet against 10-15 years of periodic new implementation bugs nobody knew to look for.
But I'm cynical about cryptography. It's really clear why people would want a non-hybrid code point.
Let me just say this once as clearly as I can: I sort of don't give a shit about any of this. A pox on all their houses. I think official cryptographic standards are a force for evil. More good is going to be done for the world by systems that implement well enough to become de facto standards. More WireGuards, fewer RFCs. Certainly, I can't possibly give even a millifuck about what NIST wants.
But I also can't be chill about these blog posts Bernstein writes where it's super clear his audience is not his colleagues in cryptography research, but rather a lay audience that just assumes anything he writes must be true and important. It's gross, because you can see the wires he's using to hold these arguments together (yes, even I can see them), and I don't like it when people insult their audiences this way.
> For the same reason your Toyota Camry doesn't have a roll cage.
It does, though. It's just been engineered integrally into the unibody. And there are crumple zones, airbags, seat belts, ABS, emergency braking systems, collision sensors, and more layered defenses in addition.
No sane engineer would argue that removing these layers of defense would make the car safer.
If you remove enough of them, then the car is much lighter: in some scenarios (such as when the car has no occupants), that makes it much safer. Of course, these scenarios are relatively rare – but a "sane engineer" could easily make an argument along these lines.
Strong disagree. Best practice is to evaluate the likelihood of scenarios along with potential negative impacts and your contrived scenario fails on both counts. It would not survive review. If you are lucky, your senior engineer may consider it a joke. Unlucky and you might be the joke for suggesting it. Either way you'd be liable for such a decision, were it to make it into a production vehicle and result in a death.
Which is why many engineers wear the ring.
Folks do love to argue though.
> It's really clear why people would want a non-hybrid code point.
To me it really isn't. TLS has no need for it. But let's focus on the context of some US government organisations that want this for the FIPS maturity level they're aiming for. Why would these organisations want a weaker algorithm for TLS than what is standardised; more importantly, how does it benefit deployment except to save a tiny bit of computation and eliminate some ECC code? I'm not going to go so far as to say it is nefarious, but I will throw in my 2 cents and say it doesn't help security and is unnecessary.
And this gets back to the Dual-EC argument, right? Dual-EC was standardized as this weird government thing that maybe you technically need for FIPS, but obviously if you're seriously designing a cryptosystem you wouldn't choose it. And that seems to be GP's position on non-hybrid PQ as well -- just that the reason for not choosing it is "it introduces risk for very little benefit" instead of "it is obviously a bumbling attempt at introducing a backdoor".
> Dual-EC was standardized as this weird government thing that maybe you technically need for FIPS, but obviously if you're seriously designing a cryptosystem you wouldn't choose it.
Unless the NSA pays you $10 million, as they did RSA, to make said obviously bumbling attempt the default in your security products.
https://en.wikipedia.org/wiki/Dual_EC_DRBG#Timeline_of_Dual_...
https://www.reuters.com/article/us-usa-security-rsa-idUSBRE9...
Or unless the presence of such less secure options in compliant implementations enables a https://en.wikipedia.org/wiki/Downgrade_attack
Yeah. Like, the argument here is that the reason government agencies push stuff into standards is because they do in fact want people to use it. "Well government purchasing is just Like That, surely no one will actually use this option in the real world" is an even weaker counter-argument if the option is not obviously backdoored.
Also: https://eprint.iacr.org/2016/376.pdf
You're arguing against deliberate cooperation.
For bigger impact, this article would benefit from a tl;dr executive summary at the beginning, I think.
I skimmed the article, but it doesn't make too much sense. It says:
>Surveillance agency NSA and its partner GCHQ are trying to have standards-development organizations endorse weakening ECC+PQ down to just PQ.
The NSA spends about half of its resources attempting to hack the FBI and erase its evidence against them in the matter of keeping my wife and me from communicating. The other half of the staff are busy commenting online about how unfair this is, and attempting to get justice.
There are no NSA resources left for actions like the one I quoted. I don't think NSA is involved in it.
According to the New York Times in 2013, based on Snowden documents, the NSA allocates $250 million every year for the actions you quoted. They call it the “SIGINT Enabling Project”.
They are not running out of resources.