Hi Jay,

On 2011-06-09 13:10, Jay Daley wrote:
> To recap and add some more detail, our explanation for choosing 1280 is: [....] 3. Recommendations by standards bodies in this area come into two categories[....] Using those guidelines, a key length of 1280 to *encrypt* something now can be conservatively expected to remain secure until 2014 and optimistically until 2017 (p26).
As an engineering observation, 2014 is _really_ close (approximately 24 months from deployment), and 2017 is still uncomfortably close. I understand that your usage pattern may mean you can realistically expect a longer safe lifetime than the figures you quoted (a safer use case). But as a "worst case" engineering margin, "we'll be okay for at least a couple of years" is rather too close for comfort for me. (It looks very much like "we think this is the minimum we can get away with".) One need only look at, eg, research into MD5 weaknesses over the last few years to see how rapidly "probably safe for now" can become "oh dear, we need something else" in the cryptographic world with a single breakthrough. That is why typical cryptographic designs allow quite a lot of engineering margin.
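For illustration, the kind of estimate behind such lifetime recommendations can be sketched with the general number field sieve (GNFS) complexity formula. This is an uncalibrated back-of-envelope: it ignores the o(1) term and the calibration against real factoring records that Lenstra/Verheul-style tables apply, so it is not the maths behind the p26 figures, just the shape of the argument:

```python
import math

def gnfs_bits(modulus_bits):
    """Crude symmetric-equivalent strength of an RSA modulus, from the
    asymptotic GNFS running time L(n) = exp((64/9)^(1/3) *
    (ln n)^(1/3) * (ln ln n)^(2/3)), with the o(1) term dropped."""
    ln_n = modulus_bits * math.log(2)
    work = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return work / math.log(2)  # log2 of the estimated factoring work

for k in (1024, 1280, 1536, 2048):
    print(k, round(gnfs_bits(k), 1))
```

Under this crude model 1280-bit RSA buys only around 9 "bits" of extra work over 1024-bit, while 2048-bit buys roughly 30 - which is why the margin between "safe until 2014" and "safe for decades" is so sensitive to the modulus size chosen.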
> 4. We do not want to push the key size up to 2048 "just to be sure" because that imposes a greater DNS packet size and CPU cost for signature verification for end users of DNSSEC.
My (admittedly high level) understanding is that the KSK (public key portion and signatures) would typically be seen/transferred by DNS clients (recursive caches) relatively infrequently, since the KSK is used solely for signing ZSKs. ZSK signatures, by contrast, would be expected to appear on the wire all the time, so their size is a reasonable packet size consideration. But the ZSK public key itself, and the KSK signatures, are surely only on the wire rarely, and thus their impact on packet sizes should be relatively insignificant. Possibly I'm missing something here, but I had thought that DNSSEC was designed to avoid the "certificate chain bulk" of, eg, SSL/TLS/HTTPS for exactly this reason.

FWIW, it's been a long time since verifying signatures made with 2048-bit keys presented a significant workload to modern CPUs. And I'd expect KSK signature verification to be sufficiently infrequent not to dominate CPU usage; ZSK signature verifications would be expected to happen "all the time", and are thus the real CPU concern. I could, eg, see a 2048-bit KSK (rolled, eg, yearly) and a 1024-bit ZSK (rolled fairly frequently) as a reasonable engineering trade-off. But I'm struggling to see a 1280-bit KSK as a reasonable choice in 2011.

Given that it appears someone has done some careful maths to determine that 1280 bits is the largest they're willing to recommend, perhaps you could explain how that translates into the packet size boundaries you're concerned about (eg, 512-byte UDP minimums? 1500-byte (ethernet) common packets?), and how frequently those packets are expected to be transferred (eg, every response, once per TTL, etc). It may also help to speak to why, eg, 1536-bit wasn't suggested if you wanted something "smaller than 2048-bit".
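To make the packet-size question concrete, here is a back-of-envelope sketch of DNSKEY response sizes for a few KSK/ZSK combinations. The per-record byte counts approximate the RFC 4034/RFC 3110 encodings (18 fixed RRSIG RDATA bytes, a 3-byte RSA exponent, a 4-byte "nz." signer name); the 40-byte header/question allowance is an assumption, and real responses would carry more:

```python
# Rough wire-size model for an RSA-signed DNSKEY response.
# Constants are approximations of RFC 4034/RFC 3110 encodings,
# not exact packet layouts.

def rsa_dnskey_rdata(bits):
    # 4 fixed bytes (flags, protocol, algorithm) + 1-byte exponent
    # length + ~3-byte public exponent + modulus.
    return 4 + 1 + 3 + bits // 8

def rrsig_rdata(bits, signer_len=4):
    # 18 fixed bytes (type covered, algorithm, labels, original TTL,
    # expiration, inception, key tag) + signer name + signature,
    # where the signature is the size of the signing key's modulus.
    return 18 + signer_len + bits // 8

def dnskey_response(ksk_bits, zsk_bits, overhead=40):
    # One KSK + one ZSK DNSKEY record, plus one RRSIG made with the
    # KSK; 'overhead' is an assumed header/question/owner-name budget.
    return (overhead
            + rsa_dnskey_rdata(ksk_bits)
            + rsa_dnskey_rdata(zsk_bits)
            + rrsig_rdata(ksk_bits))

for ksk, zsk in [(1280, 1024), (2048, 1024), (2048, 1280)]:
    size = dnskey_response(ksk, zsk)
    print(f"KSK {ksk} / ZSK {zsk}: ~{size} bytes")
```

The point isn't the exact numbers but the marginal cost: moving the KSK from 1280 to 2048 bits adds roughly 192 bytes to a DNSKEY response under this model, comfortably within a 1500-byte ethernet frame, and it changes the size of ordinary (ZSK-signed) answers not at all.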
> We are also acutely aware that a TLD registry often sets a de facto standard followed by their registrars and registrants, which magnifies the impact of our choice.
Surely that's at least as strong an argument for taking the safe engineering approach: it avoids everyone else getting hooked on a "hopefully safe for 2-5 years, all going well" key size and then immediately having to re-engineer things following a discovery that makes key cracking 10% easier.

Ewen

PS: I must say, for the record, that I find it extremely refreshing that there are NZRS staff who are both willing to have this discussion in public, and able to discuss the technical issues in detail.