Ewen

On 9/06/2011, at 3:06 PM, Ewen McNeill wrote:
> Hi Jay,
>
> On 2011-06-09 14:44, Jay Daley wrote:
>> I note that neither you, nor I, nor Dean has revoked our 1024-bit OpenPGP keys!
> Not yet. But at least in my case, I created larger keys (4096-bit) 18 months ago, have been collecting signatures on them, and have been using them in preference to the 1024-bit OpenPGP keys where possible since then, so that I'd be in a position to abandon (revoke) the 1024-bit key at "any moment". I.e., my transition plan away from those 1024-bit keys is already well under way; it's just not complete yet.
Quite right - apologies if my quip offended you.
> One might also observe that the risk profile of "a ccTLD" and the risk profile of "a small company"/"an individual" are somewhat different; to the extent that one might expect to spend (money, CPU time, bandwidth, etc.) orders of magnitude more on security for the former than for the latter.
>
> It would, IMHO, be unfortunate to get through the deployment of DNSSEC for .nz and have to _immediately_ launch into the transition plan for a stronger version because the first deployment was no longer (perceived as) suitable.
The only transition needed would be to roll over to a key of a bigger size, having first amended the DPS to state that and let people know.
[reordered]
>> [...] but at the same time we don't over-engineer, as it is clear that that approach introduces as many problems as it solves.
> In civil engineering, one of the rules of thumb is that things be designed to handle _at_minimum_ three times the maximum expected load. That's clearly "over-engineering". But it's accepted best practice, because it provides a margin for error in case some of the estimates turn out not to be true (or the future presents something that was not anticipated -- people marching in step on the London Millennium Bridge, for instance).
>
> If you were, e.g., suggesting the KSK be 4096-bit keys, rolled every 2 months, or something else that was orders of magnitude more "paranoid" than common practice, then it would be right to be concerned about "over-engineering". Where you want to engineer something with much less margin for error than common practice, it's reasonable to expect that others will want to look closely at the justifications for "under-specifying" too -- and, in particular, at whether there is still an adequate margin for error throughout the expected deployment lifetime. I, like Dean I believe, remain to be convinced on this point.
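One rough way to put numbers on that margin for error (a sketch, not from the original thread: it uses the textbook heuristic GNFS complexity estimate for factoring, ignoring the o(1) term, so the figures run a few bits high):

```python
import math

def rsa_strength_bits(modulus_bits):
    """Approximate symmetric-equivalent strength of an RSA modulus,
    from the heuristic GNFS running time L_n[1/3, (64/9)^(1/3)],
    with the o(1) term ignored (so estimates are a few bits high)."""
    ln_n = modulus_bits * math.log(2)
    ln_work = ((64 / 9) ** (1 / 3)
               * ln_n ** (1 / 3)
               * math.log(ln_n) ** (2 / 3))
    return ln_work / math.log(2)  # convert natural-log work factor to bits

for k in (1024, 1280, 2048):
    print(k, round(rsa_strength_bits(k)))
```

With the o(1) correction accounted for, these estimates settle toward the commonly cited figures of roughly 80 bits of strength for RSA-1024 and roughly 112 bits for RSA-2048, which is the scale on which "margin for error" between key sizes is usually compared.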
Taking your engineering argument as a way forward: the largest RSA key to have been broken so far (that is publicly known) is 1023 bits, and even that was a very special key. A 1280-bit key is 2^257, or 231,584,178,474,632,390,847,141,970,017,375,815,706,539,969,331,281,128,078,915,168,015,826,259,279,872, times as strong as that. So let's say someone announced today that they could factor a 1024-bit key in just 1 second; it would still take them, on average, 3,671,743,063,080,802,746,815,416,825,491,118,336,290,905,145,409,708,398,004,109,081,935,347 years to factor a 1280-bit key. We are already "over-engineering", just not "over-over-engineering".

cheers
Jay
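The arithmetic above is easy to check mechanically. A minimal sketch (taking the brute-force-search framing of the paragraph at face value, and a 365-day year, which is what the quoted figure assumes):

```python
# Ratio of search spaces between a 1280-bit key and the broken 1023-bit
# key, on the brute-force framing used above: 2^(1280 - 1023) = 2^257,
# the 78-digit figure quoted in the thread.
ratio = 2 ** (1280 - 1023)
print(ratio)

# If a 1024-bit key fell in 1 second, the *average* time to factor a
# 1280-bit key on the same framing is half the search space (2^256
# seconds); converted to 365-day years this is the ~3.67e69-year
# figure quoted in the thread.
seconds_per_year = 365 * 24 * 3600
years = 2 ** 256 // seconds_per_year
print(years)
```

(The "on average" halving, 2^256 rather than 2^257 seconds, is what reconciles the ratio with the years figure quoted above.)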
> Ewen
--
Jay Daley
Chief Executive
.nz Registry Services (New Zealand Domain Name Registry Limited)
desk: +64 4 931 6977
mobile: +64 21 678840