Random number generators
Hi all,

As we all know, random numbers are an important part of cryptography, which is required for tools important to network operators, like the widely deployed RPKI. Most systems today generate random numbers using a PRNG, rather than a dedicated hardware random number generator. A PRNG is only as good as its entropy source - if you have a small amount of entropy, an attacker can start to predict the output of your system’s PRNG, which is obviously bad.

There are a number of existing tools for generating entropy:
- HAVEGE implementations (e.g. http://www.issihosts.com/haveged/)
- Audio/video sources (audio_entropyd and video_entropyd)
- Some other things like mouse/keyboard input (e.g. when you wiggle your mouse over that window in whatever that app was that generated keys, puttygen maybe?)
- Network interrupts are also a common source

A week or two back I was a little hungover and didn’t want to do any real work, and decided to write my own entropy harvesting tool. The code is available at the below URL:
https://github.com/nward/trueentropy

I strongly encourage you NOT to run this on production systems until it has been certified (a good standard to shoot for would be NIST SP 800-90B, perhaps), but I would like to get feedback, so please read the code and suggest improvements. Perhaps you have some additional sources of entropy data that would be useful.

I would particularly like a good way to “boil down” the entropy generated. The Linux kernel by default can take up to 4096 bits, and this provides far, far more than that, which just seems like a bit of overkill. Please feel free to submit pull requests with ideas for that - it would be a great project to learn a bit of Python.

Generally, entropy should not be sourced from a system you don’t control, or sourced over a network you don’t control, for obvious reasons. However, the quality of the data is so good in this case that I believe it negates those concerns. The data is also fetched with HTTPS, so as long as the source doesn't also start using this as their entropy source (known as entroception) we should be OK - see my recommendation about production systems.

--
Nathan Ward
(opinions all my own, blah blah blah)
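On the “boil down” question, a minimal sketch (one assumed approach, not what the trueentropy code actually does) is to condition harvested bytes with a hash before handing them to the kernel, e.g. SHA-256 over fixed-size chunks written to /dev/random:

    # Hedged sketch: condition ("boil down") harvested bytes with SHA-256
    # before mixing them into the kernel pool. Hypothetical helper, not the
    # trueentropy code itself.
    import hashlib

    def boil_down(raw: bytes, chunk: int = 4096) -> bytes:
        """Reduce each 4 KiB chunk of raw input to a 32-byte digest."""
        out = bytearray()
        for i in range(0, len(raw), chunk):
            out += hashlib.sha256(raw[i:i + chunk]).digest()
        return bytes(out)

    if __name__ == "__main__":
        harvested = b"example harvested page content " * 1000  # stand-in data
        with open("/dev/random", "wb") as pool:
            pool.write(boil_down(harvested))  # mixes in, does not credit entropy

Note that writing to /dev/random mixes data into the pool but does not credit any entropy; crediting requires the RNDADDENTROPY ioctl and root.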
On Sat, 23 May 2015 21:39:38 +1200, Nathan Ward wrote:
but I would like to get feedback, so please read the code and suggest improvements. Perhaps you have some additional sources of entropy data that would be useful.
Could you describe what your entropy gathering algorithm is? It looks to me like it's starting a web crawl from truenet.co.nz and feeding the content of retrieved documents into the /dev/random pool? I may be missing something, but it doesn't seem like that ought to be very random.

There is good wisdom (which I suspect you will have seen, but may be valuable to others) to be found in a blog post from djb last year on entropy gathering systems: http://blog.cr.yp.to/20140205-entropy.html

I particularly like the point he makes about it being wrong to simultaneously think that "we can't figure out how to deterministically expand one 256-bit secret into an endless stream of unpredictable keys" while "we can figure out how to use a single key to safely encrypt many messages".

-- Michael
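To make djb's point concrete, here is a minimal sketch (mine, not from the blog post) of deterministically expanding one 256-bit secret into an arbitrarily long stream of unpredictable bytes, using HMAC-SHA256 in counter mode:

    # Sketch only: expand one 256-bit secret into an endless pseudorandom
    # stream with HMAC-SHA256 in counter mode. Assuming HMAC-SHA256 behaves
    # as a PRF, the output is unpredictable without the seed.
    import hashlib
    import hmac
    import os

    def expand(seed: bytes, nbytes: int) -> bytes:
        assert len(seed) == 32, "expects a 256-bit secret"
        out = bytearray()
        counter = 0
        while len(out) < nbytes:
            out += hmac.new(seed, counter.to_bytes(8, "big"), hashlib.sha256).digest()
            counter += 1
        return bytes(out[:nbytes])

    seed = os.urandom(32)      # one good 256-bit secret from the OS
    keys = expand(seed, 64)    # ...expanded into as many keys as needed
    k1, k2 = keys[:32], keys[32:]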
On Sat, 23 May 2015 21:39:38 +1200, Nathan Ward wrote:
A week or two back I was a little hungover and didn’t want to do any real work, and decided to write my own entropy harvesting tool, and have the code available at the below URL: https://github.com/nward/trueentropy
... unless this is a dig at TrueNet? -- Michael
Morning.
Designing RNGs for crypto use is a finicky thing, and making so much as a single mistake can render the entire crypto system useless.
If you're just doing this for post-hangover fun then cool :)
If you're serious about it then I'd suggest finding an existing team and looking to contribute to their efforts.
These guys are cool.
OpenRNG
https://m.youtube.com/watch?v=jiy1rlKdBo8
as are these guys.
CrypTech
https://cryptech.is/
Both those projects would love collaborators and have the technical ability to peer-review contributions.
Have fun!
Dean
Totally. Apologies to anyone who didn’t get the joke straight away, I felt like I put enough clues in the first message, but maybe not. Come on though, entroception? :-)

Lots of people have also privately linked me to the OneRNG project, which is actually doing legit RNGs, not just poking fun: http://onerng.info/

Also, they’re in NZ, so support them.

--
Nathan Ward
On a more serious note, if you think there is a problem with TrueNet results, then feel free to discuss. I'd be interested in that.
Dean
Hi,

I’m still researching exactly how it works, and of course there doesn’t seem to be much documentation publicly available and it sounds like it’s difficult to get information by asking directly, but empirical evidence and reading their website suggests that they’re judging how “good” an Internet connection is by downloading a bunch of files from a few servers under different conditions (i.e. some files avoid caches, some don’t, etc.).

This is bad, because there is little in common between downloading a file from a server, and actual user experience of say browsing the web, sending email, or playing games. Despite this, many networks in NZ have large amounts of engineering work driven by TrueNet, because that’s the current standard that ISPs judge themselves by against the competition. While I’m not aware of any network having specific config in place for TrueNet, its results drive investment and change in areas that are perhaps not really that important.

One particular area of concern I have is that the servers they download these test files from are sometimes not in a particularly good network position. They’re not testing “Internet” performance, they’re testing performance downloading some files from a particular network location. They used to have some files on TradeMe for this, and that was perhaps a little better, but that is, I understand, no longer the case. I should say, TradeMe was a slightly better test of user experience because it was a reasonable thing for networks to be optimised for. It is no better in terms of testing against a wider range of targets, and it doesn’t actually test real user experience.

Another area of concern is the whole “cached” thing, which furthers the outdated notion that web caching is always a good idea. I’m not saying that it isn’t, every network is different, but it should be up to the network operator to decide whether caching improves their customer experience/costs/etc. and in which parts of their network. Because TrueNet don’t test customer experience, rather they test the performance of “cached” data, providers who have decided that caching isn’t for them are penalised.

They also *seem* to only provide data about providers who have paid them money, though obviously I can’t back that up. If it is true, it is strange given that ComCom pay them as well. This is not technical though so isn’t really appropriate for this list, but I figured I’d mention it. If anyone can back that up, email me.

I have more thoughts on this, but they’re not yet well formed. I intend to write up something along these lines in the next couple of weeks. If other people have data/thoughts that would be useful, email me.

--
Nathan Ward
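For anyone curious what a test of that shape looks like, a toy sketch (the test-file URL is a stand-in, not an actual TrueNet server) that times the same download with and without a cache-busting query string:

    # Toy sketch of a cached-vs-uncached file download test.
    # The URL is hypothetical, not a real test server.
    import time
    import urllib.request

    TEST_FILE = "http://testfiles.example.net/1MB.bin"  # hypothetical

    def timed_fetch(url: str) -> float:
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()
        return time.monotonic() - start

    cacheable = timed_fetch(TEST_FILE)
    uncached = timed_fetch(TEST_FILE + "?nocache=%d" % time.time())  # bypasses caches
    print("cacheable: %.2fs  cache-busted: %.2fs" % (cacheable, uncached))

Neither number says much about a real browser page load, which is the point being made above.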
On Sun, May 24, 2015 at 01:55:09PM +1200, Nathan Ward wrote:
Hi, I’m still researching exactly how it works, and of course there doesn’t seem to be much documentation publicly available and it sounds like it’s difficult to get information by asking directly, but,
I asked a few questions back when it was new.
empirical evidence and reading their website suggests that they’re judging how “good” an Internet connection is by downloading a bunch of files from a few servers under different conditions (i.e. some files avoid caches, some don’t, etc.).
Benchmarking web performance is complex. I used to be of the belief that testing mostly cached performance is a good test. But now the majority of "slow sites" are using https, and a lot of slow sites actually turn out to be a problem with the CDN or end destination.
This is bad, because there is little in common between downloading a file from a server, and actual user experience of say browsing the web, sending email, or playing games. Despite this, many networks in NZ have large
Right. I did some of my own testing years back, and from my own testing it seemed to be that there were two central issues. One was that most web sites require doing a lot of if-modified-since requests; with high latency there are too many to do in one round trip time, and loss can make a /huge/ difference to performance, as it delays not only that request but subsequent requests. (Also, if there's loss you could have to wait 3 seconds for retransmit... Linux has changed to 1 second as default now, which should make things better - normally it's sped up by acking subsequent packets, but IMS requests don't trigger that.) Not only do they seem to not test to enough international destinations, but they don't parallelise requests like a browser does. And that is a bit complicated to implement.
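As a rough illustration of the parallelisation point (stand-in URLs, not TrueNet's test set), compare issuing a batch of If-Modified-Since requests one at a time with issuing them the way a browser would, several at once:

    # Sketch with stand-in URLs: serial vs. browser-style parallel
    # If-Modified-Since (IMS) requests. On a high-latency link the serial
    # run pays roughly one round trip per request.
    import time
    import urllib.error
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URLS = ["http://assets.example.net/img%d.png" % i for i in range(20)]
    IMS = {"If-Modified-Since": "Sat, 23 May 2015 00:00:00 GMT"}

    def conditional_get(url: str) -> int:
        req = urllib.request.Request(url, headers=IMS)
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.getcode()      # 200: modified, body returned
        except urllib.error.HTTPError as e:
            return e.code                  # 304: not modified

    start = time.monotonic()
    serial = [conditional_get(u) for u in URLS]
    print("serial: %.2fs" % (time.monotonic() - start))

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=6) as pool:
        parallel = list(pool.map(conditional_get, URLS))
    print("parallel (6 workers): %.2fs" % (time.monotonic() - start))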
amounts of engineering work driven by TrueNet, because that’s the current standard that ISPs judge themselves by against the competition.
It does seem to be good at testing handover congestion.
While I’m not aware of any network having specific config in place for TrueNet, its results drive investment and change in areas that are perhaps not really that important.
Handover congestion impacts all traffic.
One particular area of concern I have is that the servers they download these test files from are sometimes not in a particularly good network position. They’re not testing “Internet” performance, they’re testing performance downloading some files from a particular network location. They used to have some files on TradeMe for this, and
Getting a good representative spread of network locations is hard.
that was perhaps a little better, but that is, I understand, no longer the case. I should say, TradeMe was a slightly better test of user experience because it was a reasonable thing for networks to be optimised for. It is no better in terms of testing against a wider range of targets, and it doesn’t actually test real user experience.
I think Amazon and Ebay would be better tests myself. :)
Another area of concern is the whole “cached” thing, which furthers the outdated notion that web caching is always a good idea. I’m not saying that it isn’t, every network is different, but it should be up to the network operator to decide whether caching improves their customer experience/costs/etc. and in which parts of their network. Because TrueNet
I hate to say it, but I've always been really keen on caching of resources; however, the vast majority of bulky content is moving towards CDNs etc., and forcing content to not be cached. Even things like Netflix, which uses HTTP, disable caching.
don’t test customer experience, rather they test the performance of “cached” data, providers who have decided that caching isn’t for them are penalised.
Whatever you do, you are going to favour some setups and disfavour others. The biggest concern I've had is that they seem to favour burstable single-threaded connections, when the majority of slowness is often on more active connections with more simultaneous access, where AQM can really help.
They also *seem* to only provide data about providers who have paid them money, though obviously I can’t back that up. If it is true, it is strange given that ComCom pay them as well. This is not technical though so isn’t really appropriate for this list, but I figured I’d mention it. If anyone can back that up, email me.
I think it's actually just the providers they have enough probes in the right locations for. I wouldn't say TrueNet is malicious, I just think it's a bit simplistic.
I have more thoughts on this, but they’re not yet well formed. I intend to write up something along these lines in the next couple of weeks. If other people have data/thoughts that would be useful, email me.
In some ways I think it's best to keep it on-list? A bit of a ramble...

OK, my concerns with TrueNet started when they were comparing "advertised speed" with cable and "maximum attained speed" with ADSL. And so cable, by giving burst over their advertised speed, often gave results above 100%. In my own experience with cable, there often seemed to be a lot of packet loss on active connections. And my experience of using cable was that it was often slower than ADSL for normal web browsing, downloads, etc., but that it went fine with threaded downloads.

I think there were two factors with cable performance being bad. One was that the TCP/IP stack used on the transparent proxies gave lower performance than direct TCP connections from Linux to Linux, which seemed to hurt Europe performance quite noticeably. As well as that, normal TCP/IP connections seemed to get loss where UDP was fine. This may have been to do with the spikiness of TCP/IP where UDP is constant rate. But the end result was that TCP/IP would struggle to do even 20 megabit when UDP would do 20 megabit with 0% loss.

The problem with web sites is that usually there isn't enough time to rapidly increase window sizes, even with TCP cubic fast enough to give 20 megabit/sec+ on small files. And on medium files any loss can cut out the ramping up, and force retransmits, giving a noticeable hit. That is somewhere a transparent proxy can really help, as if the retransmits can be cut out, you can get a 25%+ speed boost on medium-sized files, even with no caching.

The second was that there often seemed to be poor DNS performance, which is really hard to test, as if you benchmark the same destinations it will give HIT performance, when MISS is what matters. It may have changed now, but it seemed very common in the past to use 203.96.152.4 and 203.96.152.12, hard coded, and for some reason or other performance seemed to be significantly worse than ADSL ISPs.

Which brings me to: what are you trying to test? Are you more interested in peak performance, worst case performance or median performance; testing popular sites, testing close sites, testing a specific representation, or a representation that covers multiple regions, or what?

There are a few solutions out there to test existing connections. I used to use a Chrome benchmarking plugin, which requires manual running and has stopped working. There's namebench to test DNS, which can go through your browser history. But as far as real-world-like testing goes, there isn't a lot of automated stuff that can easily be set up.

A good test to my mind would be to emulate browsers, running all the time regardless of other traffic on the link. Measure "pass/fail" rather than performance - say if a web site loads in 4 seconds that's a fail, if it loads in 2 seconds a pass. Measure availability. If a connection can't play YouTube videos 5% of the time, just call that 5% of the time a fail, as some users will shift their viewing times, others will let videos buffer, and others will change providers.

And then do you want to test large files or only medium-sized files? If I'm downloading a few GB then I am unlikely to watch it download. But for a 10 MB file I'd be waiting for it to download, so whether it took 2 seconds, 5 seconds or 10 seconds would matter to me. At the same time, if I upload a file of a few GB and I can't use the connection while it uploads, then it's much more inconvenient.
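A minimal sketch of that pass/fail idea (thresholds and URL are illustrative only):

    # Sketch: classify a page load as pass/marginal/fail against fixed
    # thresholds, treating an unreachable site as a fail. Only fetches the
    # base HTML; a real test would emulate a full browser page load.
    import time
    import urllib.request

    def classify_load(url: str, pass_s: float = 2.0, fail_s: float = 4.0) -> str:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=fail_s * 3) as resp:
                resp.read()
        except Exception:
            return "fail"                  # unavailable counts as a fail
        elapsed = time.monotonic() - start
        if elapsed <= pass_s:
            return "pass"
        return "fail" if elapsed >= fail_s else "marginal"

    # e.g. classify_load("http://www.example.net/") -> "pass", "marginal" or "fail"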
I realise the reason that TrueNet want to wait for an idle connection is that they want reproducibility - but do you really care if your internet is fast or slow while you're not using it? And I think that since TrueNet began, handover congestion has become less of an issue. And in general international performance isn't usually an issue.

If a web site is going slow nowadays, I think it's often the destination, or a congestion issue internationally. So instead of "all international is slow", things have shifted to issues with Comcast, Verizon, AT&T, etc. A lot of these issues can even be region-based in the US... like issues to Comcast in San Jose, and it can be the other end, or the connection within the US to them. And it's pretty well known now that there are a lot of peering issues in the US. Some places are worse than others - for instance Kansas can be kind of hit and miss, but hardly any web sites that New Zealanders would go to are hosted there.

Also there are things like a lot of CDNs being hosted in Asia - and connections to Japan can be routed via Australia or the US. Singapore is often routed via the US. For countries like China and Singapore the latency can get pretty bad routing via the US, but for Japan it often doesn't matter nearly as much. And if these CDNs are pull caches and pull from the US, and you go NZ -> US -> Singapore, then the CDN pulls from the US, there can be a significant decrease in performance compared to hitting a US CDN.

So yeah, TrueNet measures handover congestion fine, but to do "better" testing would require a lot of concerted effort. And I really think it's the kind of thing that has a global audience, and really would need test nodes in other countries as well, that can do things like VoIP emulation between nodes.

Ben.
Nathan wrote: "...empirical evidence and reading their website suggests that they’re judging how “good” an Internet connection is by downloading a bunch of files from a few servers under different conditions (i.e. some files avoid caches, some don’t, etc.)."

I had a chat with John from TrueNet a couple of weeks ago after hearing on This Way Up[1] that their nodes download RNZ's home page many times an hour. Turns out they use a bunch of sites as well as file downloads from known locations.

Cheers,
Richard Hulse
Webmaster, Radio NZ

[1] http://www.radionz.co.nz/national/programmes/thiswayup/audio/201752653/tech-...
On Sun, May 24, 2015 at 06:23:21PM +1200, Richard Hulse wrote:
Nathan wrote: "...empirical evidence and reading their website suggests that they’re judging how “good” an Internet connection is by downloading a bunch of files from a few servers under different conditions (i.e. some files avoid caches, some don’t, etc.)."
I had a chat with John from TrueNet a couple of weeks ago after hearing on This Way Up[1] that their nodes download RNZ's home page many times an hour. Turns out they use a bunch of sites as well as file downloads from known locations.
Is the list published somewhere? It would be nice to be able to reproduce it. Ben.
Is the list published somewhere? It would be nice to be able to reproduce it.
It is not. :-(
?!???
1 click: http://list.waikato.ac.nz/pipermail/nznog/

It's even already way ahead of its time in 2021 :-)

Volker

--
Volker Kuhlmann is list0570 with the domain in header.
http://volker.top.geek.nz/ Please do not CC list postings to me.
You may want to go and check the context of the question... He wasn't referring to the NZNOG list.

1 click: http://list.waikato.ac.nz/pipermail/nznog/

Yes, but the particular link you're after is probably this one - http://list.waikato.ac.nz/pipermail/nznog/2015-May/021761.html

Scott
On Tue 26 May 2015 08:56:25 NZST +1200, Scott Howard wrote:
You may want to go and check the context of the question...
Yep indeed. Sorry.

Volker

--
Volker Kuhlmann is list0570 with the domain in header.
http://volker.top.geek.nz/ Please do not CC list postings to me.
Well, since you warn about using sources that can be manipulated by others (like the Internet), the old SGI lavarand was a novel source.
https://gist.github.com/AnthonyBriggs/8396607
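A toy lavarand-style sketch (not the code in the gist above; the image path is hypothetical) - hash a captured camera frame and use the digest as seed material, conditioned like any other noisy source:

    # Toy lavarand-style sketch: reduce a noisy camera frame to a fixed-size
    # digest suitable for seeding. "lava_lamp_frame.jpg" is a hypothetical
    # capture, not part of the linked gist.
    import hashlib

    def frame_to_seed(frame_path: str) -> bytes:
        with open(frame_path, "rb") as f:
            return hashlib.sha512(f.read()).digest()

    print(frame_to_seed("lava_lamp_frame.jpg").hex())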
manning
bmanning(a)karoshi.com
PO Box 12317
Marina del Rey, CA 90295
310.322.8102
participants (8)
- Ben
- Dean Pemberton
- manning
- Michael Fincham
- Nathan Ward
- Richard Hulse
- Scott Howard
- Volker Kuhlmann