Hi Kris,

I will take the bait and respond with some comments and questions. :-)

I am not sure that I agree that the bandwidth constraints within NZ are artificial. The reality is that bandwidth costs money: fiber, POPs, equipment, staff, etc. The costs also don't scale in a linear fashion; the next 10G sold may need another 100G lit up somewhere, and the 100G gear I have seen is expensive. As the costs come down, some prices will too. Just remember that the costs of fiber in the ground have been incurred already, so that part of the equation won't be decreasing in a hurry. Prices now are much cheaper compared to 1, 2 or even 3 years ago.

I am interested to hear more details on how you would light up Auckland - Wellington with a 1 Tbps ring for 2 million. If your indicated costs are achievable you may have a business opportunity, or someone on this list may steal your idea :-p

Ivan

On 5/Nov/2014 9:55 a.m., Kris Price wrote:
Inline below.
Sent from my mobile
On Nov 4, 2014, at 11:59 AM, neals5
wrote: ---- On Tue, 04 Nov 2014 17:17:17 +1300 Kris Price wrote ----
There are networks out there that cope with these issues. Develop the means to monitor and detect DDoS, and police users in near real time at the access port. Think about what happens when someone tries to launch a DDoS from a cloud provider.
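To make the "police users in near real time at the access port" idea concrete, here is a minimal sketch of rate-based detection from interface counters. Everything in it is illustrative: the port names, the threshold, and the counter samples are made up, and a real deployment would pull the octet counters via SNMP or sFlow from the access switch rather than from hard-coded dicts.

```python
# Illustrative sketch only: flag access ports sustaining near line-rate
# upstream traffic so they can be policed. Port names, threshold, and
# counter values are hypothetical examples, not from any real network.

THRESHOLD_BPS = 900_000_000  # police ports sustaining ~90% of a 1G access port

def bits_per_second(prev_octets, curr_octets, interval_s):
    """Convert two octet-counter samples into a bits-per-second rate."""
    return (curr_octets - prev_octets) * 8 / interval_s

def ports_to_police(prev, curr, interval_s, threshold=THRESHOLD_BPS):
    """Return access ports whose upstream rate exceeds the threshold."""
    return [port for port in curr
            if bits_per_second(prev[port], curr[port], interval_s) > threshold]

# Two counter samples taken 10 seconds apart (octet counts, made up):
prev = {"ge-0/0/1": 0, "ge-0/0/2": 0}
curr = {"ge-0/0/1": 200_000_000, "ge-0/0/2": 2_000_000_000}
print(ports_to_police(prev, curr, 10))  # ge-0/0/2 is pushing ~1.6 Gbps
```

The point is that the detection logic itself is trivial; the operational work is in plumbing the counters and automating the policer push-down.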
The related aspect to this is we can, if we choose, provide very high amounts of bandwidth with very low oversubscription ratios. Network equipment is now a commodity. Provided you have the fiber, you can light vast amounts of bandwidth for surprisingly low cost, not just in the access but also the long haul.
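As a back-of-envelope illustration of what "very low oversubscription" means here, the contention ratio is just subscriber access capacity over aggregation capacity. The subscriber counts and link sizes below are assumptions for illustration, not figures from the thread:

```python
# Hypothetical example: 1000 subscribers on 1 Gbps access ports,
# fed by a 200 Gbps aggregation uplink (e.g. 2 x 100GE).
subscribers = 1000
access_bps = 1_000_000_000          # 1 Gbps per subscriber port
uplink_bps = 200 * 1_000_000_000    # 200 Gbps of uplink capacity

contention = (subscribers * access_bps) / uplink_bps
print(contention)  # 5.0, i.e. a 5:1 oversubscription ratio
```

A 5:1 ratio would be unusually generous by residential standards, which is the point being made: commodity 100GE makes ratios like this affordable where they weren't before.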
100GE interfaces on routers cost very serious money, at least by New Zealand standards. Beginning a sentence "Provided you have the fiber" glosses over the important question of who has the fibre. It isn't most ISPs.
Do we still need the big expensive Vendor [ACJ] ports if we think about how we build access networks differently?
Out of curiosity how much does a 10G or 100G handover cost from Chorus? Would ISPs be able to adapt to an access provider that did uncontended handover - no fancy QoS, you just get as much bandwidth as you want to use and it's up to you to handle it in your network?
Glossing over the structure of the New Zealand telco sector: just because something is artificially constrained today (bandwidth) doesn't mean that it must stay that way, and doesn't mean that we shouldn't talk about that problem knowing there are solutions to it out there. Could Chorus light that fiber and make that bandwidth available if they were shown it was viable to do so and was what their customers wanted?
What does a 10G circuit Auckland to Wellington cost these days? Today, if I had a ring of fiber pairs, it would be possible to light 1 Tbps for probably under 2 million. That's lit, ready-to-plug-in-and-use capacity, provided the ISPs would use it. So really, but for the "who owns the fiber is artificially constraining bandwidth" issue, we could wipe away the bandwidth concerns in NZ completely, all the way to Southern Cross, if we were organized to do so.
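The "1 Tbps for under 2 million" claim can be sanity-checked with DWDM arithmetic. Every unit cost below is an assumption chosen for illustration, not a quoted price; the actual numbers would depend heavily on vendor, reach, and amplifier count on the Auckland - Wellington route:

```python
# Back-of-envelope sketch with assumed (not quoted) unit costs, NZD.
waves_needed = 10                   # 10 x 100G DWDM waves = 1 Tbps
ends_per_wave = 2                   # one transponder at each end of the ring
cost_per_transponder = 75_000       # assumed per-100G-transponder cost
chassis_and_line_system = 400_000   # assumed mux/amps/chassis for the route

total = waves_needed * ends_per_wave * cost_per_transponder + chassis_and_line_system
print(total)  # 1900000, i.e. under the 2 million figure if the assumptions hold
```

The arithmetic shows the claim is at least plausible with commodity optics, but the transponder and line-system figures are exactly the sort of thing Ivan is asking Kris to substantiate.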
The wider discussion can indeed be seen as just the same things as in the past with bigger numbers. The fact that available bandwidths continue to increase does not reduce the need for attack mitigation by network operators.
Agreed. Hence detect and mitigate - shut down your bad process, your bad VM, your bad naughty grandma with her 1Gbps set top box first. How you handle the security aspects beyond that in a better way is less of a network issue and belongs higher up the stack. The network protects the network when that fails to work.
- Donald Neal
Sent from my mobile
On Nov 3, 2014, at 6:23 PM, McDonald Richards
wrote: Sure - we had the conversation then, when 1.5Mbit of saturation didn't also exhaust firewall state tables, CPU and memory resources of everything in the service path.
What we do have now, that we didn't have then, are bot-nets for hire and parties who intentionally exploit, infect, test and document these hosts for hire as weapons while the end users in a lot of cases have no idea that it's happening outside of a slower Internet connection.
On Mon, Nov 3, 2014 at 5:53 PM, Jeremy Visser
wrote: On 03/11/14 22:26, McDonald Richards wrote: The days of the "any to any, open Internet" are slowly coming to an end. One small flaw in one mass produced and mass distributed piece of software (including software that runs on CPE) can easily snowball into hundreds of gigabits of traffic at the "core" of the Internet (I hate that term but I'm too tired to come up with anything else right now).
We had this same conversation when people started moving from dial-up to DSL.
"OMG a single user on 1.5 Mbit/s can saturate our entire server farm bandwidth"
The world didn't end. The same rules apply today that applied back then.

_______________________________________________
NZNOG mailing list
NZNOG(a)list.waikato.ac.nz
http://list.waikato.ac.nz/mailman/listinfo/nznog