UFB 1 Gbps plans for retail and the impact they have
Hey guys,

I just wondered what people thought of the implications of 1 Gbps residential/business plans becoming more and more common. I'm seeing requests from wholesalers and retail customers asking about this Gigatown thing and how they can get a 1 Gbps service, especially unlimited. While I know 1 Gbps is not too common yet, the 200 Mbps plan is becoming more and more common, and it won't be long before there is pressure to do 1 Gbps.

I know the unlimited part is easy to justify to the client around international and domestic transit pricing, but some ask why we cannot do user-to-user or peering traffic at 1 Gbps.

I'm not sure if it's still true, but I recall from the Chorus documentation that you could LAG a maximum of 8x10G circuits in a handover region, so theoretically an 80 Gbps handover at most. For the sake of ease, and because 10 Gbps ports are not cheap in high-end routers like the ALU 7750 SR-7s we run, assume the average provider has a single 10G handover, or maybe two, per region right now.

Now suppose you had 10 or 20 users on the 1 Gbps plan, or even 100-200 users on the 200 Mbps plan, with misconfigured routers (consider they could do 200 Mbps upload) opening them up to the likes of DNS amplification. Those users can max out the upload capacity of the handover, and you have no ability to QoS the malicious users, as the QoS would need to be applied before traffic hits the handover, i.e. on the CPE. Suddenly everyone on the handover is impacted by a handful of users who wanted the faster speed just because it was available and affordable. The only way to stop the attack affecting everyone would be to isolate and disconnect the end users causing the damage, be it IPoE or PPPoE; if the user is on a direct /30 IP then things are even harder to manage.

The DNS amplification that hurt Spark for a whole weekend was, I can only assume, caused by such a large number of affected devices filling up the handovers, and also because it was targeted at their DNS. Imagine an attack of that magnitude: tens of thousands of end users on 200 or even 100 Mbps circuits filling up a 10 Gbps, or at that point an 80 Gbps, handover LAG.

When you talk about regional handovers up and down the country the problem gets worse, as you then need to backhaul that capacity to Auckland before getting out to the internet, so this has to be considered too.

Any thoughts on the matter?

Many thanks,
Barry Murphy / Chief Operating Officer, Vibe Communications / Peering: AS45177
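[For a rough sense of scale, a back-of-the-envelope sketch of the arithmetic behind the worry above; the 10 Gbps handover and the plan speeds are simply the assumptions from Barry's message, not figures from any real deployment.]

# How many subscribers uploading flat out does it take to fill a shared
# 10 Gbps handover? Purely illustrative arithmetic for the scenario above.

HANDOVER_MBPS = 10_000  # a single 10G handover, per the assumption above

def subs_to_fill(upstream_mbps):
    """Subscribers uploading at full line rate needed to saturate the handover."""
    return HANDOVER_MBPS // upstream_mbps

for plan_mbps in (100, 200, 1000):
    print(f"{plan_mbps} Mbps upstream plan: ~{subs_to_fill(plan_mbps)} busy uploaders fill the handover")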
> If you had say 10 or 20 users on the 1 Gbps plan, or even 100-200 users on the 200 Mbps plan, and they had misconfigured routers (consider they could do 200 Mbps upload) opening them up to the likes of DNS amplification etc.

There seems to be a shift to ISP-provided routers, which will most likely make this temporarily less common. Although IPv6 is another possible vector.

> Now those users are maxing out the upload capacity of the handover, you have no ability to QoS the malicious users as the QoS would need to come before hitting the handover, i.e. on the CPE.

Disconnect the user.

> Suddenly everyone on the handover is impacted from a handful of users that wanted the faster speed just because it was available and affordable.

Same as any DDoS really. The cheap availability of gigabit dedicated servers etc. makes it easier to DDoS with high volumes of traffic.

> The DNS amplification that hurt Spark for a whole weekend, I can only assume, was caused by such a large number of affected devices filling up the handovers, and also because it was targeted at their DNS.

Didn't that mostly hurt their DNS servers? Maybe their DNS servers were not overprovisioned enough. Maybe there was something silly like state, or it was going to servers that were timing out and tying up resources too much. That Cloudflare blog had something about a South American ISP advertising Cloudflare IPs and overloading transit in South America in general. (I imagine not all of it...) But wherever there are oversubscribed shared links these things can happen.

> When you talk about regional handovers up and down the country then the problem gets worse, as you then obviously need to backhaul that capacity to Auckland before getting out to the internet, so this also has to be considered too.

> Any thoughts on the matter.

Block UDP! The internet is just Facebook and Google.

I think at the moment things are reasonably safe, as with things like DNS and SNMP amplification attacks you can block the incoming traffic and it'll stop sending out again. It could get more complicated if malicious people were smarter about these things, which doesn't seem to be happening quickly at least. I think in a way you can't plan for whatever may happen, and you just have to look at it from the point of view of fixing it. And often that means disconnecting the users if you can't stop the traffic with a port block.

I'm actually surprised that there hasn't been more "real" traffic, like real HTTP requests to popular web sites that look like normal requests (i.e. they play back normal browsing-type sessions they've captured from somewhere else, or pretend to be a browser) and so forth, making it harder to block.

My biggest concern at the moment over DDoS is when IPv6 starts becoming widely used - a lot of people use NAT as a firewall, and when they implement IPv6 they don't protect their hosts properly. And there's a bit of a tie-in between people wanting new faster connections and wanting to enable IPv6.

Ben.
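[To put rough numbers on the reflection problem being discussed here, a sketch only; the amplification factors, subscriber counts and rates below are illustrative assumptions, not measurements.]

# Reflection attacks in a nutshell: an attacker sends small spoofed queries
# to open services on customer CPE; the CPE sends much larger responses
# toward the victim over the customers' upstream. Blocking the inbound
# queries at the edge stops the outbound flood, as Ben notes.

AMPLIFICATION = {"DNS open resolver": 30, "NTP monlist": 200, "SNMP GetBulk": 6}  # rough factors

def reflected_gbps(spoofed_query_mbps, amp_factor, reflectors, upstream_mbps):
    """Outbound flood toward the victim, capped by the reflectors' combined upstream."""
    raw = spoofed_query_mbps * amp_factor
    cap = reflectors * upstream_mbps
    return min(raw, cap) / 1000.0

# 500 Mbps of spoofed queries sprayed across 500 open resolvers on 100 Mbps-upstream plans:
print(reflected_gbps(500, AMPLIFICATION["DNS open resolver"], 500, 100), "Gbps at the victim")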
Hey Barry,

Great conversation starter, and some topics that have been on my mind lately.

Seems to me, after having done a quick scan of the market the other day, that ISPs have fallen away from being clear about the Acceptable Use Policies they may have with customers. Traditionally (or at least when my head was buried *all the time* in operational matters), AUPs were built to deal with intentional abuse of the network: spam, conscious DDoS attacks, et cetera. But as you point out, with very high speed plans available and customers being unintentional participants in abuse, the results can be quite cataclysmic and spectacuuulaaarr.

Our friend Roland Dobbins presented a really simple summary at AUSNOG recently on the state of play with fashionable DDoS attacks: at least half a dozen services available on 1G-connected CPE, mostly reflector attacks. DNS, NTP, chargen (woah, the 80's are calling), and some others too.

Seems to me we need to consider whether these services running on CPE are harmful in a gigabit-connected age, and whether we make it clear that such services (e.g. an open DNS resolver on a gigabit-connected customer site) are considered harmful by default unless the customer has explicitly asked for them to be available and is consciously aware of the risks. I think the challenges of operationally managing this are proportionally related to competition and the fluidity of the market. Customers churning and turning up with their own CPE. Sigh.

It's not until we've had the conversations with our customers and got a sense of what is mutually acceptable that we should take steps to explicitly deny this stuff. It is all about mitigating risk for the benefit of most, but the process by which we get permission to do so is an important one.

I'd like to think we all care about an open internet where anyone can connect with anyone, but the engineer in me says that there are some things we should do to ensure that it is not stupidly easy for a 3rd party to use anyone to harm someone else.

Now I'm sure some of you will say, yeah, we do that. But as far as the public is concerned, are we making that clear enough? I think the wholesale players need to think about steps to ensure that the AUP (whatever that is) ripples downstream too.

cheers
jamie
> I'd like to think we all care about an open internet where anyone can connect with anyone, but the engineer in me says that there are some things we should do to ensure that it is not stupidly easy for a 3rd party to use anyone to harm someone else.
I agree. The days of the "any to any, open Internet" are slowly coming to an end. One small flaw in one mass-produced and mass-distributed piece of software (including software that runs on CPE) can easily snowball into hundreds of gigabits of traffic at the "core" of the Internet (I hate that term but I'm too tired to come up with anything else right now).

Who would have thought you could weaponise a D-Link ;)

A lot of providers still fail to implement basic ingress packet filtering from users (BCP38). What hope is there if the scope is expanded to limit or block NTP, DNS, SSDP, SNMP etc. as well?

Maybe our beloved vendors of BNG and BRAS products should step up to the plate and give us the 'dummy' mode config for service providers that can be used for best-practice secure subscriber templates?

That'll take care of our high speed users and their compromised home networks... What do we do about the networks that intentionally sell bandwidth for the purposes of launching high-volume unfiltered DDoS attacks? :)

Here we all were thinking that IPv4 exhaustion would break the any-to-any connectivity!

Macca
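[Since BCP38 keeps coming up: the whole check is just "only forward packets whose source address you actually assigned to that subscriber". A minimal sketch of that logic, using documentation prefixes rather than any real assignments.]

import ipaddress

# BCP38 / ingress filtering in one function: a packet arriving on a subscriber
# port is only forwarded if its source address falls inside a prefix assigned
# to that subscriber. Everything else (i.e. spoofed sources) is dropped at the edge.

SUBSCRIBER_PREFIXES = {
    "sub-0042": [ipaddress.ip_network("192.0.2.64/29"),       # example/documentation prefixes
                 ipaddress.ip_network("2001:db8:42::/48")],
}

def permit_ingress(subscriber_id, src_ip):
    """True only if the source address is legitimately the subscriber's own."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in SUBSCRIBER_PREFIXES.get(subscriber_id, [])
               if net.version == src.version)

print(permit_ingress("sub-0042", "192.0.2.66"))    # True  - their own address
print(permit_ingress("sub-0042", "198.51.100.9"))  # False - spoofed, drop it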
I think the CPE problems are looking up in a way - the problem has been that ADSL/VDSL modems are using firmware blobs, and it's hard to have custom distributions on them with more attention towards security. Most of the modems just stick their own branding on top of what Broadcom gives them, which is years old and includes things like a vulnerable dropbear SSH client. Fortunately this isn't usually exposed to the world. Dropbear is being maintained again, but it wasn't for many years.

I think the way forward with UFB and other such connections is actually to get away from these shoddy firmwares and move to something more secure, with automatic updates and more attention to security. The main hindrance seems to be that modems don't have much flash memory on them yet, so people are using prebuilt images that can't auto-update. But it is likely that will change in the near future, as even cheap cellphones have plentiful amounts of flash on them now. The attitude towards security from these companies is obviously rather casual, but their hand has to be forced in a way if they're going to change.

One attitude for these things is to just charge for bandwidth, which makes users care. Just like if you have a plumbing leak, it's up to you to try and get compensation for it, otherwise you're just going to be left with a huge bill.

Ben.
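[As a purely hypothetical illustration of the auto-update behaviour described above: poll a manifest, compare versions, verify the image before it is ever applied. The URL, manifest format and field names are invented; a real CPE updater would also verify a vendor signature, not just a checksum.]

import hashlib, json, urllib.request

# Sketch of a CPE-style auto-update check. Everything about the server side
# here is hypothetical; the point is only the check-compare-verify flow.

MANIFEST_URL = "https://updates.example.net/cpe/model-x/manifest.json"   # hypothetical
RUNNING_VERSION = "1.4.2"

def check_for_update():
    with urllib.request.urlopen(MANIFEST_URL, timeout=30) as resp:
        manifest = json.load(resp)   # e.g. {"version": "1.4.3", "url": "...", "sha256": "..."}
    if manifest["version"] == RUNNING_VERSION:
        return None                  # already up to date
    with urllib.request.urlopen(manifest["url"], timeout=300) as resp:
        image = resp.read()
    if hashlib.sha256(image).hexdigest() != manifest["sha256"]:
        raise ValueError("checksum mismatch - refusing to flash")
    return image                     # hand off to the actual flashing routine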
On Tue, Nov 4, 2014 at 12:26 AM, McDonald Richards wrote:

> I agree. The days of the "any to any, open Internet" are slowly coming to an end.

I'm interested in teasing out people's opinions on this. Do people on the list agree with this statement? Are we at a stage now where the only future people can see is one where 'any to any' connectivity is impossible? Are we at a stage now where the concept of an Open Internet is a thing of the past?

Comments?
On Tue, 4 Nov 2014 08:38:56 +1300 Dean Pemberton wrote:

> I'm interested in teasing out people's opinions on this. Do people on the list agree with this statement?

A strong 'no' from me.

> Are we at a stage now where the only future people can see is one where 'any to any' connectivity is impossible? Are we at a stage now where the concept of an Open Internet is a thing of the past?

We are at a time when IPv6 is poised to rescue us from NAT and give us the Internet we actually want. It makes little sense to turn the Internet into a system for "delivering port 443†" to centralised commercial operators (see Facebook et al. and the problems inherent in this model) instead of embracing it as an open and empowering commodity, one where the greatest protocols and outcomes are likely still to be realised.

† At least we're finally getting the message about TLS, perhaps.

-- Michael
I should point out that I mean it purely as an "any service to any service from any host to any host" type of scenario :)

Maybe I should have phrased it as: the days of unfiltered and unfettered provision of default residential services are coming to an end.

Macca
Really? Outside of the States I'm not seeing this at all. Even within the States it's just ISPs realizing there's no mutual benefit anymore, because the ISP market there is often mono- or duopolistic.

Jed
Jed,

+1 to what Macca is saying, particularly in NZ/AU. The internet will become far more of a user->content network affair down your side of the Pacific than in places where transit is cheap and easy.

Content is coming closer. You can get almost all of it in NZ, and between caches and POP deployments, more and more arrives in NZ every year. The cost of peering at the IXs is not even worth factoring. The cost of transit in NZ is huge.

While it's reasonable on your shiny new UFB thing to expect to get full rate to all this cached content (which has a minimal cost), you can't expect your ISP to provide you 1 Gbit access to the USA from NZ without the user shelling out some serious money (or the ISP making a massive loss!).

Cheers,
Hoff
On 2014-11-04 08:38, Dean Pemberton wrote:

>> I agree. The days of the "any to any, open Internet" are slowly coming to an end.
>
> I'm interested in teasing out people's opinions on this. Do people on the list agree with this statement? Are we at a stage now where the only future people can see is one where 'any to any' connectivity is impossible? Are we at a stage now where the concept of an Open Internet is a thing of the past?
>
> Comments?
I see several different things happening:

- As access speeds rise and data caps are removed, any difference between a residential and business connection is disappearing. Business connections have typically been charged a lot more and offered a higher level of service, but in practice this doesn't make much difference any more. Residential users have come to expect that their service works 24x7, that they'll get prompt fixes for faults, and that they'll pay little more than they have in previous years for a better, faster service. How many home users these days would accept a busy tone when trying to use the internet?

- Businesses are increasingly seeing an internet connection as just another pipe into the building (like water or power), and are looking for the cheapest option available to them, because the cheapest option (which is generally the residential-class service) now has sufficiently high service levels and performance that it's what they want.

- Many discussions about filtering focus on the mass-market customers, who are the ones paying the least for their service, yet they're the ones on whom time and money must be spent to filter. On the other hand, business connections are (typically) unfiltered, because businesses are trusted, or because they demand a service where they can run a web server / email server / VPN gateway.

Where I'm going with all of this is that I think in future we have to consider all connections equal, at least as a starting point. Connection speeds, data caps, service levels etc. are heading towards being equal no matter which customer segment you belong to. Which, IMHO, means we should treat them all equally: if you filter one of them, you filter them all. If you give one the option to opt out of filtering, you give them all the option.

There was an article recently on the Washington Post (http://www.washingtonpost.com/news/volokh-conspiracy/wp/2014/10/31/does-the-...) which explores why end users are also content providers and should be treated as such. I think we should be careful of anything which tries to limit what end users can do, because when we try to change the behaviour of the users, they route around the damage.

Which is the last thing I think worth mentioning: the internet will route around damage, whether we like it or not. We can filter things, we can try to block stuff, but unless you cut off the connectivity completely, devices and programs will still find ways to talk directly to each other.

--David
On Wed, Nov 5, 2014 at 10:12 AM, David Robb wrote:

> Which is the last thing I think worth mentioning: the internet will route around damage, whether we like it or not. We can filter things, we can try to block stuff, but unless you cut off the connectivity completely, devices and programs will still find ways to talk directly to each other.

Good point, well made. It's something that IT departments are having to live with, and ISPs will be no different. If you don't give employees the quality of email or file storage that they have come to expect, they'll just sign up for Gmail and install Dropbox, and the BYO IT Department is born.

ISPs will be the same. Try and restrict people and you'll just end up playing whack-a-mole.
> ISPs will be the same. Try and restrict people and you'll just end up playing whack-a-mole.

I agree that trying to restrict creative people from having free access will result in whack-a-mole, but common sense is needed when considering the damage that can be done with basic reflection attacks.

Should you block the default SNMP port from the Internet to a residential user? Can the CPE vendor be trusted not to leave a default "public" community with the Internet-facing interface permitted? Can the user be trusted to secure their own network devices to prevent misuse?

Which of these things is the easiest to accomplish and provides no reduction in experience for 99.95% of "normal" residential Internet users? Which of them has the potential to melt down the Internet if a CPE vendor ships 500,000+ units of equipment and leaves a door open?

Macca
On 2014-11-05 10:50, McDonald Richards wrote:

>> ISPs will be the same. Try and restrict people and you'll just end up playing whack-a-mole.
>
> I agree that trying to restrict creative people from having free access will result in whack-a-mole, but common sense is needed when considering the damage that can be done with basic reflection attacks.
>
> Should you block the default SNMP port from the Internet to a residential user?

Part of the point I was trying to make is that I don't think you're going to be reasonably able to distinguish residential users any more, nor should you - the effect you can have on them is no different to the effect you can have on a business, and I'm not sure you always know which is which. Which is not to say that filtering isn't worth considering, or that providing people with an opt-out-of-filtering option isn't a good idea; it's just that I think the effects should be considered without making assumptions about the end user's use case.

> Can the CPE vendor be trusted not to leave a default "public" community with the Internet-facing interface permitted? Can the user be trusted to secure their own network devices to prevent misuse?

I've yet to find anyone, be it a home user, business, ISP, or government, that hasn't screwed up that one at some point. :)

> Which of these things is the easiest to accomplish and provides no reduction in experience for 99.95% of "normal" residential Internet users? Which of them has the potential to melt down the Internet if a CPE vendor ships 500,000+ units of equipment and leaves a door open?

Should we also bring back some variant on Telepermitting, where vendors get their kit voluntarily certified as not being open to exploitation?

Another thing I believe anyone filtering traffic should do is be open about it. Detail what you're filtering, why, and how to opt out, and make it publicly available.

--David
On Wed, 2014-11-05 at 11:08 +1300, David Robb wrote:

> Part of the point I was trying to make is that I don't think you're going to be reasonably able to distinguish residential users any more, nor should you - the effect you can have on them is no different to the effect you can have on a business, and I'm not sure you always know which is which.

Pretty simple really. I pay for a business plan.

-- Steve Holdoway BSc(Hons) MIITP
http://www.greengecko.co.nz
Linkedin: http://www.linkedin.com/in/steveholdoway
Skype: sholdowa
And how do your expectations differ from those of a residential user?
OK, I'll top post...

For a start, I have an SLA in place! Waiting 3 days for a fix is unacceptable. For a residential client, it's just a PITA.

However, the question was how to distinguish businesses from residential. If a home-based business is not prepared to pay for improved service, then it shouldn't be treated as a business.

Steve

On Wed, 2014-11-05 at 11:32 +1300, Dean Pemberton wrote:

> And how do your expectations differ from those of a residential user?
I remember this problem from back in the TelstraClear Cable days. Home businesses could get much faster/cheaper connections if they signed up for TCL Cable than for ADSL. They didn't like at all being told they didn't have any SLA when things went down.

Dean
On 5/11/14 11:43, Dean Pemberton wrote:

> I remember this problem from back in the TelstraClear Cable days. Home businesses could get much faster/cheaper connections if they signed up for TCL Cable than for ADSL. They didn't like at all being told they didn't have any SLA when things went down.

And so there were such things as TCL Cable business connections (at least at one point; I had a customer that had one of them). I suspect about the only difference was the SLA.

In many other areas of IT you can pay more for an SLA with faster restore (e.g. hardware repairs), and it's often sold as a separate line item. I don't see a reason why that couldn't happen with Internet service too - one has to prioritise dispatch of service people somehow. Some people may want 4-hour restore for their house; some businesses might not care providing someone comes next week.

In an Internet context, both CIR and contention ratios seem obvious other things an organisation or person might legitimately choose to pay more for, in various unbundled ways. A home user might, for example, choose to live with a CIR that was just enough for, say, one TV stream (or might not, if they have children :-) ), but a business might want a larger CIR. It seems to me this too could be sold as a separate line item, without necessarily having to be tagged "home" or "business". (Although explaining CIR, and just how far the CIR extended, might become... non-trivial.)

IMHO there's definitely room in the middle between "no CIR, capped bandwidth usage, faster burst" ("home") and "full CIR, no bandwidth cap, faster burst" ("business") for other variations. But it may well take a while to "educate the market" on what they're buying. It might also help _not_ to call them "home plans" and "business plans".

At some point, for many users, several of these criteria become "practically unlimited" - i.e. so far beyond what they'll actually use - that more ceases to be a selling point, so the buying distinguisher becomes something other than, say, "faster burst rate". Certainly for me, beyond about 100 Mbps to my house I care about most things other than the peak burst rate (and I only care about peak burst rate beyond 10 Mbps occasionally, even now).

Ewen
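[To illustrate the CIR-plus-contention line items described above, a dimensioning sketch; all figures are invented for illustration, not taken from any plan.]

# Rough capacity model: every subscriber's CIR is reserved, and the burst
# above the CIR is shared at some contention ratio. Numbers are illustrative.

def link_mbps_needed(subs, cir_mbps, burst_mbps, contention_ratio):
    """CIR is guaranteed for everyone; the headroom above it is shared."""
    guaranteed = subs * cir_mbps
    shared_burst = subs * (burst_mbps - cir_mbps) / contention_ratio
    return guaranteed + shared_burst

# 1,000 subscribers, 2.5 Mbps CIR (roughly one TV stream), 100 Mbps burst, 20:1 contention:
print(link_mbps_needed(1000, 2.5, 100, 20))   # 7375.0 Mbps, i.e. most of a 10G handover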
On 5/11/14 11:37 AM, "Steve Holdoway" wrote:

> For a start, I have an SLA in place! Waiting 3 days for a fix is unacceptable. For a residential client, it's just a PITA.

If you have a POTS line with a medical monitor attached, then losing residential POTS is more than a PITA.

That was then, the Internet is now. We are moving towards, or are already at, a point where the Internet is ubiquitous and always available. At least, that is both the general experience and the general expectation.

Different customers might have different needs, but as David says, it's not as simple as Residential/Business anymore.

-- Michael Newbery
Principal Architect, Vodafone New Zealand Limited
On Wed, 2014-11-05 at 14:38 +1300, Newbery, Michael, Vodafone NZ wrote:

> If you have a POTS line with a medical monitor attached, then losing residential POTS is more than a PITA.
>
> That was then, the Internet is now. We are moving towards, or are already at, a point where the Internet is ubiquitous and always available. At least, that is both the general experience and the general expectation.

But it *IS* just as simple as YGWYPF.

Quoting special cases is not really relevant, and I'm sure there are rules in place for potentially life-threatening scenarios like that.

Phone lines, just like the internet, are not a right; they are a paid-for service. If I want to pay extra for a better one, why shouldn't I be able to?

If it's your take that a basic level of internet access is your right, then don't live in Mackenzie Country!

Maybe we should all be issued an IPv6 address on our birth certificate?

-- Steve Holdoway
On 5/11/14 3:15 PM, "Steve Holdoway" wrote:

> But it *IS* just as simple as YGWYPF.
>
> Quoting special cases is not really relevant, and I'm sure there are rules in place for potentially life-threatening scenarios like that.
>
> Phone lines, just like the internet, are not a right; they are a paid-for service. If I want to pay extra for a better one, why shouldn't I be able to?

I'm not disagreeing with that. What I'm saying is that "Internet grade", which was used not so long ago as a euphemism for "no SLA", isn't appropriate any more. People expect that the Internet is as reliable as their phone line. And that reliability is pretty high. Rules that worked for phone lines now have to take the Internet into account. They may not. I don't think we disagree about being able to purchase more reliability.

> If it's your take that a basic level of internet access is your right, then don't live in Mackenzie Country!

Not what I said, nor meant. I'm acutely aware of the cost of providing service outside main areas. Yet the general expectation is that the Internet is available, everywhere, all the time. Reasonable or not, people expect their cat pictures on the tops of mountains as well.

> Maybe we should all be issued an IPv6 address on our birth certificate?

An IP address is a locator, not an identifier. Please don't get me started on that debate. :) :)

-- Michael Newbery
Principal Architect, Vodafone New Zealand Limited
> I'm not disagreeing with that. What I'm saying is that "Internet grade", which was used not so long ago as a euphemism for "no SLA", isn't appropriate any more. People expect that the Internet is as reliable as their phone line. And that reliability is pretty high.

We have just moved our "global" business to internet-grade circuits: 8,000 pax, 155 branch sites, 35 countries. The best we have is an SLA for the last mile. The response from the business when something happens and the ETR is "we don't know" and "our support is best effort" is, let's just say, "interesting".

Cheers,
Bill
Have to say that blocking inbound ports 25 and 53 is highly recommended for all RSPs, plus blocking outbound port 25 to anything other than the SMTP servers you run, if you want a sense of whether customers are using their connections for mass spamming. With an opt-out, of course.

My view is that with 1 Gbps downstream and 200 Mbps upstream plans, it's the upstream that in some ways is more of a concern. If your customers get infected with malware and get used as a botnet, that can easily overwhelm your international capacity if you are a smaller player, which is much more of a concern in the UFB world.

The next worry is dimensioning of your customers and the required handover and connection upstream into your core and out to the interwebs. With a 10 Gbps handover, you should probably also have at least a 10 Gbps connection, or bonded 10 Gbps or 100 Gbps, into your core to offload to local transparent proxies (if you have them), local CDNs (if you have them) and peering, since you would typically run more than one LFC into the same BNG. And then the questions come up on how many subs you can or should realistically run on a single BNG. The days of 50k+ subs on a single BNG, if they all have 1 Gbps, aren't going to fly. So then you start needing more gear to support your customers.

Then the whole conversation on whether unlimited plans are commercially viable, and the future planning for expansion, come into play. I know there is plenty of work going on in Spark around this, but the 1 Gbps plan does change things a lot in that regard.
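[A sketch of what the "default block with an opt-out" above might look like as per-subscriber policy logic. The port choices come from the message; the relay addresses and subscriber IDs are placeholders, and a real deployment would express this as BNG filters or ACLs rather than code.]

# Default-block with opt-out: inbound 25/53 toward the subscriber is dropped
# unless they have opted out; outbound 25 is only allowed to the RSP's own
# relays (mail submission should be using 587/465 anyway).

RSP_SMTP_RELAYS = {"192.0.2.25", "192.0.2.26"}   # documentation addresses, placeholders
DEFAULT_INBOUND_BLOCK = {25, 53}
OPTED_OUT = {"sub-0099"}                         # subscribers who asked for an unfiltered connection

def permit(subscriber_id, direction, dst_ip, dst_port):
    if subscriber_id in OPTED_OUT:
        return True                              # opt-out restores the unfiltered service
    if direction == "in" and dst_port in DEFAULT_INBOUND_BLOCK:
        return False                             # no unsolicited SMTP/DNS toward the subscriber
    if direction == "out" and dst_port == 25:
        return dst_ip in RSP_SMTP_RELAYS         # direct-to-MX spamming gets stopped here
    return True

print(permit("sub-0042", "in", "203.0.113.10", 53))   # False - blocked by default
print(permit("sub-0099", "in", "203.0.113.10", 53))   # True  - subscriber opted out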
ISPs will be the same. Try and restrict people and you'll just end up playing whack-a-mole
I agree that trying to restrict creative people from having free access will result in whack-a-mole, but common sense is needed when considering the damage that can be done with basic reflection attacks.
Should you default block the deafult SNMP port to a residential user from the Internet? Can the CPE vendor be trusted to not leave a default "public" community with the Internet facing interface permitted? Can the user be trusted to secure their own network devices to prevent misuse?
Which of these things is the easiest to accomplish and provides no reduction in experience for 99.95% of "normal" residential Internet users? Which of them has the potential to melt down the Internet if a CPE vendor ships 500,000+ units of equipment and leaves a door open?
Macca
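[One way to answer McDonald's CPE question empirically is to probe a router's WAN address with an SNMP GET using the default "public" community. A minimal standard-library sketch, not a definitive tool; the target address is a placeholder and the packet is a hand-encoded SNMPv1 GetRequest for sysDescr.0. Only probe equipment you are authorised to test.]

import socket

# Hand-encoded SNMPv1 GetRequest, community "public", OID 1.3.6.1.2.1.1.1.0 (sysDescr.0).
SNMP_GET_SYSDESCR = bytes.fromhex(
    "3026"                      # SEQUENCE, 38 bytes of message
    "020100"                    # version = 0 (SNMPv1)
    "0406" + b"public".hex() +  # community = "public"
    "a019"                      # GetRequest PDU, 25 bytes
    "020101020100020100"        # request-id = 1, error-status = 0, error-index = 0
    "300e300c"                  # varbind list / varbind
    "06082b06010201010100"      # OID 1.3.6.1.2.1.1.1.0 (sysDescr.0)
    "0500"                      # value = NULL
)

def responds_to_public(ip, timeout=2.0):
    """Return True if the host answers SNMP on 161/udp with community 'public'."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(SNMP_GET_SYSDESCR, (ip, 161))
        try:
            data, _ = s.recvfrom(4096)
            return len(data) > 0
        except socket.timeout:
            return False

# Placeholder address only - point this at CPE you are authorised to test.
print(responds_to_public("198.51.100.10"))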
On Tue, Nov 4, 2014 at 1:40 PM, Dean Pemberton
Which is the last thing I think worth mentioning; that the internet will route around damage, whether we like it or not. We can filter things, we can try to block stuff, but unless you cut off the connectivity completely, devices and programs will still find ways to talk directly to each other.
Good point well made. It's something that IT departments are having to live with and ISPs will be no different. If you don't give employees the quality of email or file storage that they have come to expect, they'll just use Gmail and install Dropbox, and BYO-IT-Department is born. ISPs will be the same. Try and restrict people and you'll just end up playing whack-a-mole. _______________________________________________ NZNOG mailing list NZNOG(a)list.waikato.ac.nz http://list.waikato.ac.nz/mailman/listinfo/nznog
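[A rough way to sanity-check the dimensioning point Peter raises above (handover size versus plan mix). This is only an illustrative sketch: the plan mix, subscriber counts and contention ratio are made-up assumptions, not figures from any RSP.]

# Illustrative handover/BNG dimensioning check. All numbers below
# (plan mix, subscriber counts, contention ratio, handover size) are
# hypothetical examples, not real Chorus/RSP figures.

PLANS_GBPS = {          # downstream plan speed (Gbps) -> number of subscribers (assumed)
    1.0: 500,           # 1 Gbps plan
    0.2: 5000,          # 200 Mbps plan
    0.1: 20000,         # 100 Mbps plan
}

HANDOVER_GBPS = 10      # single 10G handover (assumed)
CONTENTION = 50         # assumed 50:1 contention ratio at peak

def peak_demand_gbps(plans, contention):
    """Very crude peak-demand estimate: sum of plan speeds divided by contention."""
    return sum(speed * subs for speed, subs in plans.items()) / contention

demand = peak_demand_gbps(PLANS_GBPS, CONTENTION)
print(f"Estimated peak demand: {demand:.1f} Gbps against a {HANDOVER_GBPS} Gbps handover")
if demand > HANDOVER_GBPS:
    print("Handover undersized for this plan mix at the assumed contention ratio.")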
On Wed, 2014-11-05 at 14:14 +1300, Peter Lambrechtsen wrote:
Have to say that blocking inbound port 25 and 53 is highly recommended for all RSPs. Plus blocking outbound port 25 to only SMTP servers you run if you wanted a sense of if customers are using their connections for mass spamming. With an opt out of course. Given that mail servers also listen on 587 (thanks billg) and 465, isn't blocking just 25/tcp a bit pointless?
Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa
On 03/11/14 22:26, McDonald Richards wrote:
The days of the "any to any, open Internet" are slowly coming to an end. One small flaw in one mass produced and mass distributed piece of software (including software that runs on CPE) can easily snowball into hundreds of gigabits of traffic at the "core" of the Internet (I hate that term but I'm too tired to come up with anything else right now).
We had this same conversation when people started moving from dial-up to DSL. "OMG a single user on 1.5 Mbit/s can saturate our entire server farm bandwidth" The world didn't end. The same rules apply today that applied back then.
Sure - we had the conversation then, when 1.5Mbit of saturation didn't also
exhaust firewall state tables, CPU and memory resources of everything in
the service path.
What we do have now, that we didn't have then, are bot-nets for hire and
parties who intentionally exploit, infect, test and document these hosts
for hire as weapons while the end users in a lot of cases have no idea that
it's happening outside of a slower Internet connection.
On Mon, Nov 3, 2014 at 5:53 PM, Jeremy Visser
There are networks out there that cope with these issues. Develop means to monitor and detect DDoS and police users in near real time at the access port. Think about what happens when someone tries to launch a DDoS from a cloud provider. The related aspect to this is we can, if we choose, provide very high amounts of bandwidth with very low oversubscription ratios. Network equipment is now a commodity. Provided you have the fiber you can light vast amounts of bandwidth for surprisingly low cost, not just in the access but also the long haul. Sent from my mobile
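[A minimal sketch of the "monitor, detect and police in near real time" idea Kris describes, assuming you already have per-subscriber upstream byte counters (RADIUS interim accounting, per-subscriber SNMP counters, or flow export). The counter source, the threshold and the policing hook are placeholders, not any particular vendor's API.]

import time

# Hypothetical hooks - replace with your own counter source and your own
# policing/disconnect action (CoA, session teardown, ACL push, etc.).
def read_upstream_octets():
    """Return {subscriber_id: cumulative upstream octet count}."""
    raise NotImplementedError("wire this to RADIUS accounting, SNMP or flow export")

def police_subscriber(sub_id):
    print(f"would police or disconnect subscriber {sub_id}")

THRESHOLD_MBPS = 800      # assumed: sustained upstream near line rate on a 1Gbps plan
INTERVAL = 30             # seconds between samples

def watch():
    """Flag subscribers whose sustained upstream rate exceeds the threshold."""
    last = read_upstream_octets()
    while True:
        time.sleep(INTERVAL)
        now = read_upstream_octets()
        for sub, octets in now.items():
            delta = octets - last.get(sub, octets)
            mbps = delta * 8 / INTERVAL / 1e6
            if mbps > THRESHOLD_MBPS:
                police_subscriber(sub)
        last = now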
On Nov 3, 2014, at 6:23 PM, McDonald Richards
I don't believe this to be true; you can't just police the user's data at the access port. The data is already consuming the full capacity of the connection before you can police it, so your policing will do nothing. You'd need Chorus to do the policing before it reached you, which is not likely as it's a layer 2 service, so you'd have to police at the egress of each CPE, if you were in control of it.
I understand (and we do it) that you can scan your own network, detect where open relays or open DNS servers are, and firewall on your ingress from transit and peering upstreams to ensure the downstream clients aren't the source of the attack, but it simply takes one nasty worm someone wasn't expecting and you haven't blocked and, bam, your 10gig is full. The only fix is to disconnect the affected users; you cannot police them.
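[For the "scan your own network and detect where open DNS servers are" step above, a minimal sketch using only the standard library: it hand-builds a recursive DNS query and flags addresses that answer from the outside. The prefix shown is documentation space; substitute your own customer ranges and only probe space you are responsible for. Treating any answer as a hit is crude; a stricter check would parse the RA flag and RCODE in the reply.]

import socket
import struct
import ipaddress

def dns_query(name="example.com"):
    """Build a minimal DNS query (A record, recursion desired)."""
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
    return header + qname + b"\x00" + struct.pack("!HH", 1, 1)

def is_open_resolver(ip, timeout=1.0):
    """Return True if ip answers a recursive DNS query on 53/udp."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(dns_query(), (str(ip), 53))
        try:
            s.recvfrom(512)
            return True
        except socket.timeout:
            return False

# Documentation prefix as a placeholder - substitute your own customer ranges.
for ip in ipaddress.ip_network("192.0.2.0/29").hosts():
    if is_open_resolver(ip):
        print(f"{ip} answers recursive DNS from the outside")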
While there is cheap tin these days, such as MikroTiks or second-hand Ciscos and Junipers, for small entrants with sub-500 users, to get scale you need big devices that can support 100,000+ subscribers, like what we use, the Alcatel-Lucent 7750. Your cost per 10G port is around $15k USD, and that's after you've already spent around $100k on the chassis. While these prices may seem like nothing to the likes of Telecom or Vodafone, for the majority of those on the list that operate an ISP, adding an extra 10G handover for a UFB location at $10-15k plus backhaul is not really cheap, not when you're competing with mass-market products where people are price conscious. While we don't compete for such services, some of our wholesalers do; at the end of the day we have to point out the quantity vs quality trade-off for them to understand.
The problem still lies: if you have a 10gig handover and you have 10 infected downstream customers with 1Gbps access circuits pumping out 1Gbps of data each, they can easily consume the size of the handover and affect everyone else on that handover until you disconnect their sessions. With the theoretical maximum capacity being 80Gbps per region (say Telecom can have a maximum of 80Gbps to service the whole of Auckland), it would only take 80 of their hundreds of thousands of customers to consume their entire Auckland region UFB handover. With the amount of data Telecom would be passing through their routers while consuming 80Gbps, it would take some time for someone to pick those 80 infected customers out of 100,000 customers and then disconnect each one, all while the remaining customers have packet loss and bad connectivity.
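[The arithmetic in the paragraph above is easy to sanity-check; a few lines using the 80Gbps regional LAG ceiling and the access speeds discussed in this thread:]

# How many compromised subscribers does it take to fill a handover?
handover_gbps = 80          # theoretical 8 x 10G LAG per region (from the thread)
per_sub_upstream_gbps = 1   # a 1 Gbps access circuit running flat out

subs_to_saturate = handover_gbps / per_sub_upstream_gbps
print(f"{subs_to_saturate:.0f} subscribers at {per_sub_upstream_gbps} Gbps fill {handover_gbps} Gbps")

# For a single 10G handover and 200 Mbps upstream plans:
print(f"{10 / 0.2:.0f} x 200 Mbps uploads fill a single 10G handover")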
I guess the fix is to have 100Gbps handovers, but even then these would likely go into aggregation switches and then back to the BNG/BRAS; you'd have to have multiple 100Gbps handovers directly into your BNG/BRAS.
Kind regards,
Barry Murphy / Chief Operating Officer
From: Kris Price
You also need to consider that even if you only "contribute" 1-2 gigs of attack traffic that doesn't interrupt your normal operations, somebody is transiting it and somebody on the other end is receiving it in multiples.
Macca
On Tue, Nov 4, 2014 at 5:13 PM, Barry Murphy wrote:
My honest thoughts are that these things will be fixed when they're an
issue.
Right now UFB gigabit connections still cost hundreds at retail, and it's going to stay that way for a while.
As they get more affordable things will change to deal with the issues you point out; for example LFCs might offer a filtering system for ISPs which filters out the bad packets before they get to backbones.
If gigabit becomes widespread I imagine handovers will be upgraded to 40 or even 100G.
I know it seems wrong to say that we can deal with it later, but I think it's the right choice in this situation. Just my 2 cents.
~ Jed
On 4 Nov 2014 19:13, "Barry Murphy"
Barry, have I summed up your points correctly below?
1) You can't do nothing, because as access link speeds increase at a greater rate than aggregation and core link speeds there is an unacceptable risk that single digit numbers of users will cause significant link saturation.
2) You can't filter at the edge because in most cases you don't own
the edge. You don't own the CPE in any meaningful way, nor do you own
the access network. By the time the traffic arrives at your handover
it's already causing you problems.
3) There needs to be some way to remotely disconnect customers at L2,
in the access network, who are causing issues.
Macca, your points are:
1) The environment is fundamentally different than it was in the
past. Single digit numbers of home users have not historically had
the ability to cause substantial issues in table resource, CPU and
memory across the access, aggregation and core networks.
2) The only way to mitigate this is to limit what the end users are
able to do from a service point of view "unfiltered and unfettered
provision of default residential services is coming to an end"
Have I missed anything?
On Tue, Nov 4, 2014 at 7:13 PM, Barry Murphy
That captures the key points. I'm not about restricting freedoms, just
protecting from derp.
Macca
On Tue, Nov 4, 2014 at 12:16 PM, Dean Pemberton
---- On Tue, 04 Nov 2014 17:17:17 +1300 Kris Price wrote ----
There are networks out there that cope with these issues. Develop means to monitor and detect DDoS and police users in near real time at the access port. Think about what happens when someone tries to launch a DDoS from a cloud provider.
The related aspect to this is we can, if we choose provide very high amounts of bandwidth with very low over sub ratios. Network equipment is now a commodity. Provided you have the fiber you can light vast amounts of bandwidth for surprisingly low cost, not just in the access but also the long haul.
100GE interfaces on routers cost very serious money, at least by New Zealand standards. Beginning a sentence "Provided you have the fiber" glosses over the important question of who has the fibre. It isn't most ISPs. The wider discussion can indeed be seen as just the same things as in the past with bigger numbers. The fact that available bandwidths continue to increase does not reduce the need for attack mitigation by network operators. - Donald Neal
Inline below. Sent from my mobile
On Nov 4, 2014, at 11:59 AM, neals5
wrote: ---- On Tue, 04 Nov 2014 17:17:17 +1300 Kris Price wrote ----
There are networks out there that cope with these issues. Develop means to monitor and detect DDoS and police users in near real time at the access port. Think about what happens when someone tries to launch a DDoS from a cloud provider.
The related aspect to this is we can, if we choose provide very high amounts of bandwidth with very low over sub ratios. Network equipment is now a commodity. Provided you have the fiber you can light vast amounts of bandwidth for surprisingly low cost, not just in the access but also the long haul.
100GE interfaces on routers cost very serious money, at least by New Zealand standards. Beginning a sentence "Provided you have the fiber" glosses over the important question of who has the fibre. It isn't most ISP's.
Do we still need the big expensive Vendor [ACJ] ports if we think about how we build access networks differently? Out of curiosity how much does a 10G or 100G handover cost from Chorus? Would ISPs be able to adapt to an access provider that did uncontended handover - no fancy QoS, you just get as much bandwidth as you want to use and it's up to you to handle it in your network? Glossing over the structure of the New Zealand telco sector: Just because something is artificially constrained today (bandwidth) doesn't mean that it must stay that way, and doesn't mean that we shouldn't talk about that problem knowing there are solutions to it out there. Could Chorus light that fiber and make that bandwidth available if they were shown it was viable to do so and was what their customers wanted? What does a 10G circuit Auckland to Wellington cost these days? Today if I had a ring of fiber pairs it would be possible to light 1 Tbps for probably under 2 million. That's lit, ready to plug in and use capacity, provided the ISPs would use it. So really, but for the "who owns the fiber is artificially constraining bandwidth" issue we can wipe away the bandwidth concerns completely in NZ all the way to Southern Cross if we were organized to do so.
The wider discussion can indeed be seen as just the same things as in the past with bigger numbers. The fact that available bandwidths continue to increase does not reduce the need for attack mitigation by network operators.
Agreed. Hence detect and mitigate - shut down your bad process, your bad VM, your bad naughty grandma with her 1Gbps set top box first. How you handle the security aspects beyond that in a better way is less of a network issue and belongs higher up the stack. The network protects the network when that fails to work.
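[Purely to make the back-of-the-envelope above concrete: a tiny sketch that turns an assumed build budget and wave count into a per-Gbps figure. Both inputs are hypothetical placeholders taken from the claim above, not quotes from any vendor or from Chorus.]

# Hypothetical figures only - plug in real quotes to make this meaningful.
build_cost_nzd = 2_000_000      # assumed total to light the ring (from the claim above)
waves = 10                      # 10 x 100G waves = 1 Tbps (assumed)
gbps_total = waves * 100

print(f"{gbps_total} Gbps for ${build_cost_nzd:,} -> ${build_cost_nzd / gbps_total:,.0f} per Gbps lit")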
Hi Kris, Chorus doesn't currently offer 100G handovers, but if you are loaded with cash I am sure an account manager will be willing to talk with you. Your proposal for uncontended access services sounds like DFAS (dark fibre access service) which is available. It does cost more than UFB, however. Jonathon -----Original Message----- From: nznog-bounces(a)list.waikato.ac.nz [mailto:nznog-bounces(a)list.waikato.ac.nz] On Behalf Of Kris Price Sent: Wednesday, 5 November 2014 9:56 a.m. To: neals5 Cc: nznog Subject: Re: [nznog] UFB 1 gig plans for retail and impact they have Inline below. Sent from my mobile
Hi Kris, I will take the bait and respond with some comments and questions. :-) I am not sure that I agree that the bandwidth constraints within NZ are artificial. The reality is that bandwidth costs money - fiber, POPs, equipment, staff etc. The costs also don't scale in a linear fashion: the next 10G sold may need another 100G lit up somewhere, and the 100G I have seen is expensive. As the costs come down some prices will too; just remember the costs of fiber in the ground have been incurred already, so that part of the equation won't be decreasing in a hurry. Prices now compared to 1, 2 or even 3 years ago are much cheaper. I am interested to hear more details on how you would light up Auckland - Wellington with a 1Tbps ring for 2 million. If your indicated costs are achievable you may have a business opportunity, or someone on this list may steal your idea :-p Ivan On 5/Nov/2014 9:55 a.m., Kris Price wrote:
participants (19)
- Barry Murphy
- Ben
- Bill Walker
- David Robb
- Dean Pemberton
- Ewen McNeill
- Ivan Walker
- Jamie Baddeley
- Jeremy Visser
- Jonathon Exley
- Kris Price
- McDonald Richards
- mcfbbqroast .
- Michael Fincham
- neals5
- Newbery, Michael, Vodafone NZ
- Peter Lambrechtsen
- Steve Holdoway
- Tim Hoffman