On behalf of Thomas Knoll:
---------- Forwarded message ----------
Date: Wed, 25 Mar 2009 02:02:42 +0100 (MET)
From: Thomas M. Knoll
Speaking with my exchange operator hat on (nzix.net): None of the NZIX exchanges support QoS at present.

The standard connection to the exchange is an access port (untagged) which means we have no way of accepting 802.1p information from our customers. In the interests of hardening the exchange fabric we use a number of features (storm control, spanning tree root guard, spanning tree bpdu guard, port security, etc). Not all of these features are supported on trunk interfaces[1].

The exchanges are made up of a mixture of Cisco hardware, mostly 29xx and 35xx switches, but more recently some 3750 and ME-3400 switches. Most of these switches have only very limited QoS capabilities (2 queues?).

Our approach is to over engineer the exchange fabric so congestion only really occurs at the customer access port, and that can be addressed by getting a faster port :)

Another interesting difference between the NZIX and other exchanges is we operate route servers which most of the exchange participants use so there are very few bi-lateral peering sessions. We use Quagga on linux boxes for the route servers. Would we require any special options to be configured to allow propagation of the CoS BGP attributes?

A couple more general questions:

Why is it necessary to use a layer 2 QoS marking at all? I assume most of the traffic crossing exchanges is IPv4 with a handful of IPv6 (and maybe MPLS?), all of which have their own QoS marking schemes (TOS/DiffServ/EXP).

Are there any special expectations around how QoS marked traffic will be handled by an exchange? e.g. minimum/maximum bandwidth, queuing algorithms (wrr, red), strict queuing etc, number of queues.

[1] This may not be true with recent hardware/IOS revisions, but was certainly true in the past.

Thanks,
Dylan

On Thu, 2009-03-26 at 03:16 +1300, Brian E Carpenter wrote:
On behalf of Thomas Knoll:
---------- Forwarded message ---------- Date: Wed, 25 Mar 2009 02:02:42 +0100 (MET) From: Thomas M. Knoll
To: nznog(a)list.waikato.ac.nz Subject: QoS-IXPs Dear Sirs,
excuse me please for bothering you with the following information, but I thought it could be of use for you as well.
Background: I am currently proposing a class of service signalling extension to BGP in order to enhance the current best effort AS interconnection into a simple class-based interconnection. This primitive traffic separation scheme also encompasses lower layer QoS schemes in a consistent manner, which makes it attractive for QoS interested peering partners to meet on peering platforms with Ethernet QoS support.
Therefore, I am trying to find out about Internet exchange points, which would at least transparently (untouched) transfer user priority bits across their platform and might even have 802.1p enqueueing and scheduling enabled on their switches.
Having said that, I will point you to the current QoS capable IXP list on the web. http://www.bgp-qos.org/qos-ixp/
Furthermore, if you are interested to be recognized by QoS interested peering partners as a potential peering platform, I would cordially invite you to register there as well. http://www.bgp-qos.org/qos-ixp/add.php
Thank you in advance, Thomas Knoll
Following Dylan's comments: Reading the request, I thought that the reference to 802.1p was as an extra thing,
might even have 802.1p enqueueing and scheduling enabled on their switches
with QoS by implication being something else, most probably DSCP. However, maybe the intent *is* to enable 802.1p at the IX.

Our (TCL's) view is as follows:
1. We don't care what the incoming 802.1p bits are. By the time they reach us, it's already too late to do anything.
2. The IX is a layer three interconnect. So, we don't care about the enclosing Ethernet frame, only the crunchy goodness of the IP packet within.
3. We do not know, nor do we care to know, about the nature of the IX network beyond the NNI itself. What happens within is entirely up to the IX operator.

Because of that:
1. We completely ignore the 802.1p bits on incoming frames.
2. We set the 802.1p bits on all outgoing frames to zero.

Where we do QoS, we base it entirely on the Diffserv bits in the IP header. We would assume that the IX operator, if it cares about 802.1p, will likewise set the bits based on Diffserv.

I suppose we might contemplate setting the outgoing bits for the benefit of the IX, but that would of course require some agreement on the mapping.

--
Michael Newbery
IP Architect
TelstraClear Limited
Our (TCL's) view is as follows: 1. We don't care what the incoming 802.1p bits are. By the time they reach us, it's already too late to do anything. 2. The IX is a layer three interconnect. So, we don't care about the enclosing Ethernet frame, only the crunchy goodness of the IP packet within. 3. We do not know, nor do we care to know, about the nature of the IX network beyond the NNI itself. What happens within is entirely up to the IX operator.
Because of that: 1. We completely ignore the 802.1p bits on incoming frames. 2. We set the 802.1p bits on all outgoing frames to zero.
Where we do QoS, we base it entirely on the Diffserv bits in the IP header. We would assume that the IX operator, if it cares about 802.1p, will likewise set the bits based on Diffserv.
I suppose we might contemplate setting the outgoing bits for the benefit of the IX, but that would of course require some agreement on the mapping.
-- Michael Newbery IP Architect TelstraClear Limited
Oh Michael, I'm really sorry about this, but someone has to say it :-) - TCL doesn't peer at the IXes, so it's all rather moot what TCL's view would be! I'm sure all the IX peers would welcome you back with open arms, QoS or not, however ;-)
Dear Mr. Hall and Mr. Newbery,

thank you for your replies and the current IX setup description. If I understand you right, the big majority of the customer base is untagged and some (e.g. TCL) is tagged.
--- 1. We completely ignore the 802.1p bits on incoming frames. 2. We set the 802.1p bits on all outgoing frames to zero. ---
For the untagged community, 802.1p is of course not available and the mentioned features are in place. The switch fabric is over engineered and so are the customer ports. Within this constellation, there is no chance and no need for 802.1p. Full stop. For the remaining (tagged) parties, the over engineering rule applies as well and everything works just fine.

Coming to the possible use cases of 802.1p at an exchange.

1) Prioritized forwarding in the case of congestion
-> Given the over engineering, this will not happen in the switching fabric.
-> Several sources exiting the switch to one customer port can lead to congestion in busy hours, which will require a port upgrade. With 802.1p, this upgrade could be delayed by some months because of the prioritized forwarding. This is where the number of queues is of interest.

2) QoS-enabled peering with layer 3 DSCPs
If a peer C distinguishes e.g. 4 DSCPs and peers with A (running 12 DSCPs) and B (running 3 DSCPs), there needs to be a mapping set up between A and C and between B and C on how to map the DSCPs arriving at C's port. However, neither the IP destination nor the IP source address gives C the clue which peer the arriving packet came from. There are two options: a) match the sending MAC, or b) agree on the marking in the encapsulating layer, hence the 802.1p.

All that is entirely independent from the CoS concept I am proposing at the IETF. Dylan, you are right that I am sending class of service marking and mapping (DSCP<->EXP<->802.1p<->VC) information within BGP attributes, which in the common route server case will not be exchanged mutually between peers, but rather between the route server and all of its clients. Those marking attributes are of transitive type with IANA number 0x04 and will be relayed by the route server. As of quagga 0.99.10, no extra action was required to get the few attribute bytes relayed to the clients.

Again, thank you all for reading and for the posted replies, which I hope to have answered in this lengthy post.

Kind regards,
Thomas

On Thu, 26 Mar 2009, Dylan Hall wrote:
Speaking with my exchange operator hat on (nzix.net):
None of the NZIX exchanges support QoS at present.
The standard connection to the exchange is an access port (untagged) which means we have no way of accepting 802.1p information from our customers.
In the interests of hardening the exchange fabric we use a number of features (storm control, spanning tree root guard, spanning tree bpdu guard, port security, etc). Not all of these features are supported on trunk interfaces[1].
The exchanges are made up of a mixture of Cisco hardware, mostly 29xx and 35xx switches, but more recently some 3750 and ME-3400 switches. Most of these switches have only very limited QoS capabilities (2 queues?).
Our approach is to over engineer the exchange fabric so congestion only really occurs at the customer access port, and that can be addressed by getting a faster port :)
Another interesting difference between the NZIX and other exchanges is we operate route servers which most of the exchange participants use so there are very few bi-lateral peering sessions. We use Quagga on linux boxes for the route servers. Would we require any special options to be configured to allow propagation of the CoS BGP attributes?
A couple more general questions:
Why is it necessary to use a layer 2 QoS marking at all? I assume most of the traffic crossing exchanges is IPv4 with a handful of IPv6 (and maybe MPLS?), all of which have their own QoS marking schemes (TOS/DiffServ/EXP).
Are there any special expectations around how QoS marked traffic will be handled by an exchange? e.g. minimum/maximum bandwidth, queuing algorithms (wrr, red), strict queuing etc, number of queues.
[1] This may not be true with recent hardware/IOS revisions, but was certainly true in the past.
Thanks,
Dylan
On Thu, 2009-03-26 at 03:16 +1300, Brian E Carpenter wrote:
On behalf of Thomas Knoll:
---------- Forwarded message ---------- Date: Wed, 25 Mar 2009 02:02:42 +0100 (MET) From: Thomas M. Knoll
To: nznog(a)list.waikato.ac.nz Subject: QoS-IXPs Dear Sirs,
excuse me please for bothering you with the following information, but I thought it could be of use for you as well.
Background: I am currently proposing a class of service signalling extension to BGP in order to enhance the current best effort AS interconnection into a simple class-based interconnection. This primitive traffic separation scheme also encompasses lower layer QoS schemes in a consistent manner, which makes it attractive for QoS interested peering partners to meet on peering platforms with Ethernet QoS support.
Therefore, I am trying to find out about Internet exchange points, which would at least transparently (untouched) transfer user priority bits across their platform and might even have 802.1p enqueueing and scheduling enabled on their switches.
Having said that, I will point you to the current QoS capable IXP list on the web. http://www.bgp-qos.org/qos-ixp/
Furthermore, if you are interested to be recognized by QoS interested peering partners as a potential peering platform, I would cordially invite you to register there as well. http://www.bgp-qos.org/qos-ixp/add.php
Thank you in advance, Thomas Knoll
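As an aside on the mechanics Thomas describes above: the attributes he mentions are optional transitive BGP path attributes, which is why a route server such as Quagga can relay them to its clients without needing to understand them. The sketch below (Python) shows only the generic RFC 4271 wire framing for such an attribute; the type code 0x04 is simply the number quoted in this thread, and the two-byte payload is an invented placeholder, not the encoding from the actual draft.

import struct

# Generic RFC 4271 path attribute framing: flags, type, length, value.
# 0xC0 = optional + transitive, which is what lets a route server relay an
# attribute it does not itself understand.
ATTR_FLAG_OPTIONAL = 0x80
ATTR_FLAG_TRANSITIVE = 0x40

def encode_path_attribute(type_code: int, value: bytes) -> bytes:
    """Encode a short (< 256 byte) optional transitive path attribute."""
    flags = ATTR_FLAG_OPTIONAL | ATTR_FLAG_TRANSITIVE
    return struct.pack("!BBB", flags, type_code, len(value)) + value

# Hypothetical two-byte payload: "local DSCP 46 (EF) maps to 802.1p priority 5".
# The real marking/mapping encoding lives in the draft, not here.
payload = struct.pack("!BB", 46, 5)
attribute = encode_path_attribute(0x04, payload)
print(attribute.hex())  # c004022e05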
On 27/3/09 10:06 AM, "Thomas M. Knoll" wrote:
Dear Mr. Hall and Mr. Newbery,
thank you for your replies and the current IX setup description. If I understand you right, the big majority of the customer base is untagged and some (e.g. TCL) is tagged.
Currently, all is untagged. The TCL position is just that---were QoS to be enabled, then that would be our proposal.
Coming to the possible use cases of 802.1p at an exchange.
2) QoS-enabled peering with layer 3 DSCPs If a peer C distinguishes e.g. 4 DSCPs and peers with A (running 12 DSCPs) and B (running 3 DSCPs), there needs to be a mapping set up between A and C and between B and C on how to map the DSCPs arriving at C's port. However, neither the IP destination nor the IP source address gives C the clue which peer the arriving packet came from. There are two options: a) match the sending MAC, or b) agree on the marking in the encapsulating layer, hence the 802.1p.
The peers REALLY need to work out what they are going to do with each other's QoS first. If A peers with B, then they must bilaterally or globally agree how DSCPs are handled---does AF31 even mean the same thing to each of them? Then they need to agree what DSCPs will be sent, and how to handle out-of-spec packets: options include dropping; remarking; shutting down the peering session; treating them as if they were in another CoS; etc. If there were multiple bilateral policies with different peers in place, then having them all fetch up on the same port does not seem sensible to me. At the very least you would separate them into different VLANs.
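For what it's worth, that kind of bilateral agreement reduces to a small per-peer policy table. The sketch below (Python) shows one way to express "which DSCPs a peer may send, and what happens to anything out of spec"; the peer names, code points and actions are invented for illustration, not anyone's actual policy.

# Sketch of a per-peer DSCP policy: which code points a peer may send, and
# what to do with anything out of spec.  Peer names, DSCP values and actions
# are purely illustrative.
POLICIES = {
    "peer-A": {"accept": {0, 26, 46}, "out_of_spec": ("remark", 0)},
    "peer-B": {"accept": {0, 46},     "out_of_spec": ("drop", None)},
}

def apply_policy(peer: str, dscp: int):
    """Return the DSCP to forward with, or None if the packet is dropped."""
    policy = POLICIES.get(peer, {"accept": set(), "out_of_spec": ("remark", 0)})
    if dscp in policy["accept"]:
        return dscp
    action, remark_to = policy["out_of_spec"]
    return None if action == "drop" else remark_to

print(apply_policy("peer-A", 34))  # out of spec for A: remarked to 0
print(apply_policy("peer-B", 34))  # out of spec for B: None, i.e. dropped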
All that is entirely independent from the CoS concept I am proposing at the IETF. Dylan, you are right that I am sending class of service marking and mapping (DSCP<->EXP<->802.1p<->VC) information within BGP attributes, which in the common route server case will not be exchanged mutually between peers, but rather between the route server and all of its clients. Those marking attributes are of transitive type with IANA number 0x04 and will be relayed by the route server. As of quagga 0.99.10, no extra action was required to get the few attribute bytes relayed to the clients.
Therefore, I am trying to find out about Internet exchange points, which would at least transparently (untouched) transfer user priority bits across their platform and might even have 802.1p enqueueing and scheduling enabled on their switches.
My view is that the packet CoS marking should (must) be preserved end to end, but that what happens on each link is up to the parties running each link. In particular, there is no guarantee that 802.1p bits will transit untouched. For example, some equipment reserves the highest priority to itself for management/maintenance, so it may be considered unacceptable to have user traffic with 0x7 as a priority.

At each layer 2 link, the appropriate 802.1p/EXP/... bits will be set based on the DSCP bits/VC/... etc. If you do need to relay frames across the network, then you encapsulate them in L2TP or some such.

--
Michael Newbery
IP Architect
TelstraClear Limited
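A common, though far from universal, convention for that per-link remarking is to derive the 802.1p priority from the top three bits of the DSCP (the class selector), while keeping user traffic out of the value the equipment reserves for itself. A minimal sketch (Python), assuming exactly that convention and nothing about any particular operator's mapping:

# Derive an 802.1p priority (PCP, 0-7) from a DSCP (0-63) by taking the
# class-selector bits, and keep user traffic below the top value that some
# equipment reserves for its own management traffic.  The cap is an
# assumption for illustration, not any operator's stated policy.
RESERVED_PCP = 7

def dscp_to_pcp(dscp: int) -> int:
    pcp = (dscp >> 3) & 0x7            # class selector = top 3 bits of the DSCP
    return min(pcp, RESERVED_PCP - 1)  # never emit the reserved priority

for dscp in (0, 10, 26, 46, 56):
    print(dscp, "->", dscp_to_pcp(dscp))
# 0 -> 0, 10 -> 1, 26 -> 3, 46 -> 5, 56 -> 6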
On Fri, Mar 27, 2009 at 10:06 AM, Thomas M. Knoll wrote:

The switch fabric is over engineered and so are the customer ports. Within this constellation, there is no chance and no need for 802.1p. Full stop.

Is there not the consideration of failure or unanticipated traffic event scenarios which mean that prioritisation of certain traffic would be desirable? In the abnormal event of congestion, would not the peers like traffic marked as high priority to be preserved over the L2 matrix?
--
r
I agree that there are always situations where congestion can occur, some more theoretical, and some more real world (Olympics, Elections, Hillary Funeral, etc...).

The problem I perceive is an arms race, the sort of thing that the network neutrality people are all scared of. What happens during one of these events if a party A observes that they're getting congestion delivering their content to end users? They're not in a position to magic more bandwidth out of thin air, at least not at short notice, so instead they up the priority of their traffic. Like magic the problem is solved, the exchange fabric prioritises their traffic ahead of everyone else's. The problem occurs when party B, sick of their end users complaining, also decides to up their priority. Party A reacts in kind and ups theirs further. Meanwhile the other participants in the exchange, getting annoyed with the deteriorating performance, start playing the same games. Before you know it, any potential benefit of QoS has been lost (and a certain amount of credibility).

This doesn't even touch on the sensitive topics of "Mr Exchange operator, I'm your biggest customer so I expect you to give me higher priority", or "I'm the exchange operator and I'd like to promote my own product, so let me just tweak that setting".

The above is somewhat contrived, but I believe the point is sound. Making QoS work between two parties is easy: they sit down around a table and hammer out an agreement. When you start talking about a neutral exchange with many participants, how do you get everyone to agree? Does the exchange operator define a set of rules, maybe an AUP, and impose it on all participants? Do you solicit feedback from the users? How do you deal with people breaching those rules?

From a technical perspective, how do you implement it? How do you ensure fairness? What resources do I need to provide to my customers to ensure they can take advantage of it? (Let's be blunt here, QoS on many platforms is far from trivial.) What if some participants want to opt out of using QoS, how do you ensure fairness for them?

I don't see any of the above preventing the exchange passing CoS attributes around in BGP, but honouring them is another matter.

Dylan

On Fri, 2009-03-27 at 12:35 +1300, Richard Wade wrote:
On Fri, Mar 27, 2009 at 10:06 AM, Thomas M. Knoll wrote:
The switch fabric is over engineered and so are the customer ports. Within this constellation, there is no chance and no need for 802.1p. Full stop.
Is there not the consideration of failure or unanticipated traffic event scenarios which mean that prioritisation of certain traffic would be desirable? In the abnormal event of congestion, would not the peers like traffic marked as high priority to be preserved over the L2 matrix? -- r
Heh... Just read your reply as well Dylan and it says the same as mine, although I managed to trim the answer slightly :-)
Cheers - N
On Fri, Mar 27, 2009 at 1:30 PM, Dylan Hall wrote:
I agree that there are always situations where congestion can occur, some more theoretical, and some more real world (Olympics, Elections, Hillary Funeral, etc...).
The problem I perceive is an arms race, the sort of thing that the network neutrality people are all scared of. [super snip!]
On Fri, Mar 27, 2009 at 12:35 PM, Richard Wade wrote:
Is there not the consideration of failure or unanticipated traffic event scenarios which mean that prioritisation of certain traffic would be desirable? In the abnormal event of congestion, would not the peers like traffic marked as high priority to be preserved over the L2 matrix?
Good question, but it raises the ugly issue of who decides what traffic is high priority... Unless there are commercial arrangements in place, I'd suggest that the optimum strategy for all peers would be to mark ALL their traffic as high priority. [note 1]

Cheers - N

[Note 1 - it's called tragedy of the commons and it would happen]
On 27/03/2009, at 1:38 PM, Neil Gardner wrote:
Good question, but it raises the ugly issue of who decides what traffic is high priority... Unless there are commercial arrangements in place, I'd suggest that the optimum strategy for all peers would be to mark ALL their traffic as high priority.
I'd suggest something like the following as policy:
- two classes, "bulk" and "priority"
- IX members are permitted to send n% of their IX connection bandwidth as "priority" (matrix configuration enforced)

This way, the IX members can't swamp the matrix with more than n% of the connectivity they pay for. Furthermore, it is up to the IX members to decide which traffic they put in to this n% and this may vary for each peer. Priority traffic between ISPs 'x' and 'y' may be VoIP, by their own agreement. Priority traffic from Domestic Content Provider 'x' and all its peers may be streaming media.

It is up to the IX and its members to decide what n% is reasonable, and scale the matrix accordingly. Other traffic classes or priorities may be added, but considering too much initially just complicates the matter.

--
r
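To make the "n% of your port as priority" idea concrete, here is a minimal token-bucket sketch (Python) of what the matrix might enforce at each member's ingress: traffic marked priority is admitted as priority only up to a fraction of the port rate, and the excess is demoted to bulk rather than dropped. All numbers are invented, and whether demotion or drop is the right out-of-contract behaviour is exactly the sort of thing members would have to agree on.

import time

class PriorityPolicer:
    """Token-bucket sketch: admit 'priority' frames as priority only up to a
    fraction of the port rate; demote the excess to 'bulk'.  Illustrative."""

    def __init__(self, port_bps: float, priority_fraction: float, burst_bytes: float):
        self.rate = port_bps * priority_fraction / 8.0  # priority bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def classify(self, size_bytes: int, marked_priority: bool) -> str:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if marked_priority and self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return "priority"
        return "bulk"

# e.g. a 1 Gbit/s member port, 10% of it allowed as priority, 125 kB of burst:
policer = PriorityPolicer(1e9, 0.10, 125_000)
print(policer.classify(1500, marked_priority=True))   # "priority" while within profile
print(policer.classify(1500, marked_priority=False))  # unmarked traffic stays "bulk"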
Rik Wade wrote:
I'd suggest something like the following as policy: - two classes, "bulk" and "priority" - IX members are permitted to send n% of their IX connection bandwidth as "priority" (matrix configuration enforced)
I have ports on IXes where other companies have 40x the bandwidth I do. (I have 1GE, they have 40GE). If the other party was to be allowed 10% of their traffic as "high priority" then easily I could have my entire port taken over by one organisation, even if that was NOT WHAT I WANTED.

Priority for IXes is pointless. No one that I'm aware of does differential priority on their Internet networks which can be accessed externally. (Nothing like making a DDoS really effective). Why? Because priority is about trust relationships. Fundamentally the Internet is untrustworthy. Therefore I can't trust any markings coming externally. How do I know a peer is really trustworthy or that their customers are?

If people want to organise standard passing of priority bits for non-Internet traffic, then that's all well and good. But I suspect the relationship will have to be very different to the nature of Internet IXes.

MMC
On 2009-03-27 22:14, Matthew Moyle-Croft wrote: ...
I have ports on IXes where other companies have 40x the bandwidth I do. (I have 1GE, they have 40GE). If the other party was to be allowed 10% of their traffic as "high priority" then easily I could have my entire port taken over by one organisation, even if that was NOT WHAT I WANTED.
I agree that simple priority is totally broken for exactly this reason. Actually this is why Diffserv (RFC2474/RFC2475 etc) was designed. You need classification and traffic shaping at every ingress, so that you can share the capacity fairly (i.e. neutrally). My personal prejudice is that this is useful to do at points where bandwidth is precious, but in an IXP I'd be surprised if classification and shaping hardware with enough throughput at every ingress would work out cheaper than adding bandwidth.
Priority for IXes is pointless. No one that I'm aware of does differential priority on their Internet networks which can be accessed externally. (Nothing like making a DDoS really effective). Why? Because priority is about trust relationships. Fundamentally the Internet is untrustworthy. Therefore I can't trust any markings coming externally. How do I know a peer is really trustworthy or that their customers are?
That's why you'd have to classify and shape at *every* ingress. There are ideas about doing that as a way to limit DOS traffic, but it won't be free. Brian
If people want to organise standard passing of priority bits for non-Internet traffic, then that's all well and good. But I suspect the relationship will have to be very different to the nature of Internet IXes.
MMC
Matthew Moyle-Croft wrote:
Priority for IXes is pointless. No one that I'm aware of does differential priority on their Internet networks which can be accessed externally. (Nothing like making a DDoS really effective). Why? Because priority is about trust relationships. Fundamentally the Internet is untrustworthy. Therefore I can't trust any markings coming externally. How do I know a peer is really trustworthy or that their customers are?
I'm inclined to agree here. The contractual issues (ignoring the technical) would also be quite interesting, where you have 3 or more parties involved, all guaranteeing (?) those QoS capabilities. I don't see a practical use for an IXP enabling quality of service capabilities, except perhaps for the signalling protocols used, such as BGP. This has a separate set of headaches as well.
If people want to organise standard passing of priority bits for non-Internet traffic, then that's all well and good. But I suspect the relationship will have to be very different to the nature of Internet IXes.
If you want QoS enabled interconnects, this is very surely what private interconnects are for, with their associated contractual obligations and agreed capability sets. aj
On 27/03/2009, at 2:14 AM, Matthew Moyle-Croft wrote:
Rik Wade wrote:
I'd suggest something like the following as policy: - two classes, "bulk" and "priority" - IX members are permitted to send n% of their IX connection bandwidth as "priority" (matrix configuration enforced)
I have ports on IXes where other companies have 40x the bandwidth I do. (I have 1GE, they have 40GE). If the other party was to be allowed 10% of their traffic as "high priority" then easily I could have my entire port taken over by one organisation, even if that was NOT WHAT I WANTED.
Technical point, I would expect that high priority traffic would be limited in capacity on egress, to say 10% of your port. So, they might be able to fill up the remainder of the high priority egress queue on your port, but I doubt they would be able to fill up the entire port.
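In other words, roughly a strict-priority queue whose share of the port is capped, with bulk taking whatever is left. A rough sketch of that dequeue decision (Python, with an over-simplified lifetime byte counter standing in for a real rate meter; all numbers illustrative):

from collections import deque

class CappedPriorityScheduler:
    """Sketch of an egress port with a rate-capped priority queue: priority is
    served first, but only while its share of bytes sent stays under the cap;
    bulk gets the rest."""

    def __init__(self, priority_share: float):
        self.priority_share = priority_share  # e.g. 0.10 of the port
        self.priority, self.bulk = deque(), deque()
        self.sent_priority = self.sent_total = 0

    def enqueue(self, size: int, is_priority: bool):
        (self.priority if is_priority else self.bulk).append(size)

    def dequeue(self):
        under_cap = (self.sent_total == 0 or
                     self.sent_priority / self.sent_total < self.priority_share)
        # Work-conserving: if bulk is empty, serve priority even above the cap.
        if self.priority and (under_cap or not self.bulk):
            queue, is_priority = self.priority, True
        elif self.bulk:
            queue, is_priority = self.bulk, False
        else:
            return None
        size = queue.popleft()
        self.sent_total += size
        if is_priority:
            self.sent_priority += size
        return size

sched = CappedPriorityScheduler(0.10)
sched.enqueue(1500, is_priority=True)
sched.enqueue(1500, is_priority=False)
print(sched.dequeue(), sched.dequeue())  # priority frame first, then bulk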
Priority for IXes is pointless. No one that I'm aware of does differential priority on their Internet networks which can be accessed externally. (Nothing like making a DDoS really effective). Why? Because priority is about trust relationships. Fundementally the Internet is untrustworthy. Therefore I can't trust any markings coming externally. How do I know a peer is really trustworthy or that their customers are?
If people want to organise standard passing of priority bits for non-Internet traffic, then that's all well and good. But I suspect the relationship will have to be very different to the nature of Internet IXes.
I'm going to have to think about this some more before replying - I'm torn. -- Nathan Ward
Nathan Ward wrote:
Technical point, I would expect that high priority traffic would be limited in capacity on egress, to say 10% of your port. So, they might be able to fill up the remainder of the high priority egress queue on your port, but I doubt they would be able to fill up the entire port.
Say I'm a VOIP provider. All of my traffic is voice, so, usually gets high priority markings. Therefore I expect most of my in/outbound traffic is marked as a high priority. You're saying that I need to pay for a 10x bigger port to ensure that I can pass/receive all my traffic correctly without the IX switches dropping it because it doesn't conform.
If people want to organise standard passing of priority bits for non-Internet traffic, then that's all well and good. But I suspect the relationship will have to be very different to the nature of Internet IXes.
I'm going to have to think about this some more before replying - I'm torn.
What's to think about? IXes aren't appropriate places for people to make up arbitrary QoS policies and enforce them on others. As I said before, the main reasons are:
- No trust - no one should trust my DSCP bits and I'm not trusting anyone else's.
- No use - no one uses externally accessible QoS on their Internet networks
- No Common policy - what works for me (see example of VOIP provider above) doesn't work for everyone and vice versa

It really comes down to - what problem are we solving? IXes are big ethernet (usually) fabrics which are about not having a mass of cross connects and ports. I don't expect my cross connects to make packet dropping decisions for me, so I don't expect IXes to either. If people want to bilaterally trust DSCP bits, then that's peachy for them - but the IX isn't the place to enforce it.

MMC
On 2009-03-28 10:51, Matthew Moyle-Croft wrote:
Nathan Ward wrote:
Technical point, I would expect that high priority traffic would be limited in capacity on egress, to say 10% of your port. So, they might be able to fill up the remainder of the high priority egress queue on your port, but I doubt they would be able to fill up the entire port.
Say I'm a VOIP provider. All of my traffic is voice, so, usually gets high priority markings. Therefore I expect most of my in/outbound traffic is marked as a high priority. You're saying that I need to pay for a 10x bigger port to ensure that I can pass/receive all my traffic correctly without the IX switches dropping it because it doesn't conform.
Actually, to cope with potentially unbounded amounts of audio and video, I don't see a physically possible alternative to traffic policing for conformance with an SLA, unless the SLA allows for 100% of the installed ingress bandwidth to be real-time packets. With this type of traffic, the traditional all-you-can-eat approach to traffic is neither fair nor neutral. Yes, that's a change. But ignoring this problem won't vanish it, IMHO. Brian
If people want to organise standard passing of priority bits for non-Internet traffic, then that's all well and good. But I suspect the relationship will have to be very different to the nature of Internet IXes.
I'm going to have to think about this some more before replying - I'm torn.
What's to think about? IXes aren't appropriate places for people to make up and enforce onto others QoS policies which are arbitrary. As I said before - the main reasons are:
- No trust - no one should trust my DSCP bits and I'm not trusting anyone else's. - No use - no one uses externally accessible QoS on their Internet networks - No Common policy - what works for me (see example of VOIP provider above) doesn't work for everyone and vice versa
It really comes down to - what problem are we solving? IXes are big ethernet (usually) fabrics which are about not having a mass of cross connects and ports. I don't expect my cross connects to make packet dropping decisions for me, so I don't expect IXes to either. If people want to bilaterally trust DSCP bits, then that's peachy for them - but the IX isn't the place to enforce it. MMC
Brian E Carpenter wrote:
Say I'm a VOIP provider. All of my traffic is voice, so, usually gets high priority markings. Therefore I expect most of my in/outbound traffic is marked as a high priority. You're saying that I need to pay for a 10x bigger port to ensure that I can pass/receive all my traffic correctly without the IX switches dropping it because it doesn't conform.
Actually, to cope with potentially unbounded amounts of audio and video, I don't see a physically possible alternative to traffic policing for conformance with an SLA, unless the SLA allows for 100% of the installed ingress bandwidth to be real-time packets. With this type of traffic, the traditional all-you-can-eat approach to traffic is neither fair nor neutral.
What SLA? As I said in my previous post, IXes are just a substitute for lots of cross connects etc. Cross connects don't make packet dropping decisions, so neither should an IX. If people are running out of bandwidth on their IX port or the IX fabric then it sounds like more traffic needs to be rolled off onto PNIs.

If people want to bilaterally trust their DSCP bits, then fine - all power to them, but don't enforce some crazy SLA-based QoS madness on an IX that I just want to pass packets for me.

MMC
Yes, that's a change. But ignoring this problem won't vanish it, IMHO.
Brian
Matthew,

I said earlier that I prefer there to be enough bandwidth. But when there isn't?

Brian

On 2009-03-28 11:10, Matthew Moyle-Croft wrote:
Brian E Carpenter wrote:
Say I'm a VOIP provider. All of my traffic is voice, so, usually gets high priority markings. Therefore I expect most of my in/outbound traffic is marked as a high priority. You're saying that I need to pay for a 10x bigger port to ensure that I can pass/receive all my traffic correctly without the IX switches dropping it because it doesn't conform.
Actually, to cope with potentially unbounded amounts of audio and video, I don't see a physically possible alternative to traffic policing for conformance with an SLA, unless the SLA allows for 100% of the installed ingress bandwidth to be real-time packets. With this type of traffic, the traditional all-you-can-eat approach to traffic is neither fair nor neutral.
What SLA? As I said in my previous post. IXes are just a substitute for lots of cross connects etc. Cross connects don't make packet dropping decisions, so neither should an IX. If people are running out of bandwidth on their IX port or the IX fabric then it sounds like more traffic needs to be rolled off onto PNIs.
If people want to bilaterally trust their DSCP bits, then fine - all power to them, but don't enforce some crazy SLA based QOS madness on an IX I want to just pass packets for me.
MMC
Yes, that's a change. But ignoring this problem won't vanish it, IMHO.
Brian
Brian E Carpenter wrote:
Matthew,
I said earlier that I prefer there to be enough bandwidth. But when there isn't?
Then this won't help, because no one's solved the actual issue with QoS on the internet, which is about trust.

Fundamentally the issue is that defining QoS policies at an IX means that a third party is making traffic engineering decisions for me. A third part(y|ies) that I do not trust. Beyond this is the issue that no one trusts packet markings on the internet anyway. How do I know if you behave rationally and mark packets in a meaningful way that abides by some rules that I accept as being valid? Or do you mark all packets as EF on exit from your network to ensure your customers get the best experience? (Heck, I know one ISP that munges the markings to figure out what traffic is domestic peering vs transit).

People always seem to focus on the easy bit - which is arbitrarily coming up with rules which can be implemented in an ethernet switch - but always seem to come unstuck at the "well, what's the trust relationship?". It's the Internet. There's no trust and the beer is virtual at best.

If you've run out of bandwidth TO an IX then dropping packets is your problem. If the IX has run out of switch capacity then I'd suggest selecting a new IX. When I run out or get close I move traffic off of the IX onto PNIs, upgrade ports or join another IX and move traffic there. I don't outsource traffic engineering to other people.

MMC
Brian
On 2009-03-28 11:10, Matthew Moyle-Croft wrote:
Brian E Carpenter wrote:
Say I'm a VOIP provider. All of my traffic is voice, so, usually gets high priority markings. Therefore I expect most of my in/outbound traffic is marked as a high priority. You're saying that I need to pay for a 10x bigger port to ensure that I can pass/receive all my traffic correctly without the IX switches dropping it because it doesn't conform.
Actually, to cope with potentially unbounded amounts of audio and video, I don't see a physically possible alternative to traffic policing for conformance with an SLA, unless the SLA allows for 100% of the installed ingress bandwidth to be real-time packets. With this type of traffic, the traditional all-you-can-eat approach to traffic is neither fair nor neutral.
What SLA? As I said in my previous post. IXes are just a substitute for lots of cross connects etc. Cross connects don't make packet dropping decisions, so neither should an IX. If people are running out of bandwidth on their IX port or the IX fabric then it sounds like more traffic needs to be rolled off onto PNIs.
If people want to bilaterally trust their DSCP bits, then fine - all power to them, but don't enforce some crazy SLA based QOS madness on an IX I want to just pass packets for me.
MMC
Yes, that's a change. But ignoring this problem won't vanish it, IMHO.
Brian
In fact I think we agree about the facts, because in the model I'm describing (policing and shaping) there has to be an SLA, and the SLA is a formal trust agreement. If there's no trust agreement in place, you're completely correct.

It's just that I'm not at all convinced that the current model is sustainable with future traffic patterns. I will now retreat to my ivory tower. Peace.

Regards
Brian Carpenter
University of Auckland

On 2009-03-28 11:57, Matthew Moyle-Croft wrote:
Brian E Carpenter wrote:
Matthew,
I said earlier that I prefer there to be enough bandwidth. But when there isn't?
Then this won't help because no one's solved the actual issue with QoS on the internet which is about trust. Fundamentally the issue is that defining QoS policies at an IX means that a third party is making traffic engineering decisions for me. A third part(y|ies) that I do not trust. Beyond this is the issue that no one trusts packet markings on the internet anyway. How do I know if you behave rationally and mark packets in a meaningful way that abides by some rules that I accept as being valid? Or do you mark all packets as EF on exit from your network to ensure your customers get the best experience? (Heck, I know one ISP that munges the markings to figure out what traffic is domestic peering vs transit).
People always seem to focus on the easy bit - which is arbitrarily coming up with rules which can be implemented in an ethernet switch but always seem to come unstuck at the "well, what's the trust relationship?". It's the Internet. There's no trust and the beer is virtual at best.
If you've run out of bandwidth TO an IX then dropping packets is your problem. If the IX has run out of switch capacity then I'd suggest selecting a new IX.
When I run out or get close I move traffic off of the IX onto PNIs, upgrade ports or join another IX and move traffic there. I don't outsource traffic engineering to other people.
MMC
Brian
On 2009-03-28 11:10, Matthew Moyle-Croft wrote:
Brian E Carpenter wrote:
Say I'm a VOIP provider. All of my traffic is voice, so, usually gets high priority markings. Therefore I expect most of my in/outbound traffic is marked as a high priority. You're saying that I need to pay for a 10x bigger port to ensure that I can pass/receive all my traffic correctly without the IX switches dropping it because it doesn't conform.
Actually, to cope with potentially unbounded amounts of audio and video, I don't see a physically possible alternative to traffic policing for conformance with an SLA, unless the SLA allows for 100% of the installed ingress bandwidth to be real-time packets. With this type of traffic, the traditional all-you-can-eat approach to traffic is neither fair nor neutral.
What SLA? As I said in my previous post. IXes are just a substitute for lots of cross connects etc. Cross connects don't make packet dropping decisions, so neither should an IX. If people are running out of bandwidth on their IX port or the IX fabric then it sounds like more traffic needs to be rolled off onto PNIs.
If people want to bilaterally trust their DSCP bits, then fine - all power to them, but don't enforce some crazy SLA based QOS madness on an IX I want to just pass packets for me.
MMC
Yes, that's a change. But ignoring this problem won't vanish it, IMHO.
Brian
On Sat, 28 Mar 2009, Brian E Carpenter wrote:
Matthew,
I said earlier that I prefer there to be enough bandwidth. But when there isn't?
When there isn't enough bandwidth then QoS is a temporary solution. The answer is to get more bandwidth. Otherwise you're making the choice between delaying or dropping lower class traffic, and eventually[1] that'll impact on the performance of that class sufficiently that those using it will give up.

For short term problems eg abnormal traffic flows causing congestion, sure. For long term traffic management - get bigger tubes.

--David

[1] A few more percent utilisation?
On Sun, 2009-03-29 at 09:00 +1300, David Robb wrote:
On Sat, 28 Mar 2009, Brian E Carpenter wrote:
Matthew,
I said earlier that I prefer there to be enough bandwidth. But when there isn't?
When there isn't enough bandwidth then QoS is a temporary solution. The answer is to get more bandwidth. Otherwise you're making the choice between delaying or dropping lower class traffic, and eventually[1] that'll impact on the performance of that class sufficiently that those using it will give up.
For short term problems eg abnormal traffic flows causing congestion, sure. For long term traffic management - get bigger tubes.
That's right. If it's a tossup between the costs of engineering time to develop (and sustain(!)) a plan for QoS in those times of need versus a more conservative position around standing up a bigger tube, I'd go for a bigger tube every time.

Consider this. Let's say your current guideline is 70% average load plus some QoS for the unexpected events, and after that you order a bigger pipe. Quantify the cost of developing a decent QoS plan to deal with those events. I'd assert that's a significant amount of time. And in today's environment, where resources with those sorts of skills are short, that's fairly significant.

Compare that to, say, taking a position where reaching 50% of average load triggers a decision to make the tubes bigger. Let's say that decision means you order bigger tubes 12 months earlier than the 70%+QoS plan. Is the collective incremental opex spend on a bigger tube for those 12 months greater than the capital you would have invested in developing the QoS plan? I'd assert that it is not.

Really this sort of thing is an argument between two schools of thought: The School of Brute Force vs The School of Bandwidth Scarcity. I know which side I am on. But that's just me.

Finally, an IX architecture is simple compared to the networks that connect to it. I suspect that if the networks connecting to it find that the IX is the bottleneck for meeting Service Level Commitments to their customers then, like others have said, they should choose another IX.

jamie
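To put rough numbers on the shape of that comparison (every figure below is hypothetical, chosen only to show the arithmetic, not taken from any real network):

# Hypothetical numbers only: compare paying for a bigger pipe 12 months early
# against the one-off cost of designing (and sustaining) a QoS plan.
months_earlier = 12
extra_opex_per_month = 2_000      # incremental cost of the bigger tube, $/month
qos_engineering_hours = 300       # design, testing, documentation, upkeep
hourly_rate = 150                 # $/hour for scarce QoS-capable engineers

extra_opex = months_earlier * extra_opex_per_month      # 24,000
qos_plan_cost = qos_engineering_hours * hourly_rate     # 45,000

print(f"bigger tube 12 months early: ${extra_opex:,}")
print(f"QoS plan engineering cost:   ${qos_plan_cost:,}")

With these invented inputs the bigger tube wins, which is the shape of the argument above; with different inputs it obviously might not.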
If you also add the cost of buying a decent CPU-based device that can perform complex QoS, in comparison to say a decent gigabit switch, those numbers will add up even faster.

-----Original Message-----
From: jamie baddeley [mailto:jamie.baddeley(a)vpc.co.nz]
Sent: Sunday, 29 March 2009 8:37 p.m.
To: David Robb
Cc: nznog
Subject: Re: [nznog] QoS-IXPs

On Sun, 2009-03-29 at 09:00 +1300, David Robb wrote:
On Sat, 28 Mar 2009, Brian E Carpenter wrote:
Matthew,
I said earlier that I prefer there to be enough bandwidth. But when there isn't?
When there isn't enough bandwidth then QoS is a temporary solution. The answer is to get more bandwidth. Otherwise you're making the choice between delaying or dropping lower class traffic, and eventually[1] that'll impact on the performance of that class sufficiently that those using it will give up.
For short term problems eg abnormal traffic flows causing congestion, sure. For long term traffic management - get bigger tubes.
That's right. If it's a tossup between the costs of engineering time to develop (and sustain(!)) a plan for QoS in those times of need versus a more conservative position around standing up a bigger tube, I'd go for a bigger tube every time. Consider this. Let's say your current guideline is 70% average load plus some QoS for the unexpected events, and after that you order a bigger pipe. Quantify the cost of developing a decent QoS plan to deal with those events. I'd assert that's a significant amount of time. And in today's environment where resources of those sorts of skills are short, that's fairly significant. Compare that to say taking a position where it's 50% of average load triggers a decision to make the tubes bigger. Let's say that decision means you order bigger tubes 12 months earlier than the 70%+QoS plan. Is the collective incremental opex spend on a bigger tube for those 12 months greater than the capital you would have invested in developing the QoS plan? I'd assert that it is not. Really this sort of thing is an argument between two schools of thought. The School of Brute Force vs The School of Bandwidth Scarcity. I know which side I am on. But that's just me. Finally, an IX architecture comparative to the networks that connect to it is simple. I suspect that if the networks connecting to it find that the IX is the bottleneck for meeting Service Level Commitments to their customers then, like others have said, choose another IX. jamie
On 28-Mar-2009, at 16:00, David Robb wrote:
On Sat, 28 Mar 2009, Brian E Carpenter wrote:
I said earlier that I prefer there to be enough bandwidth. But when there isn't?
When there isn't enough bandwidth then QoS is a temporary solution. The answer is to get more bandwidth. Otherwise you're making the choice between delaying or dropping lower class traffic, and eventually[1] that'll impact on the performance of that class sufficiently that those using it will give up.
It's perhaps easier to see that this is the case if you use the phrase "selective throwing away of customer packets" instead of "quality of service". The goal of an ISP is surely to deliver packets for customers and bill them for doing so. Nobody wants to throw away customer packets if there's a way to avoid it. Nobody wants "quality of service" if there's a way to avoid it.
For short term problems eg abnormal traffic flows causing congestion, sure. For long term traffic management - get bigger tubes.
Amen. Joe
Most of my experience is in private networks. I've used QoS a lot for companies that run voice on their network. Predominantly I've used QoS to guarantee delay, latency and jitter. I've also used it to protect certain types of network traffic (ssh and routing protocols, so I can guarantee they will work under excessive load). I don't think I have ever used it to determine traffic to drop (although it can obviously be used for that). If a customer is reaching the point where their pipes are so saturated they are losing traffic then they should upgrade to the next size pipe. On lower speed pipes you can get high jitter and delay even when the pipe is not saturated, due to the serialisation delay.

I've never run into a case where the "pipes" were so big that QoS was not required. Never. You can really tell on a VoIP network when QoS is not enabled, and I can't imagine a customer not wanting it turned on. So I can't agree with several of the comments below.

However, I do agree, the public Internet is quite different. In a private network you have end to end trust, so QoS is so much easier to set up. If we tried to establish a QoS model in a public network then I can guarantee that someone will abuse it. And I can only see it being seconds before P2P software notices QoS marking has an impact, and starts marking all its own traffic. I tend to agree the only solution is bigger pipes. The only special case I can think of is when you can't get bigger pipes (or rather, the cost is too prohibitive). I also suspect that if the demand was so great for a QoS fabric then a market would spring up to support the financial demand. But I think this would more likely be done with private point to point links, rather than a QoS IX.

And I would surely like an ISP to deliver my packets as fast as they can, but let's face it, that's not what happens. They choose the most economic route for packet delivery, not the fastest. And even then, if it's the wrong type of packets there is a chance they'll police them even further, guaranteeing packet loss will occur (which is a response to the ISP experiencing congestion and demand exceeding what they can supply - oh oh, artificial QoS!).

-----Original Message-----
From: Joe Abley [mailto:jabley(a)hopcount.ca]
Sent: Tuesday, 31 March 2009 5:38 a.m.
To: David Robb
Cc: nznog
Subject: Re: [nznog] QoS-IXPs

...

It's perhaps easier to see that this is the case if you use the phrase "selective throwing away of customer packets" instead of "quality of service". The goal of an ISP is surely to deliver packets for customers and bill them for doing so. Nobody wants to throw away customer packets if there's a way to avoid it. Nobody wants "quality of service" if there's a way to avoid it.
For short term problems eg abnormal traffic flows causing congestion, sure. For long term traffic management - get bigger tubes.
Amen. Joe
On 31/3/09 9:00 AM, "Philip D'Ath" wrote:
Predominantly I've used QoS to guarantee delay, latency and jitter. I've also used it to protect certain types of network traffic (ssh and routing protocols, so I can guarantee they will work under excessive load). I don't think I have ever used it to determine traffic to drop (although it can obviously be used for that). If a customer is reaching the point where their pipes are so saturated they are losing traffic then they should upgrade to the next size pipe.
I've never run into a case where the "pipes" were so big that QoS was not required. Never.
Mostly agree Philip. On a point to point link, sufficient bandwidth means not having to worry about QoS. However, for a multipoint you can always create a situation where there is insufficient bandwidth, though in practice, sufficient bandwidth generally works.

There is a view that QoS is simply priority, and that QoS is a way of managing bandwidth. I disagree with both of those views. For some types of traffic, it's better to delay than to drop: "better late than never", while for other types---such as isochronous streams---a late packet is a missing packet: "Better never than late". In fact, for such a stream, it's almost equally bad to deliver the packet too early. I'd much rather define traffic classes so that the appropriate characteristics are delivered, as required by the application---and furthermore, that it's up to the application to determine what these are.

A trap with the QoS = Priority view is that you use it to manage insufficient bandwidth. If the network is 80% full, that means it's 20% underutilised, and by using 'smart' equipment you can use that 'spare/unused/wasted' 20%. I.e., a 'smart, slow' network is better than a 'dumb, fast' network. My experience is the opposite: 'dumb and fast' is not only better than 'smart and slow', it's also considerably cheaper. That '20% unused' isn't unused---it's just the way packet networks operate.

Of course, sometimes physics steps in and you are stuck with a slow copper line and Shannon-Hartley---at which point QoS may be the least worst option.

--
Michael Newbery
IP Architect
TelstraClear Limited
On 30 Mar 2009, at 16:00, Philip D'Ath wrote:
I've never run into a case where the "pipes" were so big that QoS was not required. Never.
That's presumably why you have needed to worry about QoS. Buy bigger pipes. :-)

If you're constrained (by what the telco will give you, by what you can afford, by what your vendor will support) to pipes that are too small, then sure, it makes sense to concern yourself about which of your customers' packets you are going to give inferior service to in order to let the higher-value packets get the best treatment possible given the inadequate network.

If you're not constrained to pipes that are too small, then QoS doesn't buy you much once you've escaped the last mile. What's the serialisation delay of even a 4k jumbo frame on a gigabit ethernet interface? If you can eliminate that tiny jitter by turning on QoS features everywhere, will anybody notice the delay resulting from the end-points' jitter buffers given the codec and propagation latency that's already there? Remember, these are people who think that call quality over GSM is just fine. Might the operational impact of having to manage those QoS features perhaps have more impact on the customer, in the long run?

No doubt there are times when adding bandwidth isn't practical, for whatever reason. However, in my experience people who worry about QoS in their core usually have been sold a problem by a vendor, rather than a solution.

Joe
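For what it's worth, the serialisation arithmetic Joe is alluding to is straightforward (the 2 Mbit/s figure in the second example is just an arbitrary stand-in for a slow tail circuit):

# Serialisation delay = frame size / line rate.
def serialisation_delay_us(frame_bytes: int, line_rate_bps: float) -> float:
    return frame_bytes * 8 / line_rate_bps * 1e6

print(serialisation_delay_us(4096, 1e9))  # ~32.8 microseconds: a 4k jumbo frame on GigE
print(serialisation_delay_us(1500, 2e6))  # ~6000 microseconds: 1500 bytes on a slow 2 Mbit/s tail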
I can set up a Gigabit desktop switch, plug in a server and some workstations, and run them through VoIP phones generating 96Kb/s audio streams, and cause the audio streams to break down in quality - even though the pipes are considerably larger than the traffic I am trying to protect. I suspect shifting to 10Gb/s pipes would make no difference.

So I'll stay with the contention that it doesn't matter how big the pipes are, QoS can be used to provide delay, latency, jitter (and loss) guarantees to make applications work properly. But this is for private networks, not the Internet. And I do agree, the Internet is not "policable" to allow QoS to work. It's the very principle of the Internet. :-)

-----Original Message-----
From: Joe Abley [mailto:jabley(a)hopcount.ca]

...
I've never run into a case where the "pipes" were so big that QoS was not required. Never.
That's presumably why you have needed to worry about QoS. Buy bigger pipes. :-) ...
On 30 Mar 2009, at 16:59, Philip D'Ath wrote:
I can set up a Gigabit desktop switch, plug in server and some workstations, and run them through VoIP phones generating 96Kb/s audio streams, and cause the audio streams to break down in quality - even though the pipes are considerably larger than the traffic I am trying to protect.
I suspect shifting to 10Gb/s pipes would make no difference.
Seems like a good example of needing to spend money on QoS because your network is inadequate. I suspect you might need to find a gigabit switch that doesn't have "desktop" in its name :-) Joe
The next step up from a "desktop" switch is "web managed". The "web managed" switches typically have QoS built into them (and are not configurable) :-) You need to go up a level or two again to get switches with QoS you can switch off and that have the performance to make it unnecessary.

-----Original Message-----
From: Joe Abley [mailto:jabley(a)hopcount.ca]
Sent: Tuesday, 31 March 2009 10:12 a.m.
To: Philip D'Ath
Cc: nznog
Subject: Re: [nznog] QoS-IXPs

On 30 Mar 2009, at 16:59, Philip D'Ath wrote:
I can set up a Gigabit desktop switch, plug in server and some workstations, and run them through VoIP phones generating 96Kb/s audio streams, and cause the audio streams to break down in quality - even though the pipes are considerably larger than the traffic I am trying to protect.
I suspect shifting to 10Gb/s pipes would make no difference.
Seems like a good example of needing to spend money on QoS because your network is inadequate. I suspect you might need to find a gigabit switch that doesn't have "desktop" in its name :-) Joe
Joe Abley wrote:
On 30 Mar 2009, at 16:00, Philip D'Ath wrote:
I've never run into a case where the "pipes" were so big that QoS was not required. Never.
That's presumably why you have needed to worry about QoS. Buy bigger pipes. :-)
If you're constrained (by what the telco will give you, by what you can afford, by what your vendor will support) to pipes that are too small, then sure, it makes sense to concern yourself about which of your customers' packets you are going to give inferior service to in order to let the higher-value packets get the best treatment possible given the inadequate network.
We have an interesting network that's constrained. The last mile in places here desperately needs replacing, telcos take their time with bandwidth upgrades, a user population all arrives on the same day and leaves on the same day (making capacity planning a nightmare), and there are all those other interesting real-world issues. The eventual goal is to have enough bandwidth, but getting from here to there is where the Fun Is(tm). (If it was easy, everyone would do it, right?)

Our solution was to develop a custom QoS scheme based on Stochastic Fair Queuing, where instead of binning by 5-tuple we bin only by destination MAC address. This means that users get exactly 1/nth of the available bandwidth. It has worked out awesomely well - as in, we couldn't have asked for better. Users who are mostly idle get low latency, even though other users on the network are busy torrenting (and experiencing high latency). Users who do "multithreaded downloading" etc. don't get more than their fair share of the bandwidth.

One improvement we'd like to make is that within a "bin" (i.e. a user) we reorder traffic, so that traffic whose IPv4 ToS bits ask to be delivered last is depreferenced. The user still gets 1/nth of the bandwidth, but can use the ToS bits to select which of their own traffic they receive first. Setting your ToS to say all your traffic is high priority doesn't get you any more of the link, and in fact would have no benefit. Setting your VoIP traffic to a low-latency ToS and your bulk downloads to a "low cost" ToS would deliver your VoIP packets first, and then, when there are no VoIP packets, your bulk download packets.

This seems to be the fairest possible solution we could come up with while we work on improving the amount of bandwidth through our network.
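[Editor's note: purely as an illustration of the scheme Perry describes - not their actual implementation - here is a minimal Python sketch of binning by destination MAC with ToS-ordered delivery inside each bin. The class, the field names and the "lower ToS value goes first" ordering are assumptions made for the example.]

import heapq
import itertools
from collections import defaultdict

class PerMacFairScheduler:
    """Illustrative sketch: each destination MAC gets its own bin, and one
    packet is released from every active bin per round, so every user gets
    roughly 1/nth of the link regardless of how their traffic is marked.
    Within a bin, packets leave in ToS order (a simplification)."""

    def __init__(self):
        self.bins = defaultdict(list)    # dst MAC -> heap of (tos, seq, packet)
        self._seq = itertools.count()    # tie-breaker keeps FIFO order within a ToS value

    def enqueue(self, dst_mac, tos, packet):
        heapq.heappush(self.bins[dst_mac], (tos, next(self._seq), packet))

    def dequeue_round(self):
        # One round-robin pass over all active destination MACs.
        out = []
        for mac in list(self.bins):
            tos, _, packet = heapq.heappop(self.bins[mac])
            out.append((mac, packet))
            if not self.bins[mac]:
                del self.bins[mac]
        return out

Repeatedly draining dequeue_round() gives every active destination one packet per pass, so marking everything "high priority" buys a user nothing extra; it only changes the order of that user's own packets.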
On Tue, Mar 31, 2009 at 11:46 AM, Perry Lorier wrote:
Joe Abley wrote:
On 30 Mar 2009, at 16:00, Philip D'Ath wrote:
I've never run into a case where the "pipes" were so big that QoS was not required. Never.
That's presumably why you have needed to worry about QoS. Buy bigger pipes. :-)
Another option might be to put your voice traffic on a totally separate switching infrastructure from your data traffic. This will stop nasty big data packets from impinging on your call quality. Of course this isn't going to be a solution for everyone as everyone's network design is different. Cheers, Blair
I guess that will work if you have enough money to buy 100% additional hardware, and provided there is no physical connection between the two networks (if there is, you run the same danger). However, deploying QoS would be significantly cheaper. From: Blair Harrison [mailto:nznog(a)jedi.school.nz] ...
Perry Lorier wrote:
One improvement we'd like to make is that within a "bin" (i.e. a user) we reorder traffic, so that traffic whose IPv4 ToS bits ask to be delivered last is depreferenced. The user still gets 1/nth of the bandwidth, but can use the ToS bits to select which of their own traffic they receive first. Setting your ToS to say all your traffic is high priority doesn't get you any more of the link, and in fact would have no benefit. Setting your VoIP traffic to a low-latency ToS and your bulk downloads to a "low cost" ToS would deliver your VoIP packets first, and then, when there are no VoIP packets, your bulk download packets.
This seems to be the fairest possible solution we could come up with while we work on improving the amount of bandwidth through our network.
But in the context of the discussion it's still an example from inside a network under single administrative control. The nearest parallel at an IXP would be for peers to act on bilaterally agreed DSCPs while the IXP gives them whatever bandwidth (i.e. interface capacity) the peers are paying for, which from the IXP's viewpoint would not be new.
- Donald Neal
--
Donald Neal, Research Officer, WAND, The University of Waikato
"If you turn on American TV, there's a huge choice of nothing you want to see and unfortunately I think that's the case here now as well." - Dominic West
On 31/3/09 5:37 AM, "Joe Abley" wrote:
It's perhaps easier to see that this is the case if you use the phrase "selective throwing away of customer packets" instead of "quality of service".
The goal of an ISP is surely to deliver packets for customers and bill them for doing so.
Nobody wants to throw away customer packets if there's a way to avoid it.
Nobody wants "quality of service" if there's a way to avoid it.
A favourite quote: "QoS is what you have when you don't have enough bandwidth" -- Unknown. I think it might have been Rich Salz but I can't locate the reference. -- Michael Newbery, IP Architect, TelstraClear Limited. Tel: +64-4-920 3102 Mobile: +64-29-920 3102 Fax: +64-4-920 3361
Michael Newbery wrote:
A favourite quote:
"QoS is what you have when you don't have enough bandwidth" -- Unknown. I think it might have been Rich Salz but I can't locate the reference.
"Nobody asks for QoS when they have enough bandwidth" -- Rich Seifert, co-author of the DIX Ethernet specification (1980) -- don
For the purposes of Citylink exchanges (WIX, APE, CHIX, PNIX, DIX and any others I'm not aware of) I'd just like to say that Matthew summed this up perfectly. You will never get an agreement between *all* parties who exchange traffic there. Who is to say my traffic is more important than yours if you have no formal agreement as to what is important? It _could_ work if implemented in bilateral peering sessions over an IX, but won't if it's a many-to-many peering session.
Jonathan (cf beer)
On Fri, Mar 27, 2009 at 9:28 PM, Rik Wade wrote:
On 27/03/2009, at 1:38 PM, Neil Gardner wrote:
Good question, but it raises the ugly issue of who decides what traffic is high priority... Unless there are commercial arrangements in place, I'd suggest that the optimum strategy for all peers would be to mark ALL their traffic as high priority.
I'd suggest something like the following as policy:
- two classes, "bulk" and "priority"
- IX members are permitted to send n% of their IX connection bandwidth as "priority" (matrix configuration enforced)
This way, the IX members can't swamp the matrix with more than n% of the connectivity they pay for. Furthermore, it is up to the IX members to decide which traffic they put in to this n% and this may vary for each peer. Priority traffic between ISPs 'x' and 'y' may be VoIP, by their own agreement. Priority traffic from Domestic Content Provider 'x' and all its peers may be streaming media.
It is up to the IX and its members to decide what n% is reasonable, and scale the matrix accordingly. Other traffic classes or priorities may be added, but considering too much initially just complicates the matter. -- r
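[Editor's note: a rough sketch of how the "n% as priority" rule above could be enforced at each member port. Illustrative only; the two classes come from Rik's suggestion, while the token-bucket approach, names and numbers are assumptions.]

import time

class PriorityPolicer:
    """Token bucket that admits 'priority'-marked frames up to n% of the
    member's port speed; anything beyond the allowance is demoted to 'bulk'
    rather than dropped. Parameters are made up for the example."""

    def __init__(self, port_bps, priority_percent):
        self.rate = port_bps * priority_percent / 100.0   # allowed priority rate, bits/s
        self.bucket = self.rate                           # burst allowance of ~1 second
        self.tokens = self.bucket
        self.last = time.monotonic()

    def classify(self, frame_bytes, marked_priority):
        now = time.monotonic()
        self.tokens = min(self.bucket, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if not marked_priority:
            return "bulk"
        bits = frame_bytes * 8
        if self.tokens >= bits:
            self.tokens -= bits
            return "priority"   # within the member's n% allowance
        return "bulk"           # excess priority-marked traffic treated as bulk

# Example: a member on a 1GE port allowed to mark 10% of it as priority.
policer = PriorityPolicer(port_bps=1_000_000_000, priority_percent=10)

Demoting rather than dropping the excess keeps the member's port usable while still preventing anyone from swamping the matrix with priority-marked traffic.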
On 26 Mar 2009, at 00:54, Dylan Hall wrote:
Why is it necessary to use a layer 2 QoS marking at all?
With my European operator hat[0] on - hello - every now and then I notice that someone within the UK community requests similar features. Normally this is followed by a flurry of responses along the lines of, "This exchange is a neutral layer two segment; it should forward frames, all frames, to their intended destination in the most neutral way possible."

So how do we handle capacity issues? Simple: it says in our terms that you are forbidden to congest your port. There are carrot and stick ways to do this - at LONAP we hassle you until you do :-) and at LINX they invoice you a surcharge for congesting your port.

We sell the exchange on the benefits of peering - improved speed of access, reduced latency, more capacity. If participants are congesting their ports, then these benefits no longer ring true. Your exchange ports are one of the easiest places to have spare capacity on your network, so in my opinion an operator should work hard to keep that capacity spare. Exchanges can help by making the cost per bit at capacity lower for busy 10GE ports than for busy 1GE ports.

[0] Board member, LONAP - www.lonap.net, and noc/peering@ a large number of networks connected to LONAP, LINX, AMS-IX.

Best wishes
Andy
--
Regards, Andy Davidson, CTO, NetSumo Limited
T: +44 (0) 20 7993 1700 W: http://www.netsumo.com
participants (23)
- Alastair Johnson
- Andy Davidson
- Bill Walker
- Blair Harrison
- Brian E Carpenter
- David Robb
- Don Stokes
- Donald Neal
- Dylan Hall
- jamie baddeley
- Joe Abley
- Jonathan Woolley
- Matthew Moyle-Croft
- Michael Newbery
- Nathan Ward
- Neil Gardner
- Perry Lorier
- Philip D'Ath
- Regan Murphy
- Richard Wade
- Rik Wade
- Thomas M. Knoll
- Tony Wicks