NZIX OpenFlow adventures phase 2

Hi Everyone,

So the OpenFlow router has been up for a wee while now, so it seemed about time to let you all poke at it. Phase 2 of the project is to: deploy an OF switch on an exchange point which can function as a route server. This switch should be able to BGP peer with any peers which currently use the exchange and allow for the exchange of IPv4 prefixes as required.

I've used the information on the WIX peer list webpage (http://wix.nzix.net/peers.html) to pre-configure passive[1] BGP peering sessions for all existing WIX participants. So feel free to use your existing WIX IP addresses and AS numbers to connect to:

IP: 202.7.0.119
Aut-num: 9483

Things to look for:
1) It should pass you the routes that it learns from the route servers with the Flatnet AS in the path, but with the original next hops. This longer path should mean that you won't explicitly prefer the routes from me. Bit of safety there.
2) It will pass you any other routes that people pass me directly.

As I mentioned before, we've got a few more plans. The next phase will start to take us a bit beyond what the current exchanges offer. In the new year we are going to deploy a similar switch at the APE. This hardware will become part of the single SDN-controlled switch that we currently have in Wellington. When you peer with the OpenFlow switch at the WIX, you'll get APE prefixes as well. Peer at the APE, you'll get WIX prefixes.

Once we can get some traffic on the fabric and a presence at multiple places, then we can really start to innovate:

- BGP prefix communities based on the millisecond difference between your peering port and the prefix next hop
- BGP-aware RPF on the exchange (solves the pointing-default problem)

Got any other ideas? Throw them in the mix and we'll see what we can do.

Have a Merry Christmas everyone.

Regards,
Dean

[1] Passive means that I'll never reach out and initiate a session with you. You need to initiate the session with me. Just safer in case people think that I'm TCP port 179 hax0r-ing them.
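As a sketch of how the latency-community idea might look: the encoding scheme, the function name and the 1000-offset are all invented here for illustration, not anything deployed.

```python
def latency_community(asn, ms_delta, max_ms=255):
    """Encode the measured millisecond difference between a peering
    port and a prefix's next hop as a BGP community string.
    The asn:(1000+ms) layout is a made-up scheme, not a standard."""
    ms = min(int(ms_delta), max_ms)  # clamp so the value stays bounded
    return f"{asn}:{1000 + ms}"
```

Peers could then match on these communities in their import policies, e.g. preferring prefixes tagged with low latency values.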

Hey Dean,

What are you using for the OFP controller? NOX/POX/Big Switch?

Cheers,
Truman

On 18 Dec, 2012, at 10:40 PM, Dean Pemberton wrote:
_______________________________________________ NZNOG mailing list NZNOG(a)list.waikato.ac.nz http://list.waikato.ac.nz/mailman/listinfo/nznog

Hi, Dean --

On 19/12/2012 03:40, "Dean Pemberton" <nznog(a)deanpemberton.com> wrote:
Deploy an OF switch on an exchange point which can function as a route server. This switch should be able to BGP peer with any peers which currently use the exchange and allow for the exchange of IPv4 prefixes as required.
This is really interesting, thanks for posting your news to the list.

Why did you decide to use an OpenFlow controller rather than Bird, Quagga or OpenBGPd as a route-server? I am trying to understand your motivations and work out what I could be missing as an exchange operator using these tools.

Andy

Hi Andy,

On Sat, Jan 5, 2013 at 12:32 AM, Andy Davidson <andy(a)nosignal.org> wrote:
Why did you decide to use an OpenFlow controller rather than Bird, Quagga or OpenBGPd as a route-server ? I am trying to understand your motivations and work out what I could be missing as an exchange operator using these tools.
The OpenFlow application (RouteFlow) converses with Quagga to handle routing protocols, and translates Quagga's routes into OpenFlow rules. This theoretically allows us to tie any number of OpenFlow switches together as a single distributed router. As Dean says, the next step is the proof-of-concept implementation with hardware at APE and WIX.
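A rough sketch of that translation step. The rule layout and the port/MAC lookup tables here are simplified stand-ins; RouteFlow's real datapath mapping is considerably more involved.

```python
import ipaddress

def rib_to_flows(rib, port_map, mac_map):
    """Turn (prefix, next_hop) RIB entries into OpenFlow-style rules.

    port_map/mac_map are hypothetical lookups from a next hop to the
    switch port and MAC address it lives behind."""
    flows = []
    for prefix, next_hop in rib:
        net = ipaddress.ip_network(prefix)
        flows.append({
            "match": {"eth_type": 0x0800, "ipv4_dst": str(net)},
            "priority": net.prefixlen,  # longer prefixes win, as in a RIB
            "actions": [
                {"set_eth_dst": mac_map[next_hop]},  # rewrite dest MAC
                {"output": port_map[next_hop]},      # forward out the port
            ],
        })
    return flows
```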

Hi Andy

A valid question. If a route server was the end goal, then a stand-alone instance of Bird/Quagga/etc would have done the job just fine. The existing NZIX route servers run Quagga, for example. The intention here is to use this as a milestone along the way to understanding what is possible on an IX once you have the entire fabric under OpenFlow control.

While the first steps have been to emulate the existing functionality, the real innovation opportunities will come once the fabric is extended to more locations. These are the areas that we are looking towards now, with work underway to bring additional locations online in the near future.

What could people be missing as exchange operators by not using these tools? Potentially nothing, potentially everything; it depends on the operator and how rich a product they want to deliver from their exchange. I know there are IXP operators who are pushing lots of customer-driven features into their exchanges; I also know some which treat their IX as just a dumb layer 2 network over which people can establish BGP peering sessions.

One of the issues, however, is that if you deploy them like that, you can get stuck in a place where it is difficult to deliver any value beyond that. Using traditional deployment models, you are limited to the features that a given vendor has enabled for you. For the most part this will be the set of features demanded by their largest customers, among which New Zealand never features. IXP operators then look at duct-taping features onto the side. If you want to do something slightly (or heaven forbid, radically) different, then you end up shopping for a different vendor, or simply out of luck.

OpenFlow and SDN allow IXP owners to develop and deploy the set of features that they believe their customers require, and to drive innovation independent of how many of a vendor's customers may want a feature.

I like to think of SDN as the open-source operating system of the networking world. If you can think it, you can build it. Linux has given us the ability to bring new ideas to life that Windows and OS X never would have allowed. It's not for everyone (or every situation), but in the right place it's unparalleled. SDN is the same.

So back to what I was saying earlier. Maybe you're missing nothing by not using these tools. Maybe you already deliver all the functionality that your customers want. Equally likely, though, you may have a whole lot of innovative ideas which you wish you could get implemented. In that case SDN might be a way forward.

I know where I'm heading.

Regards,
Dean

On Sat, Jan 5, 2013 at 12:32 AM, Andy Davidson <andy(a)nosignal.org> wrote:

Interesting! On the IX, the interesting application (to me) you mention is the scale-out BGP router (and similar functionality). But this is internal to the IX and just appears as a big router to the customers.

If the customers were to start managing flow table entries on the IX fabric to suit their own purposes, it could raise hell; thus anything exposed will naturally have to be a product developed by the IX, and consumed by connected providers and their customers.

And having to steer traffic based on aggregate flows becomes necessary due to those small tables on commodity hardware, which quickly destroys the fine-grained control that a lot of people talk about as a selling point of SDN.

This brings you to look at what innovation SDN has brought us so far (at least in public), and it looks like most of the "innovation" will look very similar to existing services, just rebuilt with a different control mechanism. These aren't really new services; they're just SDN applications that build analogues of existing services in a different way. Sure, they're subtly different, tailored to the needs of the business, and they can be adapted/enhanced to do new tricks quickly by coders. Or they can quickly be extended to parts of the network you couldn't do this on before; e.g. you could wait 10 years for the IETF to finish NVO3, or you could just build your own simple equivalent (as people have been doing) and deploy it to your endpoints/servers yourself.

But providing any meaningful east/west connectivity between different SDNs and the applications running on them is going to require some form of standard, falling back naturally to the existing routing protocols.

So basically, how you make SDN on an IX a publicly consumed feature is pretty interesting, if possible.

PS: Not to sound like I'm ragging on SDN too much; I'm a fan. Used internally, it can make your life easier through simplification and by giving you centralized control over your network. It makes using commodity hardware much easier. There's a potential future where you buy a commodity piece of equipment meeting the hardware specs you need, slot it into your network, it auto-discovers the controller, is automatically integrated into your topology, and so on. Ideally we will see a "Debian for routers" that will run the limited OpenFlow control plane and other things. I'm just having trouble envisioning how east/west can happen without standards, and standards seem somewhat antithetical to SDN, which is all about enabling speed and flexibility.

On 1/6/2013 4:08 PM, Dean Pemberton wrote:

Was debating this a bit at lunch. I'm not sure that making the IX fabric L3-aware is a good idea. There are lots of people that do private peering over Ethernet IXPs so they can prefer certain providers over others, or people that don't use the public route reflectors at all, and I'm sure a number of other cases. A system that makes flow decisions in the IX fabric based on what the IX's route reflectors see isn't going to allow that, at first glance.

I'm also not sure about scale. The Pronto switches all do about 12k routes according to their data sheets, and I've seen flow counts at roughly the same sort of number, though fewer for IPv6 flows, I think. The APE LG says it's got about 11,000 routes already. Maybe some efficiency can be gained if you've got OF switches at the edge, where they only have to know maybe a few thousand prefixes/flows for their attached customers, with a default pointing towards a "core" layer, but you'd still need that "core" layer. I suppose you'd also aggregate a number of prefixes as you turned them into flow table entries, but you're sort of scraping the barrel at that point.

Dean - how have you got it running now? Is the flow controller pre-emptively putting flow entries into the switch, or is it putting them in as required by packets passing through? I imagine the latter would work well for scale in your situation, as you're only going to be hitting a few prefixes, but I don't imagine you'd see much efficiency running it like that passing general IXP traffic.

Anyway, given the above, I wonder if there are other ways to use OF in an IXP-type environment, maybe nothing more than avoiding STP? You're essentially just re-inventing/re-implementing VPLS though.

On 8/01/2013, at 11:24 AM, Kris Price <nznog(a)punk.co.nz> wrote:
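The aggregation mentioned above can only merge prefixes that share a forwarding decision, which is why the gain is limited. A minimal sketch using the standard library (toy data, not real IXP routes):

```python
import ipaddress
from collections import defaultdict

def aggregate_flows(rib):
    """Collapse contiguous prefixes that share a next hop, so fewer
    flow-table entries are needed on the switch."""
    by_next_hop = defaultdict(list)
    for prefix, next_hop in rib:
        by_next_hop[next_hop].append(ipaddress.ip_network(prefix))
    aggregated = []
    for next_hop, nets in by_next_hop.items():
        # collapse_addresses merges adjacent and overlapping networks
        for net in ipaddress.collapse_addresses(nets):
            aggregated.append((str(net), next_hop))
    return aggregated
```

Two /25s behind the same peer collapse to one /24, but prefixes behind different peers cannot be merged, so a worst-case (well-deaggregated) routing table barely shrinks at all.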

The hard part of VPLS/MPLS/multiple VRFs over MPLS is getting the routers to all agree on their view of the world, and having one-way LSPs can complicate things further. If you have one controller that pushes out a single unified view of the world to every device, you make it a lot simpler, and therefore a lot harder to break things.

The current implementation (as I understand it) pushes flows out pre-emptively: the RouteFlow software syncs the RIB of a running Linux VM to the flow table on the switch.

On Tue, Jan 8, 2013 at 1:51 PM, Nathan Ward <nznog(a)daork.net> wrote:
-- Sam Russell Network Engineer Research & Education Advanced Network NZ Ltd ddi: +64 4 913 6365 mob: +64 21 750 819 fax: +64 4 916 0064 http://www.reannz.co.nz

If you hit the limit of the number of flows, what happens? Does it reject the new flow, or does it delete older ones to make room? Given there is no default route, what happens if you get a packet that doesn't match a flow? Does it drop it, or punt it to the controller? (i.e. is there a default flow to drop?)

If it deletes older flows and punts non-matching packets to the controller, that sounds like there's potential for really bad performance spikes as you approach the upper limits of the flow table. If it deletes older flows and drops non-matching packets, then that's worse. If it simply rejects new flows, then that's a bit better, but you've got to make sure that the route reflectors never re-advertise prefixes unless they're successfully installed into the flow tables, or you drop packets, or punt them to the controller as above. Some interesting issues to consider!

Agreed re. VPLS/etc., but you've got to make sure your switches have reliable connectivity to your controller(s). In a network like WIX, that might be hard; not sure.

On 8/01/2013, at 2:20 PM, Sam Russell <sam.russell(a)reannz.co.nz> wrote:
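A toy model of the failure modes being asked about. Both install policies and the table-miss behaviour here are hypothetical; what a real switch does depends on its ASIC and firmware.

```python
from collections import OrderedDict

class FlowTable:
    """Fixed-capacity flow table with two hypothetical install policies."""

    def __init__(self, capacity, policy="reject"):
        self.capacity = capacity
        self.policy = policy        # "reject" or "evict_oldest"
        self.flows = OrderedDict()  # insertion order approximates age

    def install(self, match, action):
        if match not in self.flows and len(self.flows) >= self.capacity:
            if self.policy == "reject":
                return False                # new flow refused
            self.flows.popitem(last=False)  # evict the oldest entry
        self.flows[match] = action
        return True

    def lookup(self, match):
        # Table miss: here we punt to the controller; a switch could
        # equally have a default flow that just drops the packet.
        return self.flows.get(match, "punt_to_controller")
```

Under "evict_oldest", traffic for the evicted prefix now misses the table and hammers the controller, which is exactly the performance-spike scenario near the table limit.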
The hard part of VPLS/MPLS/multiple VRFs over MPLS is getting the routers to all agree on their view of the world, and having one-way LSPs can complicate things further. If you have one controller that pushes out a single unified view of the world to every device, you make it a lot simpler, and therefore a lot harder to break things.
The current implementation (as I understand) pushes flows out pre-emptively - the software RouteFlow syncs the RIB of a running linux VM to the flow table on the switch
On Tue, Jan 8, 2013 at 1:51 PM, Nathan Ward <nznog(a)daork.net> wrote: Was debating this a bit at lunch. Thought about this, and I'm not sure that making the IX fabric L3 aware is a good idea. There are lots of people that do private peering over ethernet IXPs, so they can prefer certain providers over others, or people that don't use the public route reflectors at all, and I'm sure a number of other cases. A system that makes flow decisions in the IX fabric based on what their route reflectors see isn't going to allow that, at first glance.
I'm also not sure about scale. The Pronto switches all do about 12k routes according to their data sheets, and I've seen flow-table capacities in roughly the same range, though fewer for IPv6 flows, I think. The APE LG says it's got about 11,000 routes already. Maybe some efficiency can be gained if you've got OF switches at the edge, where they only have to know maybe a few thousand prefixes/flows for their attached customers, with a default pointing towards a "core" layer, but you'd still need that "core" layer. I suppose you could also aggregate a number of prefixes as you turned them into flow table entries, but you're sort of scraping the barrel at that point.
Dean - how have you got it running now? Is the flow controller pre-emptively putting flow entries into the switch, or is it putting them in as required by packets passing through? I imagine the latter would work well for scale in your situation, as you're only going to be hitting a few prefixes, but I don't imagine you'd see much efficiency running it like that while passing general IXP traffic…
Anyway, given the above, I wonder if there are other ways to use OF in an IXP-type environment - maybe nothing more than avoiding STP? You're essentially just re-inventing/re-implementing VPLS at that point, though.
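The flow-table-limit question raised above (reject new flows vs. evict old ones, and drop vs. punt on a table miss) can be made concrete with a toy model. The policy names, table size, and action strings below are invented for illustration; real hardware holds around 12k entries per the data sheets.

```python
# Toy model of a fixed-size flow table hitting its limit, to illustrate
# the admission policies discussed above. All names/sizes are assumptions.
from collections import OrderedDict

TABLE_SIZE = 4  # tiny, for illustration; real switches do ~12k entries

def install(table, prefix, action, policy):
    """Try to install a flow; returns True if the flow ends up in the table."""
    if prefix in table:
        table.move_to_end(prefix)      # refresh an existing entry
        return True
    if len(table) < TABLE_SIZE:
        table[prefix] = action
        return True
    if policy == "reject":             # table full: refuse the new flow
        return False
    if policy == "evict_oldest":       # table full: delete an older flow
        table.popitem(last=False)
        table[prefix] = action
        return True
    raise ValueError(policy)

def forward(table, prefix):
    """On a table miss, the packet is either dropped or punted upstairs."""
    return table.get(prefix, "punt_to_controller")
```

Under "evict_oldest" plus punt-on-miss, evicted prefixes turn into controller punts, which is exactly the performance-spike scenario described above.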
On 8/01/2013, at 11:24 AM, Kris Price <nznog(a)punk.co.nz> wrote:
Interesting! On the IX the interesting application (to me) you mention is the scale out BGP router (and similar functionality). But this is internal to the IX and just appears as a big router to the customers.
If the customers were to start managing flow table entries on the IX fabric to suit their own purposes, it could raise hell; thus anything exposed will naturally have to be a product developed by the IX and consumed by connected providers and their customers.
And having to steer traffic based on aggregate flows becomes necessary due to those small tables on commodity hardware, which quickly destroys the fine-grained control that a lot of people talk about as a selling point of SDN.
This brings you to look at what innovation SDN has brought us so far (at least in public), and it looks like most of the "innovation" will look very similar to existing services, just rebuilt with a different control mechanism. These aren't really new services; they're SDN applications that build analogues of existing services in a different way. Sure, they're subtly different, tailored to the needs of the business, and they can be adapted/enhanced to do new tricks quickly by coders. Or they can quickly be extended to parts of the network you couldn't do this on before, e.g. you could wait 10 years for the IETF to finish NVO3, or you could just build your own simple equivalent (as people have been doing) and deploy it to your endpoints/servers yourself.
But providing any meaningful east/west connectivity between different SDNs and the applications running on them is going to require some form of standard, falling back naturally to the existing routing protocols.
So basically how you make SDN on an IX a publicly consumed feature is pretty interesting if possible.
PS: Not to sound like I'm ragging on SDN too much; I'm a fan. Used internally, it can make your life easier by simplifying your network and giving you centralized control over it. It makes using commodity hardware much easier. There's a potential future there where you buy a commodity piece of equipment meeting the hardware specs you need, slot it into your network, it auto-discovers the controller, is automatically integrated into your topology, and so on. Ideally we'll see a Debian for routers that will run the limited OpenFlow control plane and other things. I'm just having trouble envisioning how east/west can happen without standards, and standards seem somewhat antithetical to SDN, which is all about enabling speed and flexibility.
On 1/6/2013 4:08 PM, Dean Pemberton wrote:
Hi Andy
A valid question. If a route server was the end goal then a stand alone instance of Bird/Quagga/etc would have done the job just fine. The existing NZIX route servers run Quagga for example. The intention here is to use this as a milestone along the way to understanding what is possible on an IX once you have the entire fabric under OpenFlow control.
While the first steps have been to emulate the existing functionality, the real innovation opportunities will come once the fabric is extended to more locations. These are the areas that we are looking towards now, with work underway to bring additional locations online in the near future.
What could people be missing as exchange operators by not using these tools? Potentially nothing, potentially everything; it depends on the operator and how rich a product they want to deliver from their exchange. I know there are IXP operators who are pushing lots of customer-driven features into their exchanges; I also know some which treat their IX as just a dumb layer 2 network over which people can establish BGP peering sessions.
One of the issues, however, is that if you deploy them like that, you can get stuck in a place where it is difficult to deliver any value beyond that. Using traditional deployment models, you are limited to the features that a given vendor has enabled for you. For the most part this will be the set of features demanded by their largest customers, among which New Zealand never features. IXP operators then look at duct-taping features onto the side. If you want to do something slightly (or heaven forbid, radically) different, then you end up shopping for a different vendor, or you're simply out of luck.
OpenFlow and SDN allow IXP owners to develop and deploy the set of features that they believe their customers require and drive innovation independent of how many of a vendor's customers may want that feature.
I like to think of SDN as the Open Source operating system of the networking world. If you can think it, you can build it. Linux has given us an ability to bring new ideas to life that Windows and OSX would never have allowed. It's not for everyone (or every situation) but in the right place, it's unparalleled. SDN is the same.
So back to what I was saying earlier. Maybe you're missing nothing by not using these tools. Maybe you already deliver all the functionality that your customers want. Equally likely though, you may have a whole lot of innovative ideas which you wish you could get implemented. In that case SDN might be a way forward.
I know where I'm heading.
Regards, Dean
On Sat, Jan 5, 2013 at 12:32 AM, Andy Davidson <andy(a)nosignal.org> wrote:
This is really interesting, thanks for posting your news to the list.
Why did you decide to use an OpenFlow controller rather than Bird, Quagga or OpenBGPd as a route-server ? I am trying to understand your motivations and work out what I could be missing as an exchange operator using these tools.
Andy

VPLS is not hard; it's pretty simple, assuming you mean RFC4761 or RFC4762. It's not necessarily cheap to get good kit for it, though.

If you want a Layer 2 fabric that bilateral peering can take place on without involvement of the IX, you could build yourself a public L2 service using SDN.

You could also have your big scale-out router running on the fabric, and still do bilateral peering over Layer 2 E-Line services, with the IX presenting an easy-to-use portal for configuring these. I go to the portal, log in, and say I want an E-Line to my peer X; X goes in also to say he accepts, and we're both given a C-VID to tag our traffic to each other with. Anything I send to my port on the IX with that C-VID pops out his port with the C-VID he's been given (potentially these are the same at both ends, but it doesn't have to be that way). Then we get billed a little bit extra at the end of each month. This is also achievable with SDN. Again it's just analogues of existing services, but set up and managed in a different, easier-to-automate way.

What's certainly true, though, is that the IX can make use of lots of those commodity 64x10GE switches, each costing sub 10,000 USD, to offer their customers pretty much infinite bandwidth (certainly in NZ's case; one of these would happily carry all of NZ's traffic, right?), so why add complexity if it isn't needed and bandwidth at an IX can be made so cheap? Maybe one reason to go down that path is if you want the IX not to be just an *I* exchange but a general Ethernet exchange, which certainly might have some merits.
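The E-Line idea described above maps naturally onto a pair of match/rewrite rules: frames arriving on my port with my C-VID come out the peer's port carrying the peer's C-VID, and vice versa. The port numbers, VIDs, and rule format below are made-up examples, not any IX's actual configuration.

```python
# Sketch of the portal-provisioned E-Line above as two symmetric
# (in_port, C-VID) -> (rewrite, output) rules. All values illustrative.

def eline_rules(port_a, vid_a, port_b, vid_b):
    """Return the bidirectional match -> action rules for one E-Line."""
    return {
        (port_a, vid_a): ("set_vlan", vid_b, "output", port_b),
        (port_b, vid_b): ("set_vlan", vid_a, "output", port_a),
    }

def forward(rules, in_port, vid):
    """Frames that match no E-Line are dropped rather than flooded."""
    return rules.get((in_port, vid), ("drop",))
```

The portal's job then reduces to allocating non-clashing C-VIDs per port and installing these two entries when both parties accept.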

I'm interested to hear more about what you mean by complexity. I've heard of some large layer 2 networks. They are "simple" (aka "familiar") but also have some operationally "complex" aspects. For the sake of argument, what is more "complex" about directly programming a forwarding element to do exactly what you want? Why is that complexity?

Probably getting a little crosstalk between the subject of what services/products are offered (what's desired and what those look like) versus how those are actually implemented (SDN versus existing protocols). Agreed, SDN can be used to make implementing and managing services simpler and more flexible (as I said in this and my last email). But WRT complexity, I was questioning why you would want to make the IXP's service offerings more complex, i.e. why more features, more choices, more bandwidth control, QoS, bilateral VPNs, etc., as opposed to just a bigger pipe, when bigger pipes in NZ are cheap and easy. Unless of course you want to be more than an IXP.

There's no killer app at the moment, but ask yourself this: what could I do with an IXP if I could unit test every network change that happened, for correctness?

Route exchange could be conditional and brokered by the IXP, minimising the chance of traffic going through one way but not having a route back through the same path. A mix of source- and destination-based routing can be employed, and tested. New configs can be unit tested against old configs to make sure that, as well as the new prefix/feature, all of the current network behaviour still works correctly.
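The unit-testing idea above becomes tractable once forwarding is a pure function of the flow rules: model the fabric, then assert properties (here, that a brokered route exchange is reachable in both directions) before deploying a change. The topology, rule format, and function names below are invented purely for illustration.

```python
# Sketch of unit-testing a network change: model forwarding as a pure
# function of the rules and assert bidirectional reachability, i.e. the
# "traffic goes one way but has no route back" case mentioned above.
# Rules map (current_node, destination) -> next hop. All names invented.

def reachable(rules, src, dst):
    """Walk hop by hop from src; True if dst is reached without a loop."""
    node, seen = src, set()
    while node != dst:
        if node in seen or (node, dst) not in rules:
            return False               # loop or no route: unreachable
        seen.add(node)
        node = rules[(node, dst)]      # follow the next hop toward dst
    return True

def bidirectional(rules, a, b):
    """The brokered-exchange property: both directions must have a path."""
    return reachable(rules, a, b) and reachable(rules, b, a)
```

A controller could refuse to install a candidate rule set for which `bidirectional` fails, which is the "conditional, brokered route exchange" behaviour described above.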

Very interested in hearing more about this unit testing idea. Haven't we always had the ability to hack our own BGP route servers to do whatever we liked, if we wanted features like brokered route exchange? And we've had mechanisms to test bidirectional forwarding; it's just that this stuff has always been a PITA to script up.

What interests me about where things are going with the use of OpenFlow is that the forwarding hardware is now stripped bare, so I'm curious about what's going on and what people are doing with this new-found ability in terms of the data plane. I like to know what the packets are doing on the wire, so to speak.

Perhaps, but I was getting at complexity reduction and increased precision of control as interesting benefits to explore, even if you just wanted to do the same things you do today. For example, it might be interesting to explore the possibility of a fabric that is a bit harder to kill with a broadcast storm. I heard that might be useful somewhere. Just an observation.
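One way a controller could make the fabric harder to kill with a broadcast storm, as floated above, is to budget flooded frames per ingress port with a token bucket and drop the excess. The class, rates, and method names below are illustrative assumptions, not any controller's actual API.

```python
# Sketch (assumed design, not a real controller API): per-port token
# bucket limiting how fast flooded/broadcast frames may enter the fabric.

class FloodLimiter:
    def __init__(self, rate_pps, burst):
        self.rate, self.burst = rate_pps, burst   # refill rate and bucket cap
        self.tokens, self.last = {}, {}           # per-port state

    def allow(self, port, now):
        """True if a flooded frame from `port` at time `now` may be sent."""
        tokens = self.tokens.get(port, self.burst)
        last = self.last.get(port, now)
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        self.last[port] = now
        if tokens >= 1:
            self.tokens[port] = tokens - 1        # spend one token per frame
            return True
        self.tokens[port] = tokens                # over budget: drop
        return False
```

A storm on one port exhausts only that port's bucket, so the rest of the fabric keeps flooding ARP and the like normally.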
participants (8)
- Andy Davidson
- Dean Pemberton
- Joe Stringer
- Josh Bailey
- Kris Price
- Nathan Ward
- Sam Russell
- Truman Boyes