If you hit the limit on the number of flows, what happens? Does it reject the new flow, or does it delete older ones to make room for it?
Given there is no default route, what happens if you get a packet that doesn't match a flow? Does it drop it, or punt it to the controller? (i.e. is there a default flow to drop?)
If it deletes older flows, and punts non-matching packets to the controller, that sounds like there's a potential for really bad performance spikes as you approach the upper limits of the flow table.
If it deletes older flows and drops non-matching packets, then that's worse.
If you simply reject new flows, then that's a bit better, but then you've got to make sure the route reflectors never re-advertise prefixes unless they're successfully installed into the flow tables; otherwise you drop packets, or punt them to the controller as above.
Some interesting issues to consider!
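On the table-miss question, here's a minimal sketch (assuming a Ryu controller and an OpenFlow 1.3 switch, which isn't necessarily what's actually deployed here) of the two behaviours you can choose between when a packet matches nothing: a priority-0 entry that punts to the controller, or the same entry with no instructions, i.e. a default flow to drop.

```python
# Illustrative sketch only: table-miss handling in a Ryu app over OpenFlow 1.3.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp = dp.ofproto
        parser = dp.ofproto_parser

        # Option 1: punt unmatched packets to the controller.
        punt = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, punt)]

        # Option 2 (the "default flow to drop" case): install the same
        # priority-0 catch-all entry with an empty instruction list instead.
        # inst = []

        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=parser.OFPMatch(),
                                      instructions=inst))
```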
Agreed re. VPLS/etc. - but you've got to make sure your switches have reliable connectivity to your controller(s). In a network like WIX, that might be hard, not sure.
On 8/01/2013, at 2:20 PM, Sam Russell wrote:
The hard part of VPLS/MPLS/multiple VRFs over MPLS is getting the routers to all agree on their view of the world, and having one-way LSPs can complicate things further. If you have one controller that pushes out a single unified view of the world to every device, you make it a lot simpler, and therefore a lot harder to break things.
The current implementation (as I understand it) pushes flows out pre-emptively: the RouteFlow software syncs the RIB of a running Linux VM to the flow table on the switch.
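Illustrative only (this is not RouteFlow's actual code, and the prefixes, next hops and port numbers below are made up), but it's roughly the shape of that proactive sync: walk the VM's RIB and turn each entry into a pre-installed match/action flow keyed on the destination prefix.

```python
# Toy translation of a RIB into proactive OpenFlow-style flow entries.
# All values are example data, not real routes.
import ipaddress

rib = {
    "192.0.2.0/24":    {"next_hop": "203.0.113.1", "out_port": 1},
    "198.51.100.0/24": {"next_hop": "203.0.113.2", "out_port": 2},
}


def rib_to_flows(rib):
    flows = []
    for prefix, route in rib.items():
        net = ipaddress.ip_network(prefix)
        flows.append({
            # Emulate longest-prefix match by using prefix length as priority.
            "priority": net.prefixlen,
            "match": {"eth_type": 0x0800,
                      "ipv4_dst": (str(net.network_address), str(net.netmask))},
            # Hypothetical action labels for illustration: rewrite the MAC
            # towards the next hop, then forward out the chosen port.
            "actions": [("set_eth_dst_to_next_hop_mac", route["next_hop"]),
                        ("output", route["out_port"])],
        })
    return flows


for flow in rib_to_flows(rib):
    print(flow)
```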
On Tue, Jan 8, 2013 at 1:51 PM, Nathan Ward wrote:

Was debating this a bit at lunch. I thought about this, and I'm not sure that making the IX fabric L3-aware is a good idea. There are lots of people who do private peering over Ethernet IXPs so they can prefer certain providers over others, or who don't use the public route reflectors at all, and I'm sure there are a number of other cases. A system that makes flow decisions in the IX fabric based on what their route reflectors see isn't going to allow that, at first glance.

I'm also not sure about scale. The Pronto switches all do about 12k routes according to their data sheets, and I've seen the number of flows at roughly the same sort of number, though fewer for IPv6 flows, I think. The APE LG says it's got about 11,000 routes already. Maybe some efficiency can be gained if you've got OF switches at the edge, where they only have to know maybe a few thousand prefixes/flows for their attached customers, with a default pointing towards a "core" layer, but you'd still need that "core" layer. I suppose you could also aggregate a number of prefixes as you turned them into flow table entries, but you're sort of scraping the barrel at that point.
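To put a rough number on the aggregation idea (a sketch only; the prefixes are documentation examples, not real APE routes), Python's ipaddress module can collapse adjacent and covered prefixes before they're turned into flow entries. It only helps where the collapsed prefixes genuinely share the same next hop and egress port, which is exactly why it doesn't buy you that much.

```python
# Collapse adjacent/covered prefixes before pushing them into a small flow table.
import ipaddress

prefixes = [
    ipaddress.ip_network("198.51.100.0/25"),
    ipaddress.ip_network("198.51.100.128/25"),  # adjacent: collapses to a /24
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("192.0.2.64/26"),      # covered: absorbed by the /24
]

aggregated = list(ipaddress.collapse_addresses(prefixes))
print(len(prefixes), "prefixes ->", len(aggregated), "flow entries:", aggregated)
```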
Dean - how have you got it running now? Is the flow controller pre-emptively putting flow entries into the switch, or is it putting them in as required by packets passing through? I imagine the latter would work well for scale in your situation, as you're only going to be hitting a few prefixes, but I don't imagine you'd see much efficiency running it like that for general IXP traffic…
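For contrast with the pre-emptive approach, here's a minimal sketch of the reactive style (again assuming Ryu and OpenFlow 1.3; the egress port is a placeholder, and a real controller would consult its RIB here): nothing is installed until a packet misses and gets punted up, at which point the controller installs a flow for that destination with an idle timeout so unused entries age out.

```python
# Illustrative sketch only: reactive flow installation on packet-in.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib.packet import ipv4, packet
from ryu.ofproto import ofproto_v1_3


class ReactiveExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        dp = ev.msg.datapath
        ofp = dp.ofproto
        parser = dp.ofproto_parser

        pkt = packet.Packet(ev.msg.data)
        ip = pkt.get_protocol(ipv4.ipv4)
        if ip is None:
            return  # ignore non-IPv4 traffic in this sketch

        # Install a flow for this destination only now that traffic has appeared.
        actions = [parser.OFPActionOutput(1)]  # placeholder egress port
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=ip.dst)
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst,
                                      idle_timeout=300))
```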
Anyway, given the above, I wonder if there are other ways to use OF in an IXP-type environment, maybe nothing more than avoiding STP? You're essentially just re-inventing/re-implementing VPLS at that point, though.
On 8/01/2013, at 11:24 AM, Kris Price wrote:

Interesting! On the IX side, the interesting application (to me) that you mention is the scale-out BGP router (and similar functionality). But this is internal to the IX and just appears as a big router to the customers.
If the customers were to start managing flow table entries on the IX fabric to suit their own purposes, it could raise hell; so anything exposed will naturally have to be a product developed by the IX and consumed by connected providers and their customers.
And having to steer traffic based on aggregate flows becomes necessary due to those small tables on commodity hardware, which quickly destroys the fine-grained control that a lot of people talk about as a selling point of SDN.
This brings you to look at what innovation SDN has brought us so far (at least in public), and it looks like most of the "innovation" is very similar to existing services, just rebuilt with a different control mechanism. These aren't really new services; they're SDN applications that build analogues of existing services in a different way. Sure, they're subtly different, tailored to the needs of the business, and they can be adapted or enhanced to do new tricks quickly by coders. Or they can quickly be extended to parts of the network you couldn't do this on before: for example, you could wait 10 years for the IETF to finish NVO3, or you could just build your own simple equivalent (as people have been doing) and deploy it to your endpoints/servers yourself.
But providing any meaningful east/west connectivity between different SDNs and the applications running on them is going to require some form of standard, falling back naturally to the existing routing protocols.
So basically, how you make SDN on an IX a publicly consumable feature is pretty interesting, if it's possible at all.
PS: Not to sound like I'm ragging on SDN too much; I'm a fan. Used internally, it can make your life easier by simplifying operations and giving you centralized control over your network. It also makes using commodity hardware much easier: there's a potential future where you buy a commodity piece of equipment meeting the hardware specs you need, slot it into your network, it auto-discovers the controller, and it's automatically integrated into your topology, and so on. Ideally we'll see a "Debian for routers" that runs the lightweight OpenFlow control plane and other things. I'm just having trouble envisioning how east/west can happen without standards, and standards seem somewhat antithetical to SDN, which is all about enabling speed and flexibility.
On 1/6/2013 4:08 PM, Dean Pemberton wrote:
Hi Andy
A valid question. If a route server were the end goal, then a stand-alone instance of Bird/Quagga/etc. would have done the job just fine; the existing NZIX route servers run Quagga, for example. The intention here is to use this as a milestone along the way to understanding what is possible on an IX once you have the entire fabric under OpenFlow control.
While the first steps have been to emulate the existing functionality, the real innovation opportunities will come once the fabric is extended to more locations. These are the areas that we are looking towards now, with work underway to bring additional locations online in the near future.
What could people be missing as exchange operators by not using these tools? Potentially nothing, potentially everything; it depends on the operator and how rich a product they want to deliver from their exchange. I know there are IXP operators who are pushing lots of customer-driven features into their exchanges; I also know some which treat their IX as just a dumb layer-2 network over which people can establish BGP peering sessions.
One of the issues, however, is that if you deploy an exchange like that, you can get stuck in a place where it is difficult to deliver any value beyond it. Using traditional deployment models, you are limited to the features that a given vendor has enabled for you. For the most part this will be the set of features demanded by their largest customers, a group in which New Zealand never features. IXP operators then end up duct-taping features onto the side. If you want to do something slightly (or heaven forbid, radically) different, you end up shopping for a different vendor, or you're simply out of luck.
OpenFlow and SDN allow IXP owners to develop and deploy the set of features that they believe their customers require and drive innovation independent of how many of a vendor's customers may want that feature.
I like to think of SDN as the Open Source operating system of the networking world. If you can think it, you can build it. Linux has given us an ability to bring new ideas to life that Windows and OSX would never have allowed. It's not for everyone (or every situation) but in the right place, it's unparalleled. SDN is the same.
So back to what I was saying earlier. Maybe you're missing nothing by not using these tools. Maybe you already deliver all the functionality that your customers want. Equally likely though, you may have a whole lot of innovative ideas which you wish you could get implemented. In that case SDN might be a way forward.
I know where I'm heading.
Regards, Dean
On Sat, Jan 5, 2013 at 12:32 AM, Andy Davidson wrote:

This is really interesting, thanks for posting your news to the list.
Why did you decide to use an OpenFlow controller rather than Bird, Quagga or OpenBGPd as a route-server? I am trying to understand your motivations and work out what I could be missing as an exchange operator using these tools.
Andy
--
Sam Russell
Network Engineer
Research & Education Advanced Network NZ Ltd
ddi: +64 4 913 6365 | mob: +64 21 750 819 | fax: +64 4 916 0064