FW: Media Release - Industry agrees - IPv6 transition plan needed
This just came to my attention...

Industry agrees - IPv6 transition plan needed

Media Release - 15 December 2008

Following a workshop convened by the Ministry of Economic Development, major ICT industry and stakeholder organizations have agreed on the need for New Zealand to develop a transition plan from IPv4 to IPv6, the next-generation Internet addressing protocol. The transition plan will include education, and the identification and removal of roadblocks to IPv6 deployment.

There is growing urgency for the Internet to support both IPv4 and IPv6, which together will offer vastly increased resources. More than one million new devices connect to the Internet every week, and the pool of four billion IPv4 addresses is expected to be exhausted by 2010. IPv6 supports a much larger pool of addresses, enough to assign an IP address to each grain of sand in a fine layer covering the entire planet.

Internet advocacy group InternetNZ, a key sponsor of IPv6-related initiatives in New Zealand, has agreed to provide continued support for furthering IPv6 as a national industry initiative under the auspices of the Digital Development Forum. The forum is a multi-stakeholder initiative designed to promote New Zealand's transition into the digital economy.

The workshop was held on 28 November in Wellington and saw the formation of an IPv6 Steering Group, which includes representatives from telecommunications carriers, internet service providers, ICT vendors, and industry and user associations. A complete list of stakeholder organizations is included at the end of this release. Telecommunications industry consultant Dr Murray Milner has been confirmed as independent convenor of the steering group.

Milner says IPv6 provides much more flexibility in allocating addresses, and will allow Internet growth in New Zealand to keep pace with the expected worldwide trend.

InternetNZ Executive Director Keith Davidson says the initiative is most welcome and indicates increasing recognition that IPv6 deployment will be a business-critical issue as IPv4 exhaustion looms. While this is not a Y2K scenario, unless New Zealand has a clear plan for the introduction of IPv6 there is an increased risk of Internet black holes developing.

"This may mean that communication via the internet may not be possible in the near future between parts of New Zealand's IPv4-based Internet and certain developing countries such as China and India, where IPv6-only Internet infrastructure is currently being deployed," says ISPANZ President Jamie Baddeley.

Digital Development Council Executive Director Paul Alexander points out that it is not time to panic, but it is time for the ICT industry to get organised and reach out to the wider business community to explain the issue.

CEO of the Telecommunications Carriers' Forum Ralph Chivers says that progress can only be made with a co-ordinated and concerted effort across the ICT sector to ensure New Zealand has a timely and trouble-free transition to IPv6.

IPv6 was highlighted in Digital Strategy 2.0, which mirrored the growing urgency expressed in global ICT technical forums on the need for leadership amongst key industry stakeholders in transitioning to IPv6. IPv4 was deployed in the mid-1980s, at a time when billions of Internet connections were never contemplated. IPv6 has been under development since the mid-1990s, and currently exists in small pockets around the world.
TUANZ Chief Executive Ernie Newman says that users welcome the initiative and that TUANZ stands ready to take an active part in educating users once equipment suppliers and the registry have ensured they are fully prepared and have clarified the major steps users need to take.

Next steps include development of a technical education plan to stimulate IPv6 skills in New Zealand's ICT and academic sectors. This will be followed by development of a roadmap for the deployment of IPv6 for New Zealand and a business sector outreach programme. A national IPv6 Hui is also planned, anticipated for September 2009.

For more information contact:

Murray Milner, Independent Convenor, IPv6 Steering Group, 027 443 0120, murray.milner(a)xtra.co.nz
Keith Davidson, Executive Director, InternetNZ, 021 377 587, exe.dir(a)internetnz.net.nz

Industry organizations supporting the need for a transition plan to IPv6:

* InternetNZ
* TUANZ
* ISPANZ
* Telecommunications Carriers' Forum
* Digital Development Forum
* Telecom
* TelstraClear
* WorldXChange
* Orcon
* FX Networks
* REANNZ
* Canterbury Development Corporation
* Kordia
* Cisco
* Vodafone
* Juniper Networks
* Alcatel Lucent
* Braintrust
Industry agrees - IPv6 transition plan needed
Media Release - 15 December 2008
IPv6 supports a much larger pool of addresses, enough to assign an IP address to each grain of sand in a fine layer covering the entire planet.
Does anyone else get really fed up with this analogy? Given that addresses aren't used contiguously, it's a really pointless thing to say; it's more likely to engender thoughts of "why?" rather than "cool, must have". Better, I think, to say things like "IPv6 will enable people to easily create a home network environment with your computers, printers, personal communications devices, home theatre etc. [ie. auto-configuration], communicate and share with others using rich media [ie. no NAT] and make it more affordable to create rich content [ie. multicast, hopefully]... all without needing to ask your neighborhood geek."

If you really wanted to have a "grains of sand" sort of analogy, it would be more useful to say something like "IPv6 will have enough addresses [well, address space] to allow every person [that ever lived? --- I haven't checked this] a large network of their own."
--
Cameron Kerr
On 15/12/2008, at 5:23 PM, Cameron Kerr wrote:
IPv6 supports a much larger pool of addresses, enough to assign an IP address to each grain of sand in a fine layer covering the entire planet.
Does anyone else get really fed up with this analogy? Given that addresses aren't used contiguously, it's a really pointless thing to say; it's more likely to engender thoughts of "why?" rather than "cool, must have". Better, I think, to say things like "IPv6 will enable people to easily create a home network environment with your computers, printers, personal communications devices, home theatre etc. [ie. auto-configuration], communicate and share with others using rich media [ie. no NAT] and make it more affordable to create rich content [ie. multicast, hopefully]... all without needing to ask your neighborhood geek."
People can do that with IPv4. IPv6 doesn't bring any new features to the network, so claiming that it does makes people who know this go "yeah, whatever". Addressing is not preventing those applications from existing today. IPv6 is just a nice NAT traversal API.
If you really wanted to have a "grains of sand" sort of analogy, it would be more useful to say something like "IPv6 will have enough addresses [well, address space] to allow every person [that ever lived? --- I haven't checked this] a large network of their own."
Perhaps. It's all just a way of saying "lots"; I don't think how we say it is particularly important. -- Nathan Ward
On 15/12/2008, at 5:23 PM, Cameron Kerr wrote:
IPv6 supports a much larger pool of addresses, enough to assign an IP address to each grain of sand in a fine layer covering the entire planet.
Does anyone else get really fed up with this analogy? Given that addresses aren't used contiguously, it's a really pointless thing to say; it's more likely to engender thoughts of "why?" rather than "cool, must have". Better, I think, to say things like "IPv6 will enable people to easily create a home network environment with your computers, printers, personal communications devices, home theatre etc. [ie. auto-configuration], communicate and share with others using rich media [ie. no NAT] and make it more affordable to create rich content [ie. multicast, hopefully]... all without needing to ask your neighborhood geek."
I'm not sure IPv6 will make multicast any more likely to actually happen. If someone thought deployment of multicast to the consumer was useful, it would already be happening - multicast goes through NATs OK in the UK, where I understand the BBC are testing delivery of content using multicast[1], and I'm pretty sure Kiwi NATs aren't functionally different from British NATs :)

So, if I may hijack this thread slightly to ask a question: Is anyone using multicast across AS boundaries in NZ right now? If not, why not? There are several streaming media providers in NZ for whom multicast would surely save a lot of bandwidth at the source.

[1] http://support.bbc.co.uk/multicast/

--
Jasper Bryant-Greene
Network Engineer, Unleash
ddi: +64 3 978 1222
mob: +64 21 129 9458
On 15/12/2008, at 6:11 PM, Jasper Bryant-Greene wrote:
I'm not sure IPv6 will make multicast any more likely to actually happen. If someone thought deployment of multicast to the consumer was useful, it would already be happening - multicast goes through NATs OK in the UK, where I understand the BBC are testing delivery of content using multicast[1], and I'm pretty sure Kiwi NATs aren't functionally different from British NATs :)
The main difference is that in Britain the NAT functionality happens on rooters.
So, if I may hijack this thread slightly to ask a question: Is anyone using multicast across AS boundaries in NZ right now? If not, why not? There are several streaming media providers in NZ for whom multicast would surely save a lot of bandwidth at the source.
Some years ago (8ish), Attica or Actrix or A-something was bringing in a feed from what was then called the "m-bone" and making it available to APE (with PIM I think?). http://mbone.net.nz/ Ah, it was Attica.

I had an IHUG Ethernet service over United Networks (now Vector) and was able to watch NASA TV with a whole lot of poking around with some fairly clunky apps. It was very experimental, zero support etc.

As for why not, back then people were doing differentiated billing - domestic was largely "free" and international was rated (for ADSL circuits anyway). That was a bit of a problem for billing, as people were doing flow-based accounting and various other things like that. That was the main problem people cited, if I recall correctly. Oh, and also people having different international/domestic speeds on the one circuit, etc.

As for why it doesn't happen now, why would an ISP enable multicast for a customer of theirs (ie. a streaming media provider) when they can leave it disabled and charge a much higher access fee? Show me a revenue stream for ISPs hosting the content, and I'm sure it'll get turned on. Right now I don't see any, unless there's content coming in internationally and we just have eyeballs.

What applications are available for multicast? Can Flash do multicast? I think Quicktime and Windows Media can, yeah?

Maybe lack of skill is a problem as well. You could run a hands-on multicast workshop at NZNOG'10, perhaps. Set up a few sources and sinks, and build a multicast-capable network in between :-)

I'd be kinda keen to do a bit of experimenting with radio streams in some spare time, but I don't have my own APE/WIX port right now. Do you have a link on to APE/WIX with multicast enabled in some fashion? What about internationally? Can we build a tunnel of some kind? Feel free to offlist the response to that if you want, best not have any operational content on here :-)

--
Nathan Ward
There are a bucketload of apps that support multicast, ie. VLC for starters, which can also do broadcasting over mcast (used to do it in my old flat for Sky TV).

I know Craig from Orcon was playing around with mcast some 4 yrs or so back because I gave him some help testing it, and at the time I remember him giving me a wide range of apps that support it.

mcast can actually announce what is available to watch, well in the sense of video media that is.
On 15/12/2008, at 8:21 PM, VeNoMouS wrote:
There are a bucketload of apps that support multicast, ie. VLC for starters, which can also do broadcasting over mcast (used to do it in my old flat for Sky TV).
Right, and we probably all use it, except VLC isn't really very common on regular end users' home computers. I'm thinking something like Flash or WMP or QT, as they are fairly commonplace. Apparently QT can do it, still looking into WMP, but Flash cannot.
I know Craig from Orcon was playing around with mcast some 4 yrs or so back because I gave him some help testing it, and at the time I remember him giving me a wide range of apps that support it.
mcast can actually announce what is available to watch, well in the sense of video media that is.
Yep, VLC's SAP client/sender appears to work fine. -- Nathan Ward
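[For the curious, a minimal sketch of the announcement mechanism being discussed: a SAP (RFC 2974) listener that joins the well-known SAP group and prints the SDP payloads it hears, which is roughly what VLC's SAP client does. It's deliberately simplified - it assumes IPv4 origins and ignores authenticated, encrypted and compressed announcements.]

```python
# Minimal SAP (RFC 2974) listener: join the well-known SAP group and print
# the SDP payload of each announcement heard. Simplified: assumes IPv4
# origins, skips authenticated/encrypted/compressed announcements.
import socket
import struct

SAP_GROUP, SAP_PORT = "224.2.127.254", 9875  # well-known SAP group/port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", SAP_PORT))
mreq = struct.pack("4s4s", socket.inet_aton(SAP_GROUP),
                   socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    pkt, src = sock.recvfrom(4096)
    if len(pkt) < 8:
        continue
    auth_len = pkt[1]                  # auth data length, in 32-bit words
    offset = 4 + 4 + auth_len * 4      # fixed header + IPv4 origin + auth
    payload = pkt[offset:]
    # Strip the optional null-terminated payload-type string, if present.
    if payload.startswith(b"application/sdp\0"):
        payload = payload[len(b"application/sdp\0"):]
    print(f"Announcement from {src[0]}:\n{payload.decode(errors='replace')}")
```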
On 15/12/2008, at 6:56 PM, Nathan Ward wrote:
On 15/12/2008, at 6:11 PM, Jasper Bryant-Greene wrote:
I'm not sure IPv6 will make multicast any more likely to actually happen. If someone thought deployment of multicast to the consumer was useful, it would already be happening - multicast goes through NATs OK in the UK, where I understand the BBC are testing delivery of content using multicast[1], and I'm pretty sure Kiwi NATs aren't functionally different from British NATs :)
The main difference is that in Britain the NAT functionality happens on rooters.
I'm pretty sure there are ADSL CPE on the market which do all that is necessary for basic multicast - in that a machine behind the NAT can use IGMP to register interest in a multicast group, and the CPE will proxy that IGMP join to the ISP, who then proceed to send multicast datagrams destined for that group to that port, and the CPE sends it to the internal machine that joined that group. I think my Dynalink blob can do it, anyway. Not sure if it was enabled by default, though.

If it is enabled by default on a good chunk of CPE, then we're in an even better position than we are with IPv6 - the end users already "support" multicast, and the ISPs just need to "turn it on". I say "support" because the apps have got better, but they're not all that user-friendly yet. This is probably why the BBC have their own client-side player.
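[As a rough illustration of the join mechanics just described: in the sockets API, asking the kernel for group membership is what puts the IGMP membership report on the wire for a multicast-capable CPE to proxy upstream. A minimal Python sketch, with a made-up group and port:]

```python
# Minimal receiver sketch: requesting group membership from the kernel
# generates the IGMP join that the CPE can proxy to the ISP.
# Group and port are invented for illustration.
import socket
import struct

GROUP, PORT = "239.255.12.42", 5004  # example administratively-scoped group

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP on the default interface triggers the IGMP join.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, src = sock.recvfrom(2048)
    print(f"{len(data)} bytes of multicast from {src[0]}")
```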
As for why it doesn't happen now, why would an ISP enable multicast for a customer of theirs (ie. a streaming media provider) when they can leave it disabled and charge a much higher access fee?
Because the streaming media provider may decide to reduce their costs by hooking up to the APE and doing multicast all by themselves. That's basically what the BBC is doing.

For example, a television station could provide high-def content only to customers of multicast-enabled ISPs that they reach over APE, and standard-def to everyone else that they reach over [insert expensive transit ISP here].

Eyeball ISPs would have pressure from their users, who want the high-def content, and probably from the television station too, who wants to pay less for transit, to enable multicast. This is exactly what's happened in the UK - a bunch of end-user ISPs now support multicast after the BBC started offering content.
What applications are available for multicast? Can Flash do multicast? I think Quicktime and Windows media can, yeah?
No idea about Flash - I would imagine the platform supports it but it would probably depend on the specific application. Quicktime, Windows Media Player, VLC, and various Linux media players all support multicast with varying degrees of user-friendliness.

--
Jasper Bryant-Greene
Network Engineer, Unleash
ddi: +64 3 978 1222
mob: +64 21 129 9458
Hmm, I wonder how the eyeball ISPs (TCL and TCNZ) would react to that situation, having de-peered at APE/WIX.

----- Original Message -----
Because the streaming media provider may decide to reduce their costs by hooking up to the APE and doing multicast all by themselves. That's basically what the BBC is doing.
For example, a television station could provide high-def content only to customers of multicast-enabled ISPs that they reach over APE, and standard-def to everyone else that they reach over [insert expensive transit ISP here].
Eyeball ISPs would have pressure from their users, who want the high-def content, and probably from the television station too, who wants to pay less for transit, to enable multicast. This is exactly what's happened in the UK - a bunch of end-user ISPs now support multicast after the BBC started offering content.
On 15/12/2008, at 10:52 PM, Antonio Pavletich wrote:
Hmm, I wonder how the eyeball ISPs (TCL and TCNZ) would react to that situation, having de-peered at APE/WIX.
They won't, as they don't have multicast. If they did, they would just receive it internationally, and they wouldn't care about cost as it's a single stream per channel instead of a stream per user. -- Nathan Ward
On Mon, Dec 15, 2008 at 06:56:19PM +1300, Nathan Ward said:
What applications are available for multicast? Can Flash do multicast? I think Quicktime and Windows media can, yeah?
I don't know about Quicktime (I suspect it can), or Flash (I suspect it can't), but to multicast off a Windows media server you move from Windows Server 2008 Standard to Windows Server 2008 Enterprise or Datacentre, with the attendant increase in cost. So for many smaller deployments, you'd be balancing transit cost versus server cost, and the transit cost may be lower.

I think there's also a perception that multicast is a solution to a diminishing problem. Most of the major broadcasters seem to be acknowledging that the end of appointment TV (and radio) is nigh [1]. If live content is ~40% of your volume and dropping, and you still need some kind of non-multicast infrastructure to deal with the on-demand (individually time-shifted) content you're dishing up, then why bother with the effort of setting up multiple platforms?

Cheers
Si

[1] I first heard this excellent phrase from the redoubtable Mr Macewen, I dunno if he spawned it or pilfered it, but credit where credit is due, etc.
On Tue, 2008-12-16 at 22:27 +1300, Simon Blake wrote:
I think there's also a perception that multicast is a solution to a diminishing problem.
Aided by CDNs and approaches such as anycast, perhaps?

Once upon a time the issue was content server scaling. Geographic distribution and anycast/CDN trickery helps with that. Quite a lot. Bigger and meaner servers also help. As does the proliferation of datacentres.

Another once upon a time, the issue was bandwidth to the customer. UCLL/FTTP/FTTH/FTTX/etc helps with that. Copyright does too, come to think of it ;-)

Once upon a time router CPU inhibited the viability of multicast. FPGAs helped that. Then it was backbone problems. Err, folks helped with that :-)

Umm, what's the problem again?

Seems to me multicast is an excellent vehicle for the distribution of noise. Stuff you might want someday but are not sure. Dribble it to me over a period of time and I might check it out.

Multicast I suppose is also good for convergence zealots (who are actually good folks trying to make telco business more efficient) who say that a common distribution platform is good. I'm OK with that. Anyone remember that argument around how the ALL IP network will realise lower TCO? Oh, the holey grail! (sic)

Noting your reference, Simon (hat tip Hamish)... appointment TV. Hands up who wants an Internet where you have to keep to someone else's appointment to connect to stuff? Is that perhaps exactly not the point?

jamie (speaking totally from a personal, non-aligned perspective)

[p.s. back in 2000 I played around with multicast. The biggest problem was I couldn't synchronise CPU processing such that the audio/video spat out on distributed heterogeneous systems at almost exactly the same time over a wide geographic area. At that point I gave up. But it was fun playing with it.]
On 16/12/2008, at 11:24 PM, jamie baddeley wrote:
[p.s back in 2000 I played around with multicast. the biggest problem was I couldn't synchronise CPU processing such that the audio/video spat out on distributed heterogeneous systems at almost exactly the same time over a wide geographic area. At that point I gave up. But it was fun playing with it.]
Could you do it with unicast any better? I would say that regardless of the -cast, you would have to have accurate timing at all the receivers, and buffer ever so slightly to keep stuff in sync. Does MPEG have a standard for that - ie. "play this keyframe at time x" metadata? Or is there some other way it's done? -- Nathan Ward
I believe this is what RTP is for, no?
--
Cameron Kerr
Sent from my iPod
If memory serves me correctly I was using RTP. The problem was the machines decoding it. A long time ago, and it was not the point, just the postscript :-)

jamie
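[For anyone wondering what RTP actually carries for this: each packet has a 16-bit sequence number and a 32-bit media-clock timestamp in its fixed 12-byte header (RFC 3550), which receivers buffer against and map onto a local clock for playout. A small sketch of parsing those fields; packet source and payload handling are left out:]

```python
# Sketch of the timing fields RTP (RFC 3550) carries: a sequence number
# and a media-clock timestamp in the fixed 12-byte header. Receivers use
# the timestamp (plus RTCP sender reports) to schedule playout.
import struct

def parse_rtp_header(pkt: bytes):
    """Return (payload_type, sequence, timestamp, ssrc) from an RTP packet."""
    if len(pkt) < 12:
        raise ValueError("too short to be RTP")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", pkt[:12])
    if b0 >> 6 != 2:                    # version field must be 2
        raise ValueError("not RTP version 2")
    payload_type = b1 & 0x7F            # e.g. 14 = MPEG audio, 33 = MPEG-2 TS
    return payload_type, seq, ts, ssrc
```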
Hi guys, I think it's fair to say this conversation has lost its operational relevance. Time to take it somewhere else (someone might want to set up a discuss/general list for these types of conversations to migrate to).

Cheers, Patrick
Seems to me like there is a lack of user demand for this kind of stuff. There's certainly no lack of content on the tubes today, so that's not the problem.

Multicast seems to work pretty good for re-imaging 1000 machines at a time or whatever, and I hear certain broadcasters use it to distribute the content around their network where it needs to go everywhere simultaneously, but as far as general user usage, I doubt it's going to happen any time soon.

If you can't point and click on a link in your browser and make it work, it's not going to be any use to anyone. If it was going to work, it would have worked by now. Please go back to your regularly scheduled anycast and akamai enabled tubes. :)

Cheers,
Blair

As usual, speaking for myself.
On 17/12/2008, at 9:09 AM, Blair Harrison wrote:
Seems to me like there is a lack of user demand for this kind of stuff. There's certainly no lack of content on the tubes today, so that's not the problem.
Multicast seems to work pretty good for re-imaging 1000 machines at a time or whatever, and I hear certain broadcasters use it to distribute the content around their network where it needs to go everywhere simultaneously, but as far as general user usage, I doubt it's going to happen any time soon.
If you can't point and click on a link in your browser and make it work, it's not going to be any use to anyone. If it was going to work, it would have worked by now. Please go back to your regularly scheduled anycast and akamai enabled tubes. :)
I imagine you probably can with WMP - its little stream-description .asx thing allows for fallback streams etc. So, put multicast first, and fall back to unicast. Not sure what the fallback time would be like.

If we're talking about broadcast-style media, I don't see there really ever being any end-user demand for it. This is a network/content provider optimisation - it doesn't give end users anything new. If a content provider can drop a multicast stream on to APE/WIX and have it fall back to unicast then that seems like a good thing, I would say.

--
Nathan Ward
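[A sketch of what that multicast-first/unicast-fallback arrangement might look like, assuming (as Nathan suggests) that WMP tries the REF entries in an .asx in order. The helper and all URLs are invented for illustration:]

```python
# Hypothetical helper that writes a WMP .asx playlist with a multicast
# source first and a unicast fallback; multiple REFs in one ENTRY are
# tried in order, so the second URL is only used if the first fails.
ASX_TEMPLATE = """<ASX VERSION="3.0">
  <ENTRY>
    <TITLE>{title}</TITLE>
    <REF HREF="{multicast_ref}" />
    <REF HREF="{unicast_ref}" />
  </ENTRY>
</ASX>
"""

def write_fallback_asx(path, title, multicast_ref, unicast_ref):
    with open(path, "w") as f:
        f.write(ASX_TEMPLATE.format(title=title, multicast_ref=multicast_ref,
                                    unicast_ref=unicast_ref))

# Example: the .nsc file describes the multicast session for WMP.
write_fallback_asx("channel1.asx", "Example channel",
                   "http://media.example.net/channel1.nsc",
                   "mms://media.example.net/channel1")
```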
On 16/12/2008, at 10:27 PM, Simon Blake wrote:
I don't know about Quicktime (I suspect it can), or Flash (I suspect it can't) but to multicast off a Windows media server you move from Windows Server 2008 Standard to Windows Server 2008 Enterprise or Datacentre, with the attendant increase in cost. So for many smaller deployments, you'd be balancing transit cost versus server cost, and the transit cost may be lower.
VLC can send multicast, but I'm not sure if it can do WMA/WMV. I'm pulling in Newstalk ZB as WMA from StreamingNet and multicasting it around my network here as MP3. I'd assume it's possible to do WMA/WMV without much effort, though the cute menus don't have an option for it.

Not sure how to make Windows Media Player listen to it though. Quicktime requires an SDP in a file; WMP is probably the same.
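[For reference, the "SDP in a file" approach might look something like this: write out a session description for the multicast group and open the resulting .sdp in the player. Addresses and names are examples only; payload type 14 is the static RTP/AVP assignment for MPEG audio (RFC 3551), and the /32 on the c= line is the session TTL, not a prefix length:]

```python
# Sketch of the "SDP in a file" approach: describe the multicast MP3/RTP
# session in an .sdp file and open it in QuickTime or VLC.
SDP = """v=0
o=- 0 0 IN IP4 192.0.2.1
s=Example multicast radio
c=IN IP4 239.255.12.42/32
t=0 0
m=audio 5004 RTP/AVP 14
"""

with open("radio.sdp", "w") as f:   # then open radio.sdp in the player
    f.write(SDP)
```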
I think there's also a perception that multicast is a solution to a diminishing problem. Most of the major broadcasters seem to be acknowledging that the end of appointment TV (and radio) is nigh [1]. If live content is ~40% of your volume and dropping, and you still need some kind of non-multicast infrastructure to deal with the on demand (individually time shifted) content you're dishing up, then why bother with the effort of setting up multiple platforms?
Yeah, valid point. I'm not entirely sure that broadcast content will disappear entirely though. Even if it does, there are other things that multicast would be useful for - RSS over multicast could be fun, for example. Poor example, as bandwidth requirements are really low. I like the idea of low bandwidth requirements at the source though - really lowers the barrier to entry. What about distributing Linux ISOs?
[1] I first heard this excellent phrase from the redoubtable Mr Macewen, I dunno if he spawned it or pilfered, but credit where credit is due, etc.
Brilliant! That sounds like a Hamish original to me. -- Nathan Ward
On 17/12/2008, at 12:24 AM, Nathan Ward wrote:
I think there's also a perception that multicast is a solution to a diminishing problem. Most of the major broadcasters seem to be acknowledging that the end of appointment TV (and radio) is nigh [1]. If live content is ~40% of your volume and dropping, and you still need some kind of non-multicast infrastructure to deal with the on demand (individually time shifted) content you're dishing up, then why bother with the effort of setting up multiple platforms?
Yeah, valid point. I'm not entirely sure that broadcast content will disappear entirely though.
People still seem to actively seek out "appointment media" even when alternatives are available (for example listening to live radio stations online when they could download music from any number of legit and non-legit sources). I don't doubt live content as a proportion of total volume is dropping, but I don't think it'll disappear entirely any time soon, and I'd even go so far as to say the volume won't be insignificant for a while yet.
What about distributing Linux ISOs?
Well, the sender would have to send at the speed of the slowest likely receiver, and who wants to wait half an hour for the next Gentoo ISO broadcast to roll around? ;) I guess you could have several differently-paced streams, but it strikes me that the problem of distributing Linux ISOs has been quite adequately solved already... -- Jasper Bryant-Greene Network Engineer, Unleash ddi: +64 3 978 1222 mob: +64 21 129 9458
On 17/12/2008, at 12:35 AM, Jasper Bryant-Greene wrote:
People still seem to actively seek out "appointment media" even when alternatives are available (for example listening to live radio stations online when they could download music from any number of legit and non-legit sources). I don't doubt live content as a proportion of total volume is dropping, but I don't think it'll disappear entirely any time soon, and I'd even go so far as to say the volume won't be insignificant for a while yet.
It occurs to me that multicast could be a useful mechanism for subscription media. Podcasts, for example. If someone subscribes to a TV show or a podcast, why would you wait until you want to watch it before you start the download? Reducing the amount of streaming media means we can care less about latency and jitter in the network, which probably makes engineering a bit easier.
What about distributing Linux ISOs?
Well, the sender would have to send at the speed of the slowest likely receiver, and who wants to wait half an hour for the next Gentoo ISO broadcast to roll around? ;)
I guess you could have several differently-paced streams, but it strikes me that the problem of distributing Linux ISOs has been quite adequately solved already...
Divide the file into 120x5MB chunks, play them over and over on 64k or 128k streams. At 128kbit/s that's 15Mbit for a 600MB ISO, with a minimum download time of 15 minutes. Adjust the chunk sizes to fit your available bandwidth. Clients can subscribe to as many as they want at a time - and we have things like RSVP so they can figure out how many they can ask for concurrently (I think... I haven't used RSVP before).

or

Why do you have to start listening at the start of the file? As long as you know how far through you are, who cares where you start? The start is going to come later, and if not you can fall back to unicast to grab it. Several streams of different bitrates works fine in that situation. Let's say we do the following bitrate streams: 64, 128, 192, 256, 384, 512, 768, 1024, 1536, 2048, 3072, 4096, 6144, 8192 (kbit/s). That's ~28Mbit/s total, which lets you distribute content to as many people as you want, up to 8Mbit/s.

From a network point of view, it occurs to me that network providers would be much happier if all those Linux ISOs were one or two streams in over their international circuits, instead of one per end user. Also, BitTorrent uses lots of upstream - which seems wasteful. I'm not sure if it's a problem right now though - probably not in NZ; I don't know of any ISPs providing Internet access to end users with constrained upstream.

This is a rather interesting thought exercise.

--
Nathan Ward
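[A toy sketch of the chunk-carousel idea: one sender loops a single chunk over its own multicast group at a paced bitrate, and the scheme above would run one of these per chunk/stream. Group, port, TTL and the offset framing are all invented for illustration:]

```python
# Toy chunk-carousel sender: replay one chunk of a file over a multicast
# group forever, pacing datagrams to approximate the target bitrate.
import socket
import time

GROUP, PORT = "239.255.12.50", 5004
BITRATE = 128_000                 # bits per second for this stream
DGRAM = 1200                      # payload bytes per datagram

def carousel(chunk: bytes):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
    interval = DGRAM * 8 / BITRATE          # seconds between datagrams
    while True:                             # replay the chunk forever
        for off in range(0, len(chunk), DGRAM):
            # Prefix the byte offset so receivers can reassemble the chunk
            # no matter where in the loop they joined.
            sock.sendto(off.to_bytes(4, "big") + chunk[off:off + DGRAM],
                        (GROUP, PORT))
            time.sleep(interval)
```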
On Wed, Dec 17, 2008 at 1:10 AM, Nathan Ward wrote:
Let's say we do the following bitrate streams: 64, 128, 192, 256, 384, 512, 768, 1024, 1536, 2048, 3072, 4096, 6144, 8192 (kbit/s).
What about my grandma who's still on 56k dialup? People in Zambia would kill for 64k intergoogletube - I know I would have when I was there! I know this was just a top-of-the-head example, but it's interesting how people's expectations of what the minimum standard is change over time. =P
At 10:27 p.m. 16/12/2008, Simon Blake wrote:
On Mon, Dec 15, 2008 at 06:56:19PM +1300, Nathan Ward said:
What applications are available for multicast? Can Flash do multicast? I think Quicktime and Windows media can, yeah?
I don't know about Quicktime (I suspect it can), or Flash (I suspect it can't) but to multicast off a Windows media server you move from Windows Server 2008 Standard to Windows Server 2008 Enterprise or Datacentre, with the attendant increase in cost. So for many smaller deployments, you'd be balancing transit cost versus server cost, and the transit cost may be lower.
Server 2003 does it as well. AKL Uni has quite a bit coming off its '03 server.

One of the big attractions in M/C is not having to have a server. On Internet2 there's a regular stream (sorry) of M/C trials using boxes like the Visionary Solutions, QVidium or VBrick, to multicast direct from an appliance. These aren't too expensive, and Amino makes a range of real cute set-top boxes that are very affordable. They also work at HD levels, around 10Mbps for 1080i. The general idea has been to make a solution that is sort of IP-cable-TV. This is handy in hotels or if you're a cable operator.

Sadly, even on Internet2 there is variable reachability. Each trial is followed by a burst of emails giving reception around the USA. As a user of streaming deliveries, it suggests to me that it's still too hard to make a living off.

But it does raise a point that I was wondering about doing a talk on at NZNOG-09: big-ish networks. Back in '94 WCC did quite a bit of modelling around FTTH. We worked on what sort of network design would be required to deliver 100+Mbps to 100,000 homes. Coming from an electricity background, it was interesting. I suspect that with the new Govt move to FTTH we should start a group discussion on some of the issues now, rather than waiting until things break. It might also be a good time to start large-ish use of IPv6.

So - a question - would there be interest in a talk on big-ish networks, even if all it does is trigger a wild group discussion that dissolves into beer drinking (to stay on topic)?
I think there's also a perception that multicast is a solution to a diminishing problem. Most of the major broadcasters seem to be acknowledging that the end of appointment TV (and radio) is nigh [1]. If live content is ~40% of your volume and dropping, and you still need some kind of non-multicast infrastructure to deal with the on demand (individually time shifted) content you're dishing up, then why bother with the effort of setting up multiple platforms?
For nearly two decades it's been known that on-demand viewing was the killer app. Live streaming is fine but is really only needed for synchronous events like election night, tennis, rowing, etc. On-demand is very easy on servers, can be squidified, and so becomes easy on networks.

Part of the work mentioned above was a distributed (it has to be distributed for bandwidth reasons) network of 1000 squid caches, just for Wellington. Interestingly, there are roughly 1000 electricity substations, most of which have enough space for a rack or half rack. They also have good power :-) - sorry. The substations are also in geographical locations that follow the load requirements (ie. they are where the people are). The issues aren't just network topology, but also geographic.

So - how does the group feel about managing large cache networks, measured in 1000s of servers, geographically dispersed? What network topologies are viable? Is this of interest for NZNOG-09? I would also explain how the power companies in Wgtn and London do it (or did).

Richard
At the risk of upsetting everyone by adding to my own post:

One of the big attractions in M/C is not having to have a server. On Internet2 there's a regular stream (sorry) of M/C trials using boxes like the Visionary Solutions, QVidium or VBrick, to multicast direct from an appliance. These aren't too expensive, and Amino makes a range of real cute set-top boxes that are very affordable. They also work at HD levels, around 10Mbps for 1080i.
There are cute apps like DVTS. They take the DV video frames (25Mbps), wrap IP around them and biff them onto the network in either a point-to-point unicast, or multicast. We did once run it between VUW and CityLink, but it does require 40Mbps. It isn't efficient on bandwidth, but great on CPU. It will run on most laptops. So you can multicast off a laptop a tru-ish broadcast video signal. It's at the lowest end of broadcast standard. (Although News will play anything if it's interesting.)

The industry is heading to HD and things aren't quite so easy. Currently an HD camera puts out 1.4Gbps SDI and is fiber connected. There are the cheap HDV cameras that use FireWire and achieve video at 19Mbps. There is a version of DVTS apparently (very hard to find) that will shift the HDV frames. The problem is HDV is MPEG compressed with long GOP, so you need a decent machine to decompress. VLC will shift HD using MPEG-2, but once again you're into compression and the CPU requirements go up. The 1080i/10Mbps figure I quoted was for 1080i heavily compressed. Currently we struggle to do it live. 720p is OK, but 1080i is just a bit too hard. We use a quad core with an HD-SDI feed at 1.4Gbps. We will try an 8-core CPU when money allows (sponsors are very welcome). (We can do MPEG, but the bandwidth is heavy and MPEG servers are also pricey - hence multicast.)

For the HD TV you watch, it's typically shifted using MPEG-2, either over satellite at 26Mbps or over fiber using a Tandberg encoder at 96Mbps. (So all "big" venues need fiber.) The links are all point-to-point, using distribution amps at the international broadcast centers. It's all HD-SDI. The costs are staggering - a typical HD truck is $25M. A camera chain without lenses is around $300K. A typical rugby match has 17+ cameras. But considering the feeds go to NZ, Aust, SA, UK, Can, USA and EU, the audience is large and the revenues also large.

And the really challenging bit is that 1080p is now available, running at 3Gbps. So watch what you do when buying that new plasma or LCD screen for Christmas.

So how do we run 100K+ feeds of 3Gbps to the rugby-mad homes of Wellington?

R
On 2008-12-15 17:23, Cameron Kerr wrote:
Industry agrees - IPv6 transition plan needed
Media Release - 15 December 2008
IPv6 supports a much larger pool of addresses, enough to assign an IP address to each grain of sand in a fine layer covering the entire planet.
Does anyone else get really fed up with this analogy?
Very, but it's much less hyped than the first draft. When I speak on this point, I try to limit myself to conveying that IPv6 has enough addresses and IPv4 doesn't. That's a necessary and sufficient argument, but of course doesn't carry the emotion that marketing people like.
From an operational point of view the key argument is "as many addresses as you need" instead of "the fewest you can get away with".
Brian
participants (15)

- Antonio Pavletich
- Barry Murphy
- Blair Harrison
- Brian E Carpenter
- Cameron Kerr
- Dean Pemberton
- jamie baddeley
- Jasper Bryant-Greene
- Jonathan Woolley
- Mark Foster
- Nathan Ward
- Patrick Jordan-Smith
- Richard Naylor
- Simon Blake
- VeNoMouS