On 5/11/2007, at 2:05 PM, Neil Gardner wrote:
But the fire service was hit fairly hard. The reporting system is also used for turnouts, and was down, so the comm centres had to page everyone out manually.
[snipped lots of interesting stuff about how various emergency services handled the drop back to manual processes]
Anyone else pretty pleased and impressed with how well the people and organisations involved dealt with a failure in the automated systems? Personally, it makes me a bit happier to realise that some genuinely critical services have pretty solid backup processes that, by all accounts so far, were followed and performed satisfactorily.
I think everyone with a reliance on technology for their business / personal life needs to take a lesson from this.
If your reliance on technology is _THAT_ important, you WILL have suitable backup systems in place. And I don't mean having 2 x ADSL connections either! :-)
Do Telecom manage this service for the Fire Service, or is it managed by a third party, or even by the Fire Service itself? It seems some people are pinning the blame for the Fire Service outage on Telecom - but if there's a third party involved, I'd put the blame on them for either:

a) Not planning for an outage (which might be because of b, or incompetence).
b) Deciding that an outage wasn't likely to happen.

In short though - yep, their fallback appeared to work pretty well, and it's good that they were prepared for it.

A similar question for ISPs might be: what happens if you lose power to your call centre? Do you have an answer phone (or something) somewhere off your network that you can have calls diverted to?

Does anyone have info about how this power outage happened? The analysis of 365 Main's outage was interesting - I wonder if Telecom will provide the same level of info?

--
Nathan Ward