Steve Wray wrote:
Redundancy isn't really what I mean; more forward-thinking.
So if a TelstraClear engineer thought about it really hard, the truck would have missed the fibre? Personally, I don't think the power of prayer is a valid network protection scheme. Though when I used to work there, it was useful on occasion ;)
From what I've heard out of Telstra people -- and that's just the faults and 'helpdesk' people -- the attitude seems to be that a 'truck hitting a pole' is not something that could have been foreseen, and that there was therefore no point in having a contingency plan to deal with it.
To me, this just seems wrong-headed.
My question for Telstra is, was there a plan? How well did it work out? If they didn't have a plan or if it didn't work out well, how are they going to address this in future? And the answers I've been getting back from Telstra people are rather disconcerting.
A quick explanation of how things are done in a Telco may be in order.

1. You need money to do everything. This includes laying fibre, writing plans and taking a toilet break.

2. There are two types of money to be spent - OPEX and CAPEX (OPerational expenditure and CAPital expenditure).

3. OPEX is evil. Accountants see it as a black hole that money is thrown down. It is not an investment; it is paying some guy for something you already own. OPEX is constantly cut. Many things that should be done as OPEX are classed as CAPEX by squinting your eyes, with the aid of smoke and mirrors.

4. CAPEX is OK - it is investing money in something with the expectation of a return at some point in time. Dig a hole, lay a cable, charge people to use it, and eventually they've paid you more than it cost you to lay - this is CAPEX.

You need a business case to spend CAPEX. This will consist of all likely and unlikely costs and revenues, and consequently a best, worst and expected-case return on investment. If the company thinks this is the best way to spend its money, it will do it. If it thinks it can make a larger profit, or the same profit in a shorter time frame, elsewhere, it will do that. This is The Way It Works(tm) for a Telco and most businesses. If you can write a business case showing that getting 1000 monks in Tibet to pray for the safety of TelstraClear's network will turn a profit, then they will do it.

If prayer wasn't what you meant by "forward-thinking", and redundancy wasn't it either, then I'm at a loss. What else could they do? Not lay the fibre there? That location was probably the one that brought the best return on investment to justify the CAPEX. Could there have been a better place to put it? Probably. Do companies have the time to second-guess every decision made? Or will someone else beat you to the punch if you do that?
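For what it's worth, the best/worst/expected-case arithmetic above boils down to a payback calculation. Here is a minimal sketch of that, with entirely invented figures - a real Telco business case would model the likely and unlikely costs and revenues in far more detail:

```python
# Hypothetical sketch of the CAPEX business case described above.
# All dollar figures are made up for illustration only.

def payback_years(capex, annual_revenue, annual_opex):
    """Years until cumulative net revenue covers the initial spend."""
    net = annual_revenue - annual_opex
    if net <= 0:
        return float("inf")  # the lay never pays for itself
    return capex / net

# Best, expected and worst cases for a fibre lay (invented numbers).
cases = {
    "best":     payback_years(capex=2_000_000, annual_revenue=900_000, annual_opex=200_000),
    "expected": payback_years(capex=2_000_000, annual_revenue=600_000, annual_opex=250_000),
    "worst":    payback_years(capex=2_000_000, annual_revenue=300_000, annual_opex=300_000),
}

for name, years in cases.items():
    print(f"{name}: payback in {years:.1f} years")
```

If the expected case pays back slower than some other use of the same money, the other use wins - which is the whole point of the rant above.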
Sorry for the massive rant, but berating a company, any company, for doing their best to fix a fault caused by some truck driver who was likely high on P at the time just seems like cynical people trying to complain about nothing to me. Sure, if you were one of the people affected it must've really sucked. In the end them's the breaks. On a lighter note, Merry Christmas to everyone - try to shake off the cynicism and anger that accumulated through the year at the beach this summer! Jonathan
jdmwoolley(a)ihug.co.nz wrote:
Steve Wray wrote:
Redundancy isn't really what I mean; more forward-thinking.
So if a TelstraClear engineer thought about it really hard the truck would have missed the fibre?
Now that's just silly. The outage lasted for quite a while. Did it last that long because there was no effective plan in place to deal with this kind of eventuality? That's the question.
Personally, I don't think the power of prayer is a valid network protection scheme. Though when I used to work there, it was useful on occasion ;)
I prefer the power of maniacal laughter...
Sorry for the massive rant, but berating a company, any company, for doing their best to fix a fault caused by some truck driver who was likely high on P at the time just seems
Yeah well, what I would like to know is: did they do their best, and did they learn from the experience? As I said, the attitude of the people at TC I've spoken with seems to be that since this was such a fantastically unexpected kind of eventuality, there's *nothing* to learn from it. I'm not blaming TC *for* it; I just think they might do something to inspire confidence rather than sucking the confidence right out of the customer (which is what their 'helpdesk' people have done, for me, so far).
On Sat, 2006-12-23 at 07:44 +1300, Steve Wray wrote:
The outage lasted for quite a while. Did it last that long because there was no effective plan in place to deal with this kind of eventuality? Thats the question.
If there was no effective plan, I think you would have seen a much longer outage - more like a week or so. As a residential customer who doesn't pay for redundancy, and considering what happened, I don't have any complaints. There are always other options. I can access the Intarweb-thingy from work, and if I couldn't there's always CafeNet (yay!). The reason the outage lasted "so long" (I've heard) was that when they fixed the obviously broken bit, they discovered the cables had been stretched and there were breakages underground. I don't see how they could have known for sure that had happened until they got the main break fixed. I don't imagine it would make sense to call in the guys to do the digging just to have them hang around doing nothing until you know they're needed.
I'm not blaming TC *for* it, I just think they might do something to inspire confidence rather than sucking the confidence right out of the customer (which is what their 'helpdesk' people have done (for me) so far).
I didn't bother calling the helpdesk, but I heard the above from someone who did. I guess what I'm trying to say is: get over it. Disclaimer: I have never worked for a Telco and nor have any of my friends or relations. Lesley W
At 10:17 a.m. 23/12/2006 +1300, Lesley Walker wrote:
The reason the outage lasted "so long" (I've heard) was that when they fixed the obviously broken bit they discovered the cables had been stretched and there were breakages underground. I don't see how they could have known for sure that had happened until the got the main break fixed. I don't imagine it would make sense to call in the guys to do the digging just to have them hang around doing nothing until you know they're needed.
You often have to have them around for a while and dig several times, as you may have to locate several faults. Remember that a fault location gives you a break at xx.x% of cable length, or so many metres out from one end. Finding the actual point is often tricky, and the +/- error can be several metres - typically a road width on a short cable. On a long one like this, several hundred metres. These days you seem to wait a lifetime for new poles. In the old days utilities kept a stock of poles "in the yard" as a cheap form of insurance. Accountants don't like that now. I've heard horror stories from the snow down south this year - they ran out of poles several times.
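To make the error window concrete: since the measurement error is roughly proportional to cable length, the same relative error that pins a short cable down to a road width leaves a long one uncertain over hundreds of metres. A rough sketch, with an assumed (made-up) relative error of 0.2%:

```python
# Rough sketch of the fault-location arithmetic described above.
# A reflectometer reports the break as a fraction of cable length
# from one end; the 0.2% relative error here is an assumption for
# illustration, not a real instrument spec.

def fault_window(cable_length_m, fault_fraction, rel_error=0.002):
    """Return (nearest, farthest) plausible fault positions in metres."""
    distance = cable_length_m * fault_fraction
    margin = cable_length_m * rel_error  # error scales with cable length
    return distance - margin, distance + margin

# Short cable: the window is about a road width.
print(fault_window(1_000, 0.5))    # (498.0, 502.0) -> +/- 2 m
# Long haul: the same relative error spans hundreds of metres.
print(fault_window(100_000, 0.5))  # (49800.0, 50200.0) -> +/- 200 m
```

Hence the repeated digs: every spot inside that window is a candidate until you expose the cable and look.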
On Sat, 2006-12-23 at 07:44 +1300, Steve Wray wrote:
Yeah well, what I would like to know is did they do their best and did they learn from the experience? And as I said, the attitude of the people at TC who I've spoken with it seems that they have this notion that since this was such a fantastically unexpected kind of eventuality, theres *nothing* to learn from it.
I drove round the corner in Khandallah where the truck hit the pole (or just hit the wires, whichever) at about 1800 on Wednesday. Given the number of people (and the number of work trucks) present, they must have had some contingency plans that kicked in pretty quickly. So while they might have some things to learn from it (like checking for secondary damage while fixing the primary damage?), their initial response seems to have been pretty good. Cheers! -- Andrew Ruthven Wellington, New Zealand At home: andrew(a)etc.gen.nz | This space intentionally | left blank.
jdmwoolley(a)ihug.co.nz wrote:
Steve Wray wrote: [snip] 2. There are two types of money to be spent - OPEX and CAPEX (OPerational expenditure and CAPital expenditure).
3. OPEX is evil. Accountants see it as a black hole that money is thrown down. It is not an investment. It is paying some guy for something you already own. OPEX is constantly cut. [snip] 4. CAPEX is ok - it is investing money in something with the expectation of a return at some point in time. Dig a hole, lay a cable, charge people to use it, eventually they've paid you more than it cost you to lay - this is CAPEX. You need a business case to spend CAPEX. This will consist of
I've been thinking about how I approach this problem. Clearly, as a systems administrator, OPEX is very important to me. So how does one explain to bean-counters (who don't like spending money) and suits (who often have their heads way up in the clouds) that OPEX is essential? The approach I use is fairly straightforward: CAPEX creates assets. OPEX stops assets from becoming liabilities. A few examples about things like fleets of cars, lack of maintenance, ensuing accidents and unnecessary deaths seem to do the trick. :)
participants (5)
- Andrew Ruthven
- jdmwoolley@ihug.co.nz
- Lesley Walker
- Richard Naylor
- Steve Wray