As of early 1998, Britain's national plan to bring order and interoperability to the patient records and operations of the National Health Service was going to cost about £1.2bn and be an enormous business and political success. Ten years later it's widely expected to be a continuing £13bn disaster that will eventually be classified as an outstanding success only on the resumes of those responsible.
This kind of escalating commitment to failure may seem suicidal, but it's quite common among large organizations. Canada's comparably nationalized health bureaucracies have, for example, hatched numerous similar schemes in which they hired basically the same people under broadly the same rules to produce pretty much the same kinds of stunning financial successes for their contractors - and IT failures for users and taxpayers.
There are comparable American examples at the VA too, but the important thing to understand about these disasters is that the underlying challenge isn't that hard to meet. In fact, the core problem - architecting safe record interchange among competing entities - was hard twenty years ago, when these projects were getting started, but is really pretty straightforward today.
So why can't organizations with essentially unlimited budgets consistently deliver multi-constituency, large roll-out, but reasonably uncomplicated systems?
The answer, I think, is that the processes by which these projects get approved, acted on, and shelved both evolve and enforce rules that create closed communities in which only members are considered qualified for membership. As a result, the taxpayer ends up funding a mutual admiration society in which the wrong people are hired and re-hired to do the wrong things in the wrong ways.
You cannot, for example, become a "key design resource" in a major player's proposal on one of these things unless you can claim to have played a significant role in a previous project of comparable scope and agenda. Because those previous projects have all pretty much failed, the simple-minded interpretation of the experience requirement is that the key prerequisite for any senior role in projects of this kind is the proven ability to contribute to, and then survive, large-scale project failures.
Similarly, TC (Tiny Co.) isn't going to be considered for the next hundred-million-dollar project put out by one of Canada's provincial health agencies; only big companies need apply because, you know, they have the people and other resources needed to guarantee success - except, of course, that they never do.
The big argument here is that you need a ten-billion-dollar company to stand behind a hundred-million-dollar project because you simply can't hold companies like TC responsible for losses on the scale these projects generate.
In practice, however, the contracts generally allow the companies involved to bill per diems for rescue personnel brought in as the project starts to collapse - and the number of lawsuits against these companies that have gone all the way to judgment against them can just about be counted on a shark's middle finger, largely, I think, because both sides know up front that the other guy's personal incentives always favor getting along by going along.
It's hard to understand the extent to which the people, processes, and bad assumptions interlock at every stage from project inception to burial if you haven't worked inside a few of these projects - but one example of how this works at the detail level may give you the flavor: contracting agencies charged with assuring independence in the services purchasing process routinely drop from further consideration any bid more than about 40% below the pre-qualified bidder average.
Since the buyer authorizes the pre-qualified bidder list, what this means is that the buyer's assumptions about costs and technologies force the contracting agency to rule out innovative proposals without any discussion of their possible business, technical, or professional merit.
When Sun first released StarOffice as free software, for example, its government services group found itself being taken off a lot of government and agency approved-vendor lists because StarOffice's cost didn't make it to 65% of the average set by Microsoft and Corel - and if you want to propose a five-man team to implement a medical records exchange using Cocoon and Postgres on Linux, you'll find there's no way to pad the price sufficiently to get a hearing at any major national, provincial, or state agency.
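To make the mechanics concrete, here is a minimal sketch of how a screening rule like this works in practice: any bid falling more than a given fraction below the pre-qualified bidder average is dropped before anyone looks at its merits. The function name, bidder names, and dollar figures are all hypothetical illustrations, not drawn from any real agency's process.

```python
def screen_bids(bids, cutoff_fraction=0.40):
    """Split bids into (considered, dropped) given bidder -> price.

    A bid is dropped when it sits more than cutoff_fraction below
    the average of all pre-qualified bids - no merit review occurs.
    """
    average = sum(bids.values()) / len(bids)
    floor = average * (1 - cutoff_fraction)
    considered = {b: p for b, p in bids.items() if p >= floor}
    dropped = {b: p for b, p in bids.items() if p < floor}
    return considered, dropped

# Illustrative bids: three conventional proposals and one cheap
# open-source-based proposal from a small shop.
bids = {
    "BigCo A": 95_000_000,
    "BigCo B": 105_000_000,
    "BigCo C": 100_000_000,
    "Tiny Co.": 12_000_000,  # the innovative low bid
}
considered, dropped = screen_bids(bids)
# The average here is $78M, so the floor is $46.8M: the $12M bid
# is excluded automatically, exactly as described above.
```

Note that the cheap bid itself drags the average down, but never enough to save itself: the rule structurally guarantees that any proposal an order of magnitude cheaper than the incumbents' pricing can't be heard.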
Notice that these realities are not artifacts of dishonesty or incompetence among those involved. Most of them know that their own processes prevent cost cutting, prevent innovation, and essentially assure project failures - and if you point out that repeating the same processes in hopes of getting a different result is insane, they'll generally agree with you and then go right ahead and do it again.
Why? Because fundamentally they're captive to processes that have evolved to mitigate the personal consequences of failure - and that's where unsupported open source comes in as a paradigm breaker.
It takes a lot to bring an open source product into the bureaucracy, but once it's there the people involved start to discover the obvious: that open source works, that people who care first about software and technology are a lot more effective as responders to problems than people who care first about billings, and ultimately that not having to deal with obstructive business processes can be insanely liberating.
It's the old business of the bank owning you if you owe it a few thousand bucks, but you owning the bank if you owe it a few hundred million: sign a contract with SGIF (Some Giant International Firm) and the mutual self-interest of all involved means they own you; have Tiny Co. install and configure some free software for you, and you'll own TC.
Open source is starting to break into these environments - and, more importantly, budget issues are driving some of the people involved to look at breaking the process chains that led them to demand the right to pay for products like Sun's StarOffice or Red Hat's Linux. And once that gains a bit of momentum? I'm both hoping and guessing that we'll see something akin to what happens when a dam springs a small leak: the pressure behind that first small flow makes the break bigger and bigger until the whole structure collapses.