Ever get the feeling that you've got hold of an important idea but can't quite get it straight in your head? That's how I've felt in our discussions of the role user interfaces and perceptions play in defining the differences between what we usually do in applications development and what we should be doing. Today's confusion is more of the same - a related issue whose tendency to slide away into infinities of buts, exceptions, and extensions demonstrates that I don't really understand how it all fits together.
The argument sounds simple: we're constrained, in making decisions on user services, by the cost of change - and the cost of change varies with the technology in place in a very systematic way: the more proprietary the dominant technology, the greater the cost of change and therefore the more constrained our decision making is.
Organizationally, the cost of IT change does not matter by itself: what matters is the net cost to the organization. In other words, if a dollar spent on IT change returns more than a dollar in organizational productivity, it's worth doing - and otherwise not.
Unfortunately, measuring direct cost is easy, but measuring benefits is not. Checks written are easily tracked, tied to specific project managers, and clearly dated; benefits are not: they're usually amorphous, always arguably due to multiple causes, and generally spread over time.
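To make the asymmetry concrete, here's a minimal sketch of the net-cost test described above: a change passes only if benefits, spread over time, outweigh the single dated cost. The figures, the four-year horizon, and the 10% discount rate are purely illustrative assumptions, not numbers from any real project.

# A minimal sketch of the net-cost test: worth doing if discounted
# benefits exceed the upfront cost. All numbers are hypothetical.

def net_value(upfront_cost, yearly_benefits, discount_rate=0.10):
    """Return net value of an IT change: discounted benefits minus cost.

    upfront_cost    -- a single, dated, easily audited number
    yearly_benefits -- estimated benefits spread over future years
    """
    discounted = sum(
        b / (1 + discount_rate) ** year
        for year, b in enumerate(yearly_benefits, start=1)
    )
    return discounted - upfront_cost

# $100K spent now against an arguable $40K/year of benefits for four years.
print(net_value(100_000, [40_000, 40_000, 40_000, 40_000]))  # about +26,795

The catch, of course, is that the first argument shows up on an auditable check while the second is an estimate your critics get to argue with.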
As a result you usually have to go through some well-defined process to get spending authorization - and not only does that process involve higher and higher hurdles as the amounts involved grow, but your critics will have perfect memories of every nickel.
Benefits, in contrast, tend to accrue almost invisibly, usually only the project's critics will track them, and there's always considerable ambiguity about attribution - meaning that the people who resent every nickel you spent will usually be able to present a dollar in benefits as what someone else managed to save out of the dollar-fifty the company would have received had it followed their advice.
The only way to reduce the extent to which this counter-productively constrains IT decision making is to drive the cost of change toward invisibility while raising benefits - and that's where the direct relationship between an IT organization's cost of change and the extent of its commitment to proprietary systems comes in. Basically, the lower the cost of change, the more freedom you have to serve users; and since the less proprietary your systems are, the lower those costs are, it follows that using open source products improves your ability to serve users.
As a practical illustration, imagine that you've done an analysis of user needs on some project and decided that users would be best served by an application built on an RDBMS back-end, a browser front-end with some embedded JavaFX scripts, and business logic expressed independently of those two "layers" using some open source toolset.
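For readers who want the layering spelled out, here's a rough sketch of that separation, assuming Python with sqlite3 standing in for the RDBMS; the table, function names, and discount rule are invented for illustration, and the browser front-end would sit on top of this through an HTTP layer that isn't shown.

# Business logic as plain functions: no SQL, no HTTP, no UI assumptions.
import sqlite3

def discount_for(order_total):
    """Return the discount to apply to an order (illustrative rule only)."""
    return round(order_total * 0.05, 2) if order_total >= 1000 else 0.0

# Data layer: a thin adapter around the RDBMS.
def fetch_order_total(conn, order_id):
    row = conn.execute(
        "SELECT total FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    return row[0] if row else None

# Wiring: what a front-end request handler would call.
def quote_discount(conn, order_id):
    total = fetch_order_total(conn, order_id)
    return None if total is None else discount_for(total)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    conn.execute("INSERT INTO orders VALUES (1, 1200.0)")
    print(quote_discount(conn, 1))  # 60.0

The point of keeping the middle layer free of database and browser dependencies is exactly the cost-of-change argument: either end can be swapped without touching the rules in between.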
Recommend this in a 1930s-style data processing environment built around IBM's z9/z10 mainframes and you'll be committing the unthinkable - even doing it using Linux on an IFL is going to cost roughly $140K up front for processor and software licensing, plus several times that in long-term staffing support, while the only things you'll have going for you are novelty and the fact that the PCs they all use for information distribution will run your preferred terminal emulator.
Doing it in an all-Microsoft environment wouldn't be quite so career-ending - but by the time you're done compromising you'll discover that doing it with SQL Server and .NET would have been a lot less hassle, as well as faster and, once the support people get involved, much more effective too.
The fundamental problem in both cases is simply that trying to make small changes at the edges of a large proprietary system is a lot like throwing snowballs at an oncoming tank: the thing's inexorable inertia means that only large-scale, highly disruptive change can affect its operation.
In contrast, these same choices would probably cause your boss in a Solaris smart display environment to raise an eyebrow only if you didn't have a mock-up ready to show users - and even if, in that environment, your recommendation required adding a Microsoft server or two, the only incremental cost would be that of acquiring, maintaining, and licensing those servers - with none of the reverberating organizational impacts that adding a $2,500 Dell running Linux to either of the two locked-down, proprietary systems would have.
Or, in metaphorical bottom-line terms: if your analysis shows that your proprietary architecture isn't effectively meeting user needs, the cost-of-change argument shows that the piecemeal approach to change doesn't have a snowball's chance - except perhaps in terms of building support for the eventual use of a rocket launcher to get rid of this stuff forever.