About a week ago in this blog I discussed the history of both virtualization and partitioning as solutions to problems we no longer have -and which we therefore no longer need. I was thinking only in terms of how these technology ideas are being misapplied in the Unix world but, as reader Roger Ramjet pointed out, these solutions still have value in the Windows world -and, of course, the mainframers are still out there too.
The Windows 2000 kernel is quite capable of handling multiple concurrent applications, but the Windows registry really isn't -at least not without the kind of hand editing Microsoft puts into consolidated products like Small Business Server- and neither is Microsoft's network stack unless you dedicate one NIC per application. As a result, Windows virtualization provides a perfectly reasonable way to ensure that multiple low-use applications can be run and maintained on the same box without interfering with one another.
On the Unix side -and whether that means BSD, Linux, or Solaris to you doesn't matter- neither Microsoft's problems now nor IBM's problems then exist. The workarounds do, but that's not because they're needed; it's because people from the mainframe and Windows environments insist they have to have them.
With Unix you can safely run multiple applications on the same machine -the technical issues you run into have little or nothing to do with minimizing system resource interactions, and a lot to do with externals like failover management and network connectivity. The most important difference, however, isn't in the technology but in what you try to do with it: when the resource is cheaper than user time, utilization becomes unimportant because the value lies in improved user service.
Consolidation generally does lead to both better utilization and better service, but it's the better service that counts, not the utilization. That was nicely illustrated in a press release issued by Sun and Manugistics yesterday. In it they report using a Sun 20K machine with 36 USIV CPUs to set new world records on the Manugistics Fulfillment v7.1 benchmark.
It's a positive result for Sun, but it's the way they got it that counts here. To get both a 23% speed advantage and a 45% price/performance advantage over the previous record holder (an IBM P5-590), they put both the database and the application set on the same machine. Remember, Unix isn't client-server, so why use a relatively slow network when you've got SMP and a fast backplane?
Machine considerations in Unix consolidation usually involve appropriate scaling, not operating system limitations in memory, network, or processor management. There's nothing wrong with loading three different database engines on the same machine if you've got the I/O and processor bandwidth to handle them -you can trust Unix to do its job pretty much no matter what you throw at it.
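To make that concrete, here's a minimal headroom check of the kind you might run before stacking another engine on the box. It's a sketch built on assumptions rather than anything from the original piece: it reads the Linux /proc/stat and /proc/diskstats files (so it won't run unchanged on Solaris or BSD), the sda and sdb device names are placeholders, and the 70% threshold is purely illustrative.

#!/usr/bin/env python3
"""Rough CPU and disk headroom check before stacking another workload.

A minimal sketch: it assumes a Linux host (it reads /proc/stat and
/proc/diskstats), assumes the disks of interest are sda and sdb, and uses a
purely illustrative 70% "comfortable headroom" threshold.
"""
import time

def cpu_busy_fraction(interval=5.0):
    """Sample /proc/stat twice and return the busy fraction over the interval."""
    def snapshot():
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + fields[4]       # idle + iowait ticks
        return idle, sum(fields[:8])       # total ticks, excluding guest double-count
    idle1, total1 = snapshot()
    time.sleep(interval)
    idle2, total2 = snapshot()
    return 1.0 - (idle2 - idle1) / (total2 - total1)

def disk_busy_fractions(interval=5.0, devices=("sda", "sdb")):
    """Return per-device utilization based on time spent doing I/O."""
    def snapshot():
        ticks = {}
        with open("/proc/diskstats") as f:
            for line in f:
                parts = line.split()
                if parts[2] in devices:
                    ticks[parts[2]] = int(parts[12])   # ms spent doing I/O
        return ticks
    before = snapshot()
    time.sleep(interval)
    after = snapshot()
    return {dev: (after[dev] - before[dev]) / (interval * 1000.0)
            for dev in before if dev in after}

if __name__ == "__main__":
    cpu = cpu_busy_fraction()
    disks = disk_busy_fractions()
    print(f"CPU busy: {cpu:.0%}")
    for dev, util in disks.items():
        print(f"{dev} busy: {util:.0%}")
    if cpu < 0.7 and all(u < 0.7 for u in disks.values()):
        print("Headroom looks adequate for another workload (illustrative threshold).")
    else:
        print("Little headroom left; measure more carefully before consolidating.")

If CPU and disk utilization both sit well under the threshold during the busy part of the day, the machine probably has room for another workload; if not, do the careful measurement first.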
On the other hand, you can't trust the typical corporate PC network to the same degree, so one of the critical pieces in making consolidation work is to measure response on the user desktop, not at your server. What you'll often find is that the server's lightly loaded but the network is forcing the user to wait -and in that situation you don't consolidate the servers, you put them electrically adjacent to the users they serve and go back to the budget committee for money to clean up the network.
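One simple way to get that desktop-side number is to time complete requests from a machine sitting on the users' own network segment and compare the result with what the server-side tools report. The sketch below is assumption-laden: the probe URL is hypothetical and HTTP is only one possibility; a real check should exercise whatever protocol the application actually uses.

#!/usr/bin/env python3
"""Time requests from the user's side of the network, not the server's.

A minimal sketch: the probe URL is hypothetical, and the real measurement
should be run from a desktop on the users' network segment against whatever
protocol the application actually speaks.
"""
import statistics
import time
import urllib.request

PROBE_URL = "http://app-server.example.com/health"   # hypothetical endpoint
SAMPLES = 20

def probe_once(url, timeout=10.0):
    """Return wall-clock seconds for one complete request as the client sees it."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return time.perf_counter() - start

def main():
    times = []
    for _ in range(SAMPLES):
        try:
            times.append(probe_once(PROBE_URL))
        except OSError as exc:
            print(f"probe failed: {exc}")
        time.sleep(1)
    if times:
        print(f"samples: {len(times)}")
        print(f"median:  {statistics.median(times) * 1000:.0f} ms")
        print(f"worst:   {max(times) * 1000:.0f} ms")

if __name__ == "__main__":
    main()

If the median the user sees is much worse than the service time the server reports, the problem is between them, not on the box.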
The reality is that since it's not the Unix technology that limits your ability to consolidate, you don't need workaround tools like partitioning and virtualization to help you. What you do need is a good understanding of usage demand patterns (and the willingness to change course once you discover where you were wrong), because your success depends on meeting user needs, not on saving a few thousand bucks on hardware at the cost of making hundreds of users wait minutes every day. That's the real bottom line on consolidation: if minimizing user time means leaving capacity idle, then do that and smile, because capacity's cheap but user time isn't.
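For a sense of the scale involved, here's the back-of-the-envelope arithmetic with frankly made-up numbers; every figure is an assumption to be replaced with your own.

#!/usr/bin/env python3
"""Back-of-the-envelope: one-time hardware savings versus ongoing user waiting.

Every number here is an assumption chosen only to illustrate the scale of the
trade-off; plug in your own figures.
"""
USERS = 300                  # "hundreds of users" (assumed)
WAIT_MINUTES_PER_DAY = 3     # minutes each user waits per day (assumed)
WORK_DAYS_PER_YEAR = 230     # assumed
COST_PER_USER_HOUR = 40.0    # fully loaded cost of an hour of user time, $ (assumed)
HARDWARE_SAVED = 5000.0      # "a few thousand bucks" of hardware, $ (assumed)

wasted_hours = USERS * WAIT_MINUTES_PER_DAY / 60.0 * WORK_DAYS_PER_YEAR
wasted_dollars = wasted_hours * COST_PER_USER_HOUR

print(f"User time lost per year: {wasted_hours:,.0f} hours (about ${wasted_dollars:,.0f})")
print(f"Hardware saved, once:    ${HARDWARE_SAVED:,.0f}")
print(f"Ratio: roughly {wasted_dollars / HARDWARE_SAVED:.0f}x the hardware saving, every year")

With those assumptions the few thousand dollars saved on hardware costs well over a hundred thousand dollars a year in user time -which is the whole point.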