Almost two years ago, in November 2004, I did a piece for LinuxInsider with this same title: "The Importance of Solaris 10."
Last week Sun formally issued the latest point release, now including ZFS and PostgreSQL. So how do my opinions from then stand up now?
Here's part of my original comment and a list by Adam Leventhal:
Hot new intrinsic capabilities like DTrace, ZFS, and the ability to run Linux binaries are the realizations of deeper technological innovations like microstate accounting. Solaris 10 brings a lot of those out of the labs and into production environments where they'll receive the kind of intensive real-world testing that will ultimately determine how important they are. Consider, for example, this list of Solaris 10 top new features put together by Adam Leventhal (one of the key developers behind DTrace):
- libumem - the tool for debugging dynamic allocation problems; oh, and it scales as well as or better than any other memory allocator
- pfiles(1) with file names - you can get at the file name info through /proc too; very cool
- Improved coreadm(1M) - core files are now actually useful on other machines, and administrators and users can specify the content of core files
- System V IPC - no more clumsy system tunables and reboots; it's all dynamic, and -- guess what? -- faster too
- kmdb - if you don't care, ok, but if you do care, you really really care: mdb(1)'s cousin replaces kadb(1M)
- Watchpoints - now they work and they scale
- pstack(1) for Java - see Java stack frames in a JVM or core file and through DTrace
- pmap(1) features - see thread stacks and core file content
- per-thread p-tools - apply pstack(1) and truss(1) to just the threads you care about
- Event Ports - a generic API for dealing with heterogeneous event sources (see the sketch a little further down)
Things like these are invisible to IT management and of little importance to the press, but this is the stuff on which technology revolutions like DTrace and ZFS are built. Thus their presence in this release signals the importance of Solaris 10, not as an end product but as a work in progress.
To me it seems that Sun is driving toward what I think of as Plan 9 compliance; not at the code level, but in terms of system-wide functionality. Plan 9, you may recall, is a kind of second generation Unix, liberated from the single machine focus of the original design to make full use of multiple machines on a network. Originally Sun's marketing people said that "the network is the computer"; realistically, Plan 9 reverses that to make it "the computer is the network" - and that's exactly what's going on with Solaris.
Adam Leventhal's list, above, reflects the achievements of people working to put in place the foundations for future software, while the forthcoming Niagara and later SPARC designs do the same thing at the hardware level - putting the equivalent of a traditional 32-way SMP box into a single processor.
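Since "a generic API for dealing with heterogeneous event sources" is pretty abstract, here's a minimal sketch of what using event ports looks like - my own illustration, not Sun sample code, and it assumes a Solaris or illumos box where <port.h> is available. It only watches stdin for readability, but the point of the design is that timers, asynchronous I/O completions, and user-defined events can all be delivered through that same port_get() call:

```c
/*
 * Illustrative sketch only (not Sun sample code): wait for one
 * event on standard input via the Solaris event ports API.
 * Compile on Solaris/illumos with: cc ports_demo.c
 */
#include <port.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	port_event_t pe;
	int port = port_create();	/* one port, many event sources */

	if (port == -1) {
		perror("port_create");
		return (1);
	}

	/* Register interest in stdin becoming readable. */
	if (port_associate(port, PORT_SOURCE_FD, STDIN_FILENO,
	    POLLIN, NULL) == -1) {
		perror("port_associate");
		return (1);
	}

	/*
	 * Block until something happens; timers, AIO completions,
	 * and user events would arrive through this same call.
	 */
	if (port_get(port, &pe, NULL) == -1) {
		perror("port_get");
		return (1);
	}

	(void) printf("got event from source %hu\n", pe.portev_source);
	(void) close(port);
	return (0);
}
```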
Today the first Niagara CPUs are in production and getting very positive reviews from users and testers alike. DTrace has led to a revolution in application debugging and is being ported to BSD and Linux; ZFS appears likely to establish a new standard; and "under the hood" stuff like microstate accounting and generic accelerator support is facilitating both Niagara 2 (encryption and packet management) and Rock (an FPOA?) development.
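If you haven't seen that revolution up close, here's what it looks like in miniature - a throwaway C workload of my own invention, purely hypothetical, with a standard pid-provider one-liner in the trailing comment. The dtrace(1M) syntax is the documented form; everything else is illustration:

```c
/*
 * Hypothetical workload, for illustration only: it just makes a
 * stream of small allocations so there's something to observe.
 */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	int i;

	for (i = 0; i < 100; i++) {
		char *p = malloc((size_t)(64 + (i % 8) * 16));

		if (p != NULL) {
			(void) memset(p, 0, 64);
			free(p);
		}
		(void) usleep(10000);
	}
	return (0);
}

/*
 * Then, without recompiling or restarting anything, count its
 * malloc(3C) calls by requested size as it runs:
 *
 *   dtrace -n 'pid$target::malloc:entry { @[arg0] = count(); }' -c ./a.out
 */
```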
Great, but do you know what I missed? From a sales perspective I missed the importance of Sun's effort to integrate and simplify the fault detection and correction stuff - critical in sales pitches to mainframers claiming 100% uptime, but an absolute pain to work with and much less useful in real life than its adherents like to pretend.
What I missed of substance, however, was the importance of the OpenSolaris community and the "eat your cake and keep it too" licensing model - because that's driving adoption among the thousands of small developers now prepping the next great wave of application change to hit our industry.