Ok, that headline may be a bit overblown - but Microsoft Research has released part of a report on the "Singularity" kernel they've been working on as part of their planned shift to network computing. The report includes some performance comparisons that show Singularity beating everything else on a 1.8GHz AMD Athlon-based machine.
What's noteworthy about it is that Microsoft compared Singularity to FreeBSD and Linux as well as Windows/XP - and almost every result shows Windows losing to the two Unix variants.
For example, they show the number of CPU cycles needed to "create and start a process" as 1,032,000 for FreeBSD, 719,000 for Linux, and 5,376,000 for Windows/XP. Similarly they provide four graphs comparing raw disk I/O and show the Unix variants beating Windows/XP in three (and a half) of the four cases.
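To see what a "create and start a process" measurement amounts to, here is a rough sketch of such a microbenchmark (my own illustration, not the report's harness, which isn't published here) - spawn a trivial program, wait for it, and average the cost:

```python
# Rough sketch of a "create and start a process" microbenchmark:
# repeatedly spawn a trivial program, wait for it to finish, and
# report the average wall-clock cost. Illustrative only; the report
# counted CPU cycles with its own (unpublished) harness.
import subprocess
import time

def avg_spawn_seconds(iters: int = 50, prog: str = "/bin/true") -> float:
    start = time.perf_counter()
    for _ in range(iters):
        subprocess.run([prog], check=True)  # create, start, and reap
    return (time.perf_counter() - start) / iters

if __name__ == "__main__":
    print(f"avg per spawn: {avg_spawn_seconds() * 1e6:.1f} us")
```

On the 1.8GHz machine in the report, 719,000 cycles for Linux works out to roughly 0.4 milliseconds per process, so even a crude wall-clock version like this lands in the right ballpark.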
Oddly, however, it's the cases in which they report Windows/XP as beating Unix that are the most interesting. There are three examples of this: one in which they count the CPU cycles needed for a "thread yield" as 911 for FreeBSD, 906 for Linux, and 753 for Windows/XP; one in which they count CPU cycles for a "2 thread wait-set ping pong" as 4,707 for FreeBSD, 4,041 for Linux, and 1,658 for Windows/XP; and one in which they report that "for the sequential read operations, Windows XP performed significantly better than the other systems for block sizes less than 8 kilobytes."
So how did they get these results?
The sequential tests read or wrote 512MB of data from the same portion of the hard disk. The random read and write tests performed 1000 operations on the same sequences of blocks on the disk. The tests were single threaded and performed synchronous raw I/O. Each test was run seven times and the results averaged.
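For reference, the sequential test they describe is essentially a single thread reading fixed-size blocks synchronously and averaging over runs. A minimal sketch of that shape (against a regular file - the report used raw disk I/O over the same region of disk, which I'm not reproducing here):

```python
# Sketch of a single-threaded synchronous sequential-read test:
# read a file in fixed-size blocks and time it. The report ran raw
# (unbuffered) disk I/O against the same portion of the disk; a
# regular file stands in here purely for illustration.
import os
import time

def sequential_read_seconds(path: str, block_size: int) -> float:
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        while os.read(fd, block_size):  # synchronous reads, one block at a time
            pass
        return time.perf_counter() - start
    finally:
        os.close(fd)

if __name__ == "__main__":
    path = "testfile.bin"
    with open(path, "wb") as f:
        f.write(os.urandom(1 << 20))  # 1 MiB stand-in for the 512MB workload
    # average over several runs, as the report describes (it used seven)
    runs = [sequential_read_seconds(path, 4096) for _ in range(7)]
    print(f"avg: {sum(runs) / len(runs):.6f} s")
    os.remove(path)
```

Note what this shape leaves out: no readahead tuning, no asynchronous I/O, no multiple outstanding requests - which matters for how you read the small-block-size results.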
umm...
The Unix thread tests ran on user-space scheduled pthreads. Kernel scheduled threads performed significantly worse. The "wait-set ping pong" test measured the cost of switching between two threads in the same process through a synchronization object. The "2 message ping pong" measured the cost of sending a 1-byte message from one process to another and then back to the original process. On Unix, we used sockets, on Windows, a named pipe, and on Singularity, a channel.
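The "2 message ping pong" they describe looks roughly like this: bounce a 1-byte message between two processes and time the round trip. A sketch using a Unix socket pair, matching their stated choice of sockets on Unix (my illustration, not their code):

```python
# Sketch of the "2 message ping pong" as described in the report:
# send a 1-byte message to another process, which sends it straight
# back, and time the round trip. Uses a Unix socket pair, matching
# the report's choice of sockets on Unix. Illustrative only.
import os
import socket
import time

def socket_pingpong_seconds(iters: int = 1000) -> float:
    parent, child = socket.socketpair()
    pid = os.fork()
    if pid == 0:  # child: echo every byte straight back
        parent.close()
        for _ in range(iters):
            child.sendall(child.recv(1))
        child.close()
        os._exit(0)
    child.close()
    start = time.perf_counter()
    for _ in range(iters):
        parent.sendall(b"x")
        parent.recv(1)
    elapsed = time.perf_counter() - start
    parent.close()
    os.waitpid(pid, 0)
    return elapsed / iters

if __name__ == "__main__":
    print(f"avg round trip: {socket_pingpong_seconds() * 1e6:.2f} us")
```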
So why is this interesting? Because their test methods reflect Windows internals, not Unix kernel design. There are better, faster ways of doing these things in Unix, but these guys - among the best and brightest programmers working at Microsoft - either didn't know or didn't care.
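To make that concrete with one example: a Unix programmer timing cross-process message passing would at least consider a plain pipe pair rather than sockets. Whether pipes actually beat sockets depends on the kernel and version - this is a sketch of the alternative, not a claim about specific numbers:

```python
# The same 1-byte ping pong, but over a pair of pipes instead of
# sockets - one example of the kind of Unix-native alternative the
# benchmark didn't try. Relative speed versus sockets varies by
# kernel and version; this is a sketch, not a measurement claim.
import os
import time

def pipe_pingpong_seconds(iters: int = 1000) -> float:
    to_child_r, to_child_w = os.pipe()
    to_parent_r, to_parent_w = os.pipe()
    pid = os.fork()
    if pid == 0:  # child: echo each byte back through the second pipe
        os.close(to_child_w)
        os.close(to_parent_r)
        for _ in range(iters):
            os.write(to_parent_w, os.read(to_child_r, 1))
        os._exit(0)
    os.close(to_child_r)
    os.close(to_parent_w)
    start = time.perf_counter()
    for _ in range(iters):
        os.write(to_child_w, b"x")
        os.read(to_parent_r, 1)
    elapsed = time.perf_counter() - start
    os.close(to_child_w)
    os.close(to_parent_r)
    os.waitpid(pid, 0)
    return elapsed / iters

if __name__ == "__main__":
    print(f"avg round trip: {pipe_pingpong_seconds() * 1e6:.2f} us")
```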
And if they're the best and brightest, what do you think happens when the average Microsoft programming whiz gets asked to program for Linux?