- by Paul Murphy -
About a month ago I compared the cost of Apple's desktop, server, and laptop products to their nearest Dell equivalents and discovered that Macs generally cost less than comparable PC products. That was a bit of a surprise, but the truly astonishing thing to come out of the comparison was that Dell's product line extends marginally below Apple's at the low end but has nothing to stack up against Apple's 17-inch PowerBook, X-Serve/X-RAID combination, or Cinema displays at the high end.
Bottom line: when you upgrade the PCs enough to allow an approximately apples-to-apples comparison, Apple turns out to offer both lower prices and a broader range.
The PC community response is, first, that the multi-media features distinguishing the Mac aren't necessary and, second, that the PC is so far ahead of the Mac on speed that the comparisons are pointless anyway. Personally I think they're begging the question on stuff like FireWire: that they don't see the value of Apple's multi-media capabilities only because they've never had them - but that's an argument for another day. In this column I want to focus on the performance part of their response.
So are PCs faster than Macs? The real answer is that relative performance depends entirely on the software and is both hard to define and hard to measure.
The short answer, however, can be based entirely on raw hardware capabilities and that answer is pretty simple: the Mac wins hands down.
There is a complication here: Mac users upgrade much less often than PC users. Look just at the hardware in a newly introduced Apple product like the latest iMacs and it will be capable of doing more processing per second than the roughly comparable Dell product. Survey people you know, however, and the PC users will, on average, have faster hardware than the Mac users simply because most of the Mac people won't have upgraded their hardware in years.
To determine which hardware is really more capable we have to first strip out the impact of operating system and applications design and coding decisions. It's attractive to think that this could be done easily by running Linux on both an Apple and a Dell workstation, but in fact the impact of x86 architecture assumptions permeates the Linux kernel design. A better way to do this is to look at the per-system contribution in the cluster computer business, where everyone uses their own Unix and the application developers don't have hardware agendas.
For example, the NCSA "Tungsten" cluster computer built last year was recently upgraded to include 2,500 dual-Xeon Dell PowerEdge 1750 servers at 3.2GHz. According to NCSA public affairs this thing has a theoretical peak capacity of about 32 teraflops and yields about 15.36 teraflops in operation - meaning that each CPU contributes about 3.1 gigaflops to actual throughput.
In contrast, the cluster built last year at Virginia Tech using 1,100 dual-processor G5 desktops has a theoretical peak of about 18.2 teraflops and initially benchmarked at 8.1 teraflops, for a contribution of 3.7 gigaflops per CPU.
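For the curious, those per-CPU numbers are nothing more than delivered throughput divided by processor count. Here's a minimal sketch of the arithmetic in Python, assuming dual-processor nodes for both clusters and taking the throughput figures above at face value:

```python
# Per-CPU contribution = delivered teraflops / number of CPUs,
# using the cluster figures cited in the column (dual-processor nodes assumed).
clusters = {
    "NCSA Tungsten (3.2GHz Xeon)": (2500, 2, 15.36),  # nodes, CPUs per node, delivered teraflops
    "Virginia Tech (2.0GHz G5)":   (1100, 2, 8.1),
}

for name, (nodes, cpus_per_node, tflops) in clusters.items():
    cpus = nodes * cpus_per_node
    gflops_per_cpu = tflops * 1000 / cpus
    print(f"{name}: {gflops_per_cpu:.1f} gigaflops per CPU")

# Prints roughly:
#   NCSA Tungsten (3.2GHz Xeon): 3.1 gigaflops per CPU
#   Virginia Tech (2.0GHz G5): 3.7 gigaflops per CPU
```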
Although that 3.7 gigaflops was 19% better than the most recent Dell Xeons' 3.1, later machines built with Apple's X-Serves do much better because they have fewer I/O bottlenecks. Thus the Mach5 cluster built by Colsa Corporation and the U.S. Army uses 1,566 dual-CPU X-Serves to deliver an expected 15 teraflops in sustained throughput. That's 4.8 gigaflops per CPU - more than 50% faster than the Xeon - and that's with last year's 2.0GHz G5.
It can be argued, of course, that this comparison isn't fair because the Xeon is an older 32-bit processor and not generationally comparable to the G5. It might be better, therefore, to compare the X-Serve cluster to machines built using a more modern 64-bit processor like AMD's Opterons. "Lightning," at Los Alamos National Laboratory, uses 2,816 Opterons to produce a peak of 11,264 gigaflops, or 4.0 gigaflops per CPU - 30% better than the Xeons, though the G5s still come in about 20% faster.
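Likewise, the percentage comparisons here fall straight out of the per-CPU figures; a quick sketch, again using only numbers already quoted above:

```python
# Per-CPU gigaflops figures as quoted in the column.
xeon, desktop_g5, xserve_g5, opteron = 3.1, 3.7, 4.8, 4.0

def pct_faster(a, b):
    """How much faster a is than b, expressed as a percentage."""
    return (a / b - 1) * 100

print(f"desktop G5 vs. Xeon:    {pct_faster(desktop_g5, xeon):.0f}% faster")   # ~19%
print(f"X-Serve G5 vs. Xeon:    {pct_faster(xserve_g5, xeon):.0f}% faster")    # ~55%, i.e. "more than 50%"
print(f"Opteron vs. Xeon:       {pct_faster(opteron, xeon):.0f}% faster")      # ~29%
print(f"X-Serve G5 vs. Opteron: {pct_faster(xserve_g5, opteron):.0f}% faster") # ~20%
```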
Thus the basic short answer is pretty clear: level the software playing field and the X-Serve blows everything else away, while even last year's desktop G5s, at 3.7 gigaflops per CPU, handily beat this year's Dell servers at only 3.1.
The short answer is, however, terribly incomplete because it applies only to the hardware. That's a component of what you use, of course, but what really counts is the combination of hardware, operating system, and applications. Thus the real question isn't so much whether Apple's hardware is faster, it's whether an Apple machine gets real work done faster than a Wintel PC.
To see just how hard that question is, think about this: a 25MHz i80386 with 8MB of RAM will generally seem to outperform a 3.2GHz P4 with 512MB of RAM for typing up a simple document if the 386 user is running MS-DOS with WordPerfect 4.0 and the P4 user has to use Microsoft Word under Windows XP.
In general, however, there seem to be three main aspects (or "dimensions") to the Mac vs. PC productivity argument: the availability of the right software, particularly for scientific and engineering work where the key applications run only under Unix; the design and usability of the interface; and how well code is actually optimized for each architecture rather than merely recompiled.
To many people in the sciences and engineering, therefore, the Wintel side of the MacOS vs. Windows productivity issue is a non-starter because Microsoft doesn't have the software to compete. Instead the PC vs. Mac speed issue boils down to hardware running Unix - an arena in which BSD on the G5 has a 50% advantage over Linux on the Xeon.
You'd think there would be an objective way of measuring the user outcomes of these two directions in interface design, but so far there doesn't seem to be one. Instead, the issue often seems to come down to an emotional argument about the relative "coolness" of the two interfaces, with the decision heavily weighted by the presumption that what we already know must be better than what we don't. That's unresolvable in any general sense, but if you're a Windows user prepared to defend XP as better than MacOS X, I have a challenge for you. Think about Win-D (or is it M?) and all the rest of it while reading this Apple puff piece about the "Exposé" feature in MacOS X.
If you're at all fair about this I think you'll agree that the ability to deliver gimmicks like this supports the view that Microsoft's Windows GUI remains at least a full generation behind Apple's.
Remember "Slowaris"? When Sun recompiled Solaris 2.5.1 for x86 uniprocessors, a product that ran like greased lightening on 85Mhz SuperSparcs turned out to cheerfully morph 400Mhz PII machines into 286s. That's what happened to Apple too when they first ported BSD to RISC and what still happens to games manufacturers who just re-compile x86 code for the PowerPC.
Unfortunately all three of these aspects of the performance debate lead largely to unresolvable arguments about productivity, conspiracy theories about compiler de-optimization, and the delusory discovery that the MacOS X advantage can reasonably be considered infinite for Unix or multi-media applications where the PC simply doesn't play. All of which, ultimately, just gets us back to another version of the FireWire argument on cost, in which the PC people say none of the Mac advantages matter and the Mac people ask them how they know.
On the other hand I think the intuitive bottom line on the Macintosh versus PC productivity debate is actually pretty simple: I've never met a PC user whose focus on the job he or she was supposed to be doing wasn't significantly diluted by the need to accommodate the PC and its software, but I've never met a business Mac user who considered the machine anything other than a tool, like a telephone or typewriter, for getting the job done.