In general the Apple community press has fallen into line with Apple's move to Intel: cheerfully unremembering previously closely held beliefs to embrace the new.
I have not found a single commentator, for example, noting that Apple's previous architecture transition - from the 40MHz-and-under MC680XX line to the 60MHz-and-up PowerPC in 1994 - was accompanied by a speed increase, not a 50% decrease, for unconverted, user-licensed binaries built for the older CPU.
The new Intel-based Mac Mini has, accordingly, generally been hailed as an engineering and performance triumph despite being, in reality, another pay-more-get-less deal relative to the PPC line it replaces.
As various outraged editorial writers put it in response to some of my earlier comments, I just don't get it. To me, adding $100 to the minimum price of a Mini looks like a cost increase; taking away the independent graphics memory looks like it makes the device cheaper for Apple while reducing performance for the user; and bragging about going from two to four USB ports while sacrificing FireWire connectivity looks like an attempt to sell us a silk purse made from a sow's ear - because the loss of FireWire is real, but the USB gain just reflects a change in the underlying chipset.
The reviews of this thing are remarkable mainly for their tendency to focus on the positive - marginally better overall performance on specially tweaked Intel code than predecessors based on PPC technology from about 2002 - while glossing over the negatives: much worse performance than a Mini based on current PPC technology would have offered, significantly higher cost, the end of FireWire, higher power use (85 watts versus 75 for the old Mini), reduced graphics support, and significantly increased user exposure to PC-style "security" problems.
Apple should be embarrassed - and I'll bet its senior people are quietly fuming over having been led to believe this thing would be both cheaper and ready for the January MacWorld festival, only to be forced first to rush out the MacBook Pro announcement instead and then to raise prices for an inferior product.
Meanwhile quite a few people have run interesting benchmarks on these new machines - and they've all had the same problem: you can't compare an Intel-based Mac to something that doesn't exist, namely a PowerPC-based Mac using current PPC technology. In real tests all you can do is measure against what does exist: today's latest Intel production against an overclocked 2002-era G4 or G5.
We can, however, ask ourselves the more interesting question: how would the new Mini stack up against one built around today's MPC7448 and MPC8641S/D PPC chips?
Some answers are obvious: on PPC code such devices would outperform current G4-based products by anything from 50% to 200% - and therefore outperform the Intel products by very nearly that same margin across the board.
Some answers depend on believing that negotiated costs roughly reflect published prices less scale discounts: i.e. that Apple would pay significantly less for the Freescale CPUs than it pays Intel, and therefore that our hypothetical Mini would have been profitable for Apple without the component cuts and without the price increase.
The big question, however, is how overall performance would compare given that the non-CPU technologies in the box have also advanced since the first Minis were made. In other words, it's clear that the Intel-based Macs run somewhere between 20% and 30% faster on native code than their G4/G5 predecessors, but we need to ask what part of that increase is due to the "Intel inside" and what part is due to other factors like coding changes and the general advance in PC technology.
There's a class of applications that runs much faster on Intel. Doom 3, for example, achieves about twice the frame rate on the 2.0GHz Core Duo that it did on the 2.1GHz G5. As a group these illustrate one of the bitter realities of this transition: the developers who did the least to work efficiently with the PPC architecture benefit the most, while those who worked hardest to produce efficient, fully Altivec-enabled code now face the highest transition costs.
Look at MacWorld's initial benchmarks on Apple's own code and something else pops up: the ratios obtained by comparing the times needed to run the code on the 1.83GHz and 2.0GHz Intel machines aren't constant. Since the only real difference between the two machines is the CPU clock rate, those ratios should be constant if the CPU is the limiting factor at both speeds.
This effect seems to be general, but is clearest in cases where the Intel product is faster than the G5. In adding the rain effect, for example, the 2.0GHz Core Duo takes 125 seconds - a clear win over the 163 seconds needed by the G5. The 1.83GHz Intel, however, takes 132 seconds - better than the roughly 136 you'd expect based on the clock speed difference alone. In other words this process isn't CPU limited, and we need to look at factors other than the CPU - like the graphics processor, memory subsystem, or cache manager - to explain the apparent gain over the G5.
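The scaling check above is simple arithmetic; here's a short sketch of it, using MacWorld's reported rain-effect times. The inverse-clock-rate scaling rule is the assumption being tested, not an established fact about these machines:

```python
# Sketch: test whether a benchmark is CPU limited by scaling its run time
# with clock rate. The inverse-proportional scaling rule is the assumption;
# the timings are MacWorld's reported rain-effect results in seconds.

def cpu_limited_expectation(time_at_base_s: float, base_ghz: float,
                            target_ghz: float) -> float:
    """If the CPU were the only bottleneck, run time should scale
    inversely with clock rate: t2 = t1 * (f1 / f2)."""
    return time_at_base_s * base_ghz / target_ghz

expected = cpu_limited_expectation(125, 2.0, 1.83)  # about 136.6 seconds
observed = 132
# The 1.83GHz machine beats the pure CPU-scaling prediction, so something
# other than the CPU must be limiting this test.
print(f"expected {expected:.1f}s, observed {observed}s")
```

If the test really were CPU bound, the observed time would match the prediction; the five-second shortfall is what points at the graphics, memory, or cache subsystems instead.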
Bottom line: where there's an apparent Intel performance advantage, it most likely comes mainly from changes to the non-CPU components, not from the CPU change itself. In other words, you'd get the same effect from any modernization of the underlying design: Moore's law applies, after all, as much to graphics, memory, and I/O as it does to processors.
Basically, if we compare what is to what was, we see cost-driven compromises on components, together with increased power use, set against performance gains that result mainly either from an underlying failure to work effectively with the PPC architecture or from the updating of non-CPU components.
Apple, as I said earlier, should be embarrassed.