I got an Ask Bloggie question some time ago that seemed inappropriate for that forum. Here it is:
Apple has stated that they are going to be using an Intel chip for their computers in the near future. You mentioned Linux on Intel (X86 architecture) would have the same security issues as Windows currently does. To extrapolate what you said I would tend to infer then that the currently fairly secure Mac OS would become a holy nightmare for users to keep their computers secure running on an Intel chip (one site showed Apple using a P4 3.6 GHz chip). I'm hoping that my extrapolation is incorrect for it would not seem to be a good business plan to just to be able to have a faster chip in your computer if that computer loses its ability to be secure. I had not seen any comments you have made on this issue. What are your thoughts?
Signed:
Concerned, behind the Red Sticker
Okay, before we get to the security issue let me just unload about this faster chip nonsense. The 3.6GHz P4 isn't remotely performance competitive with the 2.7GHz G5. What happens is that applications written for x86 run better on x86, and so the less effort the application developer put into working with the G5 Mac, the better the applications look when ported to the x86 Mac. Conversely, my understanding is that Microsoft is having a very difficult time with Windows/XP on the X360 Xenon (a triple-core, 3.2GHz, six-thread PowerPC G5 successor) because it limps along steadily enough, but more slowly than a dead armadillo.
Apple didn't go Intel because it's faster, and they certainly didn't go Intel because it offers a quicker route to lower power requirements for laptops - a laughable suggestion recently revived by someone at Computerworld. Apple went Intel because they had to get away from IBM and hadn't rethought Plan B for four years.
All that aside, however, the security issue is interesting. A few years ago now I reviewed over a thousand computer security vulnerability records from the NIST catalogue. What I found was that almost all of the attacks went through two steps:

1. find an exploitable flaw in the software; and,
2. use that flaw to get the CPU to execute code of the attacker's choosing.
The typical x86 exploit, for example, finds a way to cause a buffer overflow, and then uses the overflow to get the CPU to execute code it shouldn't.
It seems pretty clear from the record that people have been a lot more successful at part one of this with Windows systems than with Linux or other Unix systems. However, if an exploitable software problem is found, the exploit itself is no more difficult to write for Linux on x86 than for Windows on x86 because such exploits are hardware, not OS, dependent.
Those hardware weaknesses do not, however, exist in the same way or to the same effect in non-Intel chips like the G5, the UltraSPARC, Cell, or Xenon. As a result, finding a part-two method on these CPUs is at least as difficult as part one, if not significantly more so. That's why there have been hundreds of widely publicized Solaris and MacOS X vulnerabilities for which there are no actual exploits and therefore no victims.
So, bottom line: when Darwin/MacOS X goes x86, the people who focus on finding and exploiting vulnerabilities may still have a harder time with part one of the usual exploit process than they do with Windows, but the part-two hardware barrier to real success will be gone.