by Paul Murphy
My wife has a Dilbert cartoon on her office door in which one of the characters says: "If you have any trouble sounding condescending, find a Unix user to show you how." Ha - she's a Mac user, and Mac users were worse even before they all became Unix users too.
Or maybe not, but finding out whether the average Mac user really is smarter than the rest of us isn't easy. Part of the problem is that even if you matched graduate school admissions test results against individual PC/Mac preferences and found a strong positive correlation, people would argue that the Mac users are exceptional for other reasons, that the tests don't measure anything relevant, and that it's unethical to do this in the first place. In fact, it's pretty clear that the topic is sufficiently emotionally loaded that you'd get shouted down by one side or the other no matter how you did the research; and that's too bad, because a clear answer one way or the other would be interesting.
I doubt it's possible to get a definitive answer, but as long as you don't take any of it too seriously you can have a lot of fun playing with proxies such as the average user's ability to read and write his or her native language. This isn't necessarily a reasonable measure of intelligence (mainly because intelligence has yet to be defined), but almost everyone agrees that a native English speaker's ability to write correct English correlates closely with that person's ability to think clearly.
In other words, if we knew that Mac users, as a group, were significantly better users of written English than PC users then we'd have a presumptive basis for ranking the probable "smartness" of two people about whom we only know that one uses a Mac and the other a PC.
So how can we do that? As it happens, Unix has been useful for text processing and analysis virtually from the beginning. In fact, the very first Unix application offered text processing support for the patent application process at Bell Labs - in 1971, on a PDP-11 with 8K of RAM and a 500K disk. By coincidence, Interleaf, the first GUI-based document processing package, was also the first major commercial package available on Sun - in 1983, well before Microsoft "invented" Windows and well ahead of the first significant third-party applications for the Apple Lisa. During the 12 years between those two applications, text processing and related research became one of the hallmarks of academic Unix use. By the early eighties, therefore, most Unix releases, whether BSD- or AT&T-derived, came with the AT&T Writer's Workbench - a collection of useful text processing utilities.
One of those was a thing called style. Style is somewhat out of style these days, but it's on many Linux "bonus" CDs and downloadable from gnu.org as part of the diction package. Style produces readability metrics on text. Forget for the moment what the ratings mean and look at the numbers. For comparison, here's what style says about the first 1,000 words of what is arguably the finest novel ever published in English, The Golden Bowl (if you don't have style handy, the sketch following these ratings shows how to approximate two of the numbers yourself):
readability grades:
Kincaid: 18.2
ARI: 22.2
Coleman-Liau: 9.8
Flesch Index: 46.7
Fog Index: 21.7
Lix: 64.4 = higher than school year 11
SMOG-Grading: 13.5
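Those ratings come straight out of style, but two of them - the Flesch Index and the Kincaid grade - are easy to approximate on your own if you don't have the diction package installed. Here's a minimal Python sketch using the published formulas and a deliberately crude vowel-group syllable counter; the sample text is placeholder prose, not James, and style parses sentences and counts syllables far more carefully, so expect its numbers to differ somewhat:

    import re

    def count_syllables(word):
        # Crude heuristic: count runs of consecutive vowels after
        # dropping a common silent trailing 'e'. Dictionary-based
        # counters do noticeably better.
        word = word.lower()
        if word.endswith("e") and not word.endswith("le"):
            word = word[:-1]
        return max(1, len(re.findall(r"[aeiouy]+", word)))

    def readability(text):
        # Published formulas: Flesch Reading Ease and the
        # Flesch-Kincaid grade level.
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        n = max(1, len(words))
        syllables = sum(count_syllables(w) for w in words)
        flesch = 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)
        kincaid = 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59
        return flesch, kincaid

    # Placeholder sample text (not from the novel).
    sample = ("It was a principle with him never to complain. "
              "The arrangement suited everyone well enough.")
    flesch, kincaid = readability(sample)
    print("Flesch Index: %.1f" % flesch)
    print("Kincaid: %.1f" % kincaid)

The other indices - ARI, Coleman-Liau, Fog, Lix and SMOG - are similar formulas built on word, sentence, character and polysyllable counts, and could be added the same way.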
Of course, that's Henry James at the top of his form. For a more realistic, and more interesting, baseline I collected about 2,800 lines of slashdot discussion contributions and ran style against them to get the following ratings summary (a lot of detail data is omitted here; a sketch of the collection mechanics follows the ratings):
readability grades:
Kincaid: 7.7
ARI: 8.0
Coleman-Liau: 9.7
Flesch Index: 72.4
Fog Index: 10.7
Lix: 37.1 = school year 5
SMOG-Grading: 9.8
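For anyone who wants to replicate this kind of sampling, the mechanics are mostly grunt work: strip the markup from saved discussion pages, concatenate the visible text, and feed the result to style. A rough Python sketch, assuming the pages have already been saved locally and that GNU style (from the diction package) is on your PATH:

    import subprocess
    import sys
    from html.parser import HTMLParser

    class TextGrabber(HTMLParser):
        # Collects the visible text from an HTML page, skipping
        # script and style elements.
        def __init__(self):
            super().__init__()
            self.chunks = []
            self.skip = False
        def handle_starttag(self, tag, attrs):
            if tag in ("script", "style"):
                self.skip = True
        def handle_endtag(self, tag):
            if tag in ("script", "style"):
                self.skip = False
        def handle_data(self, data):
            if not self.skip and data.strip():
                self.chunks.append(data.strip())

    # Concatenate the text of every saved page named on the
    # command line, then hand the result to style on stdin.
    grabber = TextGrabber()
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8", errors="replace") as f:
            grabber.feed(f.read())
    subprocess.run(["style"], input="\n".join(grabber.chunks), text=True)

Note that this sweeps in navigation links and other page furniture along with the comments, so some hand trimming of the saved text is still needed before scoring it.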
Notice that these results apply to comments from slashdotters, not to the text on which they're commenting. Look at the source articles and you get very different results because, of course, most are professionally written or edited - although there's an interesting oddity: ratings for files made by pasting together stories posted by "Michael" are consistently at least one school year higher than comparable accumulations of postings (other than press releases) by "CowboyNeal."
Comments posted to discussion groups aren't the studied, professional productions that news articles are. You'd expect the articles to rate considerably higher, and they do. Here, for example, is the summary from running style against five articles taken from today's on-line edition of The Christian Science Monitor:
readability grades:
Kincaid: 10.4
ARI: 12.5
Coleman-Liau: 12.9
Flesch Index: 59.5
Fog Index: 13.3
Lix: 48.8 = school year 9
SMOG-Grading: 11.6
Lots of smart people have put effort into arguing that these readability scores are either meaningless or meaningful, a choice that apparently depends rather more on the writer's agenda than on research. Most of the more credible would probably agree, however, that higher rankings are mainly useful as a rough guide to the writer's expectations about his, or her, audience, while lower rankings do correlate directly with the writer's education in English and indirectly with intelligence.
So what happens if we treat the slashdotters, a mixed bunch if there ever was one, as a median and then compare the ratings shown above with results from "pure play" Mac and PC communities?
I tried running style against text collected from various PC sites. The very lowest ratings came from text collected from an MSN forum host, but I only got about 600 lines because the forums suffer the Wintel design disease of requiring you to click for each new text contribution, and I get bored easily.
readability grades:
Kincaid: 2.9
ARI: 1.9
Coleman-Liau: 8.0
Flesch Index: 89.5
Fog Index: 6.0
Lix: 21.5 = below school year 5
SMOG-Grading: 7.1
The highest PC-oriented ratings came from a sample of about 2,500 lines taken from reader comments hosted by PC Magazine:
readability grades:
Kincaid: 5.9
ARI: 5.9
Coleman-Liau: 9.0
Flesch Index: 79.3
Fog Index: 9.0
Lix: 32.2 = below school year 5
SMOG-Grading: 8.8
Notice that both sets score well below the level of slashdot's contributors.
So do Mac users differ? You bet. Here's the ratings summary, based on about 3,000 lines of text taken from reader comments hosted by the Macintouch site:
readability grades:
Kincaid: 8.9
ARI: 9.4
Coleman-Liau: 10.0
Flesch Index: 67.8
Fog Index: 12.0
Lix: 40.5 = school year 6
SMOG-Grading: 10.7
Not only were these ratings significantly higher than those for slashdot's contributors, and thus better than those for text from the PC sites, but the vocabulary was larger too. Without collapsing words to their root forms, but after removing punctuation (including capitalization) and numbers, the Macintouch sample had 870 unique words to only 517 for the combined PC sites.
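That vocabulary count is easy to reproduce. Here's a sketch of the method as just described - lowercase everything, strip out punctuation and numbers, and count distinct tokens without any stemming; exact counts will vary a little with how you treat apostrophes and hyphens:

    import re
    import sys

    def unique_words(text):
        # Lowercase, keep only letters (and internal apostrophes),
        # and collect distinct tokens; no stemming, so "word" and
        # "words" count separately.
        tokens = re.findall(r"[a-z']+", text.lower())
        return {t.strip("'") for t in tokens if t.strip("'")}

    for path in sys.argv[1:]:
        with open(path, encoding="utf-8", errors="replace") as f:
            print(path, len(unique_words(f.read())))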
Overall the results are pretty clear: Mac users may not actually be smarter than PC users, but they certainly use better English and a larger vocabulary to express more complex thinking.