When CERT published its 2005 vulnerabilities summary earlier this year, it drew hundreds of reality-reversing headlines along the lines of Tom Espiner's ZDNet report: "Linux and Unix 'had more vulnerabilities than Windows.'"
On January 10th I looked at the list of 2,328 claimed Unix vulnerabilities, only to discover that they were greatly exaggerated. Not only are almost all of them application rather than Unix issues, but most (62%) are duplicates, and, among the remainder, most of the ones I checked turned out to be either absurdly overenthusiastic claims of vulnerabilities where there aren't any, applicable only to pre-release code that never made it to production, or exploitable only under circumstances that should never reasonably arise.
Some, furthermore, have an engaging disingenuousness about them that just begs for the attention of a good cartoonist. Consider, for example, one labelled "Adobe Reader For Unix Local File Disclosure". This is based on CVE-2005-1841:
The control for Adobe Reader 5.0.9 and 5.0.10 on Linux, Solaris, HP-UX, and AIX creates temporary files with the permissions as specified in a user's umask, which could allow local users to read PDF documents of that user if the umask allows it.
Basically, a local attacker with legitimate access to the machine can use this "vulnerability" to read /tmp/*.pdf files being read by a user whose default file permission settings leave files world-readable.
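Seen mechanically, there's not much to it. Here's a minimal sketch - my own illustration, not Adobe's code, using a hypothetical /tmp/example.pdf path - of how a umask-governed temporary file ends up readable by other local users:

    import os

    # Sketch of the umask behaviour behind CVE-2005-1841 (not Adobe's
    # actual code). A program asking for mode 0666 gets 0666 minus the
    # bits in the user's umask; with the common default of 022 that is
    # 0644, i.e. readable by every local user.
    os.umask(0o022)                # a typical default umask
    path = "/tmp/example.pdf"      # hypothetical path, illustration only
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
    os.close(fd)

    print(oct(os.stat(path).st_mode & 0o777))  # 0o644: world-readable
    os.remove(path)

In other words, the "vulnerability" amounts to the operating system doing exactly what the user's umask tells it to do.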
Unfortunately, the unreality of many of the listings on the Windows side goes the other way. When I looked at the list of 812 Windows vulnerabilities, it turned out to significantly underrepresent reality by turning genuine security issues known to affect multiple application/OS combinations into remote possibilities that might affect a few applications on some generic Windows-brand operating system.
As a special case, consider eBay item #7203336538: a "Brand new Microsoft Excel Vulnerability." This is the only Excel vulnerability CERT lists for Windows and therefore counts exactly as much against Windows as the Adobe one mentioned above does against Unix.
In reality, Microsoft admitted the flaw; the Google search "Excel vulnerability MS05- site:microsoft.com" returns 63 hits; all versions of Excel on all Microsoft platforms back to Windows 95 are affected; and several blackhat sites claimed to have exploit code capable of seizing full system control provided only that at least one user opens an attached .xls file.
The question today, however, isn't whether CERT's reports are biased - they clearly are - but what can, or should, be done about it?
CERT's original funding came, I believe, in response to the Christmas "virus" that shut down IBM's network in 1987, but the organization seemed to become captive to the PC security industry in the early to mid nineties. As Linux started to gain momentum, therefore, no one reasonably expected CERT to be impartial about Linux's missing security problems: their absence, if understood by the public, would have put those companies out of business.
Unfortunately, that was then and this is now, and CERT's role has changed: the people who put out this year's summary represent the Government of the United States, not a vendor community, and higher standards apply.
Tom Espiner, in the report mentioned above, engages in some gentle questioning, but nevertheless reports "the facts" straight from the CERT bulletin:
The US Government has reported that fewer vulnerabilities were found in Windows than in Linux/Unix operating systems in 2005....
The report Cyber Security Bulletin 2005 was published last week and found that out of 5,198 reported vulnerabilities, 812 were Windows operating system vulnerabilities, while 2,328 were Unix/Linux operating system vulnerabilities. 2,058 were multiple operating system vulnerabilities.
And that's fundamentally what's wrong here: not that a PC vendor dominated group took the press for a ride, but that the name and credibility of the United States Government got attached to a report whose effect is to mislead the public by hyping Unix weaknesses while de-emphasizing both the importance and the prevalence of those affecting Microsoft's user communities.
At a minimum, therefore, Homeland Security management needs to make the organizational changes required to bring impartiality to CERT's operations - and that means applying the same standards, the same kinds of wording, and the same thoroughness to all vulnerability claims it reports.
On the other hand, impartiality, while a critical requirement, is not enough. Had CERT applied its Unix standards to Windows, the count might have been on the order of 2,328 Unix vulnerabilities versus 20,000+ Windows-related ones. Conversely, had it applied its Windows standards to Unix, the list might have shown 812 Windows-related vulnerabilities versus perhaps 90 Unix ones. Either way, and regardless of what the numbers would really be, this would have been fairer and would not have produced the hundreds of wildly misleading headlines the real summary did - but it would have been no more useful than the present list for the real business of security risk assessment.
The easy option, therefore, is for CERT to simply stop issuing summary bulletins and supporting lists that don't contribute to the public good.
A more useful option, however, is for the homeland defence folks to figure out what the cyber security job really is and then what role CERT should play in getting it done.
Neither part of this is obvious. For example, the SANS Institute has cleaned up its summary vulnerability reporting and may be well positioned to take over this role from CERT. Indeed, you could see CERT's decision to issue that bulletin as part of an inappropriate turf war between proxies for the core Homeland Defence folks and the FBI.
The how part isn't easy either. Several people have privately sent me e-mail outlining what I thought were pretty good ideas for this, and certainly whichever agency gets the responsibility should consult the general IT community for this kind of input. For myself, I'd like to see summary assessments expressed as a single number capturing the risk per 100,000 installed systems - with the numbers backed by the real experience of a broad panel of people actually using the technologies described.
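To be clear about what I mean, here's a toy sketch - the formula and every number in it are mine, invented purely for illustration, not anything CERT publishes:

    # Toy sketch of the normalized measure suggested above: incidents
    # per 100,000 installed systems. Both inputs below are made up.
    def risk_per_100k(incidents, installed_base):
        return incidents / installed_base * 100_000

    # e.g. 450 confirmed incidents across an installed base of 12 million:
    print(round(risk_per_100k(450, 12_000_000), 1))   # 3.8

Normalizing by installed base at least puts a Windows number and a Unix number on the same footing - something raw vulnerability counts can never do.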
So the bottom line prescription is simple: the people who control CERT need first to enforce standard policies on organizational behavior and impartiality while, in the longer term, also re-examining CERT's role and organizational mandate to ensure that whatever it does, it produces value, rather than embarrassment, for the taxpayer.