Both VeriTest (formerly zdlabs), acting on Microsoft's behalf, and IBM have recently issued reports on running Ziff Davis Media's PC Magazine NetBench performance benchmark on mainframe Linux. The results are not directly comparable, because IBM used a dedicated 16 CPU z900 while VeriTest was restricted to a two CPU partition on a z900, but overall it appears that Microsoft reports better mainframe performance than IBM does.
| Microsoft Results in Megabits/sec. | IBM Results in Megabytes/sec. |
|---|---|
| Hardware: 2 CPU IBM z900 1C6 LPAR, 24GB Memory | Hardware: z900 2064 model 116 with 16 CPUs enabled as two 8 CPU partitions, 64GB Memory, 1.6TB DASD (disk) on 32 ESCON channels; 10 OSA Express Gigabit Ethernet (GBE) cards |
| Results (excerpted from the VeriTest/Microsoft report):<br>NetBench Results without z/VM<br>The IBM z900 two processor LPAR achieved 14 percent less performance than an Intel-based server with two 900 MHz Intel Xeon processors running Windows Server 2003. On the NetBench Enterprise DiskMix suite for testing file serving, the z900 only achieved 546 Megabits per Second maximum throughput, compared to 632 Megabits per Second maximum throughput the Windows server achieved in the VeriTest study.<br>NetBench Results with z/VM<br>z/VM, which is required to run multiple virtual Linux servers on the mainframe, exerts a heavy penalty on mainframe performance for file serving. ... the penalty measured by the Mainframe Linux Benchmark Project was 24 percent at maximum throughput with four virtual Linux server images. Overhead exceeded 48 percent with ninety-six server images. Overall, the highest NetBench results for Linux on z/VM were 417 Megabits per Second throughput with four Linux server images and sixty clients. Additionally the z900 started generating read errors on the clients after twenty server images were reached, resulting in the benchmark software dropping clients. At twenty server images with ninety-six clients, the maximum throughput achieved was 288 Megabits per Second. At ninety-six server images, and ninety-five clients with one dropped client, the mainframe achieved 199 Megabits per Second maximum throughput. This means that the maximum average throughput per server at ninety-six servers was only 2.071 Megabits per Second, and that it would take one of these server images 38.62 seconds to serve one 10 Megabyte file. | Results (excerpted from the IBM report):<br>1. Between 50-70 concurrently active servers with an aggregate peak throughput of 105.8 MB/second could be supported. Testing with a number of concurrently active servers beyond 70 resulted in lower throughput and longer response times.<br>2. With a single GBE OSA card, up to 25 guests with one concurrent request each and an aggregate throughput of 13.37 MB/second could be supported. Maximum OSA card throughput was reached between 12-15 SMB processes.<br>3. With a single guest server and a single OSA card, it was possible to support up to 30 concurrent users at an aggregate throughput of 19.4 MB/second.<br>Native results versus z/VM guest results: The cost in throughput between the 2.4.17 kernel in a native LPAR versus running the timer change version of this kernel on z/VM is most significant with small numbers of guests. |
| Source Document | Source Document |
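As a quick check on the per-server arithmetic quoted in the VeriTest excerpt above, here is a minimal sketch in Python; the 199 Megabit, ninety-six server, and 10 Megabyte figures come straight from the excerpt, and the only thing added is the eight-bits-per-byte conversion.

```python
# Sanity check on the per-server arithmetic in the VeriTest excerpt:
# 199 Megabits/sec. aggregate across ninety-six server images, and the
# time for one such image to serve a single 10 Megabyte file.
aggregate_mbit_per_sec = 199.0   # maximum throughput at ninety-six server images
server_images = 96
file_size_mbytes = 10

per_server_mbit_per_sec = aggregate_mbit_per_sec / server_images
seconds_per_file = (file_size_mbytes * 8) / per_server_mbit_per_sec  # 8 bits per byte

print(f"per-server throughput: {per_server_mbit_per_sec:.3f} Megabits/sec.")  # ~2.07
print(f"time to serve one 10 MB file: {seconds_per_file:.1f} seconds")        # ~38.6
```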
The Microsoft-sponsored report has cost information; IBM's does not. In summary, VeriTest's estimates for the annual cost of operating the mainframe partition as benchmarked are based on numbers from Gartner and range from about $252,000 to about $480,000, depending on factors like customer licensing and alternate workload. According to the report, these annual costs are ten to twenty times those of the more powerful 900MHz dual Xeon machine running Windows Server 2003.
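Those ratios are easy to back out. A minimal sketch, using only the figures quoted above, shows that the implied annual cost of the Wintel box comes out around twenty-four to twenty-five thousand dollars at either end of the range:

```python
# Back out the annual cost of the dual Xeon machine implied by the report:
# mainframe estimates of roughly $252,000 to $480,000 per year, described
# as ten to twenty times the cost of the Windows Server 2003 box.
mainframe_low, mainframe_high = 252_000, 480_000
ratio_low, ratio_high = 10, 20

wintel_from_low_end = mainframe_low / ratio_low      # $25,200 per year
wintel_from_high_end = mainframe_high / ratio_high   # $24,000 per year

print(f"implied Wintel annual cost: ${wintel_from_high_end:,.0f} to ${wintel_from_low_end:,.0f}")
```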
According to VeriTest, the best throughput results obtained from the 2 CPU z900 were 68.25 MBytes/sec. with one Linux instance running without z/VM and 52.12 MBytes/sec. with four Linux instances running under z/VM. Scale this linearly to a 16 CPU system and you would predict that the full machine should offer about 546 MBytes/sec. native and about 420 MBytes/sec. with perhaps 16 Linux instances under z/VM.
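The conversion and projection behind those numbers is simple arithmetic; the sketch below converts VeriTest's peak Megabit figures to MBytes/sec. and applies an assumed perfectly linear 8x scaling from the 2 CPU partition to the 16 CPU machine.

```python
# VeriTest's peak results (reported in Megabits/sec.) converted to MBytes/sec.
# and projected linearly from the 2 CPU partition to a full 16 CPU z900.
# Perfectly linear scaling is an assumption here, not a measurement.
native_mbit = 546.0   # best native (no z/VM) throughput, Megabits/sec.
zvm_mbit = 417.0      # best z/VM throughput (four Linux images), Megabits/sec.
cpu_scale = 16 / 2    # 2 CPU partition -> 16 CPU machine

native_mbytes = native_mbit / 8   # 68.25 MBytes/sec.
zvm_mbytes = zvm_mbit / 8         # ~52.12 MBytes/sec.

print(f"native: {native_mbytes:.2f} MB/s measured -> {native_mbytes * cpu_scale:.0f} MB/s projected")
print(f"z/VM:   {zvm_mbytes:.2f} MB/s measured -> {zvm_mbytes * cpu_scale:.0f} MB/s projected")
# projected: 546 MB/s native, ~417 MB/s (about 420) under z/VM
```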
IBM's results are nowhere near that. Using a single guest server in an eight CPU partition, they were able to achieve only about 19.4 MBytes/sec. on a single OSA-Express Gigabit Ethernet card, and maxed out at 105.8 MBytes/sec. when running z/VM on the full machine with "between 50 and 70" concurrent Linux instances and all ten OSA/GBE cards.
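Setting IBM's measured peak against that linear projection makes the gap concrete; the short comparison below uses the 105.8 MBytes/sec. figure from the IBM report and the roughly 417 MBytes/sec. projection from the sketch above.

```python
# IBM's measured full-machine peak versus the linear projection from
# VeriTest's 2 CPU partition (both in MBytes/sec.).
ibm_peak = 105.8        # IBM report: 50-70 guests, all ten OSA/GBE cards
projected_zvm = 417.0   # 8x linear projection of VeriTest's best z/VM result

print(f"IBM reached {ibm_peak / projected_zvm:.0%} of the projected z/VM throughput")  # ~25%
```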
A big part of the technical "why" here relates to scaling:
Microsoft's tester used two dual-channel OSA/GBE cards to connect to the network and two fibre channel connectors running to a single 1.2TB ESS/Shark array, while IBM used ten OSA/GBE cards and "8 S/390 ECKD control units with 512 emulated 3390-3s connected via 32 ESCON channel paths to the z900" (i.e. 32 controllers on eight ports). For the Microsoft configuration to scale up linearly, with unimpeded data flow on each device, you would need 32 ports --eight more than the system maximum.
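Reading the 32 as GBE ports -- two dual-channel OSA/GBE cards make four ports, and going from 2 to 16 CPUs is a factor of eight -- the arithmetic looks like the sketch below; the 24-port ceiling is not stated directly but is implied by the "eight more than the system maximum" remark.

```python
# Scale the VeriTest partition's network attachment linearly to the full machine.
# The 24-port ceiling is not given directly; it is implied by the text's
# "eight more than the system maximum" remark.
gbe_ports_in_test = 2 * 2    # two dual-channel OSA/GBE cards = four GBE ports
cpu_scale = 16 // 2          # 2 CPU partition -> 16 CPU machine
implied_max_ports = 24

ports_needed = gbe_ports_in_test * cpu_scale
print(f"GBE ports needed for linear scaling: {ports_needed}")                        # 32
print(f"shortfall against the implied maximum: {ports_needed - implied_max_ports}")  # 8
```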
In effect, VeriTest got better than expected z/VM/Linux results because the partition it used dramatically overstates the scalability of the machine. Note, as a corollary, that this also has the effect of seriously understating the cost of a fully scaled up z/VM system, and thus hurts Microsoft's case on both cost and performance.
There's a more subtle question here too. You don't have to be unusually cynical to expect that a report paid for by Microsoft and executed by a spin-off from an organization founded on hyping Microsoft products would tend to favor Microsoft. Why then did this report seem to somewhat exaggerate the real world viability of mainframe Linux?
I have two ideas: one, which I'm still working on clarifying, related to beliefs about how operating systems and hardware condition tester behaviour; and one related more specifically to the report at hand. The results shown by Microsoft --both the performance advantage a dual 900MHz Xeon running Windows Server 2003 has over the "Turbo" mainframe and the 20 to 1 cost advantage to Wintel-- contradict the idea that enterprise scale "big iron" must be as powerful as it is expensive. It is possible, therefore, to guess that VeriTest produced some initial results which were simply disbelieved, resulting in the generosity shown in the revised document as ultimately published.
An explanation based on a reverse bias arising from incredulity is supported by the fear, uncertainty, and doubt that has to underlie this paragraph from the report:
The META Group, an independent consultancy, has audited the [test] plan, the facility, the tests, and the final report. META Group was asked to verify that the benchmark configuration and procedures were appropriate but was not asked to endorse the results one way or the other. Also, neither VeriTest nor Ziff Davis Media, who provided the PC Magazine benchmark test suites, was approached about endorsing the results. Based on IBM's marketing, expectations going in to the project were that mainframe Linux would produce results at the higher end of Windows server performance. The results turned out quite the opposite.
Not only is this hesitancy utterly uncharacteristic of Microsoft, but it's totally missing from the report's evil twin: an earlier VeriTest report comparing Windows Server 2003 to Linux on that same dual 900MHz Xeon. That analysis was published in May, four months before this one, and pretty much conforms to cynical expectation, with lots of questionable content and a probably wrong conclusion favoring Microsoft --but no quavering quibbles about third party consultants, tame or otherwise, reviewing test plans or authenticating processes.