A petaflop (1E15) is a million gigaflops: a thousand trillion floating point operations per second.
That's a big number - to get an idea how big, imagine having to count out Obama's trillion dollar debt plan, now amounting to just over $17,000 per vote counted for him in the last election, in tenths of pennies - and understand that IBM's planned 20 petaflop machine could do that twenty times per second. From the press release:
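If you want to check that arithmetic yourself, here is the back-of-the-envelope version - my figures, not IBM's:

```python
# Back-of-the-envelope check on the counting example above
debt_dollars = 1e12                      # one trillion dollars
increments = debt_dollars * 1000         # counted in tenths of pennies: 1e15 increments

sequoia_ops_per_second = 20e15           # 20 petaflops, one count per operation

passes_per_second = sequoia_ops_per_second / increments
print(passes_per_second)                 # 20.0 -- the whole pile, twenty times a second
```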
Armonk, NY - 03 Feb 2009: The Department of Energy's National Nuclear Security Administration has selected Lawrence Livermore National Laboratory as the development site for two new supercomputers - Sequoia and Dawn - and IBM as the computers' designer and builder. These systems will allow for smarter simulation and negate the need for real-world weapon testing.

Under terms of the contract, Sequoia will be based on future IBM BlueGene technology, exceed 20 petaflops (quadrillion floating point operations per second) and will be delivered in 2011 with operational deployment in 2012. Dawn, an initial delivery system to lay the applications foundation for multi-petaflop computing, will be based on BlueGene/P technology, reach speeds of 500 teraflops (trillion floating point operations per second) and is scheduled for operational deployment in early 2009.
- Sequoia will represent a significant leap forward in compute power. With top speeds of 20 petaflops, Sequoia will be approximately 15 times faster than today's most powerful supercomputer and will offer more processing power than the entire list of Top500 supercomputers running today.
- Sequoia will primarily be used to ensure the safety and reliability of the nation's nuclear weapons stockpile. It will also be used for research into astronomy, energy, human genome science and climate change.
- Sequoia will be based on future IBM BlueGene technology and use 1.6 million IBM POWER processors and 1.6 petabytes of memory, which are housed in 96 refrigerator-sized racks occupying just 3,422 square feet. The Sequoia system will deploy a state-of-the-art switching infrastructure that will take advantage of advanced fiber optics at all levels.
- Sequoia will run the Linux operating system.
- The machine will be built, tested and benchmarked in IBM's Rochester, Minnesota plant, home of the Blue Gene class of supercomputers the company builds for ultra-scale computational applications. The hardware and software development will be provided by IBM engineers in Rochester and by researchers in IBM's Yorktown Heights, N.Y. research lab, in partnership with the Lawrence Livermore National Lab and the Argonne National Lab.
- Compared to most traditional supercomputer designs, Sequoia will offer unprecedented levels of energy efficiency. Sequoia is expected to deliver world-leading efficiency of 3,050 calculations per watt of energy.
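Taking those press release numbers at face value, a bit of division puts them in more familiar terms - this is my arithmetic, not IBM's:

```python
# Rough per-core, per-rack and power figures implied by the press release numbers
cores = 1.6e6               # "1.6 million IBM POWER processors"
memory_bytes = 1.6e15       # 1.6 petabytes
racks = 96
peak_flops = 20e15          # 20 petaflops
flops_per_watt = 3.05e9     # reading "3,050 calculations per watt" as ~3,050 MFLOPS/W

print(memory_bytes / cores)              # 1e9   -> roughly 1 GB of memory per core
print(cores / racks)                     # ~16,667 cores per rack
print(peak_flops / flops_per_watt / 1e6) # ~6.6  -> on the order of 6-7 megawatts at peak
```

In other words: about a gigabyte of memory per core, and somewhere in the 6-7 megawatt range at peak - assuming the "calculations per watt" figure really means megaflops per watt.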
The current record holder for both performance maximization and power use minimization is IBM's "Roadrunner" hybrid - the world's first and, so far, only petaflop machine. Sequoia will be twenty times faster than Roadrunner, but also much smaller: 96 racks to Roadrunner's 278; and much more power efficient: 3,050 MF/W versus Roadrunner's 445.
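Put those figures side by side and the ratios speak for themselves - the same numbers as above, just divided out:

```python
# Sequoia vs. Roadrunner, using the figures quoted in this paragraph
sequoia_pf, sequoia_racks, sequoia_mfw = 20, 96, 3050
roadrunner_pf, roadrunner_racks, roadrunner_mfw = 1, 278, 445

print(sequoia_pf / roadrunner_pf)        # 20x the peak speed
print(roadrunner_racks / sequoia_racks)  # ~2.9x -> roughly a third of the footprint
print(sequoia_mfw / roadrunner_mfw)      # ~6.9x the work per watt
```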
Internally the Cell architecture isn't as much about number crunching as it is about communications - think of it as a grid on a chip, with on-chip traces taking the place of traditional cabling and backplanes. As a result it's always been obvious that you could make arbitrarily large grids by linking these things either externally in cabinets or internally on the wafer. Thus IBM's claim that Sequoia will use 1.6 million POWER processors suggests that the basic building block will have four 16-SPU Cell2 chips and a Power7-based SAP, in the same way that Roadrunner's triblades have one dual-core Opteron and two eight-SPU Cell BE chips.
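If that guess about the building block is right - and it is a guess, not anything IBM has confirmed - the 1.6 million processor figure decomposes roughly like this:

```python
# Speculative decomposition of the "1.6 million processors" figure, assuming the
# building block guessed at above: four 16-SPU Cell follow-on chips per node
# plus one Power7-class SAP -- none of this is confirmed by IBM.
spus_per_chip = 16            # assumption
chips_per_node = 4            # assumption
racks = 96                    # from the press release

spus_per_node = spus_per_chip * chips_per_node    # 64
nodes = 1.6e6 / spus_per_node                     # ~25,000 nodes, if IBM is counting SPUs
print(nodes)                                      # 25000.0
print(nodes / racks)                              # ~260 nodes per rack
```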
The numbers here are so big they're disquietingly beyond direct comprehension, but do you know what's most surprising about the announcement? Roadrunner pretty much hit the limits on moving data from files to processors without stalling the system; but here we have IBM contractually committing to beating that by twenty times with one third the available cabling space.
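In ratio terms the commitment looks something like this - a crude scaling argument, not a measured number:

```python
# Rough scaling argument: if file-to-processor bandwidth has to grow with compute,
# how much more data must move through each rack's worth of cabling?
compute_ratio = 20            # Sequoia vs Roadrunner peak speed
rack_ratio = 96 / 278         # Sequoia's racks as a fraction of Roadrunner's

print(compute_ratio / rack_ratio)   # ~58 -- each Sequoia rack has to move something
                                    # like sixty times the data a Roadrunner rack does
```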
That's astonishing: everyone knew supercomputing would be going all PPC/Cell just as soon as progress within IBM's bureaucracy aligned with the customer budget cycle to let it happen - but they would not have taken the contract unless they had something new worked out for storage. So what is it: buy Sun's optical technology? move to terabit-class InfiniBand? introduce some advance in highly parallel file systems? I don't know - but whatever it is, it could be more directly important to those of us who work in commercial computing than the changeover in supercomputing, because we don't usually need compute blades that individually do floating point faster than 474 of the world's top 500 supercomputers - but something that deeply integrates storage with computing? Now that's a technology with market legs.