I recently read an Information Week article on data center best practices that praises various data center centralization efforts. Here's the opening bit:
There are data centers, and then there are data centers. The first kind ranges from the overheated, wire-tangled, cramped closets that sometimes also host cleaning supplies to the more standard glass-house variety of years past. The second kind--and the topic of this article--cool with winter air, run on solar power, automatically provision servers without human involvement, and can't be infiltrated even if the attacker is driving a Mack truck full-throttle through the front gate.

These "badass" data centers--energy efficient, automated, hypersecure--are held up as models of innovation today, but their technologies and methodologies could become standard fare tomorrow.
Rhode Island's Bryant University sees its fair share of snow and cold weather. And all that cold outside air is perfect to chill the liquid that cools the university's new server room in the basement of the John H. Chafee Center for International Business. It's just one way that Bryant's IT department is saving 20% to 30% on power consumption compared with just a year ago. "We've come from the dark ages to the forefront," says Art Gloster, Bryant's VP of IT for the last five years.
Before a massive overhaul completed in April, the university had four "data centers" scattered across campus, including server racks stuffed into closets with little concern for backup and no thought to efficiency. Now Bryant's consolidated, virtualized, reconfigured, blade-based, and heavily automated data center is one of the first examples of IBM's young green data center initiative.
On a word-count basis, the first half of this article is mostly devoted to adding energy-savings/environmental sizzle to selling the centralization agenda - this bit, a mid-article return to the Bryant University example, pretty much wraps that up:
Consolidation was one of the main goals of Bryant's data center upgrade. The initial strategy was to get everything in one place so the university could deliver on a backup strategy during outages. Little thought was given to going green. However, as Bryant worked with IBM and APC engineers on the data center, going through four designs before settling on this one, saving energy emerged as a value proposition.

The final location was the right size, near an electrical substation at the back of the campus, in a lightly traveled area, which was good for the data center's physical security. Proximity to an electrical substation was key. "The farther away the power supply, the less efficient the data center," Bertone says. Microsoft and Equinix both have data centers with their own substation.
The next page or so focuses mainly on physical security - a return to the opening paragraph comment that some data centers are built so well they're proof against a Mack attack. A sample:
For Terremark, too, security is part of its value proposition. It recently built several 50,000-square-foot buildings on a new 30-acre campus in Culpeper, Va., using a tiered physical security approach that takes into consideration every layer from outside the fences to the machines inside.

For its most sensitive systems, there are seven tiers of physical security a person must pass before physically touching the machines. Those include berms of dirt along the perimeter of the property, gates, fences, identity cards, guards, and biometrics.
Among Terremark's high-tech physical security measures are machines that measure hand geometry against a database of credentialed employees and an IP camera system that acts as an electronic tripwire. If the cordon is breached, the camera that caught the breach immediately pops up on a bank of security monitors. That system is designed to recognize faces, but Terremark hasn't yet unlocked that capability.
Some of what Terremark says are its best security measures are the lowest tech. "Just by putting a gutter or a gully in front of a berm, that doesn't cost anything, but it's extremely effective," says Ben Stewart, Terremark's senior VP for facility engineering. After the ditches and hills, there are gates and fencing rated at K-4 strength, strong enough to stop a truck moving at 35 mph.
The last part of the article advocates data center automation - here's a bit:
"Our data centers are pretty dark," says Larry Dusanic, the company's director of IT. The insurer doesn't even have a full-time engineer working in its main data center in southern Nevada. Run-book automation is "the tool to glue everything together," from SQL Server, MySQL, and Oracle to Internet Information Server and Apache, he says.

Though Dusanic's organization uses run-book automation to integrate its systems and automate processes, the company still relies on experienced engineers to write scripts to make it all happen. "You need to take the time up front to really look at something," he says. Common processes might involve 30 interdependent tasks, and it can take weeks to create a proper automated script.
One of the more interesting scenarios Dusanic has been able to accomplish fixes a problem Citrix Systems has with printing large files. The insurance company prints thousands of pages periodically as part of its loss accounting, and the application that deals with them is distributed via Citrix. However, large print jobs run from Citrix can kill print servers, printers, and the application itself.
Now, whenever a print job of more than 20 pages is executed from Citrix, a text file is created to say who requested the job, where it's being printed, and what's being printed. The text file is placed in a file share that Opalis monitors. Opalis then inputs the information into a database and load balances the job across printers. Once the task is complete, a notification is sent to the print operator and the user who requested the job. Dusanic says the company could easily make it so that if CPU utilization on the print server gets to a certain threshold, the job would be moved to another server automatically. "If we had a custom solution to do this, it probably would have cost $100,000 end to end," he says.
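Out of curiosity about what that kind of "glue" actually looks like, here's a minimal sketch of the workflow described above - not Opalis, and not Dusanic's actual setup, just an illustration in Python under assumed file paths, a made-up descriptor format, invented printer names, and a simple round-robin standing in for whatever load-balancing logic the real system uses:

```python
# Illustrative sketch of the print-job workflow described above: watch a
# file share for job-descriptor text files, log each job to a database,
# pick a printer, and "notify". Every path, name, and format here is an
# assumption for the example; the real system is built on Opalis.
import sqlite3
import time
from itertools import cycle
from pathlib import Path

WATCH_DIR = Path("/mnt/print_jobs")      # hypothetical monitored file share
DB_PATH = "print_jobs.db"                # hypothetical job-tracking database
PRINTERS = cycle(["printer-01", "printer-02", "printer-03"])  # round-robin stand-in


def init_db() -> sqlite3.Connection:
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS jobs "
        "(requested_by TEXT, destination TEXT, document TEXT, assigned_printer TEXT)"
    )
    return conn


def parse_descriptor(path: Path) -> dict:
    # Assume one "key: value" pair per line, e.g.
    #   requested_by: jsmith
    #   destination: claims-office
    #   document: loss_report_q3.pdf
    fields = {}
    for line in path.read_text().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields


def process_job(conn: sqlite3.Connection, descriptor: Path) -> None:
    job = parse_descriptor(descriptor)
    printer = next(PRINTERS)             # simplest possible "load balancing"
    conn.execute(
        "INSERT INTO jobs VALUES (?, ?, ?, ?)",
        (job.get("requested_by"), job.get("destination"),
         job.get("document"), printer),
    )
    conn.commit()
    # Stand-in for routing the job and notifying the operator and requester.
    print(f"Routed {job.get('document')} for {job.get('requested_by')} to {printer}")
    descriptor.unlink()                  # descriptor handled; remove it


def main() -> None:
    conn = init_db()
    while True:                          # poll the share every few seconds
        for descriptor in sorted(WATCH_DIR.glob("*.txt")):
            process_job(conn, descriptor)
        time.sleep(5)


if __name__ == "__main__":
    main()
```

The point isn't the code, it's the scale: the "glue" amounts to a bit of scripting around a file share and a database, which is what makes Dusanic's comparison to a $100,000 custom solution plausible.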
Put all the pieces together and what you get is an innocent-sounding question with an immediate corollary: how does today's "badass" data center differ from the 1970s glass house?
The answer, I think, is that it doesn't: from physical design to the controls imposed on users, this is the 1970s all over again. And that brings up the corollary question: all of this is discussed and presented, both in the article and in the real world, from an IT management perspective - so who represents the users, and what role do they have in any of it?
The answer to that, I think, is that users weren't considered except as sources of processor demand and budget - and that everything reported in this article, from the glass-house isolation achieved at Bryant to the obvious pride taken in the user-tracking component of the ludicrous printing "solution" at Dusanic's company, reflects an internal IT focus that places enormous managerial barriers between users and IT.
Think about that a bit and I'm sure you'll agree it raises the most difficult question of all: assume, as I do, that the analyses these organizations did before committing to the increased controls and centralization praised in the article showed significant savings to IT, and then ask how it all nets out organizationally once the impact on users is accounted for.
My guess is, first, that the question is never seriously considered by the people proposing or executing this type of IT power grab; and, second, that the answer will show up, in the longer term, as the organizational cost of rebel and personal IT. In other words, when some professor spends an extra dollar on a laptop so he can work independently of the network, spends an extra hour trying to make his own backups work, or relies on his home machine to serve course PDFs to his students, he's functioning as a largely untrained sysadmin costing $100,000 a year or more - and incurring enormous organizational costs that should have been charged against those centralization projects, but almost certainly were not.
And from that I get my bottom line on this - a pithy new rule for executives reviewing data processing proposals from mainframers and their Wintel colleagues: the more money organizations save by centralizing IT control and processing, the more it costs them.