
By: Kaylie Gyarmathy on July 21st, 2014


Talking TCO: What's the Real Deal with Data Center Consolidation?



Gone are the days of tiny, windowless server rooms running hot and melting down hardware left and right. Data centers have evolved to leverage the environment rather than fight it, and next-gen providers now rely on blade servers capable of 20 times the consolidation of legacy systems — along with 20 times the power requirements and heat output. But these changes in data center consolidation prompt a question: What's the real total cost of ownership (TCO) when cabinet densities and power demands rise exponentially?

Maximizing Space

What's the simplest way to shrink data center budgets and decrease TCO? Put more resources in the same cabinet. If the hardware supports it, this lowers latency and avoids data 'sprawl.' Unfortunately, it isn't always an easy task.
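To make that idea concrete, here is a minimal back-of-the-envelope sketch in Python. The cabinet lease, per-server power draw, and electricity rate are purely illustrative assumptions, not vXchnge figures; the point is simply that fixed cabinet costs spread across more servers drive down the monthly cost per server.

# Rough sketch: how cabinet density affects monthly cost per server.
# All figures below (cabinet lease, power rate, server draw) are
# illustrative assumptions, not actual colocation pricing.

def monthly_cost_per_server(servers, servers_per_cabinet,
                            cabinet_lease_usd=1500.0,
                            watts_per_server=350.0,
                            usd_per_kwh=0.10):
    """Split the fixed cabinet lease plus metered power across every server."""
    cabinets = -(-servers // servers_per_cabinet)  # ceiling division
    space_cost = cabinets * cabinet_lease_usd
    power_cost = servers * watts_per_server / 1000 * 24 * 30 * usd_per_kwh
    return (space_cost + power_cost) / servers

# Same 200 servers, spread thinly vs. consolidated into dense cabinets.
print(round(monthly_cost_per_server(200, servers_per_cabinet=10), 2))
print(round(monthly_cost_per_server(200, servers_per_cabinet=40), 2))

With these assumed inputs, the power bill stays the same either way; the savings come entirely from leasing a quarter as many cabinets.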

As noted in a recent NextGov article, one problem government organizations face in managing their TCO is duplication of data. Survey respondents reported that federal data was duplicated an average of "four or more times," requiring massive storage volumes and driving up the TCO of any data center server. Consolidation remains a priority, however: the U.S. Navy just issued the first contract of its new data center consolidation effort, aimed at cutting its 1,400 application 'capabilities' in half by eliminating duplication. In other words, excessive repetition simply isn't worth it, even for the Department of Defense (DOD). Effective consolidation depends both on how data is stored and on what data actually requires storage.
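As a rough illustration of why duplication inflates storage TCO, the short Python sketch below multiplies an assumed unique data footprint by the number of copies kept. The 500 TB footprint and $25 per TB-month rate are hypothetical placeholders, not figures from the NextGov survey or the Navy contract.

# Back-of-the-envelope look at what "duplicated four or more times"
# does to storage spend. Dataset size and $/TB-month are assumed values.

def storage_cost(unique_tb, copies, usd_per_tb_month=25.0):
    """Monthly storage cost when every unique terabyte is stored `copies` times."""
    return unique_tb * copies * usd_per_tb_month

unique_tb = 500  # assumed unique data footprint, in terabytes
print(storage_cost(unique_tb, copies=4))  # data duplicated four times
print(storage_cost(unique_tb, copies=1))  # after deduplication/consolidation

Under these assumptions, cutting duplication from four copies to one shrinks the monthly storage bill from $50,000 to $12,500 without touching a single byte of useful data.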

Avoiding Burn-Out

But TCO isn't just about striking the right balance between space and necessary data storage — it's also about minimizing hardware failure. According to a research paper from Carnegie Mellon University, hardware accounts for approximately 60 percent of all failures, well ahead of other factors such as environment, software, or human error. As a result, it's often tempting to avoid higher-density, higher-temperature cabinets in favor of larger, cooler spaces.

As a recent VentureBeat article points out, however, that's not always cost-effective. ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) recommends that servers operate between 65 and 80 degrees Fahrenheit to help minimize failure, but the organization 'allows' temperatures of up to 90 degrees. By nudging temperatures just past the 80-degree mark, data centers can improve their power usage effectiveness (PUE), since less energy goes to cooling, and spend less on power each month. The VentureBeat piece notes that a five-megawatt data center could spend up to $500,000 per month on energy, yet save roughly that amount per year simply by running warmer. Any increase in server failures would be minimal and more than covered by the power savings, improving TCO.
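The arithmetic behind that claim can be sketched in a few lines of Python. The PUE values and the $0.10 per kWh electricity rate below are illustrative assumptions, not inputs published in the VentureBeat piece; they are chosen only to show how a modest PUE improvement at a five-megawatt IT load translates into six-figure annual savings.

# Sketch of the temperature/PUE trade-off for a 5 MW facility.
# PUE values and electricity rate are illustrative assumptions.

HOURS_PER_MONTH = 24 * 30

def monthly_energy_cost(it_load_mw, pue, usd_per_kwh=0.10):
    """Total facility energy bill: IT load scaled by PUE, priced per kWh."""
    facility_kw = it_load_mw * 1000 * pue
    return facility_kw * HOURS_PER_MONTH * usd_per_kwh

cost_cool = monthly_energy_cost(5.0, pue=1.6)   # cooler set-point, more cooling energy
cost_warm = monthly_energy_cost(5.0, pue=1.45)  # set-point nudged past 80 degrees F
print(round(cost_cool))                     # monthly bill at the cooler set-point
print(round((cost_cool - cost_warm) * 12))  # annual savings from running warmer

Keep in mind that a lower PUE is better: it means a larger share of the power bill goes to the IT load itself rather than to cooling overhead.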

The Reality of TCO

To compete in a cloud-based global tech market, companies can't afford to rely on legacy data center systems and hope that hardware stability will somehow equate to ideal TCO. Effective management of TCO means taking intelligent risks by consolidating on high-performance stacks, minimizing data duplication, and being unafraid of the occasional hardware failure in pursuit of better PUE.


About Kaylie Gyarmathy

As the Marketing Coordinator for vXchnge, Kaylie handles the coordination and logistics of tradeshows and events. She is also responsible for social media marketing and brand promotion through various outlets. Kaylie enjoys creatively developing new ways and events to capture the attention of the vXchnge audience. If you have a topic idea, feel free to reach out to Kaylie through her social platforms.
