The Importance of Backing Up Your Backup



Data center downtime is one of the biggest fears of any data center manager.  Data centers can experience downtime for a number of reasons, some within their control and some completely outside of it.  For instance, poor weather ranging from a heavy storm to a natural disaster could knock out power in the area where the data center resides.  No matter the source of the downtime, one thing is certain: data centers must have a backup plan in place.  But this is not news.  Every good data center has backup.  The next question is: is that enough?  The answer is a resounding no.  To ensure that data center information and functionality are protected, it is critical to have a backup for the backup in place.  Is this redundant?  Yes.  But ultimately, it could save money, jobs and even the business itself.  Data Center Knowledge points out just how costly downtime is for a data center and, by extension, a business: “Unplanned data center outages are expensive, and the cost of downtime is rising, according to a new study. The average cost per minute of unplanned downtime is now $7,900, up a staggering 41 percent from $5,600 per minute in 2010, according to a survey from the Ponemon Institute, which was sponsored by Emerson Network Power. The two organizations first partnered in 2010 to calculate costs associated with downtime.”
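To put that figure in perspective, a quick back-of-the-envelope calculation (the outage durations below are hypothetical) shows how fast costs accumulate at $7,900 per minute:

```python
# Back-of-the-envelope downtime cost at the Ponemon Institute's reported
# average of $7,900 per minute of unplanned downtime.
COST_PER_MINUTE = 7_900

def downtime_cost(minutes: float) -> float:
    """Estimated cost of an unplanned outage lasting `minutes`."""
    return minutes * COST_PER_MINUTE

for minutes in (10, 60, 240):  # hypothetical outage durations
    print(f"{minutes:>4} min outage: ${downtime_cost(minutes):,.0f}")
```

Even a ten-minute outage runs well into five figures, which is why the backup-for-the-backup argument tends to pay for itself.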

When you evaluate a data center and its power and capacity needs, and endeavor to create true redundancy in the backup power supply, you have to ensure that the redundancy is continuous.  If it is not, major problems could arise.  When creating the right system for your data center you have to anticipate future growth and needs.  What worked yesterday for data centers no longer works today.  It is a continuously changing world and, for this reason, it is also important to do regular audits of your backup system to ensure true redundancy has been achieved and is still functioning as you move forward.  The key power components of a data center include the backup generator, uninterruptible power supply (UPS), internal power supplies and power distribution unit (PDU), among others.  A fully redundant power supply will have adequate capacity to completely support the data center and all of its components with no single points of failure.  Should a power outage occur, a data center with this redundant backup power system in place will remain completely functional.  Not all data centers need so elaborate a backup for their backup but, if the environment is running a mission-critical project, this type of backup is absolutely necessary.  Backing up the backup may seem redundant, and it is, but it saves data centers from frustration, lost time and significant loss of money.
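As a rough sketch of what "no single points of failure" means for a dual-path design, the check below (with illustrative kW figures) verifies that each path alone can carry the full load:

```python
# Minimal sketch (hypothetical numbers): in a fully redundant 2N design,
# EITHER power path must be able to carry the entire IT load on its own.
def is_fully_redundant(total_load_kw: float, path_capacities_kw: list[float]) -> bool:
    """True if every individual path can support the whole load, so the
    failure of any one path is not a single point of failure."""
    return all(cap >= total_load_kw for cap in path_capacities_kw)

print(is_fully_redundant(400, [500, 500]))  # True: 2N redundancy
print(is_fully_redundant(400, [250, 250]))  # False: neither path alone suffices
```

Note the second case: two paths that only together cover the load look redundant on paper but fail the moment one path drops.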

Posted in Back-up Power Industry, Data Center Battery, Data Center Design, Power Distribution Unit, Uninterruptible Power Supply, UPS Maintenance | Comments Off

Prefabricated Modular Data Center vs. Traditional Data Center

With an influx of modular data centers, many may wonder if traditional legacy data centers are a thing of the past.  Traditional data centers are likely not going away completely, but there are certain applications in which modular data centers are a more ideal fit.  When designing a data center there are many “future proof” considerations that must be factored into the design process so that a data center does not become unusable or outgrown.  But these same considerations often lead to traditional data centers being inefficient.  Because of this, a decision has to be made as to what is best: a traditional data center or a modular data center.

In the design process of a traditional data center, extra room and additional heating and cooling capacity are often built in to accommodate growth and future needs.  This methodology can help a traditional data center adapt and grow, but it can also lead to an inefficient use of space and higher energy bills.  Within a traditional data center, energy efficiency can be achieved by taking advantage of strategies like hot/cold aisles, ceiling-ducted air containment and more.  When these strategies are used, a traditional data center can be efficient while still having room to grow and serving its existing needs.  The problem with traditional data centers is that future growth is often incorrectly anticipated; if growth occurs more rapidly than expected, a traditional data center can be quickly outgrown.

Modular data centers present a number of benefits, particularly for new data center builds.  When starting from scratch, modular data centers allow you to start with current needs and then scale up or down as future needs arise. Data Center Knowledge discusses the importance of scalability when designing a data center: “With a repeatable, standardized design, it is easy to match demand and scale infrastructure quickly. The only limitations on scale for a modular data center are the supporting infrastructure at the data center site and available land. Another characteristic of scalability is the flexibility it grants by having modules that can be easily replaced when obsolete or if updated technology is needed. This means organizations can forecast technological changes very few months in advance. So, a cloud data center solution doesn’t have to take years to plan out.”  For businesses that do not have the space or budget for a full-blown data center, a modular data center is an ideal solution.  Modular data centers streamline the process, working with minimal space but maximum efficiency to get a data center up and running quickly.  When determining whether to design a traditional data center or a modular data center, space, budget, rack density and more must be considered.  Discuss these things with a data center contractor to determine which is the best fit for your unique needs.

Posted in Data Center Build, Data Center Construction, Data Center Design, Datacenter Design | Comments Off

How to Anticipate Data Center Capacity Needs In a Consolidation Project

Data center consolidation projects, and particularly federal data center consolidation projects, are taking place all over the country.  Data center consolidation can prove immensely beneficial and often saves a significant amount of money in the long run.  Outdated data centers are often not energy efficient and prove to be very wasteful.  By consolidating old data centers and moving some equipment and all of the data to newer data centers, energy consumption can be greatly reduced.  A significant amount of money can be saved through consolidation, and the U.S. Government Accountability Office shows just how much in recent statistics regarding federal data center consolidation projects: “Of the 24 agencies participating in the Federal Data Center Consolidation Initiative, 19 agencies collectively reported achieving an estimated $1.1 billion in cost savings and avoidances between fiscal years 2011 and 2013. Notably, the Departments of Defense, Homeland Security, and Treasury accounted for approximately $850 million (or 74 percent) of the total. In addition, 21 agencies collectively reported planning an additional $2.1 billion in cost savings and avoidances by the end of fiscal year 2015, for a total of approximately $3.3 billion—an amount that is about $300 million higher than the Office of Management and Budget’s (OMB) original $3 billion goal. Between fiscal years 2011 and 2017, agencies reported planning a total of about $5.3 billion in cost savings and avoidances.”  Data center managers need to take a lot into account to ensure that the transition between data centers during consolidation is smooth, to prevent downtime or overloading in the new data center.  As data centers scramble to meet energy and data demands, data center managers must have a comprehensive and accurate knowledge of how much power and space will be needed in the new data center so that it can truly meet those needs.  Additionally, cooling needs must be anticipated; otherwise, equipment can overheat and energy efficiency will diminish.

Truly understanding the capacity of an existing data center that is being consolidated, and the power, data and cooling needs necessary to accommodate it, is easier said than done for data center managers.  Among the things that need to be understood and anticipated are current density, design rating, bulk power capacity, bulk cooling capacity, data center capacity, physical site details, equipment and more.  Determining what equipment can be brought from the existing data center to the new one will help anticipate physical space needs.  Additionally, it is important to understand which existing software licenses can be transferred and which may need to be cancelled or renewed.  While these are all important, they merely skim the surface of what needs to be anticipated in a data center consolidation project.  A new data center that is absorbing the results of a consolidation project needs to not only accommodate the addition but be capable of expansion and growth in the future.  While unanticipated problems are inevitable in any consolidation project, preparedness is the key to minimizing downtime and optimizing uptime and efficiency.
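As a minimal sketch of the inventory math described above, with hypothetical equipment values and an assumed 30% growth allowance:

```python
# Hypothetical inventory sketch: aggregate the power and rack-space footprint
# of equipment migrating in a consolidation, plus headroom for future growth.
equipment = [  # (name, power_kw, rack_units) -- illustrative values only
    ("storage array", 6.0, 8),
    ("blade chassis", 9.5, 10),
    ("network core", 2.5, 4),
]

growth_headroom = 1.30  # plan ~30% above current need (an assumption)
total_kw = sum(kw for _, kw, _ in equipment) * growth_headroom
total_ru = sum(ru for _, _, ru in equipment)
print(f"Plan for at least {total_kw:.1f} kW and {total_ru} rack units")
```

A real consolidation plan would extend the same tally to cooling load, network ports and licensing, but the principle of summing per-device footprints with headroom is the same.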

Posted in Data Center Build, Data Center Construction, data center cooling, Data Center Design, data center equipment, Data Center Infrastructure Management, Datacenter Design, Power Management | Comments Off

The Importance of Ensuring True Redundancy in a Data Center

A data center often prides itself on its ability to maintain uptime in the face of significant power and data loads.  Downtime is seen by customers and consumers as simply unacceptable, and while there are many ways to avoid downtime, one important protection all data centers need to utilize is redundancy.  Data center hardware and power needs are constantly evolving and, because of this, it can be difficult to anticipate and properly ensure power supply redundancy.  With a dual-path power supply, true redundancy can be achieved, but it is critical that both paths are capable of managing all equipment and power needs in the data center.  ComputerWeekly describes the importance of redundancy in a data center: “Redundancy has a negative connotation when the duplication is seen by the business as unnecessary. Yes, for some businesses this datacentre capacity excess is an issue, but for the majority, the other form of redundancy – the provisioning of a datacentre to survive a range of failure scenarios – has become even more of an issue. IT infrastructure is part of an organization’s DNA. If someone were to cut off the IT service for an organization, it would not be a small snag, but a corporate catastrophe for its operations. Business processes would halt, customers would be left stranded, suppliers would be unable to know what was required to be delivered, the organization would struggle to pay its employees what they are owed, communication and collaboration would be severely impaired. The overall availability of an IT platform means that an approach of a single application on a single physical server with dedicated storage and individual dedicated network connections is a strategy to oblivion. It is incumbent on IT to ensure that the IT platform can continue to operate through failures – as long as the cost of doing so meets the organization’s own cost/risk profile.”

The purpose of having a dual path is that, should a failure occur in one power supply, the other will be able to pick up exactly where the first left off and maintain uptime.  Having a redundant power supply today is simply best practice for any data center.  One of the most important parts of creating true redundancy is ensuring that, should a failure occur somewhere in one path, it will not affect the other path in any way. While many data centers choose to employ redundant power supplies, many fail to realize just how much power their supplies will actually need for true protection.  The sad fact is that many data center managers simply do not have an adequate grasp of the power needs of all of the equipment downstream.  Because new equipment is constantly being added to data centers, additions frequently get overlooked and suddenly, without anyone realizing it until it is too late, the power needs exceed the redundant power supply.  Power demands should be checked constantly so that the power redundancy is never exceeded.  The best way to do so is through a monitoring system that routinely verifies that all is as it should be.  When discussing redundancy, many may raise an eyebrow at the concept and think it is not very efficient. But, to properly run a data center, redundancy is necessary.  To ensure optimal efficiency, it is best to run the data center on the most efficient power supply and reserve the less efficient power supply as the redundant supply that is only used should a failure occur.
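A monitoring system of the kind described might implement a check along these lines; the thresholds and kW figures are illustrative assumptions, not a prescription:

```python
# Sketch of a redundancy-monitoring check (hypothetical thresholds): flag the
# data center when downstream load creeps toward what a single path can carry,
# i.e. when redundancy would be lost on a failover.
def redundancy_alert(load_kw: float, single_path_capacity_kw: float,
                     warn_fraction: float = 0.9) -> str:
    if load_kw > single_path_capacity_kw:
        return "CRITICAL: load exceeds one path; redundancy lost"
    if load_kw > warn_fraction * single_path_capacity_kw:
        return "WARNING: approaching single-path capacity"
    return "OK"

print(redundancy_alert(320, 400))  # OK
print(redundancy_alert(370, 400))  # WARNING
print(redundancy_alert(420, 400))  # CRITICAL
```

The warning band is the useful part: it catches creeping load from newly added equipment before a failover actually fails.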

Posted in data center maintenance | Comments Off

Energy Requirements for Government Data Centers

Government data centers exist all over the country, in various states and in a variety of buildings. It is no secret that data centers consume a lot of energy, but government data centers must comply with data center energy efficiency requirements. According to the Federal Energy Management Program, data centers account for a significant portion of overall energy consumption in the United States: “In 2013, data centers accounted for 2.7% of the 3831 billion kWh used in the US, or 102.9 billion kWh. Federal data centers used about 5 billion kWh in 2013, or nearly 10% of federal electricity use.” While, for the most part, the country does not have a specific government mandate or law governing energy consumption in data centers, more and more states are moving toward mandating energy efficiency in data centers. As the government and independent businesses strive to improve energy efficiency, be more green, and save money wherever possible, it is inevitable that data centers will see the initiation and implementation of energy efficiency requirements.

The PUE rating, or power usage effectiveness rating, helps measure how efficiently a data center uses energy. TechTarget explains what exactly PUE is and how it is measured: “Power usage effectiveness (PUE) is a metric used to determine the energy efficiency of a data center. PUE is determined by dividing the amount of power entering a data center by the power used to run the computer infrastructure within it. PUE is therefore expressed as a ratio, with overall efficiency improving as the quotient decreases toward 1.” Many federal buildings are very old and were certainly not designed with data centers or energy efficiency in mind. For this reason, many federal data centers have a poor PUE, with some in the high 2s or even 3s, dramatically highlighting the need for improved energy efficiency in data centers across the board. The U.S. government has taken on a new mandate for energy efficiency: an average PUE of 1.4, a 30% energy usage reduction, and more. This is no small task considering that many facilities are in the 2s or 3s, necessitating the closure of many data centers and a move toward new, more energy-efficient data centers, data center pods and other measures. Whether discussing a federal data center or an independent data center, it often comes down to the bottom line. Ultimately, consolidating energy usage and implementing energy-saving strategies to drive down PUE will not only make data centers more compliant in advance of a government mandate but will also save money. Over time, both compliance with energy efficiency mandates and money saved through reduced energy usage will provide much needed sustainability in data centers.
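Following TechTarget's definition, PUE is a one-line calculation; the facility figures below are hypothetical:

```python
# PUE as defined above: total power entering the facility divided by the
# power used to run the IT equipment inside it.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Hypothetical figures: an older federal site vs. the 1.4 mandate target.
print(pue(2800, 1000))  # 2.8 -- the kind of figure cited for older facilities
print(pue(1400, 1000))  # 1.4 -- the mandated average
```

Reading the ratio: at a PUE of 2.8, every kilowatt of useful IT work costs 1.8 additional kilowatts of cooling, distribution losses and other overhead.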

Posted in data center maintenance | Comments Off

Should Data Centers Use Ceiling-Ducted Air Containment?

Data centers, with all their power and energy usage, generate a lot of heat.  One of the biggest expenses for any data center is the cost of cooling.  For this reason, data centers are always developing new strategies to keep their facilities cool more effectively and efficiently while still performing their essential roles.  While each approach to cooling offers unique advantages, each scenario is different, and what is most efficient for one data center may not be most efficient for another.

Hot air and cold air containment are two options for improving cooling efficiency in data centers.  Cold air containment strategies use a hot aisle/cold aisle configuration but contain the cold aisles so that hot air and cold air do not mix, which increases the effectiveness of cooling.  But even with hot air and cold air configurations, there are still additional ways to maximize efficiency.  Existing data centers can take advantage of the ceiling plenum to improve the efficiency of cool airflow; a dropped ceiling plenum can serve as a hot air return plenum.  APC-Schneider Electric conducted studies and advised the following in regard to ducted systems: “We used Schneider Electric’s EcoStream CFD software to study the airflow pattern and pressure in the ceiling plenum for passive ducted systems, and came to the following conclusions:

1. Cooling performance is strongly linked to the ceiling plenum pressure.

2. Ducted HACS can adequately cool much higher densities than individually-ducted racks at a given ceiling vacuum pressure.

3. Average cooling performance is a fairly weak function of plenum depth and rack density, with deeper plenums and lower density yielding somewhat better performance.

4. Cooling performance is moderately affected by drop-ceiling leakiness with tighter being generally better though typical plenums perform much closer to well-sealed than leaky.

5. Cooling performance is a strong function of cooling-to-IT airflow ratio with higher values being better.”

By utilizing a ducted air containment system, data centers can improve both energy efficiency and reliability, two of the most important goals for any data center.  APC-Schneider Electric concluded the following about ceiling-ducted air containment systems in White Paper 182: “Ducted air containment can simultaneously improve the energy efficiency and reliability of data centers. Since all ducted equipment, ducted cooling units, and the ceiling plenum function as a single entity, the use of CFD modeling is recommended for new deployments particularly when design constraints are close to the best-practice limits established here. In any case, deployment advice centers on ensuring an adequate and fairly uniform vacuum pressure in the ceiling plenum. This, in turn, can be achieved by providing sufficient ducted cooling airflow, creating a relatively “tight” ceiling system, employing deeper ceiling plenums, and sealing unnecessary leakage paths in racks and containment structures.”  These systems appear to work best in existing data centers and, because each data center is unique, the decision of whether to employ ducted air containment has to be made on a case-by-case basis.
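Conclusion 5 above makes the cooling-to-IT airflow ratio a key number to track; a minimal sketch of that ratio, with illustrative CFM values:

```python
# Sketch of the cooling-to-IT airflow ratio from conclusion 5 above
# (illustrative CFM figures; real values come from measurement or CFD).
def airflow_ratio(cooling_cfm: float, it_cfm: float) -> float:
    return cooling_cfm / it_cfm

ratio = airflow_ratio(cooling_cfm=110_000, it_cfm=100_000)
print(f"cooling-to-IT airflow ratio: {ratio:.2f}")
```

A ratio above 1.0 means the cooling units are pulling slightly more air than the IT equipment pushes, which is what maintains the plenum vacuum the white paper emphasizes.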

Posted in data center cooling | Comments Off

IT Departments & Facilities Really Can Work Together


When it comes to energy usage, facilities and their IT departments are often at odds.  In most businesses today, energy efficiency is the name of the game.  Many facilities have undertaken major initiatives to reduce energy usage through a variety of means, such as more energy-efficient HVAC and lighting.  IT departments tend to be the largest consumers of energy in any facility, but because of the nature of their work, implementing energy efficiency measures is not always as simple as it would be in another department.  While you can control lighting and environment in a facility or data center, there is quite a bit more to be done if you want to truly reduce energy consumption in an IT department while still functioning at full capacity.

For facilities and IT departments to better cooperate on improving energy consumption within an organization, certain measures must be taken.  An IT department or data center has to improve energy consumption while still performing its primary task: reliable IT operations and maximized uptime.  The first way to achieve this is through the use of a hot aisle/cold aisle rack arrangement.  This configuration manages airflow and minimizes cooling costs through an ideal arrangement of racks in the IT department or data center.  An additional consideration is a modular or scalable UPS (uninterruptible power supply) system that can be scaled according to specific needs.  With a modular UPS system you can easily and cost-effectively increase your UPS power as needed over time, which results in improved efficiency.  There are also interesting and unique ways to make better use of outside air and thermal storage for an energy-efficient form of cooling.
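A modular UPS sizing rule of thumb might be sketched as follows, assuming a hypothetical 50 kW module size and one spare module for redundancy:

```python
import math

# Sketch of modular UPS sizing (hypothetical module size): add capacity in
# fixed-size modules as the load grows, plus one spare module so that a
# single module failure does not drop the load (an N+1 arrangement).
def modules_needed(load_kw: float, module_kw: float = 50,
                   redundant_spares: int = 1) -> int:
    return math.ceil(load_kw / module_kw) + redundant_spares

print(modules_needed(120))  # 3 modules for the load + 1 spare = 4
print(modules_needed(260))  # 6 modules for the load + 1 spare = 7
```

The efficiency gain comes from deferral: capacity is purchased and powered only as load actually materializes, instead of being sized up front for a forecast that may never arrive.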

When IT departments and data centers implement these best practices, energy consumption drops drastically.  In addition to these measures, facilities can also replace outdated, inefficient equipment with newer technologies that are capable of both better data management and improved energy efficiency.  The facility’s DCIM (data center infrastructure management) system blends information technology with facilities management and can monitor and assess the data center’s critical systems and energy usage, and help guide future improvements.  All is not lost: IT departments and facilities do not have to be at odds.  Through collaboration and the implementation of best practices, IT departments and data centers can reduce energy usage while still performing their primary functions effectively.

Posted in Data Center Infrastructure Management | Comments Off

10 Surprises a Data Center Manager Might Encounter

As a data center or IT department manager, things are not always straightforward or uncomplicated.  In fact, over time, it is likely that a data center or IT department manager will encounter a surprise or two.  As time moves along, technology advances, density loads increase and the specific needs of data centers and IT departments evolve.  To minimize downtime and maximize the effectiveness and efficiency of a data center, it is ideal to anticipate potential surprises and try to mitigate them or prepare potential solutions should a problem arise.  Emerson Network Power released a list of 10 common surprises data center and IT managers encounter to help data centers and IT departments better anticipate and prepare.

Emerson Network Power’s 10 Common Data Center Surprises:

1. Those high-density predictions finally are coming true

  • Rack density is increasing and many data centers and IT departments are scrambling to find ways to handle such a high density.  With densities hovering around 7 kW per rack and expectations for them to grow each year (some are predicting 50 kW by 2025!), it is important to start preparing for higher density racks now rather than waiting until it is a real problem.

2. Data center managers will replace servers three times before they replace UPS or cooling systems

  • Bottom line – data center and IT department managers need to be prepared to scale to support future server needs.

3. Downtime is expensive

  • Downtime is not only frustrating but very expensive, which means that when downtime happens, no one is happy.  Gartner notes, “Based on industry surveys, the number we typically cite is $5,600 p/minute, which extrapolates to well over $300K p/hour.”

4. Water and the data center do not mix – but we keep trying

  • Water + expensive IT equipment = bad.  Data centers or IT departments that suffer water or flooding damage face major problems, so every effort must be made to prevent this from happening.

5. New servers use more power than old servers

  • New technology may be smaller, which means it takes up less space in a data center or IT department, but it has been shown to consume much more energy.  IT departments and data centers must prepare to support such high energy demands.

6. Monitoring is a mess

  • Managing different infrastructures is time consuming and confusing.  A good DCIM is a must.

7. The IT guy is in charge of the building’s HVAC system

  • There can be a communication gap between IT and Facilities.  This is not exactly news.

8. That patchwork data center needs to be a quilt

  • Mix and match data center components are out – full integration for maximum capability and efficiency is in.

9. Data center on demand is a reality

  • It is no longer necessary to have complicated designs for a data center that take many long hours to bring to reality.  A data center can be scaled to the immediate needs and through many plug-and-play options a data center can be created in an instant, wherever there is available space, based on immediate needs.

10. IT loads vary – a lot

  • Have you heard of a website hosting a sale that draws so much traffic that it experiences a crash?  IT loads can vary significantly depending on the type of business, the time of day or the season.  A data center or IT department needs to prepare for drastic changes in IT loads so that there is capacity to handle such changes and prevent expensive and frustrating downtime.
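Surprise #1's numbers imply a steep growth curve; a quick sketch (assuming simple compound growth over a hypothetical 10-year horizon) shows the annual rate needed to go from roughly 7 kW to the predicted 50 kW per rack:

```python
# Growth-rate sketch for surprise #1 (hypothetical compounding model):
# what annual growth rate takes racks from ~7 kW today to ~50 kW by 2025?
def implied_annual_growth(start_kw: float, end_kw: float, years: int) -> float:
    return (end_kw / start_kw) ** (1 / years) - 1

rate = implied_annual_growth(7, 50, 10)  # assuming a 10-year horizon
print(f"implied growth: {rate:.1%} per year")
```

The implied rate is over 20% per year, compounding, which is why preparing now rather than at the crisis point is the sensible move.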
Posted in Data Center Infrastructure Management | Comments Off

100% Network Uptime Is An Expectation of Smartphone Users


If you have ever needed to use your smartphone and not been able to access your carrier’s network, you know how incredibly frustrating it is.  And if you are lucky enough to have never experienced this, you have likely come to expect your carrier’s network to provide 100% uptime.  Network uptime is the ultimate goal of any data center; without it, businesses lose money and clients become incredibly frustrated.  We live in a world of instant gratification, with knowledge constantly at our fingertips.  When something stands in our way or slows us down, we see it not merely as frustrating but as completely unacceptable.  And, as research shows, network uptime is not a simple luxury, it is an expectation.

In a study recently conducted by Vasona Networks, customers were shown to expect 100% uptime: “Sixty-four percent of consumers responding to the survey cited “good performance all the time” as a reasonable expectation from their mobile data service provider. Just 36 percent of subscribers still think it is reasonable for there to be “hiccups in performance,” “unavailability for extended periods” or “unavailability in certain places.” When asked to identify the principal cause of problems during use of an app, mobile service providers are the most commonly cited party for blame, with 40 percent selecting them. Thirty-nine percent blame the app maker and the remainder of consumers suspect their device or device operating system to be the cause. Data service quality is crucial for subscribers, with 29 percent citing “mobile Internet performance” as most important when choosing a provider… “Mobile Internet performance is becoming increasingly important for consumers and this survey indicates just how high a bar subscribers are setting for their service providers,” says John Reister, vice president of marketing and product management for Vasona Networks. “Our findings indicate that it is no longer sufficient for mobile operators to offer a good experience most of the time across most of their network. Today, if every cell isn’t delivering great performance, subscribers are being let down.”  Customers today do not simply expect network availability during business hours, or even waking hours; service must exist 24/7, 365 days a year.

Not only will network downtime frustrate customers and possibly drive them to another provider, but it also comes with a significant cost to companies.  Gartner, an information technology research and advisory company, notes the high cost of network downtime: “This is a follow-up to last week’s post on network downtime as several have asked “What is the cost of network downtime?” Based on industry surveys, the number we typically cite is $5,600 p/minute, which extrapolates to well over $300K p/hour (see Ensure Cost Balances Out With Risk in High-Availability Data Centers by Dave Cappuccio). However, this is just an average and there is a large degree of variance, based on the characteristics of your business and environment (i.e., your vertical, risk tolerance etc).”  That is no small price to pay.  Every network provider needs to spend more time focusing on network uptime to keep customers satisfied, retain them and prevent financial loss.
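Gartner's extrapolation checks out with simple arithmetic:

```python
# Verifying the quoted extrapolation: $5,600 per minute works out to
# $336,000 per hour, i.e. "well over $300K p/hour".
per_minute = 5_600
per_hour = per_minute * 60
print(f"${per_hour:,} per hour")  # $336,000 per hour
```

Even at the low end of the variance Gartner mentions, an hour of downtime dwarfs the cost of most uptime investments.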


Posted in Customers | Comments Off

Anticipating Future Rack Density and How to Prepare for Changes

Much like a doomsday prepper stocking up on water and canned goods ahead of the end times, data center and IT professionals have been discussing and preparing for the increase in rack density coming in the years ahead.  For years now, IT professionals have been predicting a massive increase in rack density and, so far, they have been wrong.  Many server racks currently operate at around 3-5 kW per rack, and some people are predicting that within a matter of years rack densities will reach 50 kW.  But as inflated predictions continue to plague the IT world, more and more businesses and data centers find themselves with bloated, overbuilt data centers that end up being far more than what is actually needed, and a complete waste.  A data center with a massive amount of unused capacity will take up space and consume a completely unnecessary amount of energy.  While it is important to build a data center that is scalable and future-proof, completely overdoing it with capacity is not necessarily the right way to go.

So, can you really predict future rack density?  Is it worth it to even try?  When creating a future-proof data center, you do have to at least consider potential future density needs.  If rack density is going to increase dramatically, data centers must be prepared to power and cool such a high density.  The lower the rack density, the less energy and money it takes to power and cool it, so considering a tripling (or more) of rack density can seem quite daunting.  Some recent surveys and reports from industry experts predict that by 2025 rack density could be around, or even exceed, 52 kW.  2025 is still 10 years away, but that is not so far off that we should not start preparing now.

While there will certainly be some data centers operating at 25 kW or 50 kW by 2025, it is unlikely that this will be the average or even the trend.  The primary reason is technology: new technology that helps lower rack density is likely to emerge in that time.  Even a change to a rack density of 25 kW would mean a significant change to the physical environment of a data center.  While rack density will most certainly increase in the next decade, so will utilization rates, which means more efficient performance.  Additionally, major changes are expected in the way data centers are powered.  When it comes to the future of data center power, solar is the name of the game; nuclear, gas and wind power will also be utilized to create more energy-efficient data centers.  All of these things will impact the way data centers anticipate and manage an increase in rack density.  If there is one thing that is clear, it is that scalability is critical to preparing for and managing increased rack density in the future.

Posted in data center equipment | Comments Off