How to Anticipate Data Center Capacity Needs In a Consolidation Project

Data center consolidation projects, and particularly federal data center consolidation projects, are taking place all over the country.  Consolidation can prove immensely beneficial and often saves a significant amount of money in the long run.  Outdated data centers are frequently energy inefficient and wasteful; by consolidating them and moving some equipment and all of the data to newer facilities, energy consumption can be greatly improved.  The U.S. Government Accountability Office shows just how much money can be saved, reporting recent statistics on federal data center consolidation projects: “Of the 24 agencies participating in the Federal Data Center Consolidation Initiative, 19 agencies collectively reported achieving an estimated $1.1 billion in cost savings and avoidances between fiscal years 2011 and 2013. Notably, the Departments of Defense, Homeland Security, and Treasury accounted for approximately $850 million (or 74 percent) of the total. In addition, 21 agencies collectively reported planning an additional $2.1 billion in cost savings and avoidances by the end of fiscal year 2015, for a total of approximately $3.3 billion—an amount that is about $300 million higher than the Office of Management and Budget’s (OMB) original $3 billion goal. Between fiscal years 2011 and 2017, agencies reported planning a total of about $5.3 billion in cost savings and avoidances.”  Data center managers must take a great deal into account to ensure that the transition between data centers during consolidation is smooth, preventing downtime or overloading in the new facility.  As data centers scramble to meet energy and data demands, managers need a comprehensive and accurate picture of how much power and space the new data center will require so that it can truly meet those needs.  Cooling needs must be anticipated as well; otherwise equipment can overheat and energy efficiency diminishes.

Truly understanding the capacity of an existing data center that is being consolidated, and the power, data, and cooling needs necessary to accommodate it, is easier said than done for data center managers.  Among the things that must be understood and anticipated are current density, design rating, bulk power capacity, bulk cooling capacity, overall data center capacity, physical site details, equipment, and more.  Determining what equipment can be brought from the existing data center to the new one helps anticipate physical space needs.  It is also important to understand which existing software licenses can be transferred and which may need to be cancelled or renewed.  Important as these items are, they only skim the surface of what must be anticipated in a consolidation project.  A new data center absorbing all of this needs to not only accommodate the addition but also be capable of expansion and growth in the future.  While unanticipated problems are inevitable in any consolidation project, preparedness is the key to minimizing downtime and maximizing uptime and efficiency.
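A rough capacity sketch can make this concrete. The following Python sketch totals the power, space, and cooling an incoming consolidation would add, padded with headroom for future growth. All equipment names and figures here are invented for illustration, not drawn from any real inventory:

```python
# Hypothetical sketch: aggregating equipment needs for a consolidation move.
# Every name and number below is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Equipment:
    name: str
    power_kw: float   # measured draw, not nameplate rating
    rack_units: int   # physical space needed (U)
    heat_kw: float    # cooling load (roughly tracks power draw)

def consolidation_totals(inventory, growth_headroom=0.3):
    """Sum power, space, and cooling needs, padded for future growth."""
    power = sum(e.power_kw for e in inventory)
    space = sum(e.rack_units for e in inventory)
    cooling = sum(e.heat_kw for e in inventory)
    pad = 1 + growth_headroom
    return {"power_kw": power * pad,
            "space_u": round(space * pad),
            "cooling_kw": cooling * pad}

inventory = [
    Equipment("db-cluster", power_kw=12.0, rack_units=20, heat_kw=12.0),
    Equipment("web-tier", power_kw=6.5, rack_units=12, heat_kw=6.5),
]
needs = consolidation_totals(inventory)
print(needs)
```

The headroom figure is the key judgment call: too little and the new facility is outgrown quickly, too much and capacity sits idle.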


The Importance of Ensuring True Redundancy in a Data Center

A data center often prides itself on its ability to maintain uptime in the face of significant power and data loads.  Customers and consumers see downtime as simply unacceptable, and while there are many ways to avoid it, one important protection all data centers need is redundancy.  Data center hardware and power needs are constantly evolving, which makes it difficult to anticipate and properly ensure power supply redundancy.  With a dual-path power supply, true redundancy can be achieved, but it is critical that both paths be capable of managing all equipment and power needs in the data center.  ComputerWeekly describes the importance of redundancy in a data center: “Redundancy has a negative connotation when the duplication is seen by the business as unnecessary. Yes, for some businesses this datacentre capacity excess is an issue, but for the majority, the other form of redundancy – the provisioning of a datacentre to survive a range of failure scenarios – has become even more of an issue. IT infrastructure is part of an organization’s DNA. If someone were to cut off the IT service for an organization, it would not be a small snag, but a corporate catastrophe for its operations. Business processes would halt, customers would be left stranded, suppliers would be unable to know what was required to be delivered, the organization would struggle to pay its employees what they are owed, communication and collaboration would be severely impaired. The overall availability of an IT platform means that an approach of a single application on a single physical server with dedicated storage and individual dedicated network connections is a strategy to oblivion. It is incumbent on IT to ensure that the IT platform can continue to operate through failures – as long as the cost of doing so meets the organization’s own cost/risk profile.”

The purpose of a dual path is that, should a failure occur in one power supply, the other can pick up exactly where the first left off and maintain uptime.  A redundant power supply is simply best practice for any data center today.  One of the most important parts of creating true redundancy is ensuring that a failure somewhere in one path cannot affect or impact the other path in any way.  Yet while many data centers employ redundant power supplies, many fail to realize just how much power those supplies will actually need for true protection.  The sad fact is that many data center managers simply do not have an adequate grasp of the power needs of all of the downstream equipment.  Because new equipment is constantly being added, the additions frequently get overlooked and suddenly, without anyone realizing it until it is too late, the power needs exceed the redundant supply.  Power demands should be checked constantly so that the redundant capacity is never exceeded; the best way to do so is a monitoring system that routinely verifies that all is as it should be.  Many may raise an eyebrow at redundancy and think it inefficient, but to properly run a data center, redundancy is necessary.  For optimal efficiency, it is best to run the data center on the most efficient power supply and reserve the less efficient supply as the redundant one, used only should a failure occur.
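The capacity check described above can be sketched in a few lines. This is a hedged illustration with assumed capacities and loads, not a real monitoring tool: in a dual-path (2N) design, each path on its own must be able to carry the entire downstream load.

```python
# Minimal sketch of a 2N redundancy check -- all capacities and loads
# below are assumed example figures, not real equipment data.
def redundancy_ok(path_capacities_kw, loads_kw, margin=0.1):
    """True only if EVERY individual path can carry the full load plus margin."""
    required = sum(loads_kw) * (1 + margin)
    return all(cap >= required for cap in path_capacities_kw)

# Two 80 kW paths feeding 60 kW of equipment: truly redundant.
print(redundancy_ok([80, 80], [25, 20, 15]))         # True
# Add 15 kW of new racks and redundancy is silently lost.
print(redundancy_ok([80, 80], [25, 20, 15, 15]))     # False
```

The second call is the scenario the paragraph warns about: nothing fails immediately when the load creeps past half the combined capacity, so only a routine check like this reveals that a single-path failure would now cause an outage.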


Energy Requirements for Government Data Centers

Government data centers exist all over the country, in various states and in a variety of buildings. It is no secret that data centers consume a lot of energy, but government data centers must comply with data center energy efficiency requirements. According to the Federal Energy Management Program, data centers account for a significant portion of overall energy consumption in the United States: “In 2013, data centers accounted for 2.7% of the 3,831 billion kWh used in the US, or 102.9 billion kWh. Federal data centers used about 5 billion kWh in 2013, or nearly 10% of federal electricity use.” While, for the most part, the country does not have a specific government mandate or law governing energy consumption in data centers, more and more states are moving toward mandating data center energy efficiency. As the government and independent businesses strive to improve energy efficiency, be more green, and save money wherever possible, it is inevitable that data centers will see energy efficiency requirements initiated and implemented.

The PUE rating, or power usage effectiveness rating, measures how efficiently a data center uses energy. TechTarget explains what exactly PUE is and how it is measured: “Power usage effectiveness (PUE) is a metric used to determine the energy efficiency of a data center. PUE is determined by dividing the amount of power entering a data center by the power used to run the computer infrastructure within it. PUE is therefore expressed as a ratio, with overall efficiency improving as the quotient decreases toward 1.” Many federal buildings are very old and were certainly not designed with data centers or energy efficiency in mind. For this reason, many federal data centers have a poor PUE, with some in the high 2s or even 3s, dramatically highlighting the need for improved energy efficiency across the board. The U.S. government has taken on a new mandate for energy efficiency: an average PUE of 1.4, a 30% reduction in energy usage, and more. This is no small task considering where many facilities stand today, and it will necessitate closing many data centers and moving toward newer, more energy efficient data centers, data center pods, and other measures. Whether for a federal data center or an independent one, it often comes down to the bottom line. Consolidating energy usage and implementing energy saving strategies to drive down PUE will not only make data centers compliant ahead of a government mandate but will also save money. Over time, both compliance with efficiency mandates and the money saved through reduced energy usage will provide much needed sustainability in data centers.
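TechTarget's definition translates directly into arithmetic. In this sketch the 900 kW and 300 kW figures are hypothetical, chosen only to illustrate a PUE of 3 against the mandated 1.4:

```python
# PUE = total facility power / IT equipment power (per the definition above).
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

# Hypothetical older federal facility: 900 kW total to support 300 kW of IT.
print(pue(900, 300))   # 3.0 -- the kind of ratio seen in aging buildings
# The same 300 kW IT load run at the mandated 1.4 PUE:
print(300 * 1.4)       # 420.0 kW total facility draw
```

In this invented example, meeting the mandate would cut total facility draw from 900 kW to 420 kW for the same IT workload, which is why driving PUE down saves real money.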


Should Data Centers Use Ceiling-Ducted Air Containment?

Data centers, with all their power and energy usage, generate a lot of heat, and cooling is one of any facility's biggest expenses.  For this reason, data centers are always developing new strategies to keep their facilities cool more effectively and efficiently while still performing their essential roles.  Each approach to cooling offers unique advantages, but every scenario is different, and what is most efficient for one data center may not be for another.

Hot air and cold air containment are two options for improving cooling efficiency in data centers.  Cold air containment strategies use a hot aisle/cold aisle configuration but enclose the cold aisles so that hot and cold air do not mix, which increases the effectiveness of cooling.  Even with hot aisle/cold aisle configurations, though, there are still additional ways to maximize efficiency.  Existing data centers can take advantage of the ceiling plenum to improve the efficiency of cool airflow; a dropped ceiling plenum can allow for a hot air plenum return system.  APC-Schneider Electric conducted studies and advised the following regarding ducted systems: “We used Schneider Electric’s EcoStream CFD software to study the airflow pattern and pressure in the ceiling plenum for passive ducted systems, and came to the following conclusions:

1. Cooling performance is strongly linked to the ceiling plenum pressure.

2. Ducted HACS can adequately cool much higher densities than individually-ducted racks at a given ceiling vacuum pressure.

3. Average cooling performance is a fairly weak function of plenum depth and rack density, with deeper plenums and lower density yielding somewhat better performance.

4. Cooling performance is moderately affected by drop-ceiling leakiness with tighter being generally better though typical plenums perform much closer to well-sealed than leaky.

5. Cooling performance is a strong function of cooling-to-IT airflow ratio with higher values being better.

By utilizing a ducted air containment system, both energy efficiency and reliability can be improved, two of the most important goals for any data center.  APC-Schneider Electric concluded the following about ceiling-ducted air containment systems in White Paper 182: “Ducted air containment can simultaneously improve the energy efficiency and reliability of data centers. Since all ducted equipment, ducted cooling units, and the ceiling plenum function as a single entity, the use of CFD modeling is recommended for new deployments particularly when design constraints are close to the best-practice limits established here. In any case, deployment advice centers on ensuring an adequate and fairly uniform vacuum pressure in the ceiling plenum. This, in turn, can be achieved by providing sufficient ducted cooling airflow, creating a relatively “tight” ceiling system, employing deeper ceiling plenums, and sealing unnecessary leakage paths in racks and containment structures.”  These systems appear to work best in existing data centers, and because each data center is unique, the decision of whether to employ ducted air containment has to be made on a case by case basis.
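Conclusion 5 in the quoted list (cooling performance rises with the cooling-to-IT airflow ratio) suggests a simple sanity check. The 1.1 threshold below is an illustrative assumption, not a figure from the white paper:

```python
# Toy check of the cooling-to-IT airflow ratio from conclusion 5 above.
# The 1.1 minimum ratio is an assumed example threshold.
def airflow_ratio(cooling_cfm, it_cfm):
    return cooling_cfm / it_cfm

def adequately_supplied(cooling_cfm, it_cfm, min_ratio=1.1):
    """Flag containment systems whose cooling airflow undershoots IT demand."""
    return airflow_ratio(cooling_cfm, it_cfm) >= min_ratio

print(adequately_supplied(11000, 10000))   # True  (ratio 1.10)
print(adequately_supplied(9500, 10000))    # False (ratio 0.95)
```

A ratio below 1 means the IT equipment is demanding more air than the cooling units supply, which in a ducted system shows up as inadequate plenum vacuum pressure.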


IT Departments & Facilities Really Can Work Together

When it comes to energy usage, facilities and their IT departments are often at odds.  In most businesses today, energy efficiency is the name of the game, and many facilities have undertaken major initiatives to reduce energy usage through means such as more efficient HVAC and lighting.  IT departments tend to be the largest consumers of energy in any facility, but because of the nature of their work, implementing efficiency measures is not as simple as it would be in another department.  While you can control lighting and environment in a facility or data center, there is quite a bit more to be done to truly reduce energy consumption in an IT department while still functioning at full capacity.

For facilities and IT departments to cooperate toward improved energy consumption within an organization, certain measures must be taken.  An IT department or data center has to improve its energy consumption while still performing its primary task: reliable IT operations and maximized uptime.  The first way to achieve this is a hot aisle/cold aisle rack arrangement, a configuration that manages airflow and minimizes cooling costs through the ideal placement of racks.  Another consideration is a modular, scalable UPS (uninterruptible power supply) system that can be scaled according to specific needs.  With a modular UPS system you can easily and cost effectively increase UPS power as needed over time, which improves efficiency.  There are also interesting and unique ways to make better use of outside air and thermal storage for energy efficient cooling.
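The modular UPS idea can be sketched as simple N+1 sizing. The 50 kW module size and the loads below are assumptions for illustration only:

```python
import math

# Hypothetical N+1 sizing for a modular UPS: buy only the modules the
# current load requires, plus one spare, and add more as the load grows.
def modules_needed(load_kw, module_kw=50, spare_modules=1):
    return math.ceil(load_kw / module_kw) + spare_modules

print(modules_needed(120))   # 4 -> three modules for the load, one spare
print(modules_needed(220))   # 6 -> scale up later just by adding modules
```

The efficiency benefit the paragraph describes comes from the gap between these two calls: rather than buying six modules on day one and running them lightly loaded for years, the facility buys four and adds the other two only when the load actually arrives.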

When IT departments and data centers implement these best practices, data center energy consumption drops drastically.  In addition, facilities can replace outdated, inefficient equipment with newer technologies capable of both better data management and improved energy efficiency.  A DCIM (data center infrastructure management) system blends information technology with facilities management; it can monitor and assess a data center's critical systems and energy usage and help guide future improvements.  All is not lost: IT departments and facilities do not have to be at odds.  Through collaboration and the implementation of best practices, they can reduce energy usage while still performing their primary functions effectively.


10 Surprises a Data Center Manager Might Encounter

As a data center or IT department manager, things are not always straightforward or uncomplicated.  In fact, over time, a manager is likely to encounter a surprise or two.  As time moves along, technology advances, density loads increase, and the specific needs of data centers and IT departments evolve.  To minimize downtime and maximize effectiveness and efficiency, it is best to anticipate potential surprises and either mitigate them or prepare solutions should a problem arise.  Emerson Network Power released a list of 10 common surprises data center and IT managers encounter to help data centers and IT departments better anticipate and prepare.

Emerson Network Power’s 10 Common Data Center Surprises:

1. Those high-density predictions finally are coming true

  • Rack density is increasing and many data centers and IT departments are scrambling to find ways to handle such a high density.  With densities hovering around 7 kW per rack and expectations for them to grow each year (some are predicting 50 kW by 2025!), it is important to start preparing for higher density racks now rather than waiting until it is a real problem.

2. Data center managers will replace servers three times before they replace UPS or cooling systems

  • Bottom line – data center and IT department managers need to be prepared to scale to support future server needs.

3. Downtime is expensive

  • Downtime is not only frustrating but very expensive, which means no one is happy when it happens.  Gartner puts a number on just how expensive: “Based on industry surveys, the number we typically cite is $5,600 p/minute, which extrapolates to well over $300K p/hour.”

4. Water and the data center do not mix – but we keep trying

  • Water + expensive IT equipment = bad.  Data centers or IT departments that experience water damage or flooding damage experience major problems so every effort must be taken to prevent this from happening.

5. New servers use more power than old servers

  • New technology may be smaller, taking up less space in a data center or IT department, but it has been shown to consume much more energy.  IT departments and data centers must prepare to support these higher energy demands.

6. Monitoring is a mess

  • Managing different infrastructures is time consuming and confusing.  A good DCIM is a must.

7. The IT guy is in charge of the building’s HVAC system

  • There can be a communication gap between IT and facilities.  This is not exactly news.

8. That patchwork data center needs to be a quilt

  • Mix-and-match data center components are out; full integration for maximum capability and efficiency is in.

9. Data center on demand is a reality

  • It is no longer necessary to labor over complicated data center designs that take many long hours to bring to reality.  A data center can be scaled to immediate needs, and through plug-and-play options one can be created almost instantly, wherever there is available space.

10. IT loads vary – a lot

  • Have you heard of a website hosting a sale that draws so much traffic that it experiences a crash?  IT loads can vary significantly depending on the type of business, the time of day or the season.  A data center or IT department needs to prepare for drastic changes in IT loads so that there is capacity to handle such changes and prevent expensive and frustrating downtime.

100% Network Uptime Is An Expectation of Smartphone Users

If you have ever needed to use your smartphone and been unable to access your carrier's network, you know how incredibly frustrating it is.  And if you are lucky enough never to have experienced this, you have likely come to expect your carrier's network to provide 100% uptime.  Network uptime is the ultimate goal of any data center; without it, businesses lose money and clients become incredibly frustrated.  We live in a world of instant gratification, with knowledge constantly at our fingertips.  When something stands in our way or slows us down, we no longer see it as merely frustrating but as completely unacceptable.  And, as research shows, network uptime is not a simple luxury, it is an expectation.

A study recently conducted by Vasona Networks demonstrated that customers expect 100% uptime: “Sixty-four percent of consumers responding to the survey cited “good performance all the time” as a reasonable expectation from their mobile data service provider. Just 36 percent of subscribers still think it is reasonable for there to be “hiccups in performance,” “unavailability for extended periods” or “unavailability in certain places.” When asked to identify the principal cause of problems during use of an app, mobile service providers are the most commonly cited party for blame, with 40 percent selecting them. Thirty-nine percent blame the app maker and the remainder of consumers suspect their device or device operating system to be the cause. Data service quality is crucial for subscribers, with 29 percent citing “mobile Internet performance” as most important when choosing a provider… “Mobile Internet performance is becoming increasingly important for consumers and this survey indicates just how high a bar subscribers are setting for their service providers,” says John Reister, vice president of marketing and product management for Vasona Networks. “Our findings indicate that it is no longer sufficient for mobile operators to offer a good experience most of the time across most of their network. Today, if every cell isn’t delivering great performance, subscribers are being let down.”  Customers today do not simply expect network availability during business hours, or even waking hours; service must exist 24/7, 365 days a year.

Not only will network downtime frustrate customers and possibly drive them to another provider, it also comes at a significant cost to companies.  Gartner, an information technology research and advisory company, notes the high cost of network downtime: “This is a follow-up to last week’s post on network downtime as several have asked “What is the cost of network downtime?” Based on industry surveys, the number we typically cite is $5,600 p/minute, which extrapolates to well over $300K p/hour (see Ensure Cost Balances Out With Risk in High-Availability Data Centers by Dave Cappuccio). However, this is just an average and there is a large degree of variance, based on the characteristics of your business and environment (i.e., your vertical, risk tolerance etc).”  That is no small price to pay.  Every network provider needs to focus on uptime to keep customers satisfied, so that customers are retained and financial loss is prevented.
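Gartner's per-minute figure checks out arithmetically against the hourly number in the same quote:

```python
# $5,600 per minute, extrapolated to an hour, matches the quote's
# "well over $300K p/hour".
per_minute_cost = 5600
per_hour_cost = per_minute_cost * 60
print(per_hour_cost)   # 336000
```

At $336,000 per hour, even a brief outage dwarfs the cost of most uptime investments, which is the economic case behind redundancy and monitoring.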



Anticipating Future Rack Density and How to Prepare for Changes

Much like a doomsday prepper stocking up on water and canned goods for the arrival of the end times, data center and IT professionals have been discussing and preparing for the increase in rack density coming in the years ahead.  For years now, IT professionals have been predicting a massive increase in rack density and, so far, they have been wrong.  Many server racks currently operate at around 3-5 kW, and some people predict that within a matter of years rack densities will reach 50 kW.  As inflated predictions continue to plague the IT world, more and more businesses and data centers find themselves with bloated, overbuilt facilities that far exceed what is actually needed.  A data center with a massive amount of unused capacity takes up space and consumes energy that simply goes to waste.  While it is important to build a data center that is scalable and future-proof, completely overdoing capacity is not necessarily the right way to go.

So, can you really predict future rack density?  Is it even worth trying?  When creating a future-proof data center, you do have to at least consider potential future density needs.  If rack density is going to increase dramatically, data centers must be prepared to power and cool it.  The lower the rack density, the less energy and money it takes to power and cool, so considering a tripling (or more) of rack density can seem quite daunting.  Some recent surveys and reports from industry experts predict that by 2025 rack density could be around, or even exceed, 52 kW.  2025 is still 10 years away, but that is not so far off that we should not start preparing now.
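It is worth checking what the 52 kW prediction actually implies. Assuming a 5 kW rack today (the upper end of the range cited above) and the 10-year horizon, the required compound annual growth rate works out as follows:

```python
# Compound annual growth needed to go from today's ~5 kW racks to the
# predicted 52 kW in 10 years. The 5 kW starting point is an assumption.
start_kw, target_kw, years = 5.0, 52.0, 10
annual_growth = (target_kw / start_kw) ** (1 / years) - 1
print(f"{annual_growth:.1%}")   # roughly 26% per year, every year, for a decade
```

Sustaining that rate for a decade is exactly the kind of aggressive assumption that, as the next paragraph argues, tends not to pan out.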

While there will certainly be some data centers operating at 25 kW or 50 kW by 2025, it is unlikely that this will be the average or even the trend.  The primary reason is technology: new technology likely to emerge in that time will help lower rack density.  Even a change to 25 kW per rack will mean a significant change to the physical environment of a data center.  While rack density will almost certainly increase in the next decade, so will utilization rates, which means more efficient performance.  Additionally, major changes are expected in the way data centers are powered in the next decade, with solar leading the way and nuclear, gas, and wind power also being utilized to create more energy efficient facilities.  All of these things will shape how data centers anticipate and manage increased rack density.  If one thing is clear, it is that scalability is critical to preparing for and managing higher rack density in the future.


Human Error the Cause of Data Center Downtime – Practical Tips for Minimizing

Ask any IT or data center manager what the most frustrating aspect of running a data center is and they will most likely say downtime.  Downtime is an outage of computer service, network connectivity, or an online service for a period of time, anywhere from a few seconds on up.  While a few seconds of downtime may sound like no big deal to most people, data center managers and those who have worked in IT departments know differently.  Downtime is a big deal: frustrating for businesses, customers, IT departments, and data centers alike, and, beyond frustration, costly.  A recent report from Gartner noted the high cost of downtime: “Based on industry surveys, the number we typically cite is $5,600 p/minute, which extrapolates to well over $300K p/hour.”  To prevent downtime, data center and IT department managers must try to understand its root cause.  The trouble is that downtime can have a variety of causes, but interestingly, the most common is actually human error.  The good news is that human error, at least in many cases, is preventable.

When talking about human error, you might wonder just how much downtime does human error account for?  Data Center Knowledge notes just how much human error can influence downtime in data centers, “Data center downtime is often the result of equipment failure, or a chain reaction of unexpected events. But one of the leading causes of data center downtime is human error, as ComputerWorld reminds us in Stupid Data Center Tricks, which relays anecdotes of data center mishaps. The story notes a study by The Uptime Institute, which estimates that human error causes roughly 70 percent of the problems that plague data centers today. How can this problem be mitigated? “There is no doubt that human errors in the data center causes a great deal of downtime and some of these can be avoided by adhering to some simple steps,” said Ahmad Moshiri, director of power technical support for Emerson Network Power’s Liebert Services business.”  70 percent is a staggering statistic.  A lot of human error is avoidable and Data Center Knowledge offers some valuable, practical tips that all data centers can implement.

1. Shielding Emergency OFF Buttons

2. Documented Method of Procedure

3. Correct Component Labeling

4. Consistent Operating Practices

5. Ongoing Personnel Training

6. Secure Access Policies

7. Enforcing Food/Drinks Policies

8. Avoiding Contaminants

Truly minimizing downtime caused by human error takes discipline.  Practical steps are great in theory, but they must be implemented with dedication and procedure to truly work.  With patience, persistence, and time, downtime resulting from human error can be greatly diminished.


Tips for Maneuvering Your Way Through the Clouds

If you’re thinking of implementing cloud-based services with your regular data center services, you’ll want to make sure you know what you’re getting into so that you don’t get lost in the clouds. At Titan Power we realize how popular and powerful cloud computing is becoming, be we also know how confusing it can be for data center mangers who don’t know what they’re getting into or how to use cloud-based services to their fullest potential.


Know What You’re Buying


As you’re considering cloud-based services, be sure that they’ll be able to easily integrate with your data center’s existing infrastructure. Before agreeing to a uniform service, look over everything that’s included with the service and be clear on exactly how the service will be implemented with your current technology infrastructure. Ask questions and request an explanation for anything that isn’t clear.


Have a Solid Idea of Your Commitment


You’ll find that a majority of cloud-based services are bought on either a subscription or a pay-per-use basis. There will also more than likely be minimum requirements to consider. Take a look at those requirements and ask yourself if they’re a good fit for whatever cloud service you’ll be adding on.


Know That You’ll Have Access to the Cloud When you Need It


Whether the cloud service is for you, your clients or both, you’ll want to be sure that you can easily and quickly access the service when you need it. As you’re considering cloud service providers, ask whether or not they have secondary servers, regular back-ups and a solid disaster recovery plan. We also suggest that you be sure there’s sufficient up-time during access periods, which includes your peak operation hours.


Full Data Access


In addition to having constant and consistent access to your data, you should also ask if you’ll have your data in a form you can actually use. Another good question we recommend is to ask if your cloud data will be returned to you in a fully usable format if you decide to part ways with your cloud service provider.


Secure Cloud Storage


Know exactly who will have access to your data and where your data will be stored. Any security measures that cloud service providers use should be in full compliance with the most recent data security laws. It’s best that you review data security measures and ask to be made aware of any changes to those measures as well as data breaches. You should also find out about cost remedies in the event that a data breach takes place.


Just as not all data centers are created equal, the same is true of cloud-based services. Titan Power is here to help you smoothly and efficiently implement cloud computing with your data center, so don’t hesitate to contact us if you need help.
