
Does Your Data Center Have a Disaster Recovery Plan In Place?

Few events in recent history point to the need for a comprehensive data center disaster recovery plan more clearly than Hurricane Sandy.  When disaster struck, many data centers were unprepared and ill-equipped, which led to significant downtime that ultimately cost millions of dollars.  A data center can only operate under a false sense of security and protection for so long before a disaster strikes and sudden panic sets in.  A detailed disaster recovery plan, complete with multiple contingencies, must be in place before a disaster ever strikes so that, should it happen, immediate action can be taken.

To begin formulating a disaster recovery plan you must first identify all of your critical systems.  Once you have identified them, you can properly determine how best to protect them in the event of a disaster.  To prepare properly, a detailed inventory of infrastructure, along with a comprehensive understanding of it, must be routinely maintained.  When DCIM is lagging and knowledge of infrastructure is lacking or out of date, a disaster will become a major problem for a data center.  Along with detailed knowledge, a thorough backup strategy must be in place as well.  Redundancy in a data center protects against lost data not only on a day-to-day basis but also in the event of a disaster.  Additionally, by taking advantage of the cloud, data centers can keep a virtual copy of information that is safe from disaster and a useful tool for data recovery.  Ready, a national public service campaign that is “designed to educate and empower Americans to prepare for and respond to emergencies including natural and man-made disasters,” describes the critical elements of a disaster recovery plan for any data center: “Information technology systems require hardware, software, data and connectivity. Without one component of the ‘system,’ the system may not run. Therefore, recovery strategies should be developed to anticipate the loss of one or more of the following system components:

  • Computer room environment (secure computer room with climate control, conditioned and backup power supply, etc.)
  • Hardware (networks, servers, desktop and laptop computers, wireless devices and peripherals)
  • Connectivity to a service provider (fiber, cable, wireless, etc.)
  • Software applications (electronic data interchange, electronic mail, enterprise resource management, office productivity, etc.)
  • Data and restoration”

Once you have identified critical system components and how best to protect them, a comprehensive DR (disaster recovery) plan should be formally written up and kept safe.  All pertinent personnel should also be trained and prepared to act should a disaster occur.  By doing so, you will be able to best protect your data center and clients if a disaster occurs and maximize uptime.
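
As a small illustration of the inventory step described above, here is a minimal sketch in Python of a hypothetical register of critical systems, pairing each component category from the Ready list with an assumed recovery strategy and recovery time objective; all names and figures are illustrative, not drawn from any specific plan.

```python
from dataclasses import dataclass

@dataclass
class CriticalSystem:
    name: str               # component category from the DR inventory
    recovery_strategy: str  # how it would be restored after a disaster
    rto_hours: float        # recovery time objective (illustrative)

# Hypothetical inventory covering the component categories listed above.
dr_inventory = [
    CriticalSystem("Computer room environment", "Fail over to secondary site with backup power", 4.0),
    CriticalSystem("Hardware", "Restore onto standby servers from spare pool", 8.0),
    CriticalSystem("Connectivity", "Switch to secondary carrier circuit", 2.0),
    CriticalSystem("Software applications", "Redeploy from version-controlled images", 6.0),
    CriticalSystem("Data", "Restore most recent off-site/cloud backup", 12.0),
]

# Surface the systems that would take longest to recover, so planning
# effort can be focused where downtime would hurt most.
for system in sorted(dr_inventory, key=lambda s: s.rto_hours, reverse=True):
    print(f"{system.name}: {system.rto_hours}h RTO - {system.recovery_strategy}")
```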


Stop Manually Collecting Data For Your Data Center!

It is critical that any data center have a strong DCIM strategy in place.  Even in relatively small data centers, the amount of infrastructure that must be tracked, managed and maintained is enough to overwhelm any data center manager.  There are a variety of ways to track and maintain infrastructure, but traditionally it has been done manually.  Is this method truly the most effective way to collect data in a data center?  And, if not, what is the solution?

One way that many data centers have traditionally collected data is to give each piece of infrastructure a barcode.  The barcode can then be scanned and inventoried to keep track of what is where.  The problem with this method is that it does not provide much data, and the data it does provide is often outdated.  It also offers little insight into how each piece of equipment within an infrastructure is using energy.  Without that information, how can a data center manager accurately determine where energy is being used appropriately, where it is being underused and where it is being overused?  If an analysis is only completed every six months or once per year, that is a lot of time wasted when improvements could have been made, efficiency improved and money saved.

A more intelligent DCIM management tool can provide up-to-date information about equipment and energy consumption, offering real, actionable data that data center managers can use.  Alerts can be configured based on predetermined thresholds, graphs and charts can be generated from gathered data, and more accurate action can take place.  It is also a far more efficient means of collecting data than manual collection, saving time and many headaches for data center managers.  One of the last and most important reasons to implement an automated data collection method is that it can alert you to potential capacity problems far sooner than manual methods.  Rather than capacity problems sneaking up on data center managers, leading to major problems and the possible expense and frustration of having to relocate, automated DCIM helps managers see where capacity stands long before outgrowth becomes a problem, so that any necessary condensing, rearranging or cloud migration can take place to save room.  Manual data collection may have been the way things were done in the past, but it is not the way of the future, and data centers that make the switch now to automated DCIM will enjoy the variety of benefits it has to offer.
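
As a hedged sketch of the threshold-based alerting mentioned above, the example below polls a stand-in data source and flags racks that exceed a configured power limit; get_power_readings(), the rack names and the thresholds are all assumptions for illustration, not part of any particular DCIM product.

```python
# Minimal sketch of automated DCIM polling with threshold alerts.
# Thresholds and the get_power_readings() source are assumptions for illustration.

THRESHOLDS_KW = {"rack-01": 8.0, "rack-02": 8.0, "rack-03": 12.0}

def get_power_readings():
    """Stand-in for a real sensor/PDU polling call; returns kW per rack."""
    return {"rack-01": 6.2, "rack-02": 8.7, "rack-03": 10.1}

def check_thresholds(readings, thresholds):
    """Return alert messages for racks drawing more than their configured limit."""
    alerts = []
    for rack, kw in readings.items():
        limit = thresholds.get(rack)
        if limit is not None and kw > limit:
            alerts.append(f"ALERT: {rack} drawing {kw:.1f} kW, exceeds {limit:.1f} kW limit")
    return alerts

if __name__ == "__main__":
    for alert in check_thresholds(get_power_readings(), THRESHOLDS_KW):
        print(alert)
```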


Will More Data Centers Switch to Racks-On-Chips?

In today’s technology world everything is getting smaller and smaller.  And, with the cloud, some things are simply disappearing!  Many data centers struggle to work within their capacity without exceeding it, but as expansion occurs, infrastructure changes and more racks are needed, data centers often find themselves outgrowing their space.  This leaves them in the frustrating position of having to relocate, which can be tricky and often leads to downtime.  Racks-on-chips have become a hot topic of conversation because they could help slow down this outgrowth.  A rack-on-a-chip is essentially a condensed version of a server rack that maintains its processing power and capacity but comes in a much smaller package – a chip.

The critical requirement that makes racks-on-chips possible is increasing the capacity of chips so that they can handle as much as server racks can.  Racks-on-chips can be achieved, but data centers must be retrofitted to make them possible because racks-on-chips have to be networked together with optical circuit switching and electronic packet switching.  Forward-thinking new data center builds will plan ahead and have these systems in place so that racks-on-chips will be possible.  By switching to racks-on-chips the number of server racks used in a data center can be dramatically reduced, and one of the biggest benefits of this, aside from freeing up physical space, is that it also significantly reduces the amount of energy needed for heating and cooling.  This means that a data center using racks-on-chips will be far more energy efficient, which is not only greener but will save a significant amount of money.  While racks-on-chips are a potentially ideal solution for data centers’ space and energy issues, Motherboard explains that the technology is still in the early stages and much still needs to happen for the benefits to be truly realized: “But the meat and potatoes of Yeshaiahu Fainman and George Porter’s server-rack-on-a-chip vision is really about taking the existing framework for a server rack and recreating it at the nano-level. They say that miniaturizing all server components so that several servers can fit onto a computer chip would increase processing speed. Making circuit systems to support all these mini-components using advanced lithography is already feasible, but scientists have yet to realize nano-transceivers and circuit-switchers—the key components that transmit data. And while silicon chips are increasing being used to transmit data-carrying light waves in fiber optic networks, efficiently generating light on a silicon chip is still early in its development. The researchers offer some solutions, like including light generating nanolasers in the chip design.”



Data Center Capacity Management Reporting and Analysis

DCIM, or data center infrastructure management, is incredibly important for any data center.  If infrastructure is not managed properly, many problems can arise.  Proper management is challenging for even the most experienced data center managers because technology is constantly evolving, more power density is constantly added to racks and cooling needs shift as well.  One of the best ways to improve data center infrastructure management is to utilize software to collect information rather than collecting it manually.  DCIM software provides a central location to aggregate and analyze the data that has been collected so that performance and uptime can continue to be improved.

To properly manage capacity, data must be collected about power usage, rack space, cooling and more.  As this data is collected, data center managers will be able to see how much rack space is being used so that they can determine whether or not to add more density to a rack.  With this data, managers will know whether rack density is being used properly or is too close to exceeding its maximum capacity.  If a rack is underutilized, too much energy may be spent cooling it, and it is likely better to increase density or condense server racks to maximize space and efficiency.  This is important because proper management of rack density allows for improved energy efficiency and a reduction in downtime.  But, without sophisticated DCIM software, a data center manager may not even realize that rack density is being under- or over-utilized.  Often, what data center managers experience is that, over the life of a data center, systems are added and equipment is added or adjusted until suddenly there is a general lack of knowledge about power density.  DCIM software helps prevent that lack of knowledge by ensuring that data center managers, and support staff, all have the most current information about what is happening in the data center.

DCIM software will not only analyze rack utilization but power management as well.  As data is collected, data center managers can see how and where power is being used.  This will help pinpoint areas in need of improvement and allow managers to make changes as quickly as possible.  DCIM software can even be set up with automatic notifications so that the most current information is being provided at all times.  With such current information it is easy to micromanage or become overly concerned with minute details, but it is better to look at trends over time.  By analyzing charts and graphs to see trends, you will be able to tell whether something was a fluke or a consistent pattern.  Once consistent patterns are known it is easier and more effective to make permanent changes and improvements to a data center so that efficiency and data management can be optimized.
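
To make the under- and over-utilization point concrete, here is a small illustrative report in Python; the rack figures and the 40%/80% bands are made-up examples of the kind of data a DCIM tool would gather automatically, not industry standards.

```python
# Illustrative rack-capacity report: flag racks that are under- or over-utilized.
# The 40%/80% bands are example thresholds, not an industry standard.

racks = {
    # rack: (used rack units, total rack units, measured kW, rated kW)
    "A1": (12, 42, 3.1, 10.0),
    "A2": (38, 42, 9.4, 10.0),
    "B1": (20, 42, 4.0, 10.0),
}

for name, (used_u, total_u, kw, rated_kw) in racks.items():
    space_pct = 100 * used_u / total_u
    power_pct = 100 * kw / rated_kw
    if power_pct > 80:
        status = "near power limit - review cooling and redundancy"
    elif power_pct < 40:
        status = "under-utilized - candidate for consolidation"
    else:
        status = "within target band"
    print(f"{name}: space {space_pct:.0f}%, power {power_pct:.0f}% -> {status}")
```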


The Strain of High Density Devices Without Proper Cooling Methods

Many data centers operate with high density server racks for a variety of reasons.  Perhaps they want to maximize their space by increasing the density of their racks so that they do not outgrow their data center walls.  Perhaps it is a strategy to concentrate power, and therefore heat, in specific areas in an attempt at improved cooling efficiency.  Whatever the reason, many data centers run extremely high power densities, sometimes without even realizing it, and that makes it incredibly challenging to cool the data center appropriately.  This dilemma is often the source of outages and downtime, two things no data center wants to experience.  No doubt about it, the pressure is on data center managers to run high density data centers that are also highly efficient and able to sustain their power needs.  That is no small or simple task by any means.

So, what is a data center manager to do?  How can you achieve high density, high efficiency and high efficacy all at once?  Is it even possible?  Yes, but data center managers must make it their priority to have a full working knowledge of their data center, its power density capacities and best practices for cooling, and then devise a strategy with contingencies so that if the unexpected arises there is a plan in place.

First and foremost, a data center manager must know a data center’s maximum rack power density capacities, because without this knowledge the data center will always be playing catch-up.  Catch-up leads to outages and downtime, so it is simply unacceptable.  Once a data center manager knows the maximum power density capacity, they can work to ensure it is not exceeded.  This alone will prevent a significant number of problems.  Additionally, by understanding maximum capacity a data center manager can make appropriate cooling arrangements.  They will also be able to accurately determine what backup power and redundancies are needed to mitigate downtime.  With a full working knowledge of data center power densities and cooling needs, alternative cooling methods can be explored to determine whether or not they would be more efficient or effective.  If power density maximums are being pushed or exceeded, additional server racks or space may be needed, because pushing or exceeding the limits may temporarily save space but will create major problems going forward.  Outages, downtime, damage and more could be experienced if power density maximums are exceeded, so the temporary solution was never really a solution at all.  Once a strategy and plan are in place, the entire data center team and all other appropriate personnel should be well informed and trained, because without communication problems will continue to arise.  High density racks are a good strategy for data centers that want to maximize space and efficiency, but for the strategy to be effective a full working knowledge of the data center’s abilities and capacities is required and appropriate cooling methods must be executed.
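
A minimal sketch of the “know your maximum before you add load” step might look like the following; the rack figures and the 90% safety margin are hypothetical policy choices, not standards.

```python
# Sketch: check whether a planned server can be added to a rack without
# exceeding the rack's power budget, keeping a safety margin for growth.
# The 90% margin is an illustrative policy, not a standard.

def can_add_load(current_kw, planned_kw, rack_max_kw, margin=0.90):
    """Return True if current + planned load stays under margin * rack maximum."""
    return (current_kw + planned_kw) <= margin * rack_max_kw

rack_max_kw = 12.0   # rated maximum for this rack (hypothetical)
current_kw = 9.8     # measured draw reported by DCIM
planned_kw = 1.5     # nameplate power of the server to be added

if can_add_load(current_kw, planned_kw, rack_max_kw):
    print("OK to install in this rack")
else:
    print("Would push the rack past its budget - pick another rack or condense elsewhere")
```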


Should Data Centers Attempt to Achieve a Single Pane of Glass for DCIM?

Data centers run a lot of software and applications to perform all of the necessary tasks a data center completes.  For a data center to run properly there is not just one program to get everything done, there are many.  And, over time, more and more are added as technology and data center priorities change.  Data center infrastructure management, or DCIM, is a top priority for data center managers because without access to and analysis of the most current data regarding data center infrastructure, programs and applications, devastating problems can and will arise.  A data center manager who is not well informed may not know what the rack density maximums are and may not realize they are going to be exceeded until it is too late.  What happens when it is too late?  Power outages and downtime.  Downtime can be devastating for any size of business, so it must be avoided at all costs.  Due to the urgency and importance of effective DCIM, many data centers and managers are on a quest to get their DCIM on a “single pane of glass.”  If they can see everything they need on a single pane of glass, they do not need to change applications or go rooting around for various data; it is all in one easy-to-access place.  But can it really be achieved?  Can everything a data center manager needs to know be located on a single pane, or is this a fruitless quest?

A single pane of glass for DCIM can be achieved but, most likely, application consolidation will need to take place.  Consolidation is a cost-effective and efficient way to achieve a single pane of glass approach.  There are service providers who can consolidate applications and achieve a unified framework, and thus a unified view of applications.  A single pane of glass may be able to fit everything a data center manager needs to monitor operations, but can they also maintain and optimize their data center from a single pane?  If not, how effective is it really?  By hiring a service provider experienced in consolidation and optimization, you can work closely with them toward a smooth transition that does not just lead to more headaches, confusion and miscommunication.  If the approach is implemented effectively, all of the software tools can utilize data to offer the most current information about how the data center is functioning.  In attempting the transition, however, some data centers discover that compiling so much data into a single database for analysis often leads to errors.  A single pane of glass is possible but may be difficult to achieve; with the help of an experienced professional, time and patience, your data center manager can get the view they hope for and improve DCIM.
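
One way to picture the single pane of glass idea is a thin aggregation layer that merges the output of several existing tools into one snapshot; the sketch below uses placeholder functions rather than any real product APIs.

```python
# Minimal sketch of a unified DCIM view: each function stands in for a
# separate monitoring tool; the aggregator merges them into one snapshot.

def power_tool():        # placeholder for a power-monitoring system
    return {"total_kw": 145.2}

def cooling_tool():      # placeholder for an environmental-monitoring system
    return {"avg_inlet_c": 23.5}

def asset_tool():        # placeholder for an asset/inventory system
    return {"racks_in_use": 48, "racks_total": 60}

def single_pane_snapshot():
    """Merge the separate tools' outputs into one dashboard-ready dict."""
    snapshot = {}
    for source in (power_tool, cooling_tool, asset_tool):
        snapshot.update(source())
    return snapshot

print(single_pane_snapshot())
```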


Environmental Impact of Data Centers

Data centers perform an invaluable job, but that comes at a cost.  Data centers use a significant amount of energy and, while many are making efforts to be more energy efficient, the fact of the matter is that many data centers still consume a dramatic amount of energy each year.  This, of course, is not only costly but also unfriendly to the environment.  The digital world has some very real and tangible energy side effects.  While they may not all be felt immediately, there is an environmental impact occurring as a result of data centers.  Time discussed just how much energy is being used by data centers and what to expect going forward: “IT-related services now account for 2% of all global carbon emissions, according to a new Greenpeace report. That’s roughly the same as the aviation sector, meaning all those Netflix movies the world is streaming and the Instagram photos they’re posting are the energy equivalent of a fleet of 747s rumbling for takeoff. Unless something is done to green the cloud, we can expect those emissions to grow rapidly—the number of people online is expected to grow by 60% over the next five years, pushed in part by the efforts of companies like Facebook to expand Internet access by any means necessary. The amount of data we’ll be using will almost certainly increase too. Analysts project that data use will triple between 2012 and 2017 to an astounding 121 exabytes, or about 121 billion gigabytes. ‘If you aggregated the electricity use by data centers and the networks that connect to our devices, it would rank sixth among all countries,’ says Gary Cook, Greenpeace’s international IT analyst and the lead author on its report. ‘It’s not necessarily bad, but it’s significant, and it will grow.’”

The fact of the matter is that data centers consume energy, a lot of energy, and that will not change.  But there are ways to reduce energy consumption and to find cleaner ways to use energy that leave less of an environmental footprint.  Most data centers use a lot of energy, but they also waste a lot of energy, and that is simply unacceptable.  Energy efficiency must be made a priority in a data center, and a plan of action must be implemented immediately and at full scale.  Consolidation must take place: fewer servers mean less space and less to cool.  And with high density racks, cooling can be focused so that cooling efforts are not wasted.  Additionally, containment, such as hot aisles and cold aisles, is one way to help improve energy usage.  Another, greener, option in terms of cooling is to house your data center in a cooler climate that can take advantage of natural cooling and reduce cooling needs within the data center.  Additionally, as more and more moves to the cloud, physical infrastructure can be reduced in data centers, which helps reduce cooling needs.  Data centers need to examine their energy usage and look not only for ways to reduce it but also for greener options that are more environmentally friendly, so that a sustainable data center future can be achieved.
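
A common way to track the kind of efficiency gains described above is PUE (power usage effectiveness), the ratio of total facility energy to the energy used by IT equipment alone; the figures in this short sketch are examples only.

```python
# PUE = total facility energy / IT equipment energy.
# A PUE of 1.0 would mean every watt goes to IT load; real facilities are higher.

def pue(total_facility_kwh, it_equipment_kwh):
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly figures before and after containment/consolidation work.
before = pue(total_facility_kwh=900_000, it_equipment_kwh=500_000)   # 1.8
after = pue(total_facility_kwh=750_000, it_equipment_kwh=500_000)    # 1.5

print(f"PUE before: {before:.2f}, after: {after:.2f}")
```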


What To Consider During a Data Center Migration


Data center migration.  Those three words could probably send any data center manager running for the hills.  Alas, when a data center has outgrown its current facilities or has to be moved for any other reason, data center migration is a necessity.  A data center migration is a significant undertaking fraught with potential risks and hidden problems.  So, how does anyone ever accomplish it?  Through proper research, thorough planning that factors in contingencies, careful instruction to ensure that everyone is on the same page, and a lot of patience, a successful data center migration can be achieved.  When anticipating a data center migration there is no avoiding it: you simply have to face the fact that there will be surprises along the way.

For any data center, the primary concern with a migration is downtime.  Even seconds of downtime can be incredibly costly, so reducing or completely eliminating downtime is the name of the game.  While certain things can be moved during off-peak hours, some work may still have to occur during peak business hours.  For this reason it is very important to keep everyone well informed and on the same page at all times.  End users, support teams and anyone impacted by the migration should be informed of the timetable, schedule and anticipated plans regarding the migration.  Next, it is important to research what existing infrastructure there is, what legacy systems are in place, and what will be making the migration.  Once you have a thorough idea of what will be making the migration, it is important to anticipate what additional infrastructure and equipment will be needed in the new location so that a layout can be planned.  Plan a layout based on best practices in the new data center so that you can accommodate growth as needed.  Scalability is important for the lifespan of any data center, so while you are making the move it is an ideal time to ensure the new data center will be scalable.  When anticipating scalability you must see beyond current needs and standards and look forward to future rack density expectations.  With all of these things being considered, you may also want to consider which option is best for efficient cooling.  Cooling is often one of a data center’s biggest expenses, so when determining layout you may want to think about implementing hot aisles/cold aisles and whether or not there are other layouts that may serve your data center better.  The final part of any data center migration is the migration itself.  For most migrations it is a good idea to enlist the help of an experienced project manager with knowledge and expertise in data center migrations to ensure everything goes smoothly and that, once the migration has occurred, everything runs as it should.
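
As a small illustration of the inventory step in a migration, tracking what has and has not been moved can be as simple as comparing sets of asset tags; the tags below are placeholders.

```python
# Sketch: compare the source-site inventory against what has landed at the
# new site, so nothing is lost track of mid-migration. Asset tags are made up.

source_inventory = {"SRV-001", "SRV-002", "SRV-003", "SAN-001", "SW-CORE-01"}
migrated = {"SRV-001", "SAN-001"}
decommissioned = {"SRV-003"}          # legacy system retired instead of moved

still_to_move = source_inventory - migrated - decommissioned
unexpected = migrated - source_inventory  # anything at the new site not on the original list

print("Still to move:", sorted(still_to_move))
print("Not on original inventory:", sorted(unexpected))
```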


How to Build An Immortal & Future-Proof Data Center

Designing and constructing a data center is no small undertaking.  Heating and cooling, energy efficiency, infrastructure management and scalability must all be considered.  A data center must be designed to meet current needs while being able to scale up or down in the future as needed.  The tricky part is that if you build a data center far bigger than current needs, it may not be energy efficient.  But if you build a data center that is just barely big enough and you expand over time, you may quickly outgrow your space and find yourself looking to move, which is costly and difficult.  So, how does one build the elusive immortal data center?  Do not fear, it can be achieved.

First things first: you have to determine the goals of your data center when designing an immortal data center.  One of the most important things to consider is scalability.  Without scalability, most data centers will outgrow themselves with time.  A data center should have dense server racks; do not run racks at only 50% capacity.  Increase the density of racks to maximize space.  But you cannot increase rack density without having sufficient cooling options.  Some data centers prefer to employ hot aisles and cold aisles to help heat and cool a data center more efficiently.  By creating hot aisles and cold aisles, or specific zones for certain infrastructure, energy efficiency can be drastically improved and energy expenses reduced.  If hot aisles or cold aisles are not ideal for your data center, consider adding a high density room where cooling energy can be focused on the most demanding area and the rest of the data center can be kept at a warmer temperature.  This will help keep costs down while still efficiently cooling high density server racks.  Additionally, when designing an immortal, future-proof data center, it is wise to look at green and more efficient methods of cooling.  By implementing green methods you can save significantly over time while still effectively cooling a data center.  The last consideration for all data centers is the cloud.  The cloud is really the way of the future, and many data centers are moving more and more to it to reduce physical infrastructure.  This helps free up additional room in a data center and reduce heating and cooling needs.  The cloud is infinitely scalable, so it offers immense potential for the future and is a significant tool for helping any data center remain immortal.
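
A rough way to sanity-check scalability when sizing a new build is to project current IT load against planned capacity; the sketch below assumes a constant annual growth rate, which is an oversimplification, and all figures are hypothetical.

```python
# Sketch: estimate how many years until IT load outgrows the planned facility,
# given a constant annual growth rate (an assumption - real growth is rarely constant).

def years_until_full(current_kw, capacity_kw, annual_growth=0.15):
    years = 0
    load = current_kw
    while load < capacity_kw and years < 50:
        load *= 1 + annual_growth
        years += 1
    return years

print(years_until_full(current_kw=300, capacity_kw=1000))  # roughly 9 years at 15% growth
```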


Importance of Circuit Protection in Data Centers

Data centers use a lot of power, have a lot of equipment and run around the clock.  With so much going on in data centers, circuit breakers have a lot to manage and can easily become overloaded without proper attention, maintenance and regulation.  When a circuit breaker gets overloaded, an arc flash can occur.  If you are not familiar with what an arc flash is, you should be.  Why?  Arc flashes are very dangerous and pose significant risk to data centers.  The Workplace Safety Awareness Council explains what an arc flash is and why it should be avoided at all costs: “Simply put, an arc flash is a phenomenon where a flashover of electric current leaves its intended path and travels through the air from one conductor to another, or to ground. The results are often violent and when a human is in close proximity to the arc flash, serious injury and even death can occur.”  This is an understandable concern for data center managers because an arc flash could endanger data center employees and possibly damage data center equipment and infrastructure.

To achieve circuit breaker protection and prevent arc flashes from occurring, you have to begin by protecting the circuit breaker from overload.  You have to assess the load, and then size the load to what each circuit breaker can manage.  With monitoring and the use of today’s technology, the load a circuit breaker is experiencing can be assessed, monitored and adjusted as needed.  While this is a start, it is not a complete solution because it does not protect against significant arc flashes.  Next, protection must be implemented for branched circuit breakers to ensure that there are no upstream or downstream problems if a circuit breaker is tripped.  Through the use of tools like ZSI, or zone-selective interlocking, circuit breakers can better communicate with each other.  Upstream and downstream circuit breakers communicate so that, should a problem occur downstream, a message can be sent to upstream circuit breakers to wait until the problem has been cleared.  When you better manage circuit breakers, you protect your data center from downtime as well as other significant concerns such as injury to workers and damage to property; no price tag can be put on these things, they are truly invaluable.  In addition to implementing effective systems to protect circuit breakers from becoming overloaded or experiencing arc flashes, it is important to properly train personnel so that they know how to assess and maintain circuit breakers and know what to do should a problem occur.
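
To illustrate the load-sizing step, the sketch below flags circuits whose continuous load exceeds 80% of the breaker rating, a commonly cited sizing guideline (always verify against applicable electrical code); the circuit names and readings are examples only.

```python
# Sketch: flag branch circuits whose continuous load exceeds 80% of the
# breaker rating - a widely cited sizing guideline (verify against local code).

CONTINUOUS_LOAD_FACTOR = 0.80

circuits = {
    # circuit: (breaker rating in amps, measured continuous load in amps)
    "PDU-A cct 1": (30, 22.0),
    "PDU-A cct 2": (30, 26.5),
    "PDU-B cct 1": (20, 14.0),
}

for name, (rating_a, load_a) in circuits.items():
    limit = rating_a * CONTINUOUS_LOAD_FACTOR
    flag = "OVER guideline" if load_a > limit else "ok"
    print(f"{name}: {load_a:.1f} A of {limit:.1f} A allowed -> {flag}")
```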
