
Data Centers: Securing the Cloud

In an increasingly digital world, where just about all of our personal and business-related information is stored, relayed and transacted online, security concerns continue to grow.  We hear about hack after hack and the need for data centers to increase their security.  As more and more locations move toward cloud computing, how can they increase not only the security of their infrastructure but their overall security?  There is a growing sense that the public cloud may actually be more secure than the traditional enterprise data center.  Infoworld explains the concern and the main contributing factors to the problem, “What public clouds bring to the table are better security mechanisms and paranoia as a default, given how juicy they are as targets. The cloud providers are much better at systemic security services, such as looking out for attacks using pattern matching technology and even AI systems. This combination means they have very secure systems. It should be no surprise that the hackers move on to easier pickings: enterprise data centers. The on-premises systems that IT manages is typically a mix of technologies from different eras. The aging infrastructure is often less secure — and less securable — than the modern technology used by cloud providers simply because the old, on-premises technology was designed for an earlier era of less-sophisticated threats. The mixture of different technologies in the typical on-premises data center also opens up more gaps for hackers to exploit.”  So, does it just boil down to a narrowed focus paired with hyper-awareness of threats?  Is it that the cloud can simply focus on its unique set of challenges, whereas traditional facilities have a wide range of weaknesses that pose potential threats, leaving security spread thin across the board?

Cloud computing has more than proved its value, so it is certainly not going anywhere.  Facilities are getting on board with it, and more are making the switch.  The problem is that they still have a wide range of infrastructure that must also be kept safe and protected, and traditional facility security approaches do not translate directly to the digital space.  What once worked for security may be so outdated that it is no longer effective, and hackers, acutely aware of the gaps, will swiftly find and attack those weak spots like heat-seeking missiles.  A breach is often the result of an untested system, so facility managers must get more vigilant about education and testing.  Ignorance is far from bliss in this case.  The threat landscape is constantly changing, so IT facilities can better protect themselves through a combination of education, real-time monitoring, protection of servers, and a dynamic multi-level approach to security.  Information must be protected within storage devices inside a facility, throughout transmission between facility servers and clients, and throughout use within an application.  And, as mentioned above, a healthy dose of paranoia never hurt anyone when it comes to protecting secure information.  Through an extensive effort to limit exposure on every possible front and a commitment to staying ahead of the hackers as much as possible, data center security can begin to reach the level of protection that customers expect.
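As a small illustration of the first of those three layers, protecting information within storage devices, the sketch below encrypts a record before it is written to disk using the Python cryptography package's Fernet recipe.  It is only a sketch: the key handling, record contents, and storage step are placeholders, and a real deployment would pair at-rest encryption like this with TLS for data in transit and strict access controls for data in use.

```python
# Minimal sketch of encrypting data at rest before it reaches a storage
# device, using the "cryptography" package's Fernet recipe (AES-128-CBC
# with an HMAC). Key management is out of scope here; in practice the key
# would live in an HSM or a secrets manager, not in the script.
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # placeholder: load from a secure key store in production
cipher = Fernet(key)

record = b"customer-id=4821;card=************1234"   # made-up example record
token = cipher.encrypt(record)           # ciphertext is what gets written to disk
assert cipher.decrypt(token) == record   # plaintext is recovered only inside the application
```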

Posted in data center equipment, Data Center Infrastructure Management, Data Center Security, DCIM | Tagged , | Comments Off

Data Centers Utilizing Wind Power

Eco-friendliness and energy efficiency remain the focus of data centers across the nation and around the world.  Every step a facility takes toward improvement is a step toward reduced energy consumption and significant savings.  Many facilities specifically choose locations where the climate allows for natural cooling using outside air, which lowers the use of air conditioning systems.  Now, many facilities are making a move toward using wind power.  These locations use utility power derived from wind generation.  This form of renewable energy is eco-friendly because it is sustainable and dramatically reduces the need for other sources of utility power.  In some cases, data centers are becoming 100% wind powered!

There are some restrictions on how businesses can source their wind power, but this move is incredibly positive and will certainly become more and more popular over time.  Facebook has utilized wind power at a previous location and has opted to design its newest facility to be 100% wind powered because it recognizes that wind is an inexpensive and effective form of clean energy.  Fortune elaborates on Facebook’s latest undertaking, “Facebook announced on Tuesday that it’s building a large $1 billion data center in Ft. Worth, Texas. The facility, which is already under construction, will be Facebook’s fifth data center, and will be built on land purchased from a real estate company run by the eldest son of former Presidential candidate Ross Perot. The data center will use wind power from a large wind farm that is also under construction on 17,000 acres of land in Clay County about 90 miles from the data center. By agreeing to buy the power from the 200-megawatt wind farm, Facebook helped bring the clean power project onto the grid. A report issued earlier this month from the European Commission Joint Research Centre found that there were about 370 gigawatts of wind turbines installed by the end of 2014. One gigawatt is the equivalent to a large coal or natural gas plant… Facebook will presumably buy the wind power at a fixed low rate over several decades. If grid energy prices rise, the deal could actually save Facebook money on its energy bill.”  Additionally, Data Center Knowledge notes that it is not just IT facilities that are making this move but customers as well, “Salesforce has contracted for 40 megawatts of wind power from a West Virginia wind farm, becoming the latest cloud giant to enter into a utility-scale renewable-energy purchase agreement… The purchase covers more capacity than all of the cloud-based business software giant’s servers consume in data centers that host them.”  This shift in the industry shows that businesses, customers, and even employees are demanding more renewable energy sources for data centers and, in addition to being eco-friendly, these sources are significantly impacting companies’ bottom lines.
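To put the quoted 200-megawatt figure in rough perspective, the back-of-envelope calculation below estimates the wind farm's annual energy output.  The 35 percent capacity factor is an assumed typical value for onshore wind, not a number from the article.

```python
# Back-of-envelope estimate of annual output from the 200 MW wind farm
# mentioned above. The capacity factor is an assumed typical value for
# onshore wind, not a figure from the article.
nameplate_mw = 200.0
hours_per_year = 8_760
capacity_factor = 0.35          # assumption: typical onshore wind capacity factor

annual_mwh = nameplate_mw * hours_per_year * capacity_factor
print(f"Estimated annual output: {annual_mwh:,.0f} MWh (~{annual_mwh / 1_000:.0f} GWh)")
```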

Posted in Computer Room Design, Data Center Battery, Data Center Build, Data Center Construction, data center cooling, Data Center Design, Datacenter Design, Power Management, Uncategorized | Tagged , , , , , , | Comments Off

Tips to Prolong the Life of a Data Center UPS Battery

Data centers rely on their UPS batteries to keep their infrastructure up and running.  Implementing uninterruptible power supplies with good, reliable, long-lasting UPS batteries is an expensive endeavor, but one that is more than worthwhile if the power supply does what it is supposed to and provides protection.  We have discussed UPS system TCO in the past, and it is important to evaluate TCO when determining what Uninterruptible Power Supply system to implement, but TCO is only accurate when you take life-extending measures to keep your Uninterruptible Power Supply system running as it should, for as long as possible.  A neglected backup power source, or one that is not properly implemented, may have a dramatically reduced life, which is frustrating and costly.  It is important that a data center manager make prolonging the life of the facility’s backup power supply batteries a priority so that the investment is maximized and power is properly protected.  Below are some tips to prolong the life of your battery without jeopardizing the uptime of your facility, so that you can have peace of mind that your facility is covered and you are maximizing the investment you have made.

  • Purchase the Correct UPS Battery for Your Unique Data Center
    • This is often where mistake #1 occurs. It is important to consider total cost of ownership when choosing the right backup power system and power unit for your data facility, but total cost of ownership is not necessarily the full picture.  Some high-rate discharge batteries have a shorter lifespan, so if a longer lifespan is a high priority it may be best to opt for a different kind of UPS battery.  A flooded or wet cell option will cost more than a VRLA battery, but it will be more reliable and have a longer lifespan.  With a good picture of your data center’s specific needs and a proper analysis of TCO, you can zero in on the proper continuous power unit to provide reliability and a long lifespan within your budget.  And, once you have chosen the correct one, make sure it is installed properly.  An incorrectly installed backup power battery will often have a shorter lifespan.
  • Maintenance, Maintenance, Maintenance
    • If there is one thing that might make the biggest difference in prolonging the life of a UPS battery, it is maintenance. Maintenance must be performed routinely according to a pre-determined schedule so that you are certain your backup power supply is not being neglected.  Batteries are very sensitive to temperature, so it is important to have a monitoring system in place that alerts you if the temperature fluctuates outside of a certain range (keep it as close to 75 – 77 degrees Fahrenheit as possible); a simple monitoring sketch follows this list.  By maintaining the correct temperature you can significantly prolong its life.  While automated monitoring of certain factors is important, a routine visual inspection should be part of your maintenance schedule as well, because you can look for obvious damage such as loose intercell connections, damaged post seals, corrosion or fires.
  • Do Not Use Your UPS Battery Beyond Its Capacity
    • Your battery is still functioning and your UPS is still doing its job; sure, it may be low on life, but it is still working, so why waste it, right?  In fact, it is critical that you do not push your backup power battery beyond its capacity, or you greatly risk having no backup in the event of a power failure.  You should never use it beyond 80% of its rated capacity.  Once it hits 80% it will begin to deteriorate more rapidly, putting your data center at risk.  For this reason, it is imperative that you not exceed an Uninterruptible Power Supply battery’s capacity.
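Tying the last two tips together, the sketch below shows one way the temperature and capacity checks described above might be automated.  The 75 to 77 degree window and the 80 percent threshold come from this article; the sensor readings and alerting path are hypothetical placeholders for whatever monitoring stack a facility actually runs.

```python
# Hypothetical UPS battery health check: flags temperature excursions
# outside the 75-77 F window and loads above 80% of rated capacity.
TEMP_LOW_F, TEMP_HIGH_F = 75.0, 77.0
MAX_LOAD_FRACTION = 0.80

def check_battery(temp_f: float, load_watts: float, rated_capacity_watts: float) -> list[str]:
    """Return a list of alert messages; an empty list means the battery looks healthy."""
    alerts = []
    if not TEMP_LOW_F <= temp_f <= TEMP_HIGH_F:
        alerts.append(f"Battery temperature {temp_f:.1f} F is outside the {TEMP_LOW_F}-{TEMP_HIGH_F} F range")
    if load_watts > MAX_LOAD_FRACTION * rated_capacity_watts:
        alerts.append(f"Load {load_watts:.0f} W exceeds 80% of rated capacity ({rated_capacity_watts:.0f} W)")
    return alerts

# Example reading (values are made up for illustration)
for message in check_battery(temp_f=81.2, load_watts=9_000, rated_capacity_watts=10_000):
    print("ALERT:", message)
```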
Posted in Data Center Battery, data center equipment, Data Center Infrastructure Management, data center maintenance, DCIM, Uninterruptible Power Supply, UPS Maintenance | Tagged , , , , , , , , , | Comments Off

Data Center Cyber Attack Prevention & Protection

Everywhere we turn we seem to be hearing about another cyber attack.  Sensitive customer information, compromised.  Angry businesses.  Concerned customers.  It is a major problem and it is one that data centers must be supremely aware of and vigilant in protecting against, since facilities have access to and store so much sensitive information.  Data centers must protect networks, applications and end points from highly sophisticated, ever-evolving threats.

Protection techniques vary by location, depending on the size of the facility, the information it stores, and the specific needs of its clients.  But, in the end, all data facilities need to be actively preventing cyber attacks through a variety of means and approaches.  There are intrusion prevention systems that can be implemented by any data center that are scalable and designed to protect against the most current threats.  Data Center Dynamics explains why security must be uniquely designed for the data site, technology-driven, and innovative to truly protect data centers from potential threats, “Many Internet-edge security solutions, like next-generation firewalls, are being inappropriately positioned in the data center where the need is visibility and control over custom applications, not traditional web-based applications, and the systems that keep them operational. Security must be integrated into the data center fabric, in order to handle not only north-south (or inbound and outbound) traffic, but also east-west traffic flows between devices, or even between data centers. Security also needs to be able to dynamically handle high-volume bursts of traffic to accommodate how highly-specialized data center environments operate today. And to be practical, centralized security management is a necessity. Today’s data center environments are highly dynamic and security solutions must be as well. As they evolve from physical to virtual to next-generation SDN and ACI environments, data center administrators must be able to easily apply and maintain protections… They must also be intelligent, so that administrators can focus on providing services and building custom applications to take full advantage of the business benefits these new environments enable, without getting bogged down in administrative security tasks, or risking reduced levels of protection… Traditional data center security approaches offer limited threat awareness – especially with regards to custom data center applications and the SCADA systems that keep them running 24×7. They typically deliver limited visibility across the distributed data center environment and focus primarily on blocking at the perimeter. As a result, they fail to effectively defend against the emerging, unknown, threats that are targeting them. What’s needed is a threat-centric approach to holistically secure the data center, that includes protection before, during, and after an attack – one that understands, and can provide protection for, specialized data center traffic and the systems that keep them running. With capabilities like global intelligence, coupled with continuous visibility, analysis, and policy enforcement across the distributed data center environment, administrators can gain automation, with control, for the protection they need. Advanced attackers are infiltrating networks and moving laterally to reach the data center. Once there, the goal is to exfiltrate valuable data or cause disruption. Data center administrators need technologies that allow them to be as ‘centered’ on security as attackers are on the data center.”  Protection must be multi-level and provide protection, contingency and backup for multiple stages of a potential attack.  While implementation of such protection may be time-consuming and costly, it is far better than having a massive cyber attack that compromises sensitive customer information or that forces your data center into prolonged downtime.  Through better protection and an ever-evolving approach based on the most current information about cyber threats, customer trust is protected and business can continue to not only function but thrive.

Posted in Uncategorized | Tagged , , | Comments Off

Calculating PUE: Importance of Accurate Calculation in Data Centers

Data center power usage effectiveness, or PUE, is a calculation that is an essential part of determining how efficient a facility is and what improvements need to be made.  The operating costs for data facilities are constantly rising, and there is constant demand from the powers above to cut costs and lower expenses, but, for data centers, this is challenging.  Data centers must continue to meet the demands of businesses, provide mission critical services, maximize uptime and protect information, all while under constant pressure to cut costs.  How, with technology constantly evolving and needs constantly changing, can a facility manager assess the facility’s efficiency and effectiveness and make adjustments to continue to improve without diminishing its ability to perform?  Data centers have been calculating their PUE in an attempt to do so for a long time, but those calculations can be challenging and inaccurate, so – what is the best way to calculate PUE for each individual site?

When calculating PUE, a location must look at how much power is being used by servers, storage equipment, network equipment, other IT equipment, cooling and so much more.  A PUE calculation is a specific metric that can serve as a benchmark for data centers and, after the first calculation, future calculations can be used to compare whether improvement is happening or not.  The trouble is, some calculations are inaccurate.  Data Center Knowledge explains how to improve calculations to ensure accuracy, “While PUE has become the de facto metric for measuring infrastructure efficiency, data center managers must clarify three things before embarking on their measurement strategy: There must be agreement on exactly what devices constitute IT loads, what devices constitute physical infrastructure, and what devices should be excluded from the measurement. Without first clarifying these three things, it can be difficult for data center managers to ensure the accuracy of their PUE… The first part of this methodology is to establish a standard to categorize data center subsystems as either (a) IT load or (b) physical infrastructure or (c) determine whether the subsystem should be excluded in the calculation. While it’s fairly simple to designate servers and storage devices as an IT load, and to lump the UPSs and HVAC systems into physical infrastructure, there are subsystems in the data center that are harder to classify…. Some devices that consume power and are associated with a data center are shared with other uses such as a chiller plant or a UPS that also provides cooling or power to a call center or office space. Even an exact measurement of the energy use of these shared devices doesn’t directly determine the data center PUE, since only the device’s data center-associated power usage can be used in the PUE calculation. One way to handle this is to omit the shared devices from the PUE, but this approach can cause major errors, especially if the device is a major energy user like a chiller plant. A better way to measure this shared device is to estimate the fraction of losses that are associated with the data center, and then use those losses to determine the PUE… While every device in the data center that uses energy can be measured, it can be impractical, complex, or expensive to measure its energy use. Consider a power distribution unit (PDU). In a partially loaded data center, the losses in PDUs can be in excess of 10 percent of the IT load. These loss figures can significantly impact PUE, yet most data center operations omit PDU losses in PUE calculations because they can be difficult to determine when using the built-in PDU instrumentation. Fortunately, the losses in a PDU are quite deterministic and can be directly calculated from the IT load with precise accuracy if the load is known in either watts, amps or VA. In fact, this tends to be more accurate than the built-in instrumentation approach. Once the estimated PDU losses are subtracted from the UPS output metering to obtain the IT load, they can be counted as a part of the infrastructure load. This method improves the PUE calculation, as opposed to ignoring PDU.”  These guidelines will help data center managers determine a specific plan to calculate PUE.  By adhering to the pre-determined PUE calculation method, results will be more accurate across the board and over time so that progress can be seen and further improvements can be made.
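The sketch below works through a PUE calculation along the lines described in that guidance: shared devices are apportioned by an estimated data-center fraction rather than omitted, and PDU losses are estimated from the metered UPS output and counted as infrastructure load.  Every number in it is an invented placeholder, so it illustrates the method rather than any real facility.

```python
# Illustrative PUE calculation following the guidance quoted above: shared
# devices are apportioned by an estimated data-center fraction instead of
# being omitted, and PDU losses are estimated from the UPS output metering
# and counted as infrastructure load. All values are invented placeholders.

ups_output_kw = 840.0          # metered at the UPS output (IT load plus downstream PDU losses)
pdu_loss_fraction = 0.05       # estimated PDU losses as a fraction of the load passing through
ups_loss_kw = 60.0             # UPS conversion losses
cooling_kw = 350.0             # cooling dedicated to the data center
shared_chiller_kw = 200.0      # chiller plant shared with adjacent office space
data_center_share = 0.70       # estimated fraction of chiller energy attributable to the data center

pdu_loss_kw = ups_output_kw * pdu_loss_fraction
it_load_kw = ups_output_kw - pdu_loss_kw   # subtract estimated PDU losses to get the true IT load

infrastructure_kw = ups_loss_kw + cooling_kw + pdu_loss_kw + shared_chiller_kw * data_center_share

pue = (it_load_kw + infrastructure_kw) / it_load_kw
print(f"PUE = {pue:.2f}")      # total data-center power divided by IT load
```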

 

Posted in Data Center Battery, Data Center Design, data center equipment, Data Center Infrastructure Management, DCIM, Power Distribution Unit | Tagged , | Comments Off

High-Density Data Center Advantages and Considerations

As we have previously discussed, increasing rack density and consolidating data centers are all the rage, especially going into 2016.  This is a trend we do not see going anywhere.  Many businesses are opting for colocation as a way to save money and achieve better IT management and protection.  In data facilities, space is a precious commodity.  One of the main reasons often cited for needing to relocate is simply not having enough space.  As the trend continues toward cloud storage, with the help of increasing rack density and consolidation, many data centers may just find they have more room than they think and can even implement more focused, better cooling strategies that will also help save on energy costs.  Facility rent is far from cheap, so maximizing space is critical in achieving a cost-effective method of managing data.  Horizontal expansion is not the answer; vertical expansion through increased rack density and consolidation is how data centers can continue to adapt to meet their own needs without having to relocate.

Data Center Journal provides a helpful description of what high density looks like and why it makes such a big impact, “A number of different approaches to increasing power density have expanded the computing power per square foot of data center space. According to a Gartner press release (“Gartner Says More Than 50 Percent of Data Centers to Incorporate High-Density Zones by Year-End 2015”), “Traditional data centers built as recently as five years ago were designed to have a uniform energy distribution of around 2 kilowatts (kW) to 4kW per rack.” But the addition of high-density zones can increase this energy distribution several times over in certain areas of the facility. “Gartner defines a high-density zone as one where the energy needed is more than 10kW per rack for a given set of rows. A standard rack of industry-standard servers needs 30 square feet to be accommodated without supplemental cooling, and a rack that is 60 percent filled could have a power draw as high as 12kW. Any standard rack of blade servers that is more than 50 percent full will need to be in a high-density zone.”  Of course, increasing density in individual server racks, while beneficial to consolidation, brings challenges that must be addressed.  Power distribution and cooling needs are vastly different for high density racks vs. traditional server racks.  Not only must high-density racks be properly powered and properly cooled, but all of the components must also have adequate backup power in the form of a sufficient UPS and UPS battery that can maintain the high-density load should a power failure occur.  Data Center Journal elaborates on the challenges, “One constraint on power density is obviously the power-distribution infrastructure, both at the level of the utility-provided power and the backup facilities. For each watt supplied by the utility, the data center must have sufficient UPS and diesel-generator capacity to continue operations in the event of a power outage. And that, of course, is above the cabling, power-distribution units (PDUs) and so on dedicated to delivering the power to the racks. Coughlin notes that “most data centers don’t have much new power available for their facilities, so they likely have to get more power from the utility and spend a lot of money on core data center infrastructure (electrical and mechanical infrastructure, generators, power distribution and so on) just to be able to provide it. So access to more power and cost are two important variables.” But the other and perhaps more pressing need is cooling: every watt consumed by the facility is a watt of waste heat that must be removed to maintain the desired operating temperature. Herein lies what may be the biggest challenge facing higher density—particularly for facilities not originally intended to handle it. “When you increase density considerably at the rack level, much more heat is generated by the servers and a lot more cooling is required,” said Coughlin. “Cooling infrastructure is very expensive, but the biggest challenge may be trying to retrofit an old data center. Most of these older data centers were built with low ceilings and there is no easy way to improve density in many cases other than ripping up the data center—which is incredibly difficult to do, especially with live customers.”  Ultimately, if these challenges can be overcome, high-density will drive a data center’s ability to lower costs and maximize efficiency, a focus that is on the mind of every facility manager.
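A rough comparison using the Gartner figures quoted above (30 square feet per standard rack, roughly 2 to 4 kW per rack for a traditional design versus around 12 kW for a 60 percent filled high-density rack) shows why density matters so much for floor space.  The total IT load in the sketch is an arbitrary example, not a figure from the article.

```python
# Back-of-envelope comparison of rack count and floor space using the
# Gartner figures quoted above. The 600 kW total IT load is an arbitrary
# example chosen purely for illustration.
import math

total_it_load_kw = 600.0
sq_ft_per_rack = 30.0              # standard rack footprint from the Gartner quote

traditional_kw_per_rack = 3.0      # midpoint of the quoted 2-4 kW range
high_density_kw_per_rack = 12.0    # quoted draw for a 60%-filled high-density rack

traditional_racks = math.ceil(total_it_load_kw / traditional_kw_per_rack)
high_density_racks = math.ceil(total_it_load_kw / high_density_kw_per_rack)

print(f"Traditional:  {traditional_racks} racks, ~{traditional_racks * sq_ft_per_rack:.0f} sq ft")
print(f"High density: {high_density_racks} racks, ~{high_density_racks * sq_ft_per_rack:.0f} sq ft")
# Every watt still becomes waste heat, so the high-density zone concentrates
# the same total load (and cooling burden) into far less floor area.
```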

 

Posted in data center cooling, Data Center Design, data center equipment, Data Center Infrastructure Management, DCIM, Power Distribution Unit, Power Management | Tagged , , , , , | Comments Off

Data Center Colocation and Cloud Computing Remain Popular Going Into 2016

As we come to the close of another year and get ready to embark on a new one, we reflect on the things that we did right and wrong and make an effort to improve for the new year.  Data centers will continue to learn from the past, attempt to stay on the cutting edge of technology, provide better service including improved uptime and energy efficiency, and keep information secure.  Trends can sometimes be fast in passing, a quick blip on the radar soon to be forgotten.  But, sometimes, trends are indicative of a bigger shift in the industry that all are taking notice of and making adjustments to accommodate.  One trend that seems to be sticking around is a shift away from smaller computer rooms or IT sites that are outdated, in favor of data center colocation and cloud computing to meet the needs of most businesses.

Over the past few years we have seen a big shift towards businesses eliminating their small on-site IT and computer rooms in favor of data center colocation projects as well as utilizing the cloud.  Data Center Knowledge elaborates on how the cloud has impacted data centers and continues to be a strong trend going into 2016, “A few years back, there was talk of the cloud having the potential to “kill” the data center. However, over time we’ve seen that cloud and data centers are not in competition, rather they complement one another and need to work together in order to properly function. We’ll see this trend carry over into 2016. Cloud-based businesses increasingly rely on colocation providers to support their large data storage needs. Data center management teams need to focus part of their efforts on supporting increased usage from cloud-based companies and staying leading contenders in the data center space. By 2020, IDC found that 40 percent of data in the digital universe will be “touched” by the cloud, meaning either stored, perhaps temporarily, or processed in some way. And with the digital universe experiencing unprecedented growth, we’ll see cloud capabilities being a must in data centers for most customers going forward in 2016 and beyond.”

In addition to colocation and cloud storage, many data centers continue to face increased density demands.  As more facilities move towards high-density storage and computing, the needs of the data facility, including uninterruptible power supply, UPS battery, rack storage, PDU, etc., shift as well.  Forsythe elaborates on high density demands and reinforces the shift towards colocation, “By 2020, U.S. data centers will require six times the electricity of New York City. Since the average U.S. data center is approaching 20 years of age, most existing data center facilities can’t meet today’s power demands. Trying to run higher power density technologies in an aging data center usually takes significant capital investments – if it can even be accomplished. Lower-density data centers also require you to procure additional IT cabinets and their associated infrastructure (power whips, power strips, patch panels, etc.). This added cost is due to the inability of lower-density data centers to provide enough power on a per-cabinet basis to make total use of every cabinet’s vertical rack space… You have the opportunity to reduce your costs and improve your performance if you move to a facility that accommodates higher density. In a higher-density data center, you may end up requiring just half of the space that you would require at a lower-density facility. If you upgrade your technology and increase your power density, you can support the same amount of equipment with fewer cabinets. This allows you to improve your efficiencies and power usage effectiveness (PUE), significantly lowering your capital and operational costs.”

 

Posted in Computer Room Design, Data Center Build, Data Center Construction, Data Center Infrastructure Management, DCIM | Tagged , , | Comments Off

Using Lithium-Ion Batteries in Data Center UPS Systems

In the data center world, aside from maximizing uptime, there is always a focus on using less energy and spending less money.  Large centers often set the tone for how this can be achieved, because if it can be achieved on a large scale, it can frequently also be achieved in smaller facilities.  It is especially important to focus on these areas in large data centers because reducing energy use can dramatically improve expenditures, freeing up money in the budget.  Implementing an effective Uninterruptible Power Supply system is incredibly important, and a good one can be the lifeblood of a data center – providing necessary backup power in the event of a power failure.  A UPS system is only as good as its batteries; if the batteries do not work, the whole system will not work.  Microsoft has recently implemented the use of new batteries in its facilities that are dramatically cutting costs.

Data centers, whether large or small, go through a lot of batteries to power their UPS system.  Batteries must be checked often and replaced as needed to ensure that when the system is needed during a power failure, they will be able to provide the necessary support.  TheNextPlatform describes how traditional batteries function, “In a traditional datacenter design, companies deploy uninterruptible power supply, or UPS, systems that are giant banks of lead acid batteries. The UPS provides power to the servers, storage and networks if there is a short glitch in the power feed that might otherwise cause the machinery to fail or reboot. The UPS sits in between the high voltage feed coming into the datacenter from the electrical grid substations and the server and storage machinery that runs at a much lower voltage inside the datacenter.”  Microsoft continues to move toward innovation within the technology industry by implementing the use of lithium-ion batteries in their UPS systems.  By making the switch, Microsoft reduces the need for a large equipment room footprint to house UPS systems which saves space and utilities for cooling and energy.  PCWorld elaborates on the advantages of the switch Microsoft has made, “The LES can replace traditional UPSes (Uninterruptible Power Supplies) for providing backup power to servers and other IT gear, Microsoft said. A UPS is designed to kick in fast if there’s an interruption to the main power, keeping equipment running during the seconds it takes for a diesel generator to start up and take over. Traditional UPSes use lead acid batteries, but they’re bulky and require a lot of maintenance. Microsoft says its lithium-ion battery system is five times cheaper than traditional UPSes, factoring in the cost to purchase, install and maintain them over several years. They also take up 25 percent less floor space, because they’re installed directly within the server racks… The batteries are hot-swappable, meaning they can be replaced without shutting down servers, and LES is suitable for data centers of all sizes, Harris said, including a data center closet with only a few servers… Microsoft isn’t the only company using lithium-ion batteries for backup power. Facebook submitted a somewhat similar design to the Open Compute Project last year and is using that in its own data centers. “The inflection point has just happened in the industry where lithium-ion is cheaper to deploy than lead-acid for a data center UPS,” Matt Corddry, Facebook’s director of hardware engineering, said last year.”  With such massive forces in the technology industry proving the advantages of switching to lithium-ion, many data centers of all sizes are sure to follow in their wake.
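Taking the quoted figures at face value, roughly five times cheaper over several years and 25 percent less floor space, the sketch below shows how a facility might frame the comparison.  The lead-acid baseline numbers are invented placeholders purely for illustration, not figures from Microsoft or the article.

```python
# Rough comparison based on the figures quoted above: the lithium-ion LES
# approach is described as about 5x cheaper over several years and as taking
# 25% less floor space than a traditional lead-acid UPS. The lead-acid
# baseline values below are invented placeholders for illustration only.
lead_acid_tco = 1_000_000.0        # purchase + install + maintenance over several years (placeholder)
lead_acid_floor_sq_ft = 2_000.0    # UPS room footprint (placeholder)

lithium_ion_tco = lead_acid_tco / 5                      # "five times cheaper"
lithium_ion_floor_sq_ft = lead_acid_floor_sq_ft * 0.75   # "25 percent less floor space"

print(f"Estimated TCO savings:   ${lead_acid_tco - lithium_ion_tco:,.0f}")
print(f"Estimated space savings: {lead_acid_floor_sq_ft - lithium_ion_floor_sq_ft:,.0f} sq ft")
```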

Posted in Back-up Power Industry, Data Center Battery, Power Management, Uninterruptible Power Supply, UPS Maintenance | Tagged , , , , , , | Comments Off

Data Center UPS Trends and Management

In a data center, performing mission critical tasks, storing and protecting information, maximizing uptime and being energy efficient must all happen simultaneously, in a delicate balance.  Today, clients demand that their information systems run effectively and efficiently, and that those systems be available whenever they are needed.  Data centers must continue to look at ways to avoid power failures and maximize efficiency through an effective monitoring plan and a reliable UPS.  Proper redundancy to maximize uptime can be costly and drain a lot of energy.  But, without proper redundancy, a data center could experience catastrophic downtime.  The correct Uninterruptible Power Supply, UPS battery and monitoring must be in place to prevent problematic power failures from occurring.

There are many emerging trends in data center Uninterruptible Power Supply systems and management.  Major facilities are looking at ways to reduce power supply needs by implementing data networks so that, if a power outage occurs, data demands can be shifted from one server to another until uptime is restored.  Data Center Knowledge explains how, and why, big facilities are making a shift away from traditional UPS systems and UPS batteries to improve efficiency while maintaining and maximizing uptime, “Big uninterruptible power supply cabinets and rows of batteries that are similar in size to the ones under the hood of your car have been an unquestioned data center mainstay for years. This infrastructure is what ensures servers keep running between the time the utility power feed goes down and backup generators get a chance to start and stabilize. But companies that operate some of the world’s largest data centers – companies like Microsoft, Facebook, or Google – are in the habit of questioning just such mainstays. At their scale, even incremental efficiency improvements translate into millions upon millions of dollars saved, but something like being able to shave 150,000 square feet off the size of a facility or improve the Power Usage Effectiveness rating by north of 15 percent has substantial impact on the bottom line. Those are the kinds of efficiency improvements Microsoft claims to have achieved by rethinking (and finally rejecting) the very idea of the big central stand-alone data center UPS system. The company now builds what essentially is a mini-UPS directly into each server chassis – an approach it has dubbed Local Energy Storage… It saves physical space (150,000 square feet for a typical 25-megawatt data center, according to Shaun Harris, director of engineering for cloud server infrastructure at Microsoft, who blogged about LES this week). It is also more energy efficient, because it avoids double conversion electricity goes through in a traditional data center UPS. Finally, Microsoft saves by not adding reserve UPS systems (in case the primary ones fail) and by not having to build a “safety margin” in the primary UPS. Data center designers usually go through a lot of trouble to make sure the central UPS plant doesn’t fail, because if it does, every server downstream will go down when the utility feed fails.”  The need for an effective and efficient UPS is not going anywhere anytime soon, especially not for smaller locations that cannot rely on implementing a network of data sites.  Ensuring that your facility batteries and backup power supply are not only sufficient for your data center but are actually being monitored and will work if needed is a critical step in the process of maximizing uptime in the event of a power failure.
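The double-conversion point is easiest to see with a simplified example.  In a traditional online UPS all server power passes through an AC to DC to AC conversion, while rack-level energy storage sits alongside the servers and bypasses that central conversion in normal operation.  The efficiency figure below is an assumed typical value, not one taken from the article.

```python
# Simplified illustration of the double-conversion losses mentioned above.
# The 94% efficiency is an assumed typical value for an online UPS, and the
# rack-level case is idealized as having no central conversion loss.
it_load_kw = 1_000.0
double_conversion_efficiency = 0.94

facility_draw_traditional = it_load_kw / double_conversion_efficiency
facility_draw_les = it_load_kw         # idealized: batteries sit in the rack, no central UPS stage

print(f"Traditional UPS draw: {facility_draw_traditional:.0f} kW")
print(f"Rack-level storage:   {facility_draw_les:.0f} kW "
      f"(~{facility_draw_traditional - facility_draw_les:.0f} kW of conversion loss avoided)")
```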

Posted in Back-up Power Industry, Data Center Battery, data center equipment, Data Center Infrastructure Management, DCIM, Uninterruptible Power Supply, UPS Maintenance | Tagged , , , , , | Comments Off

Data Center PDUs – Why Intelligent Is Better Than Basic

Every data center manager is familiar with power distribution units, or PDUs.  PDUs help distribute power throughout a location to storage devices, servers and networking equipment so that it can all function seamlessly and properly.  Facility needs and infrastructure are not static; they continue to evolve over time, and power distribution units are no exception.  Basic power distribution units are what most data centers are used to, but today, much like in other areas of technology, intelligence is the name of the game.  Facility managers are on a quest for improved monitoring and maintenance that not only alerts them but is intelligent and capable of making proactive, helpful decisions on its own to keep a data center functioning effectively and efficiently.  In the realm of PDUs, this comes in the form of intelligent PDUs.

Intelligent power distribution units are a high availability solution for data centers looking to move in an efficient and intelligent direction with their infrastructure so that uptime can be maximized while saving a significant amount of money.  What is the difference between intelligent and basic units?  Intelligent PDUs provide some of the most important things data center managers are looking for – functionality, adaptability, reliability and much more.  Raritan points out that intelligent power distribution units are capable of power distribution and multi-point metering, sequenced outlet power cycling, remote management, environmental monitoring, and asset tracking and infrastructure security.  These invaluable advantages would benefit any operation, large or small.  Data Center Knowledge points out why intelligent PDUs will not only help play a vital role in converting data center infrastructure to a more intelligent system but will also make a significant impact on the bottom line, “Organizations and data center administrators are constantly looking for ways to improve data center control and overcome these kinds of challenges. Consider this – a recent Ponemon Institute study showed that in 2013, the average cost of downtime was a staggering $7,908 per minute. The very same study also showed us that the cost of a data breach to a company is on average $145 per affected individual and $3.5M per incident. This means we’re dealing with real capacity, management, and even security challenges when it comes to data center control. This is where intelligence can begin to make a real difference… This means creating an architecture built around intelligence and one that can resolve some of the most pressing data center control challenges out there.”  While the upfront cost of an intelligent PDU may be a challenging pill to swallow for those who determine the budget, overall they will help contribute to a massive data center overhaul that will save a significant amount of money in the long run, more than paying for themselves.
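To give a sense of what multi-point metering looks like in practice, the sketch below polls an intelligent PDU for per-outlet power readings.  The management address, URL path, and JSON field names are hypothetical; real units expose similar data through SNMP, Redfish, or vendor-specific REST APIs, so this is only the shape of the idea.

```python
# Hypothetical example of polling an intelligent PDU for per-outlet power
# readings over a vendor REST API. The host, path, and JSON field names are
# invented for illustration; consult your PDU's actual API or SNMP MIB.
import requests

PDU_HOST = "https://pdu-01.example.internal"   # hypothetical management address

def read_outlet_power(host: str) -> dict[int, float]:
    """Return a mapping of outlet number to active power in watts."""
    response = requests.get(f"{host}/api/outlets", timeout=5)
    response.raise_for_status()
    return {o["outlet"]: o["active_power_w"] for o in response.json()}

if __name__ == "__main__":
    for outlet, watts in sorted(read_outlet_power(PDU_HOST).items()):
        print(f"Outlet {outlet:02d}: {watts:7.1f} W")
```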

Posted in Back-up Power Industry, Data Center Battery, data center equipment, Data Center Infrastructure Management, DCIM, Power Distribution Unit, Power Management | Tagged , , , , | Comments Off