
Calculating PUE: Importance of Accurate Calculation in Data Centers

Data center power usage effectiveness, or PUE, is a calculation that is essential to determining how efficient a facility is and what improvements need to be made.  Operating costs for data facilities are constantly rising, and there is constant pressure from above to cut costs and lower expenses, but for data centers this is challenging.  Data centers must continue to meet the demands of businesses, provide mission-critical services, maximize uptime and protect information, all while under pressure to spend less.  How, with technology constantly evolving and needs constantly changing, can a facility manager assess the site's efficiency and effectiveness and make adjustments to keep improving without diminishing its ability to perform?  Data centers have long calculated their PUE in an attempt to do so, but those calculations can be challenging and inaccurate, so what is the best way to calculate PUE for each individual site?

When calculating PUE, a location must look at how much power is being used by servers, storage equipment, network equipment, other IT equipment, cooling and more.  A PUE calculation is a specific metric that can serve as a benchmark for data centers, and after the first calculation, future calculations can be used to determine whether improvement is happening.  The trouble is, some calculations are inaccurate.  Data Center Knowledge explains how to improve calculations to ensure accuracy, “While PUE has become the de facto metric for measuring infrastructure efficiency, data center managers must clarify three things before embarking on their measurement strategy: There must be agreement on exactly what devices constitute IT loads, what devices constitute physical infrastructure, and what devices should be excluded from the measurement. Without first clarifying these three things, it can be difficult for data center managers to ensure the accuracy of their PUE… The first part of this methodology is to establish a standard to categorize data center subsystems as either (a) IT load or (b) physical infrastructure or (c) determine whether the subsystem should be excluded in the calculation. While it’s fairly simple to designate servers and storage devices as an IT load, and to lump the UPSs and HVAC systems into physical infrastructure, there are subsystems in the data center that are harder to classify…. Some devices that consume power and are associated with a data center are shared with other uses such as a chiller plant or a UPS that also provides cooling or power to a call center or office space. Even an exact measurement of the energy use of these shared devices doesn’t directly determine the data center PUE, since only the device’s data center-associated power usage can be used in the PUE calculation. One way to handle this is to omit the shared devices from the PUE, but this approach can cause major errors, especially if the device is a major energy user like a chiller plant. A better way to measure this shared device is to estimate the fraction of losses that are associated with the data center, and then use those losses to determine the PUE… While every device in the data center that uses energy can be measured, it can be impractical, complex, or expensive to measure its energy use. Consider a power distribution unit (PDU). In a partially loaded data center, the losses in PDUs can be in excess of 10 percent of the IT load. These loss figures can significantly impact PUE, yet most data center operations omit PDU losses in PUE calculations because they can be difficult to determine when using the built-in PDU instrumentation. Fortunately, the losses in a PDU are quite deterministic and can be directly calculated from the IT load with precise accuracy if the load is known in either watts, amps or VA. In fact, this tends to be more accurate than the built-in instrumentation approach. Once the estimated PDU losses are subtracted from the UPS output metering to obtain the IT load, they can be counted as a part of the infrastructure load. This method improves the PUE calculation, as opposed to ignoring PDU.”  These guidelines will help data center managers determine a specific plan to calculate PUE.  By adhering to a pre-determined PUE calculation method, results will be more accurate across the board and over time, so that progress can be seen and further improvements can be made.
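To make the approach above concrete, here is a minimal sketch of that calculation in Python; the loss fraction, loads and the 40 percent chiller share are illustrative assumptions, not figures from the article.

```python
# Minimal PUE sketch: subtract estimated PDU losses from the UPS output
# metering to obtain the true IT load, count those losses as infrastructure,
# and include only the data-center share of any shared device (e.g. a chiller
# that also serves office space). All numbers below are illustrative.

def estimate_pdu_losses(ups_output_kw, loss_fraction=0.03):
    """Estimate PDU losses as a fraction of the load passing through them.
    The 3 percent figure is an assumed placeholder; use your PDU's loss curve."""
    return ups_output_kw * loss_fraction

def calculate_pue(metered_dc_kw, ups_output_kw,
                  shared_device_kw=0.0, dc_share_of_device=1.0):
    """PUE = total data-center power / IT equipment power."""
    pdu_losses_kw = estimate_pdu_losses(ups_output_kw)
    it_load_kw = ups_output_kw - pdu_losses_kw            # true IT load
    total_kw = metered_dc_kw + shared_device_kw * dc_share_of_device
    return total_kw / it_load_kw

# Example: 1,000 kW metered on dedicated data center feeds, 600 kW at the UPS
# output, plus a 200 kW shared chiller plant, 40% of which serves the data center.
print(round(calculate_pue(1000, 600, shared_device_kw=200, dc_share_of_device=0.4), 2))
```

In practice the loss fraction would come from the PDU vendor's loss curve at the measured load, as the quoted methodology suggests.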

 


High-Density Data Center Advantages and Considerations

As we have previously discussed, increasing rack density and consolidating data centers is all the rage, especially going into 2016, and this is a trend we do not see going anywhere.  Many businesses are opting for colocation as a way to save money and achieve better IT management and protection.  In data facilities, space is a precious commodity, and one of the main reasons often cited for needing to relocate is simply not having enough of it.  As the trend toward cloud storage continues, increasing rack density and consolidation mean many data centers may find they have more room than they think and can even implement more focused cooling strategies that also help save on energy costs.  Facility rent is far from cheap, so maximizing space is critical to a cost-effective method of managing data.  Horizontal expansion is not the answer; vertical expansion through increased rack density and consolidation is how data centers can continue to adapt to meet their own needs without having to relocate.

Data Center Journal provides a helpful description of what high density looks like and why it makes such a big impact, “A number of different approaches to increasing power density have expanded the computing power per square foot of data center space. According to a Gartner press release (“Gartner Says More Than 50 Percent of Data Centers to Incorporate High-Density Zones by Year-End 2015”), “Traditional data centers built as recently as five years ago were designed to have a uniform energy distribution of around 2 kilowatts (kW) to 4kW per rack.” But the addition of high-density zones can increase this energy distribution several times over in certain areas of the facility. “Gartner defines a high-density zone as one where the energy needed is more than 10kW per rack for a given set of rows. A standard rack of industry-standard servers needs 30 square feet to be accommodated without supplemental cooling, and a rack that is 60 percent filled could have a power draw as high as 12kW. Any standard rack of blade servers that is more than 50 percent full will need to be in a high-density zone.”  Of course, increasing density in individual server racks, while beneficial to consolidation, brings challenges that must be addressed.  Power distribution and cooling needs are vastly different for high-density racks than for traditional server racks.  Not only must high-density racks be properly powered and cooled, but all of the components must have adequate backup power in the form of a sufficient UPS and UPS battery that can sustain the high-density load should a power failure occur.  Data Center Journal elaborates on the challenges, “One constraint on power density is obviously the power-distribution infrastructure, both at the level of the utility-provided power and the backup facilities. For each watt supplied by the utility, the data center must have sufficient UPS and diesel-generator capacity to continue operations in the event of a power outage. And that, of course, is above the cabling, power-distribution units (PDUs) and so on dedicated to delivering the power to the racks. Coughlin notes that “most data centers don’t have much new power available for their facilities, so they likely have to get more power from the utility and spend a lot of money on core data center infrastructure (electrical and mechanical infrastructure, generators, power distribution and so on) just to be able to provide it. So access to more power and cost are two important variables.” But the other and perhaps more pressing need is cooling: every watt consumed by the facility is a watt of waste heat that must be removed to maintain the desired operating temperature. Herein lies what may be the biggest challenge facing higher density—particularly for facilities not originally intended to handle it. “When you increase density considerably at the rack level, much more heat is generated by the servers and a lot more cooling is required,” said Coughlin. “Cooling infrastructure is very expensive, but the biggest challenge may be trying to retrofit an old data center. Most of these older data centers were built with low ceilings and there is no easy way to improve density in many cases other than ripping up the data center—which is incredibly difficult to do, especially with live customers.”  Ultimately, if these challenges can be overcome, high density will drive a data center’s ability to lower costs and maximize efficiency, a focus that is on the mind of every facility manager.
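Building on the Gartner thresholds quoted above, a quick check like the following can flag which racks belong in a high-density zone; the per-server wattage is an assumption used purely for illustration.

```python
# Sketch: flag racks that exceed Gartner's 10 kW-per-rack high-density
# threshold, using the figures quoted above. Per-server wattage is an
# assumed placeholder; substitute nameplate or measured values.
HIGH_DENSITY_THRESHOLD_KW = 10.0   # Gartner's definition, per the article

def rack_power_kw(servers_installed, watts_per_server=450):
    """Estimated rack draw; 450 W per 1U server is an assumption."""
    return servers_installed * watts_per_server / 1000.0

def needs_high_density_zone(servers_installed, watts_per_server=450):
    return rack_power_kw(servers_installed, watts_per_server) > HIGH_DENSITY_THRESHOLD_KW

# A 42U rack 60 percent filled with 1U servers:
servers = int(42 * 0.6)
print(rack_power_kw(servers), "kW ->", needs_high_density_zone(servers))
```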

 


Data Center Colocation and Cloud Computing Remain Popular Going Into 2016

As we come to the close of another year and get ready to embark on a new one, we reflect on what we did right and wrong and make an effort to improve.  Data centers will continue to learn from the past, attempt to stay on the cutting edge of technology, provide better service, including improved uptime and energy efficiency, and keep information secure.  Trends can sometimes pass quickly, a blip on the radar soon to be forgotten.  But sometimes a trend is indicative of a bigger shift in the industry, one that everyone is taking notice of and making adjustments to accommodate.  One trend that seems to be sticking around is the shift away from small, outdated computer rooms and IT sites toward data center colocation and cloud computing to meet the needs of most businesses.

Over the past few years we have seen a big shift towards businesses eliminating their small on-site IT and computer rooms in favor of data center colocation projects as well as utilizing the cloud.  Data Center Knowledge elaborates on how the cloud has impacted data centers and continues to be a strong trend going into 2016, “A few years back, there was talk of the cloud having the potential to “kill” the data center. However, over time we’ve seen that cloud and data centers are not in competition, rather they complement one another and need to work together in order to properly function. We’ll see this trend carry over into 2016. Cloud-based businesses increasingly rely on colocation providers to support their large data storage needs. Data center management teams need to focus part of their efforts on supporting increased usage from cloud-based companies and staying leading contenders in the data center space. By 2020, IDC found that 40 percent of data in the digital universe will be “touched” by the cloud, meaning either stored, perhaps temporarily, or processed in some way. And with the digital universe experiencing unprecedented growth, we’ll see cloud capabilities being a must in data centers for most customers going forward in 2016 and beyond.”

In addition to colocation and cloud storage, many data centers continue to face increased density demands.  As more facilities move toward high-density storage and computing, the needs of the data facility, including uninterruptible power supply, UPS battery, rack storage, PDU, etc., shift as well.  Forsythe elaborates on high density demands and reinforces the shift toward colocation, “By 2020, U.S. data centers will require six times the electricity of New York City. Since the average U.S. data center is approaching 20 years of age, most existing data center facilities can’t meet today’s power demands. Trying to run higher power density technologies in an aging data center usually takes significant capital investments – if it can even be accomplished. Lower-density data centers also require you to procure additional IT cabinets and their associated infrastructure (power whips, power strips, patch panels, etc.). This added cost is due to the inability of lower-density data centers to provide enough power on a per-cabinet basis to make total use of every cabinet’s vertical rack space… You have the opportunity to reduce your costs and improve your performance if you move to a facility that accommodates higher density. In a higher-density data center, you may end up requiring just half of the space that you would require at a lower-density facility. If you upgrade your technology and increase your power density, you can support the same amount of equipment with fewer cabinets. This allows you to improve your efficiencies and power usage effectiveness (PUE), significantly lowering your capital and operational costs.”
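As a rough illustration of the space argument, the sketch below compares how many cabinets and how much floor space the same IT load needs at a low versus a high per-cabinet density; every figure is an assumed placeholder rather than data from Forsythe.

```python
# Sketch: cabinets and floor space needed for the same IT load at two
# different per-cabinet power densities. All figures are illustrative.
import math

def cabinets_needed(total_it_load_kw, kw_per_cabinet):
    return math.ceil(total_it_load_kw / kw_per_cabinet)

total_load_kw = 300          # assumed total IT load
sqft_per_cabinet = 30        # footprint including aisle space (assumption)

for kw_per_cab in (4, 12):   # low-density vs. high-density cabinets
    cabs = cabinets_needed(total_load_kw, kw_per_cab)
    print(f"{kw_per_cab} kW/cabinet: {cabs} cabinets, ~{cabs * sqft_per_cabinet} sq ft")
```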

 


Using Lithium-Ion Batteries in Data Center UPS Systems

In the data center world, aside from maximizing uptime, there is always a focus on using less energy and spending less money.  Large centers often set the tone for how this can be achieved, because if it can be achieved at scale, it can frequently also be achieved in smaller facilities.  The focus is especially important in large data centers, where reducing energy use can dramatically lower expenditures and free up money in the budget.  Implementing an effective Uninterruptible Power Supply system is incredibly important, and a good one can be the lifeblood of a data center, providing necessary backup power in the event of a power failure.  A UPS system is only as good as its batteries: if the batteries do not work, the whole system will not work.  Microsoft has recently implemented new batteries in its facilities that are dramatically cutting costs.

Data centers, whether large or small, go through a lot of batteries to power their UPS systems.  Batteries must be checked often and replaced as needed to ensure that when the system is needed during a power failure, they will be able to provide the necessary support.  TheNextPlatform describes how traditional batteries function, “In a traditional datacenter design, companies deploy uninterruptible power supply, or UPS, systems that are giant banks of lead acid batteries. The UPS provides power to the servers, storage and networks if there is a short glitch in the power feed that might otherwise cause the machinery to fail or reboot. The UPS sits in between the high voltage feed coming into the datacenter from the electrical grid substations and the server and storage machinery that runs at a much lower voltage inside the datacenter.”  Microsoft continues to push innovation in the technology industry by implementing lithium-ion batteries in its UPS systems.  By making the switch, Microsoft reduces the need for a large equipment-room footprint to house UPS systems, which saves space as well as cooling and energy costs.  PCWorld elaborates on the advantages of the switch Microsoft has made, “The LES can replace traditional UPSes (Uninterruptible Power Supplies) for providing backup power to servers and other IT gear, Microsoft said. A UPS is designed to kick in fast if there’s an interruption to the main power, keeping equipment running during the seconds it takes for a diesel generator to start up and take over. Traditional UPSes use lead acid batteries, but they’re bulky and require a lot of maintenance. Microsoft says its lithium-ion battery system is five times cheaper than traditional UPSes, factoring in the cost to purchase, install and maintain them over several years. They also take up 25 percent less floor space, because they’re installed directly within the server racks… The batteries are hot-swappable, meaning they can be replaced without shutting down servers, and LES is suitable for data centers of all sizes, Harris said, including a data center closet with only a few servers… Microsoft isn’t the only company using lithium-ion batteries for backup power. Facebook submitted a somewhat similar design to the Open Compute Project last year and is using that in its own data centers. “The inflection point has just happened in the industry where lithium-ion is cheaper to deploy than lead-acid for a data center UPS,” Matt Corddry, Facebook’s director of hardware engineering, said last year.”  With such massive forces in the technology industry proving the advantages of switching to lithium-ion, many data centers of all sizes are sure to follow in their wake.
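Purely to show how the quoted ratios play out, this small sketch applies the “five times cheaper” and “25 percent less floor space” claims to an assumed lead-acid baseline; the baseline numbers are hypothetical.

```python
# Sketch: applying the article's quoted ratios to an assumed lead-acid
# baseline. Baseline cost and footprint are placeholders, not real data.

lead_acid_tco = 1_000_000        # assumed multi-year cost to buy/install/maintain ($)
lead_acid_floor_sqft = 2_000     # assumed UPS room footprint

lithium_ion_tco = lead_acid_tco / 5               # "five times cheaper" claim
lithium_ion_floor = lead_acid_floor_sqft * 0.75   # "25 percent less floor space"

print(f"Estimated lithium-ion TCO: ${lithium_ion_tco:,.0f}")
print(f"Estimated floor space: {lithium_ion_floor:,.0f} sq ft")
```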


Data Center UPS Trends and Management

In a data center, the delicate balance of performing mission-critical tasks, storing and protecting information, maximizing uptime and remaining energy efficient all happens simultaneously.  Today, clients demand that their information systems run effectively and efficiently, and that they be available whenever they are needed.  Data centers must continue to look at ways to avoid power failures and maximize efficiency through an effective monitoring plan and a reliable UPS.  Proper redundancy to maximize uptime can be costly and drain a lot of energy, but without it a data center could experience catastrophic downtime.  The correct Uninterruptible Power Supply, UPS battery and monitoring must be in place to prevent problematic power failures from occurring.

There are many emerging trends in data center Uninterruptible Power Supply systems and management.  Major facilities are looking at ways to reduce power supply needs by implementing data networks so that, if a power outage occurs, data demands can be shifted from one server to another until uptime is restored.  Data Center Knowledge explains how, and why, big facilities are making a shift away from traditional UPS systems and UPS batteries to improve efficiency while maintaining and maximizing uptime, “Big uninterruptible power supply cabinets and rows of batteries that are similar in size to the ones under the hood of your car have been an unquestioned data center mainstay for years. This infrastructure is what ensures servers keep running between the time the utility power feed goes down and backup generators get a chance to start and stabilize. But companies that operate some of the world’s largest data centers – companies like Microsoft, Facebook, or Google – are in the habit of questioning just such mainstays. At their scale, even incremental efficiency improvements translate into millions upon millions of dollars saved, but something like being able to shave 150,000 square feet off the size of a facility or improve the Power Usage Effectiveness rating by north of 15 percent has substantial impact on the bottom line. Those are the kinds of efficiency improvements Microsoft claims to have achieved by rethinking (and finally rejecting) the very idea of the big central stand-alone data center UPS system. The company now builds what essentially is a mini-UPS directly into each server chassis – an approach it has dubbed Local Energy Storage… It saves physical space (150,000 square feet for a typical 25-megawatt data center, according to Shaun Harris, director of engineering for cloud server infrastructure at Microsoft, who blogged about LES this week). It is also more energy efficient, because it avoids double conversion electricity goes through in a traditional data center UPS. Finally, Microsoft saves by not adding reserve UPS systems (in case the primary ones fail) and by not having to build a “safety margin” in the primary UPS. Data center designers usually go through a lot of trouble to make sure the central UPS plant doesn’t fail, because if it does, every server downstream will go down when the utility feed fails.”  The need for an effective and efficient UPS is not going anywhere anytime soon, especially not for smaller locations that cannot rely on a network of data sites.  Ensuring that your facility’s batteries and backup power supply are not only sufficient for your data center but are actually being monitored and will work when needed is a critical step in maximizing uptime in the event of a power failure.
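For a sense of scale, the sketch below estimates what a PUE improvement of roughly 15 percent could mean in annual utility spend for a facility of the 25-megawatt size mentioned in the quote; the baseline PUE, utilization and electricity rate are all assumptions.

```python
# Sketch: annual energy-cost impact of a roughly 15% PUE improvement at a
# facility of the 25 MW size cited in the quote. Baseline PUE, average
# utilization and electricity price are assumed placeholders.

it_load_mw = 25 * 0.6                 # assume 60% average utilization of 25 MW IT capacity
baseline_pue = 1.25                   # assumed starting point
improved_pue = baseline_pue * 0.85    # "north of 15 percent" improvement
price_per_kwh = 0.07                  # assumed utility rate ($)
hours_per_year = 8760

def annual_cost(it_mw, pue):
    return it_mw * 1000 * pue * hours_per_year * price_per_kwh

savings = annual_cost(it_load_mw, baseline_pue) - annual_cost(it_load_mw, improved_pue)
print(f"Estimated annual savings: ${savings:,.0f}")
```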


Data Center PDUs – Why Intelligent Is Better Than Basic

Every data center manager is familiar with power distribution units, or PDUs.  PDUs distribute power throughout a facility to storage devices, servers and networking equipment so that everything can function seamlessly and properly.  Facility needs and infrastructure are not static; they continue to evolve over time, and power distribution units are no exception.  Basic power distribution units are what most data centers are used to, but today, much like in other areas of technology, intelligence is the name of the game.  Facility managers are on a quest for improved monitoring and maintenance that not only alerts them but is intelligent and capable of making proactive, helpful decisions on its own to keep a data center functioning effectively and efficiently.  In the realm of PDUs, this comes in the form of intelligent PDUs.

Intelligent power distribution units are a high-availability solution for data centers looking to move in an efficient and intelligent direction with their infrastructure, so that uptime can be maximized while saving a significant amount of money.  What is the difference between intelligent and basic units?  Intelligent PDUs provide some of the most important things data center managers are looking for – functionality, adaptability, reliability and much more.  Raritan points out that intelligent power distribution units are capable of power distribution and multi-point metering, sequenced outlet power cycling, remote management, environmental monitoring, and asset tracking and infrastructure security.  These advantages would benefit any operation, large or small.  Data Center Knowledge points out why intelligent PDUs will not only help play a vital role in converting data center infrastructure to a more intelligent system but will also make a significant impact on the bottom line, “Organizations and data center administrators are constantly looking for ways to improve data center control and overcome these kinds of challenges. Consider this – a recent Ponemon Institute study showed that in 2013, the average cost of downtime was a staggering $7,908 per minute. The very same study also showed us that the cost of a data breach to a company is on average $145 per affected individual and $3.5M per incident. This means we’re dealing with real capacity, management, and even security challenges when it comes to data center control. This is where intelligence can begin to make a real difference… This means creating an architecture built around intelligence and one that can resolve some of the most pressing data center control challenges out there.”  While the upfront cost of an intelligent PDU may be a hard pill to swallow for those who determine the budget, intelligent units contribute to a broader data center overhaul that saves a significant amount of money in the long run, more than paying for themselves.
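Setting the quoted Ponemon downtime figure against an assumed intelligent PDU investment shows how quickly the math tips in favor of better monitoring; the outage durations and upgrade cost below are hypothetical.

```python
# Sketch: weighing intelligent PDU investment against the quoted average
# downtime cost of $7,908 per minute (Ponemon, 2013). Outage durations and
# PDU pricing are assumptions for illustration only.

downtime_cost_per_min = 7908

def outage_cost(minutes):
    return minutes * downtime_cost_per_min

intelligent_pdu_fleet_cost = 250_000   # assumed upgrade cost for a mid-size facility

for minutes in (5, 30, 90):
    cost = outage_cost(minutes)
    print(f"{minutes:3d} min outage: ${cost:,.0f} "
          f"({cost / intelligent_pdu_fleet_cost:.1f}x the assumed PDU investment)")
```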


Data Center UPS Total Cost of Ownership (TCO)

Every data center operates with a budget that plays a major role in determining specific infrastructure choices.  Any time major decisions must be made, the budget comes into play, along with discussion of upfront cost and the potential return on investment.  But beyond those two things there are other factors to consider.  To get a true picture of value and make a better decision, it is wise to look at the total cost of ownership, or TCO.  This is especially true when determining what UPS system is best for your data center.  By calculating UPS Total Cost of Ownership you will be able to see both the energy and cost savings potential over the life of the system and make a better-informed decision for your data center and its specific needs.

When calculating UPS Total Cost of Ownership, there are a few key areas that will make an impact.  First are the initial purchase and installation costs.  While the initial purchase price of a UPS is significant, it is far from the only thing that influences the Total Cost of Ownership.  The lowest-cost solution may seem ideal because it requires the least investment up front, and it may seem like a “bargain,” but you often get what you pay for, and it is frequently not the best overall investment.  For instance, if a UPS system goes through batteries more frequently, the cost of UPS batteries, as well as the additional installation time, may add substantial hidden cost to the TCO.  In addition to initial purchase and installation costs, UPS efficiency must also be considered.  Nothing will be a bigger drain on your energy, and thus your money, than an inefficient UPS.  What major facilities have shown us is that even a seemingly small gain will often yield substantial savings; even a 1 or 2 percent energy savings from a more efficient UPS has the potential to save millions of dollars for a data center over the life cycle of the system.  Additionally, it is important to assess the cooling needs of a UPS system before installing it.  While a small gain in UPS efficiency may yield savings, those savings could be quickly drained by a system that requires substantially more cooling than another UPS system.  It is also wise to look at the maintenance and component requirements of the UPS system, as these will add considerably to the cost of ownership.  Some UPS batteries may need to be checked multiple times per year while others may only need to be checked annually, thus saving maintenance time.  Data Center Knowledge elaborates on how maintenance costs influence TCO, “For example, does the UPS topology have sufficient redundancy that allows a single UPS unit to be taken off-line for maintenance or evaluation, or does the entire power plant need to shut down while maintenance or repair is performed? Even scheduled maintenance has an effect on uptime, data and processing transfer time and costs, including labor costs. Scheduled battery replacement is probably the major OpEx cost of a UPS, representing a significant part of a maintenance budget. If TCO is a critical evaluation factor, then understanding which battery technologies can extend the life cycle of a UPS becomes important. The same is true for remote UPS monitoring systems that improve battery life, maintenance and upgrade strategies.”  Finally, do not forget to factor in the end-of-service costs that come with retiring a particular UPS system and changing infrastructure as a result.  There are many metrics available for calculating data center UPS Total Cost of Ownership so that you can have a full picture of a UPS system before deploying it in your data center.
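One simple way to pull these factors into a single comparable number is a sketch like the following; every input value is a placeholder to be replaced with vendor quotes and site measurements, and the two units compared are hypothetical.

```python
# Sketch: UPS total cost of ownership over its service life, combining the
# factors discussed above: purchase/installation, efficiency losses, cooling,
# maintenance, battery replacements, and end-of-service costs.
# All input values are placeholders.

def ups_tco(purchase_install, it_load_kw, efficiency, price_per_kwh,
            cooling_factor, annual_maintenance, battery_replacements,
            battery_cost_each, end_of_service, years):
    # Energy lost in the UPS itself over its life
    loss_kw = it_load_kw * (1 / efficiency - 1)
    energy_loss_cost = loss_kw * 8760 * years * price_per_kwh
    # Removing that waste heat costs additional cooling energy
    cooling_cost = energy_loss_cost * cooling_factor
    maintenance_cost = annual_maintenance * years
    battery_cost = battery_replacements * battery_cost_each
    return (purchase_install + energy_loss_cost + cooling_cost +
            maintenance_cost + battery_cost + end_of_service)

# Compare a cheaper 94%-efficient unit against a pricier 97%-efficient one
for name, price, eff, batt_swaps in (("Unit A", 120_000, 0.94, 3),
                                     ("Unit B", 160_000, 0.97, 2)):
    tco = ups_tco(price, it_load_kw=500, efficiency=eff, price_per_kwh=0.07,
                  cooling_factor=0.4, annual_maintenance=8_000,
                  battery_replacements=batt_swaps, battery_cost_each=25_000,
                  end_of_service=10_000, years=10)
    print(f"{name}: ${tco:,.0f} over 10 years")
```

Even with these rough inputs, the more efficient unit typically comes out ahead despite its higher purchase price, which is the point made above about efficiency outweighing the upfront “bargain.”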

 


Implementing a Large UPS in a Data Center

Implementing an Uninterruptible Power Supply system in a data center of any size comes with challenges and potential pitfalls.  While the end goal of installing a UPS system is added protection against power failures, choosing the right UPS and implementing it can be fraught with problems.  This is true in a facility of any size, but in large locations with large installations the problems compound.

One of the first challenges is choosing the right Uninterruptible Power Supply system for a large-scale application.  The greater the power demands and the more complicated the infrastructure, the more performance and capacity a UPS system must have.  Modular systems can be helpful, as they offer scalability as needs change.  Because implementing a large-scale UPS system can present challenges, it is best to walk through the entire facility to ensure the proper infrastructure is in place before any system goes live.  The more due diligence you do, the less likely there will be errors in choosing the appropriate UPS, components such as batteries, or anything else.  Each problem encountered not only wastes time and slows the process down but can also be quite costly.  Additionally, it is wise to examine the data center to make sure necessary items, such as a sufficient number of electrical outlets, are in place before you order your backup power system and attempt to implement it, because the last thing you want to do is overload it and create more problems than you solve.  Many data centers with high capacity and big demands may consider implementing a large-scale parallel UPS system for increased redundancy and protection.  Through the use of a PDU (power distribution unit) and a communications cable, parallel systems work in tandem to support critical data loads so that, should a problem occur in one system, the parallel system can support the load in the interim.  In addition to systems working in parallel for redundancy, UPS systems can be connected together so that their combined power supports the demand in a team effort of sorts.  This cannot be done by combining just any backup power supply systems you can get your hands on; rather, manufacturers create systems capable of being configured to work together.  Finally, the facility that succeeds when implementing (or decommissioning) anything on a large scale does so not by happenstance but with careful planning and consideration.  In the end, the most important thing is to protect uptime and mission-critical information.  Make a plan for implementation or decommissioning of your data center UPS system, ensure that batteries are properly functioning, make sure everyone is on the same page and then execute the plan, being sure to have a backup plan in place in case anything should go wrong.  After all, data centers know that redundancy is often the key to success.
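For the parallel, modular approach described above, capacity planning often comes down to a simple N+1 calculation like this sketch; the load and module rating are assumed values, not recommendations.

```python
# Sketch: N+1 sizing for a parallel UPS installation. The critical load and
# module rating below are assumed values.
import math

def modules_required(critical_load_kw, module_kw, redundancy=1):
    """Number of parallel UPS modules needed to carry the load,
    plus `redundancy` spare module(s) so one can fail or be serviced."""
    n = math.ceil(critical_load_kw / module_kw)
    return n + redundancy

critical_load_kw = 800
module_kw = 250
total = modules_required(critical_load_kw, module_kw)
print(f"{total} x {module_kw} kW modules "
      f"({total * module_kw} kW installed for an {critical_load_kw} kW load)")
```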


Improving Data Center CRAC/CRAH Energy Efficiency


Every business in the world can benefit from improving energy efficiency, and data centers are certainly no exception.  If anything, data centers are the prime example of the importance of a constant effort to improve energy efficiency.  Energy usage is very costly and energy waste is bad for the environment, so every measure that increases efficiency in a real and sustainable way is a significant improvement.  One area to target when improving energy efficiency in just about any facility is the Computer Room Air Conditioner/Computer Room Air Handler (CRAC/CRAH) units.  There are a lot of theories and strategies when it comes to proper cooling.  The focus always has to be twofold: improve energy efficiency while still maximizing uptime, so that a mission-critical facility can perform its vital task to the best of its ability.  Uptime cannot be sacrificed in the name of efficiency, so any way to improve efficiency must work within those parameters.

With technology constantly evolving, the two goals of maximizing efficiency and maximizing uptime do not have to be mutually exclusive.  Whether you are making improvements to a legacy center or maximizing efficiency in a new data center build, there are always ways to improve.  One of the most effective ways to improve the efficiency of a data center’s CRAC/CRAH units is to properly implement hot and cold air containment, such as a hot aisle/cold aisle layout.  Data Center Knowledge discusses the importance of keeping hot air and cold air separate in a data center, “Air mixing is the enemy of effective cooling. In-row or close-coupled cooling solutions greatly reduce air mixing by closely coupling the IT equipment’s hot air discharge with the CRAC/CRAH’s hot air return and the CRAC/CRAH’s cold air supply with the IT equipment’s inlet. Additionally, close-coupled CRAC/CRAH units have the capability of varying their airflow, thereby balancing their supply CFM commensurate with the CFM requirements of the IT equipment using either temperature and/or pressure as a control. Air mixing can further be reduced by implementing partial hot-aisle containment by deploying air containment curtains and/or doors at the ends of each “hot” aisle.”  Additionally, evaluate the ideal temperature setpoint; while the traditional school of thought is to keep a facility as cool as possible, running even a degree warmer may still keep all infrastructure functioning properly while dramatically improving efficiency.  Lastly, if improving efficiency in a legacy center, take a hard look at the mechanical systems.  Upgrading a CRAC/CRAH system is a relatively economical option and may make a dramatic difference in cooling efficiency.  Many CRAC units can even be retrofitted with Variable Frequency Drives (VFDs) that allow air conditioning units to run at different energy loads depending on the varied programming that has been predetermined for the data center; this conserves energy during low-demand periods while still meeting requirements at all times.  While there are many more ways to improve the energy efficiency of CRAC/CRAH systems, the specific needs of a data center will determine the best approach, and any energy efficiency improvement is a step in the right direction.
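The energy case for the VFD retrofit mentioned above rests on the fan affinity laws, under which fan power falls roughly with the cube of fan speed; the sketch below illustrates this with an assumed baseline fan motor.

```python
# Sketch: fan affinity law behind VFD savings on CRAC/CRAH units.
# Fan power scales roughly with the cube of fan speed, so running at
# reduced airflow during low-demand periods cuts energy sharply.
# The 15 kW baseline fan motor is an assumed figure.

baseline_fan_kw = 15.0

def fan_power_kw(speed_fraction, baseline_kw=baseline_fan_kw):
    return baseline_kw * speed_fraction ** 3

for pct in (100, 80, 60):
    kw = fan_power_kw(pct / 100)
    print(f"{pct}% speed: {kw:.1f} kW ({kw / baseline_fan_kw:.0%} of full power)")
```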


How the Cloud Is Impacting Data Centers


Cloud computing has impacted, and will continue to impact, the technology world as we know it.  The cloud is not only greatly impacting the way individuals utilize technology but the way data centers operate as well.  Data Center Knowledge explains the cloud revolution that is happening in data centers, “It’s important to quickly understand that cloud computing isn’t going anywhere. In fact, the proliferation of cloud computing and various cloud services is only continuing to grow. Recently, Gartner estimated that global spending on IaaS is expected to reach almost US$16.5 billion in 2015, an increase of 32.8 percent from 2014, with a compound annual growth rate (CAGR) from 2014 to 2019 forecast at 29.1 percent. There is a very real digital shift happening for organizations and users utilizing cloud services. The digitization of the modern business has created a new type of reliance around cloud computing. However, it’s important to understand that the cloud isn’t just one platform. Rather, it’s an integrated system of various hardware, software and logical links working together to bring data to the end-user.”

With the changes the cloud revolution is bringing to facilities, it is no surprise that some data centers are still catching up to the technology and all that it entails.  Cloud computing has impacted everything from how we deploy applications, to how we deliver resources, to how we manage users.  It helps connect locations within networks and has greatly impacted the types of infrastructure we choose to deploy.  How a facility utilizes the cloud will impact what kind of UPS and backup generator are used, what kind of cooling is needed, what security measures are deployed, the layout of the data center and much more.  Today, most facilities are implementing the cloud on some level, and many new locations are simply starting out in the cloud.  Data Center Knowledge explains why this is so and why more and more data centers will continue to implement cloud computing going forward, “Already we are seeing entire organizations be born within the cloud. As IT consumerization continues and more devices connect into a cloud service, it’ll be crucial to work with a partner that understands the big cloud picture. When creating your cloud infrastructure, planning around resource not only creates a more robust platform, it’ll also save your organization money. It’s time to better understand resource utilization within your cloud – and how you can align key cloud services with your organization’s goals.”
