Implementing an uninterruptible power supply (UPS) system in a data center of any size comes with challenges and potential pitfalls. While the goal of installing a UPS system is added protection against power failures, choosing the right UPS and implementing it can be fraught with problems. This is true in a facility of any size, but in large locations with large installations the problems compound.
One of the first challenges is choosing the right uninterruptible power supply for a large-scale application. The greater the power demands and the more complicated the infrastructure, the more performance and capacity a UPS system must have. Modular systems can be helpful because they offer scalability as needs change. Because implementing a large-scale UPS system can present challenges, it is best to walk through the entire facility to ensure the proper infrastructure is in place before any system goes live. The more due diligence you do, the less likely there will be errors in choosing the appropriate UPS, components such as batteries, or anything else. Each problem encountered not only wastes time and slows the process down but can also be quite costly. It is also wise to confirm that necessary items, such as a sufficient number of electrical outlets, are in place before you order your backup power system and attempt to implement it, because the last thing you want to do is overload it and create more problems than you solve.

Many data centers with high capacity and big demands may consider implementing a large-scale parallel UPS system for increased redundancy and protection. Through the use of a power distribution unit (PDU) and a communications cable, parallel systems work in tandem to support critical data loads so that, should a problem occur in one system, the parallel system can carry the load in the interim. In addition to systems running in parallel for redundancy, UPS systems can be connected together so that their combined power supports the demand in a team effort of sorts. This cannot be done by combining just any backup power supply systems you can get your hands on; rather, manufacturers build specific systems capable of being configured to work together.
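To make the sizing discussion concrete, here is a minimal sketch of how the capacity math for a modular, parallel deployment might look. The module size, derating factor, and load figures are illustrative assumptions, not vendor specifications.

```python
# Hypothetical sketch: sizing a modular, parallel UPS deployment.
# Module size (module_kw), derating (headroom), and loads below are
# illustrative assumptions only -- real values come from the vendor
# and a proper site assessment.
import math

def modules_needed(it_load_kw, module_kw, headroom=0.8, redundancy=1):
    """Return how many UPS modules cover the load at a given
    derating (headroom), plus N+x redundant modules."""
    usable_per_module = module_kw * headroom
    n = math.ceil(it_load_kw / usable_per_module)  # N modules for the load
    return n + redundancy                          # N+1 by default

# Example: 450 kW of critical load, 100 kW modules, run at 80% capacity.
# 450 / 80 rounds up to 6 modules, plus 1 redundant module = 7.
print(modules_needed(450, 100))  # 7
```

The point of the sketch is simply that redundancy is budgeted on top of capacity: a modular system lets you add one more module for N+1 rather than doubling the whole installation.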
Finally, facilities that succeed when implementing (or decommissioning) anything on a large scale do so not by happenstance but with careful planning and consideration. In the end, the most important thing is to protect uptime and mission-critical information. Make a plan for implementing or decommissioning your data center UPS system, ensure that batteries are functioning properly, make sure everyone is on the same page, and then execute the plan, keeping a backup plan in place in case anything goes wrong. After all, data centers know that redundancy is often the key to success.
Every business in the world can benefit from improving energy efficiency and data centers are certainly no exception. If anything, data centers are the prime example of the importance of a constant effort to improve energy efficiency. Energy usage is very costly and energy waste is bad for the environment so every measure to increase efficiency in a real and sustainable way is a significant improvement. One area to target when improving energy efficiency in just about any facility is the Computer Room Air Conditioner/Computer Room Air Handler (CRAC/CRAH) units. There are a lot of theories and strategies when it comes to proper cooling. The focus always has to be twofold: to improve energy efficiency while still maximizing uptime in a mission critical facility so that it can perform its vital task to the best of its ability. Uptime cannot be sacrificed in the name of efficiency so a way to improve efficiency must work within those parameters.
With technology constantly evolving, the two goals of maximizing efficiency and maximizing uptime do not have to be mutually exclusive. Whether you are making improvements to a legacy center or maximizing efficiency in a new data center build, there are always ways to improve. One of the most effective ways to improve the efficiency of a data center’s CRAC/CRAH is to properly implement hot and cold containment, such as a hot aisle/cold aisle layout. Data Center Knowledge discusses the importance of keeping hot air and cold air separate in a data center, “Air mixing is the enemy of effective cooling. In-row or close-coupled cooling solutions greatly reduce air mixing by closely coupling the IT equipment’s hot air discharge with the CRAC/CRAH’s hot air return and the CRAC/CRAH’s cold air supply with the IT equipment’s inlet. Additionally, close-coupled CRAC/CRAH units have the capability of varying their airflow, thereby balancing their supply CFM commensurate with the CFM requirements of the IT equipment using either temperature and/or pressure as a control. Air mixing can further be reduced by implementing partial hot-aisle containment by deploying air containment curtains and/or doors at the ends of each “hot” aisle.” Additionally, evaluate the ideal temperature for maintaining efficiency without compromising uptime. While the traditional school of thought is to keep a facility as cool as possible, running even a degree warmer may still keep all infrastructure functioning properly while dramatically improving efficiency. Lastly, if improving efficiency in a legacy center, take a hard look at mechanical systems. Upgrading a CRAC/CRAH system is a relatively economical option and may make a dramatic difference in cooling efficiency.
Many CRAC systems can even be retrofitted with Variable Frequency Drives (VFDs), which allow air conditioning units to run at different energy loads depending on programming predetermined for the data center. This conserves energy during low-demand times while still meeting energy requirements at all times. While there are many more ways to improve energy efficiency for CRAC/CRAH systems, a data center’s specific needs will determine the best approach, but any energy efficiency improvement is a step in the right direction.
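The reason VFDs save so much is worth spelling out: by the fan affinity laws, fan power varies roughly with the cube of fan speed, so even a modest speed reduction during low-demand periods cuts power draw disproportionately. A quick illustrative calculation (the 70% figure is just an example):

```python
# Illustrative arithmetic only: the fan affinity laws say fan power
# scales roughly with the cube of fan speed, which is why a VFD that
# slows fans during low demand saves a disproportionate amount of energy.
def relative_fan_power(speed_fraction):
    """Power draw relative to full speed (affinity-law approximation)."""
    return speed_fraction ** 3

# Running CRAC fans at 70% speed draws roughly a third of full-speed power.
print(round(relative_fan_power(0.7), 3))  # 0.343
```

This is an idealized approximation; real savings depend on motor and drive efficiency at partial load, but the cubic relationship is why variable-speed retrofits pay off so quickly.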
Cloud computing has impacted, and will continue to impact, the technology world as we know it. The cloud is not only greatly impacting the way individuals utilize technology but the way data centers operate as well. Data Center Knowledge explains the cloud revolution that is happening in data centers, “It’s important to quickly understand that cloud computing isn’t going anywhere. In fact, the proliferation of cloud computing and various cloud services is only continuing to grow. Recently, Gartner estimated that global spending on IaaS is expected to reach almost US$16.5 billion in 2015, an increase of 32.8 percent from 2014, with a compound annual growth rate (CAGR) from 2014 to 2019 forecast at 29.1 percent. There is a very real digital shift happening for organizations and users utilizing cloud services. The digitization of the modern business has created a new type of reliance around cloud computing. However, it’s important to understand that the cloud isn’t just one platform. Rather, it’s an integrated system of various hardware, software and logical links working together to bring data to the end-user.”
With the new changes that are occurring in facilities from the cloud revolution it is no surprise that some data centers are still catching up to the technology and all that it entails. Cloud computing has impacted everything from how we deploy applications, to how we deliver resources, to how we control users. It helps to connect locations within networks and has greatly impacted the types of infrastructure we choose to deploy. How a facility utilizes the cloud will impact what kind of UPS and back-up generator are utilized, what kind of cooling is needed, what security measures are deployed, the layout of a data center and much more. Today, most facilities are implementing the cloud on some level and many new locations are simply starting out utilizing the cloud. Data Center Knowledge explains why this is so and why more and more data centers will continue to implement cloud computing going forward, “Already we are seeing entire organizations be born within the cloud. As IT consumerization continues and more devices connect into a cloud service, it’ll be crucial to work with a partner that understands the big cloud picture. When creating your cloud infrastructure, planning around resource not only creates a more robust platform, it’ll also save your organization money. It’s time to better understand resource utilization within your cloud – and how you can align key cloud services with your organization’s goals.”
Maintenance is the key to extending the life of just about anything, and data center UPS batteries are no exception. When UPS batteries are neglected, what could be small, easily fixed, or completely preventable issues grow unnoticed until, one day, they become a real problem. Routine monitoring, coupled with appropriate maintenance, will not only help maximize uptime in your facility but can help prevent major disasters. How reliable your data center is depends on your UPS system, and how reliable that UPS system is depends on the quality of its batteries and how well they have been maintained. Without proper upkeep, a troublesome domino effect begins that eventually becomes a major problem that routine monitoring and upkeep would likely have prevented.
Downtime is both frustrating and costly, with even seconds of downtime posing a major concern. Data Center Knowledge points out why UPS battery maintenance should be made a priority, “Data center surveys have shown that anywhere between 65 percent to as high as 85 percent of unplanned downtime can be attributed to battery failure of some kind. This means your facility is almost certainly at the mercy of a room full of what basically remains 1800’s technology. It only takes a single unit failure within a string of lead acid batteries to make that entire string useless so it follows that even several strings of batteries need only have a few bad units scattered throughout it to render the entire emergency power system useless.” The tricky part of data center battery upkeep is that there is no one-size-fits-all approach. Rather, each data center must assess its specific infrastructure, demands, and goals and develop a proper monitoring plan. This can include automated monitoring with alarms but should not rely solely on them. An effective plan will also include routine, in-person visual battery inspections to ensure that you have the best information with which to make decisions. When deciding what should be concerning and prompt action with a battery, the individual battery and the specific manufacturer’s instructions must be factored in. Data Center Knowledge elaborates on this concept, “It would be much simpler if every battery had one simple set of parameters however the reality is that these parameters vary from battery manufacturer and battery model. There are many considerations, from simple float voltage to the temperature compensated settings of the rectifier being used. The alarms can signal issues with string voltage, unit voltage, impedance, ambient temperature, unit temperature, ripple and record discharge. 
These alarm limits have different priorities, ranging from lower priority maintenance pointers to more immediate critical issues. So which are the important ones? All of them. If unsure, talk to the battery manufacturer about what limits to set alarms to.” Through the implementation of an effective monitoring system, as well as proper routine maintenance, the life of a UPS battery in a data center can be dramatically extended, saving money and protecting a data center from downtime.
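The monitoring logic described above can be sketched very simply: take each reading, compare it against the limits the battery manufacturer specifies, and raise an alarm for anything out of range. The parameter names and limit values below are placeholders, not real manufacturer figures, since, as the article notes, the correct limits vary by battery model.

```python
# Hypothetical monitoring sketch: checking battery readings against
# manufacturer-specified alarm limits. The parameters and the (low, high)
# windows below are placeholders for illustration only -- real limits
# come from the battery manufacturer.
ALARM_LIMITS = {
    "unit_voltage_v": (12.2, 13.8),          # float voltage window
    "impedance_pct_of_baseline": (0, 125),   # alarm above 125% of baseline
    "unit_temperature_c": (15, 30),
}

def check_battery(readings):
    """Return a list of alarm messages for out-of-range readings."""
    alarms = []
    for name, value in readings.items():
        low, high = ALARM_LIMITS[name]
        if not (low <= value <= high):
            alarms.append(f"{name}={value} outside [{low}, {high}]")
    return alarms

# Example: only the low unit voltage should trip an alarm here.
print(check_battery({"unit_voltage_v": 11.9,
                     "impedance_pct_of_baseline": 110,
                     "unit_temperature_c": 27}))
```

Automated checks like this complement, but do not replace, the in-person visual inspections the article recommends.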
When we discuss data centers, it seems that conversations about UPS (uninterruptible power supply) systems and generators go hand in hand. Redundant power with a reliable UPS battery or backup generator has long been the standard approach to ensuring that a facility can maximize uptime and that a mission critical facility can fulfill the demand placed on it. As often happens with data centers, many facility managers keep an eye on larger locations from companies like Google or Yahoo, ready to take cues from what works at that scale to improve energy efficiency and make technological advancements. Yahoo has moved toward centers that do not use such traditional protection but, rather, a network of facilities that can absorb the load should an outage occur.
This shift toward eliminating the use of uninterruptible power supplies and backup generators is unprecedented and may significantly improve energy efficiency if it can be successfully implemented. Additionally, it will reduce the need for routine UPS maintenance and UPS batteries. Data Center Knowledge explains how a company like Yahoo is making forward-thinking changes to improve efficiency, “But Yahoo is considering going without UPS and generators for some future data center projects. It’s not alone in advocating design choices that represent a huge departure from current practice. A number of data center designers are urging clients to consider limiting UPS support to loads that are genuinely critical. Scott Noteboom, the head of data center operations at Yahoo, said in his keynote at last month’s 7×24 Exchange conference that the Internet portal is exploring scenarios in which it would build data centers without generators or UPS, and use its network to route around any power outages that occur at those facilities. That’s a strategy that only the largest data center providers can contemplate, as it requires multiple data centers in major network capacity. Google has pursued a similar strategy during maintenance on some of its data centers, shifting capacity to other facilities… Data center designer KC Mares of Megawatt Consulting says he urges customers to take a hard look at which IT functions are truly “always on” essential, and which systems can afford interruptions.” While this may not be a realistic approach for smaller operations without a massive network, perhaps some locations can still dramatically reduce their reliance on a UPS and backup generator.
Data Center Knowledge also points out that if complete elimination is not an option, many pioneers in the industry are shifting toward a reduction, and that despite a significant upfront cost the shift will more than pay for itself over time, “Yahoo plans to invest at least $500 million in further expanding its data center infrastructure and shifting its operations to newer, highly-efficient infrastructure. The company is also preparing a new data center design for a series of next-generation facilities it plans to build in 2012 and beyond, in which much of the infrastructure will operate with minimal UPS and generator support. “We are in essence rewiring the entire infrastructure of Yahoo,” said Scott Noteboom, the head of data center operations at Yahoo. “We’ve gained approval to invest half a billion dollars to build new data centers. We’ll be migrating the entire footprint of Yahoo to these more efficient facilities”… “All this efficiency is cool,” said Noteboom, who announced the expansion Thursday at the DataCenterDynamics New York conference. “But we’re saving our company $200 million a year. At our scale, these (new data centers) have a three-year payback.” What is clear is that more and more facilities are able to shift toward reduced, or completely eliminated, use of UPS and generators.
In a world where technology moves at a lightning pace and everything is becoming more advanced, sometimes it is best to get back to basics. Color coding could not possibly sound more “old school” or basic, but that does not mean it isn’t an invaluable tool. Downtime is the sworn enemy of every data center because it is frustrating and costly but, unfortunately, many data centers do experience downtime each year. While focusing on improvements like an uninterruptible power supply will help prevent downtime, the fact of the matter is that a huge percentage of downtime is the result of human error. Preventable human error. By color coding your PDU, you can help prevent human error and maximize uptime for the benefit of your data center and business.
Start by using separate colors to identify your A and B power feeds. Doing this makes working on your power supplies and in your racks easier, not only for you but for technicians as well. It will be a significant time saver by eliminating confusion and will also help prevent needless outages that lead to downtime. In addition to clearly distinguishing the power supplies through color coding, you can also distinguish voltages. For those looking to further optimize the use of color, facilities can even opt to “white out” their server racks and distribution cabinets; because white is reflective, an all-white layout can reduce a data center’s lighting needs dramatically. Raritan offers a variety of color-coded products to make the switch as easy as possible.
How big of a difference will color coding really make? Just imagine giving instructions to a technician and telling them to look for the red cord in the rack. It is that simple. No longer will they need to search and cross their fingers that they have selected the correct cord, only to realize that – oops – it was actually the wrong cord and caused an unexpected outage. Color coding eliminates the guessing game, which may not sound like much, but Data Center Knowledge points out just how significant it really is, “More than 90% of the data center operators responding to a recent survey reported that their data center had at least one unplanned outage in the past two years (Ponemon Institute)…The overwhelming majority of outages were attributed to human error.” By color coding your PDU, routine maintenance and monitoring is no longer daunting and maintaining uptime becomes a much easier undertaking, which is something every data center can appreciate.
Many businesses need data center space but do not have the resources or desire to run an entire facility themselves. These colocation customers rent space from data centers with managers and maintenance capabilities in place. With this service there is an expectation of security, proper cooling, efficiency, maintained uptime, and much more. These things are certainly written into a contract, but are they really happening? In an era of so much transparency, many customers want to know if their colocation facility is truly providing all of the services it claims. It can be difficult for customers to grasp or quantify what is happening if there is no proof or transparency. But can colocation facilities actually provide that transparency for customers and, if so, what does it look like?
Short of physically being in the data center, it can be tricky for colocation customers to have a good grasp of what is actually happening on site. But this is beginning to change with the advent of monitoring that is smart, intuitive, and capable of being remotely accessed. Such monitoring not only tracks what is happening but can put the information it collects into charts and graphs that give customers a clear picture. With today’s monitoring, customers can see not only what happened in previous months but also real-time information. Customers who want complete transparency should seek out a colocation facility that is capable of, and comfortable with, allowing them access to this sort of information.

What data centers must understand is that providing this level of transparency means they must also truly deliver on what they promise their clients. What customers must understand is that the technology is still catching up to modern data centers and today’s customers’ desires for transparency. When renting colocation space that houses multiple clients, it can be tricky to provide each customer with truly unique information and statistics that pertain to their business alone. But developers realize that, and the technology is getting there because the need exists. While every data center is different, most modern facilities should have a DCIM plan in place that includes extensive monitoring. This is not only because data center managers need it to properly maintain their facility and maximize uptime but because without it they cannot truly provide transparency to customers. When choosing a data center, it is imperative that customers discuss monitoring options with their managers. If your data center does not currently have these capabilities in place, discuss what the options will be going forward.
It is beneficial for everyone involved that transparency is the name of the game so customers should seek out locations that prioritize transparency and colocation facilities should make every effort to move in the direction of transparency to stay at the forefront of technology.
Any business or office must periodically assess itself from top to bottom in order to continue to succeed. This is done in different ways depending on the type and size of business, but regardless of the fine details, every business must do it. A business that is not assessing itself is not learning where potential problems lie so they can be prevented, and it is not maximizing efficiency or profit. Success begins with a full and complete picture of even the minor aspects of a business, and a data center is no different. Data center managers who obtain a full picture of what is happening in their facility through regular assessments can see where energy is being used efficiently, where it is being wasted, and where potential threats to sensitive information and uptime exist, and can then make fully informed decisions about adjustments. By doing so on a regular basis, no time or money is wasted on things that are not working or are inefficient.
With regular assessments, data center managers can see where things stand with capacity, efficiency, and storage needs, which is important because many data centers find themselves in predicaments that could have been avoided. How often do we hear about a data center running out of room and having to move suddenly? A data center move is a major undertaking, and the more time there is to carefully execute one, the better. Are you about to roll out something new, like cloud storage or virtualization? An assessment must be completed beforehand to ensure the rollout will be successful and not create problems. Additionally, how many times have we heard that power needs exceeded the power supply’s ability? It is exactly these scenarios that remind us that many problems can be avoided with assessments.
It can be easy to talk about the need for assessments, but the true challenge is implementing a consistent schedule. Will they be conducted from within, or will an outside party be hired to conduct them? How often will they be completed? What will be assessed? All of these questions must be answered, a precise strategy implemented, and the plan communicated to staff so that everyone is on the same page and assessments provide real, accurate information. Physical infrastructure must be assessed because this is so often where major problems arise. Whether there is inadequate backup power supply, an inefficient PUE, infrastructure starting to outgrow existing space, or infrastructure that could actually be reduced while improving efficiency, a current assessment of infrastructure will provide a significant amount of information about a data center. Because everything is connected and interrelated, it is important to assess the facility in its entirety to ensure that nothing is missed and nothing accidentally impacts another aspect of the data center. Once an appropriate assessment plan is determined, a schedule should be set and executed regularly and consistently, which will help a data center remain efficient and effective in the future.
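PUE (Power Usage Effectiveness), mentioned above as one of the things an assessment should examine, is a simple ratio: total facility power divided by the power actually delivered to IT equipment. A quick sketch with illustrative numbers:

```python
# PUE (Power Usage Effectiveness): total facility power divided by
# IT equipment power. The kilowatt figures below are illustrative only.
def pue(total_facility_kw, it_equipment_kw):
    """Lower is better; 1.0 would mean every watt goes to IT gear."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1500 kW overall to power 1000 kW of IT load:
print(pue(1500, 1000))  # 1.5
```

Tracking this ratio assessment over assessment shows whether cooling and power-distribution overhead is trending in the right direction.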
Sometimes it may seem like all we talk about is backup power supply, but there is a reason: when it comes to data centers, a reliable and effective backup power supply can make or break a facility. Yes, it is that serious. Downtime, even for a short time, can create a nightmare for clients and be very costly. Because downtime is often preventable and the result of human error, it points toward a need for better maintenance and management of backup power supplies. Redundancy is key, and with knowledge of your location’s specific needs you can determine how best to provide redundancy 24/7. Slowly but surely, more and more data centers are recognizing that servers should have batteries built in.
Data centers are forever on a quest to increase energy efficiency while maximizing uptime, and it seems that servers with batteries kill two birds with one stone. Each server can be equipped with a built-in battery pack that eliminates the need for big uninterruptible power supplies that are far less efficient. Built-in server batteries can also help reduce conversion losses. Energy losses during power conversion can create a number of problems, and built-in batteries help by eliminating the risk. Data Center Knowledge explains how savings can be achieved in a variety of ways and efficiency dramatically improved, “Facebook says it expects to gain similar efficiency benefits, reducing the energy loss during power distribution from the current 35 percent to about 15 percent. The company said it can lower its power bill by simplifying how electricity travels to its servers. In most data centers, a UPS system stands between the utility power grid and the data center equipment. When there is a grid outage, the UPS taps a large bank of batteries (or in some cases, a flywheel) for “ride-through” power until the generator can be started. The AC power from the grid is converted into DC power to charge the batteries, and then converted back to AC for the equipment… yield enormous savings on equipment purchases. “You no longer need to buy a traditional UPS and PDU system,” said Michael. “On the capex side, it’s a huge win. This can save millions of dollars that you no longer have to spend on a UPS system. We hope to see the industry move to a model like this.” Facebook’s enormous growth is clearly giving it leverage with its vendors, which are working with the company as it customizes its equipment. An example: typical servers use 208 volt power to the servers. 
Most power supplies can also support the 230 volt and 240 volt configurations now being implemented to capture extra efficiency.” Facebook and Google have actively implemented servers with built-in batteries and with such enormous data centers proving how effective it can be it has other data centers taking notice and making the shift.
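The back-of-the-envelope arithmetic behind the quoted 35-percent-to-15-percent figure is worth seeing: cutting distribution and conversion losses means far more of every kilowatt drawn from the grid actually reaches the servers. The 1000 kW input below is an illustrative number.

```python
# Back-of-the-envelope arithmetic based on the loss figures quoted above
# (35% traditional vs. roughly 15% with simplified distribution).
# The 1000 kW input figure is illustrative only.
def delivered_power(input_kw, loss_fraction):
    """Power reaching IT equipment after distribution/conversion losses."""
    return input_kw * (1 - loss_fraction)

print(delivered_power(1000, 0.35))  # 650.0 kW reaches IT at 35% loss
print(delivered_power(1000, 0.15))  # 850.0 kW reaches IT at 15% loss
```

For the same grid draw, that is roughly 200 kW more usable power per megawatt, which is where the operating-cost savings described in the article come from.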
Technology is never static; it is in a constant state of evolution. Because of this, data centers must be vigilant about monitoring and maintaining their infrastructure so they can see when updates are needed and their infrastructure remains compatible and efficient with current technology. Every year, it seems, a few common upgrades become popular in data centers, and 2015 was no exception. Analyzing which infrastructure trends were popular among data centers in 2015 helps managers formulate a picture and a plan going into 2016, so their data center can get the updates it may need and remain at the forefront of technology.
An increased focus on energy efficiency led to a popular infrastructure trend in 2015: in-row cooling. Traditional perimeter computer room cooling may get the job done, but probably not in the most efficient way. Rather than turning the temperature way down and hoping cool air circulates to where it is needed, focused in-row cooling achieves the same effect more efficiently because it is directed at what specifically needs cooling. Another infrastructure trend has been high-density data with an increased use of cloud storage. As more facilities strive to operate within their existing space and be more efficient, a big shift toward cloud storage has occurred because it is one of the easiest and most efficient ways to add storage without dramatically increasing space or cooling needs. Continuing the efficient, eco-friendly theme of infrastructure trends is a shift toward double-conversion, multi-mode UPSs. These are far more efficient than even traditional double-conversion UPSs, so many data centers wanting the best possible backup power supply (and they should!) are opting to update their infrastructure with more eco-friendly and efficient UPSs. Additionally, a trend toward maximizing uptime was seen in 2015. After facilities experienced so many natural disasters, as well as man-made problems, many data centers are looking at their infrastructure to see what they can do to maximize uptime. It is critical to have infrastructure that can be continuously monitored. This allows managers to have the most current information possible about their data center so they can make the most informed decisions, not only about emergencies but about the maintenance that can help prevent them.
All of these trends have been pretty common among data centers in 2015 and for good reason – all of these will help maximize uptime, improve efficiency, lower expenses, protect data and keep customers happy.