Data Center RTOI

Technology is evolving minute by minute and data centers must work to keep up with the lightning-paced evolution.  We have discussed the Internet of Things (IoT) before – the world is becoming increasingly dependent on the internet, and everyday processes are being digitized for efficiency and savings.  But as more and more of the world becomes digitized, technology advances and data grows, and that data must be effectively and efficiently stored.  Data centers invest in infrastructure, backup power, security and more so that they can adequately store that growing and evolving data, but when things move so quickly, constant monitoring is needed to ensure that data is stored not just properly but safely and efficiently.  Old methods of collecting and analyzing data are archaic and simply not practical.  Analyzing what went wrong after the fact, or realizing something is about to go wrong when there is not enough time to fix the problem, is useless.  And, ultimately, these traditional methods are responsible for a lot of downtime in data centers.  Accurate, actionable information in real time is the only way data centers can effectively operate moving forward.

Data centers are notoriously energy-inefficient, but most data centers today are making efforts to improve and become more energy efficient.  The undertaking is neither simple nor straightforward because every data center is different and has unique needs.  Data centers cannot run at capacity because, should capacity needs change, they will be ill-equipped.  But, at the same time, data centers should not run far beyond what is necessary because that is a waste of energy.  More and more data center managers are realizing the need for Real Time Operational Intelligence (RTOI).  Having access to current, accurate information is the only way to make intelligent and informed decisions about how to best manage the infrastructure of a data center.  What does RTOI look like in a data center? TechTarget provides a brief explanation of what RTOI is in a practical sense, “Real-time operational intelligence (RtOI) is an emerging discipline that allows businesses to intelligently transform vast amounts of operational data into actionable information that is accessible anywhere, anytime and across many devices, including tablets and smartphones. RtOI products turn immense amounts of raw data into simple, actionable knowledge. RtOI pulls together existing infrastructure to manage all of the data that is being pulled from a variety of sources, such as HMI/SCADA, Lab, Historian, MES, and other servers/systems. It then has the ability to organize and connect the data so it’s meaningful to the users. By integrating directly with existing systems and workflows, it can help assets perform better and help workers share more information.”  As more and more people, businesses and data centers utilize the cloud, and the cloud’s complexity continues to change, data management needs change and data centers struggle just to keep pace.

RTOI can greatly reduce waste and improve energy efficiency by helping identify what is in use and what is not, so that things can be turned off strategically for energy savings.  Just think of all of the infrastructure in a data center that is consuming power even though it is not mission critical or even in use. For example, determining which servers are in use and which servers can be, at least temporarily, powered down will yield significant energy savings.
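As a rough illustration of how that determination might be automated, the sketch below flags servers whose recent utilization readings stay below a threshold. The 5% CPU threshold and 24-sample window are illustrative assumptions, not industry-standard values; a real RTOI tool would draw these readings from its monitoring feed.

```python
# Sketch: flag servers that look idle enough to power down temporarily.
# Threshold and window size are hypothetical; tune them for your environment.

def find_powerdown_candidates(utilization, threshold=0.05, window=24):
    """utilization maps server name -> list of recent CPU readings (0.0-1.0).
    Returns servers whose last `window` readings all fall below `threshold`."""
    candidates = []
    for server, readings in utilization.items():
        recent = readings[-window:]
        # Require a full window of data so a new server is never flagged early.
        if len(recent) == window and all(r < threshold for r in recent):
            candidates.append(server)
    return candidates
```

A server with too few samples is deliberately skipped, since powering down on incomplete data would defeat the purpose of real-time intelligence.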

One of the most significant advantages of well-executed RTOI is immediate knowledge of potential threats and the ability to deal with them before they cause downtime.  As we have often discussed, downtime is incredibly costly (costing, on average, thousands of dollars per minute).  No data center wants to experience downtime but, unfortunately, the vast majority will face it at one point or another.  Data centers can significantly reduce their risk of downtime with current, accurate, actionable information about what is happening in the data center.  As we have seen, anticipation of problems can only go so far.  Data centers simply cannot properly manage what they do not see or have knowledge about.  That is where RTOI comes in.

RTOI not only aggregates data but measures it, tracks it, and, if well-executed, puts it in easy-to-understand terms and statistics so that you can use the information to make informed decisions as well as to properly manage assets going forward.  RTOI can assist data centers in improving capacity planning, anticipating asset lifecycles and planning maintenance accordingly, continuously meeting regulatory compliance, optimizing energy efficiency and more.

Planning for data center capacity is far easier at the building stage but, once a data center has been built and is in operation, anticipating capacity needs, particularly as new technology means big data storage, is very challenging.  In fact, it is one of the biggest challenges data centers face today.  Panduit explains why capacity management is such a challenge in data centers, “Proactive capacity management ensures optimal availability of four critical data center resources: rack space, power, cooling and network connectivity. All four of these must be in balance for the data center to function most efficiently in terms of operations, resources and associated costs. Putting in place a holistic capacity plan prior to building a data center is a best practice that goes far to ensure optimal operations. Unfortunately, once the data center is in operation, it is all too common for it to fall out of balance over time due to organic growth and ad hoc decisions on factors like power, cooling or network management, or equipment selection and placement. The result is inefficiency and in the worst-case scenario, data center downtime. For example, carrying out asset moves, adds and changes (MACs) without full insight into the impact of asset power consumption, heat dissipation and network connectivity changes can create an imbalance that can seriously compromise the data center’s overall resilience and, in turn, its stability and uptime…Leveraging real-time infrastructure data and analytics provided by DCIM software helps maximize capacity utilization (whether for a greenfield or existing data center) and reduce fragmentation, saving the cost of retrofitting a data center or building a new one. Automating data collection via sensors and instrumentation throughout the data center generates positive return on investment (ROI) when combined with DCIM software to yield insights for better decision making.”

With accurate information in real time you can manage capacity needs and, at a moment’s notice, add capacity so that there are no problems.  Additionally, that kind of historical information is useful for predicting the need for data center expansion going forward.  For example, data centers often have orphan servers that sit doing nothing but collecting dust and consuming resources like cooling and power.  Without careful and accurate management, these orphan servers could sit like this for weeks, months or even years, wasting resources that could be better allocated.  With real-time statistics about what exactly is going on in your data center, you can find these orphan servers and clean them out, freeing up capacity for other infrastructure.  In fact, carefully managing your data center’s capacity needs and more accurately anticipating future needs can mean saving millions of dollars in the long run.

DCIM and RTOI go hand in hand.  Without a proper plan for data center infrastructure management, and sophisticated monitoring software, RTOI is not achievable.  DCIM tools are necessary to measure, monitor, and manage data center operations including energy consumption and all IT equipment as well as the facility infrastructure.  Fortunately, there are sophisticated DCIM software products available that will track detailed information all the way down to the rack level so that monitoring is made easy, even remotely. As mentioned, it is critical to leave behind old and archaic forms of DCIM; with them, there is simply no way to really keep up.  Data centers, regardless of size, must focus on real-time operational intelligence as a means of maintaining accuracy. TechTarget explains why it is critical to focus on RTOI as a way of staying ahead of potential problems, “Taking a new big data approach to IT analytics can provide insights not readily achievable with traditional monitoring and management tools, Volk said…For example, particularly with cloud resources, it can be difficult to anticipate how applications and data movement will affect each other. Cloud Physics allows cross-checking of logs and other indicators in real time to achieve that. This new approach is “leading edge, not bleeding edge,” Volk said. Its value to an organization will depend on the maturity and complexity of a given data center. Small and medium-sized businesses and organizations without much complexity will benefit, he said, “but companies with large and heterogeneous data centers will benefit even more.”  RTOI helps data centers provide better service to their customers, minimize downtime, improve efficiency, protect their reputation, and, ultimately, save money through vastly improved operations.

Posted in Cloud Computing, data center equipment, Data Center Infrastructure Management, DCIM, Internet of Things, Mission Critical Industry

Data Center Business Continuity

Whether you operate a data center or any other business, business continuity is incredibly important.  We all think we are immune to disaster but the reality is, if you have not formed a business continuity plan for disasters, you are leaving your data center at severe risk.  Imagine what it would be like if a disaster struck (flood, fire, etc.) and you could not get into your data center for a few hours – problematic, right?  What if that disaster was really bad and you could not get into your data center for a few days or weeks – a huge problem. Business cannot come to a screeching halt, so a strategy for maintaining business continuity is a must. A strategically formed, well-thought-through business continuity plan should be a part of any data center’s disaster recovery program.  A disaster recovery plan will be the big umbrella under which we will talk about business continuity because the two are inextricably related.  This is because disaster recovery focuses heavily on data recovery and management but, beyond maintaining and protecting data in the event of a disaster, a data center and the businesses it serves must be able to continue to meet their most basic objectives.  During a disaster a data center may experience downtime in which all business operations come to a halt.  This is not a small problem – downtime may cost as much as $7,900 per minute.  A disaster recovery plan, along with a business continuity plan, will help a data center reduce downtime in the event of a disaster as well as operate continuously to meet business objectives.

To formulate a business continuity plan we must first outline what makes one successful.  A data center’s business continuity plan will function as a roadmap.  If a disaster strikes, you will hopefully be able to find the type of disaster in your business continuity plan and then begin following the “map” to the solution and restore your data center to business as usual. First and foremost, a proper business continuity plan will focus on what can be done to prevent disasters so that business continuity is never interrupted in the first place. Data centers must consider their unique needs because there is no such thing as a generic data center business continuity plan – it would never work.  Data centers must identify and assess all mission critical assets and risks.  Once they have been identified it will be far easier to formulate a business continuity plan with specific goals in mind.  You can prioritize your most problematic risks by focusing on the risk they pose to mission critical assets. In considering individual needs it is imperative that data centers determine which applications and processes are mission critical. For example, can your mission critical systems be maintained remotely? Additionally, in today’s data center world where security is a top concern, maintaining data security should be an important part of your business continuity plan.

Disaster prevention is a central part of your data center’s business continuity plan.  Identifying business continuity goals and potential problem areas will help you lay out a proper disaster prevention plan.  Depending on your unique data center, certain measures may be beneficial such as increased inspections of infrastructure, better surveillance, enhanced security in various areas including data center grounds security and rack-based security, increased redundancy, and more.  Think in terms of real problems and real consequences; be specific so that you can make specific business continuity plans and strategies.

Some operators may want to relocate their data center in the event of an incredibly large disaster, but the logistics of this are far from simple.  Relocating for a disaster safely, rapidly, and securely is no simple task.  And, beyond that, it is expensive, which is why many data centers – even large enterprise data centers – do not do this.  To do this properly as part of a business continuity plan, a detailed data center migration plan must accompany the business continuity plan.  Some enterprises may want to utilize regionally diverse data centers that mirror each other but this is also expensive and exceptionally complex to implement – though it can be very effective at maintaining uptime, maximizing security, and optimizing business continuity.

As mentioned, redundancy is an important part of maximizing uptime and maintaining business continuity in a data center. As part of your data center’s business continuity plan, you may want to implement load balancing and link load balancing.  Server load balancing and link load balancing are two strategies that may be used to help prevent the loss of data from an overload or outage in a data center. Continuity Central Archive explains how these two strategies can be used in data centers, “Server load balancing ensures application availability, facilitates tighter application integration, and intelligently and adaptively load balances user traffic based on a suite of application metrics and health checks. It also load balances IPS/IDS devices and composite IP-based applications, and distributes HTTP(S) traffic based on headers and SSL certificate fields. The primary function of server load balancing is to provide availability for applications running within traditional data centers, public cloud infrastructure or a private cloud. Should a server or other networking device become over-utilized or cease to function properly, the server load balancer redistributes traffic to healthy systems based on IT-defined parameters to ensure a seamless experience for end users…Link load balancing addresses WAN reliability by directing traffic to the best performing links. Should one link become inaccessible due to a bottleneck or outage, the ADC takes that link out of service, automatically directing traffic to other functioning links. Where server load balancing provides availability and business continuity for applications and infrastructure running within the data center, link load balancing ensures uninterrupted connectivity from the data center to the Internet and telecommunications networks. Link load balancing may be used to send traffic over whichever link or links prove to be most cost-effective for a given time period. 
What’s more, link load balancing may be used to direct select user groups and applications to specific links to ensure bandwidth and availability for business critical functions.”
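The server-load-balancing behavior described above can be reduced to a small sketch: traffic is distributed round-robin across backends, and any backend failing its health check is skipped. The backend names and the health-check callable here are hypothetical stand-ins for whatever an ADC would actually probe.

```python
# Sketch of health-checked round-robin server load balancing.
# Backend identifiers and the health check are illustrative assumptions.

import itertools

class LoadBalancer:
    def __init__(self, backends, is_healthy):
        self.backends = backends
        self.is_healthy = is_healthy          # callable: backend -> bool
        self._cycle = itertools.cycle(backends)

    def pick(self):
        """Return the next healthy backend, or None if all are down."""
        for _ in range(len(self.backends)):
            backend = next(self._cycle)
            if self.is_healthy(backend):
                return backend
        return None                           # every backend failed its check
```

A production load balancer layers much more on top (session persistence, weighted distribution, application-level metrics), but the core availability mechanism – redistributing traffic away from unhealthy systems – is exactly this loop.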

Data centers are also utilizing the cloud for their business continuity plans because it is cost-efficient and highly effective.  The cloud platform is exceptionally effective for business continuity, particularly as data centers move more and more towards virtualization.  A cloud service with a proper SLA (service level agreement) can ensure that data will be continuously saved and protected even in the event of a disaster.  This is where identifying mission critical applications and information is important.  The entirety of the data center’s workload does not need to be recovered in an instant, only that which has been determined mission critical.

In addition to the cloud, many data centers opt to implement image-based backup for continuity.  Data Center Knowledge provides a helpful description of what image-based backup is and how it can be used uniquely in data centers, “Hybrid, image-based backup is at the core of successful business continuity solutions today. A hybrid solution combines the quick restoration benefits of local backup with the off-site, economic advantages of a cloud resource. Data is first copied and stored on a local device, so that enterprises can do fast and easy restores from that device. At the same time, the data is replicated in the cloud, creating off-site copies that don’t have to be moved physically. Channel partners are also helping enterprises make a critical shift from file-based backup to image-based. With file-based backup, the IT team chooses which files to back up, and only those files are saved. If the team overlooks an essential file and a disaster occurs, that file is gone. With image-based backup, the enterprise can capture an image of the data in its environment. You can get exact replications of what is stored on a server — including the operating system, configurations and settings, and preferences. Make sure to look for a solution that automatically saves each image-based backup as a virtual machine disk (VMDK), both in the local device and the cloud. This will ensure a faster virtualization process.”
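The hybrid flow described above – local copy first for fast restores, then an off-site replica – can be sketched in a few lines. The paths are placeholders and the upload callable stands in for a real cloud/object-storage client; this is an illustration of the data flow, not a backup product.

```python
# Minimal sketch of hybrid (local + off-site) backup replication.
# `upload_to_cloud` is a hypothetical callable, not a real cloud API.

import shutil
from pathlib import Path

def hybrid_backup(image_path, local_dir, upload_to_cloud):
    """Copy a backup image to a local device, then hand it to an
    off-site replicator. Returns the path of the local copy."""
    local_dir = Path(local_dir)
    local_dir.mkdir(parents=True, exist_ok=True)
    local_copy = local_dir / Path(image_path).name
    shutil.copy2(image_path, local_copy)   # fast local restore point
    upload_to_cloud(local_copy)            # off-site copy, no physical media
    return local_copy
```

The ordering matters: the local copy lands first so a restore is possible even while the slower off-site replication is still in flight.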

While not every data center will experience a “major” disaster where they cannot get into their facility for weeks, many data centers will experience some type of disaster.  And, as mentioned, mere minutes can cost tens of thousands of dollars.  Beyond the bottom line, the inability to continuously maintain data center business may damage your reputation irreparably.  An effective business continuity plan is capable of pivoting around both people and processes depending on the specific circumstances.  Rapidly restoring data and operations is the goal and data centers should take that goal and work backwards from there to determine the best path to maintaining business continuity.

Posted in Back-up Power Industry, Cloud Computing, Computer Room Design, DCIM, Uninterruptible Power Supply

Controlling Rack Access for Data Center Security

Stringent security protocols are one of the most important aspects of properly running any data center.  With constant, round-the-clock advancements in technology, the focus of security protocols is often on things like cloud/cyber security, particularly because there have been many significant security breaches recently.  Cyber security is certainly important and nothing to ignore, but it is also important not to forget about physical security.  To provide the optimal and industry-acceptable level of security, data centers must provide security on multiple levels.  This will help dramatically reduce the risk of a security breach, allow data centers to remain compliant with certain industry regulations, and will provide peace of mind to customers that everything is being done to protect data integrity.  Ensuring proper physical security compliance will help data centers avoid costly data breaches, and the resulting penalties that may arise as well.

So often, physical security efforts are focused on access to data center grounds and to the facility itself.  These efforts, while valuable and necessary, are not where physical security measures should stop.  Once inside the data center facility itself there should not be unrestricted access to server racks.  A wide variety of individuals must pass through a data center on a daily basis, including internal engineers, external engineers, data center staff, cleaning staff and more.  Unfortunately, many data breaches are actually “inside jobs” and therefore security at the rack level is vitally important.

Colocation data centers must be particularly vigilant with rack level security because they often house multiple businesses’ data within the same facility, and some of those businesses may even be in competition.  It may sound like there is a simple solution – locked doors or cages for server racks – right?  Unfortunately, wrong.  Traditional locks can only be so complex and, if a threat is able to gain access to data center grounds or get inside a facility, they can likely defeat those locks.  To meet industry standards and comply with federal regulations, security simply must go beyond that, as Schneider Electric points out, “Further increasing the pressure on those managing IT loads in such locations, regulations concerning the way data is stored and accessed extends beyond cyber credentialing, and into the physical world. In the US, where electronic health records (EHR) have become heavily incentivized, the Healthcare Insurance Portability & Accountability Act (HIPAA) demands safeguards, including “physical measures, policies, and procedures to protect a covered entity’s electronic information systems and related buildings and equipment, from natural and environmental hazards, and unauthorized intrusion.” Similar measures are also demanded, e.g., by the Sarbanes-Oxley Act and Payment Card Industry Data Security Standard (PCI DSS) for finance and credit card encryption IT equipment. In addition to building and room security, it has become vital to control rack-level security so you know who is accessing your IT cabinets and what they’re doing there.”

For best security, custom rack enclosures provide peace of mind because they are far harder to access than standard, “off the shelf” enclosures.  Additionally, many data centers are opting for biometric security, pin pads (where codes are changed frequently) or keycards.  Biometric locks do not use traditional keys; rather, they scan things like fingerprints or handprints. Biometric locking systems have grown significantly in popularity because they provide truly unique access.  Keycards can get lost and pin codes can be shared, but a fingerprint or handprint cannot be easily shared or duplicated, so it is a far more sophisticated security measure. Many worry about the consistency, accuracy and performance of biometric security but it has become incredibly advanced, as Data Center Knowledge notes, “The time taken to verify a fingerprint at the scanner is now down to a second. This is because the templates – which can be updated / polled to / from a centralized server on a regular basis – are maintained locally, and the verification process can take place whether or not a network connection is present. The enrollment process is similarly enhanced with a typical enroll involving three sample fingerprints being taken on a terminal, with the user then able to authenticate themselves from that point onwards. This level of efficiency, cost effectiveness and all round reliability of fingerprint security means that a growing number of clients are now securing their IT resources at the cabinet level and integrating the data feed from the scanner to other forms of security such as video surveillance.”

These electronic locks that restrict rack access provide multiple layers of enhanced security.  For example, when a user scans a fingerprint or inputs a code, a central server validates authenticity and then allows or restricts access. An additional advantage of using this method is that the electronic system will automatically generate a log that details who has accessed what, and when.  This electronic tracking is far more convenient, as well as far more accurate, than manual tracking of access.  These electronic systems can be directly connected to data center facility security systems so that, should there be a problem, systems can go into automatic lockdown and alarms can be sounded in an instant.  Also, there are video surveillance options that come along with electronic-based security and monitoring.  Video surveillance can be programmed to turn on when biometric scanning is being performed, when pin codes are being entered, when security cards are being swiped and more.  Additionally, video surveillance can be programmed so that, when someone accesses a rack, the system automatically captures an image of that person and sends it to the data center manager.  The data center manager can then choose to watch the surveillance as it happens for an enhanced level of security. This level of security also may reduce the cost and need for a physical security guard, particularly when each rack is monitored by video surveillance. With this sort of security implemented at the rack level, there will be a detailed log of who is accessing what server and when, and should a problem arise, it will be immediately apparent at which server there has been a security breach.  Further, advanced electronic locking systems can be pre-set to only allow access at certain times.  For example, if there should never be access “after hours” to certain racks, they can be set to only allow access during pre-determined times.
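The core of that workflow – validate a credential against per-rack permissions and an allowed time window, and log every attempt – can be sketched as follows. The user IDs, rack names, and the 07:00–19:00 window are invented for illustration; a real system would back this with the central server and surveillance hooks described above.

```python
# Sketch of rack-level electronic access control with time windows and an
# automatic access log. All identifiers and hours are hypothetical examples.

from datetime import datetime

class RackAccessController:
    def __init__(self, permissions, allowed_hours=(7, 19)):
        self.permissions = permissions        # user id -> set of rack ids
        self.allowed_hours = allowed_hours    # e.g. access only 07:00-19:00
        self.log = []                         # every attempt, granted or not

    def request_access(self, user, rack, when=None):
        when = when or datetime.now()
        in_window = self.allowed_hours[0] <= when.hour < self.allowed_hours[1]
        granted = in_window and rack in self.permissions.get(user, set())
        # Log denials too: failed attempts are exactly what auditors and
        # breach investigations need to see.
        self.log.append((when.isoformat(), user, rack, granted))
        return granted
```

Note that the log records denied attempts as well as granted ones, which is what makes the audit trail useful for compliance and for pinpointing where a breach was attempted.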

Another advantage of advanced electronic locking mechanisms is that they can be easily and effectively remotely monitored.  Having on-site security staff is beneficial but is not always possible and, as previously discussed, it is advantageous to have multiple levels of security which is why remote monitoring is important.  Many government and industry regulations now have strict security parameters that data centers must remain in compliance with or face strong penalties.  These security standards are set to help protect secure financial, health and other sensitive information and they require multiple levels of security and that includes rack level security.  To not protect rack level security means that many data centers will not be in compliance – a major (and costly!) problem.

While the cost of implementation may seem prohibitive to some, many are now recognizing that the cost of a breach will likely be far higher.  The same level of security used for facility access points should also be used at the rack level when optimizing data center security protocols.  Whether you are retrofitting an existing data center or building a new one, and whether your data center has 1 rack or 100 racks, each rack should be secured separately.  Cyber security is a growing and complex arena, easily grabbing the attention of both the customer and the data center facility manager, but it is critically important that physical security not be neglected.  In an age where many businesses are foregoing their enterprise data center in favor of colocation, colocation providers must be stringent in their protection of their customers’ data – not just for peace of mind and best practices, but to remain compliant with federal regulations.  If you think you are immune to a data breach, IBM Security’s most recent study will not put you at ease: it found that the global risk of a data breach in the next 24 months is 26 percent.  And the cost will not be small!  The average consolidated total cost of a data breach is $4 million.  While the cost to implement state-of-the-art rack level security will not be small, it will continually pay for itself over time and will likely be far less than the cost of a security breach.


Posted in computer room construction, Computer Room Design, Data Center Construction, Data Center Design, data center equipment, Data Center Infrastructure Management, Data Center Security, DCIM

Strategies For Monitoring UPS Batteries & Preventing Failure

Aside from security, maximizing uptime is likely the top priority of just about any data center, regardless of size, industry or any other factors.  Most businesses today run on data and that data is being facilitated by a data center.  Businesses, and their employees and customers, depend on data being available at all times so that business processes are not interrupted.  Every second a data center experiences downtime, their clients experience downtime as well.  Data center managers and personnel are on a constant mission to prevent downtime and they must be vigilant because downtime can occur for a variety of reasons but one has been and remains the #1 threat – UPS battery failure.

A UPS (Uninterruptible Power Supply) is the redundant power supply that is supposed to back up a data center in the event of an energy problem such as a power failure or a catastrophic emergency.  Having an uninterruptible power supply is necessary in any size data center because even the most observant and effective data center managers cannot prevent every power failure, and no battery lasts forever.  The UPS contains a battery that will kick in should the primary power source fail so that a data center (and its clients) can experience continuous operation.  Unfortunately, the very thing that is supposed to provide backup power – the UPS – can sometimes fail as well.  Emerson Network Power conducted a 2016 study to determine the cost of and root causes of unplanned data center outages, “The average total cost per minute of an unplanned outage increased from $5,617 in 2010 to $7,908 in 2013 to $8,851 in this report… The average cost of a data center outage rose from $505,502 in 2010 to $690,204 in 2013 to $740,357 in the latest study. This represents a 38 percent increase in the cost of downtime since the first study in 2010…UPS system failure, including UPS and batteries, is the No. 1 cause of unplanned data center outages, accounting for one-quarter of all such events.”
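A quick back-of-the-envelope check puts those Emerson figures in perspective: dividing the average outage cost by the average per-minute cost implies the average outage in the study lasted roughly an hour and a half.

```python
# Implied average outage length from the 2016 Emerson Network Power figures
# quoted above (average total cost divided by average cost per minute).

avg_outage_cost = 740_357      # USD, average total cost of an outage
cost_per_minute = 8_851        # USD, average cost per minute of downtime

avg_minutes = avg_outage_cost / cost_per_minute
print(round(avg_minutes))      # roughly 84 minutes
```

In other words, at nearly $9,000 per minute, even a "short" outage of this average length approaches three-quarters of a million dollars.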


Batteries lose capacity as they age justifying the need for a preventive maintenance program. Image Via: Emerson Network Power

In order to properly form a strategy for UPS failure prevention, it is important to look at why UPS failure occurs in the first place.  At the heart of the UPS system is its battery, which powers its operation.  UPS batteries cannot simply be installed and then left alone until an emergency occurs.  Even if a brand-new battery is installed and the UPS system is never needed, the battery has a built-in lifespan and it will, over time, die.  So even if you think you are safe with your UPS system and your unused battery, if you are not keeping an eye on it, you may be in trouble when a power outage occurs.

Beyond basic life expectancy in ideal conditions, UPS battery effectiveness may be reduced, or batteries may fail, for other reasons.  Ambient temperatures around the UPS battery, if too warm, may damage it.  Another reason a battery may fail is what is called “over-cycling” – when a battery is discharged and recharged so many times that its capacity is reduced over time.  Further, UPS batteries may fail due to incorrect float voltage.  Every battery brand is manufactured differently and has a specific charge voltage range that is acceptable.  If a battery is constantly charged outside the recommended charge voltage range – whether undercharging or overcharging – its capacity will be reduced and it may fail during a power emergency.
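The float-voltage check described above is straightforward to automate: compare each battery's measured float voltage against the manufacturer's recommended range and flag anything outside it. The 13.5–13.8 V defaults below are an illustrative range for a nominal 12 V battery, not a universal figure; always use the actual limits from your battery's datasheet.

```python
# Sketch: flag batteries charging outside the recommended float-voltage
# range. Default limits are illustrative, not manufacturer values.

def check_float_voltage(readings, v_min=13.5, v_max=13.8):
    """readings maps battery id -> measured float voltage (volts).
    Returns the batteries charging outside [v_min, v_max]."""
    return {bat: v for bat, v in readings.items()
            if not v_min <= v <= v_max}
```

Run against periodic measurements, a check like this catches both chronic undercharging and overcharging long before either shows up as a failed transfer during an outage.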

Fortunately, many of these UPS failures can be traced back to human errors that are preventable.  This means that data centers looking to prevent UPS failures and maximize uptime can do so by implementing and vigilantly following a UPS failure prevention strategy.  First, it is important to develop a maintenance schedule, complete with checklists for consistency, and actually stick to it.  Don’t let routine battery maintenance fall off of your priority list; while it may not seem urgent, it will feel very urgent if the power fails.

One of the first and most important things that a data center should implement in their strategy is proper monitoring of batteries.  Every battery will have an estimated battery life determined by the manufacturer; some even boast life cycles as long as 10 years.  But, as any data center manager knows, UPS batteries do not last as long as their estimated life cycle because of a variety of factors. Just how long they will actually last will vary, which is why monitoring is incredibly important. Batteries must be monitored at the cell level on a routine schedule, either quarterly or semi-annually, and it is important to also check each string of batteries.  By doing this on a routine schedule, you can determine if a battery is nearing or has already reached the end of its life cycle and make any necessary repairs or replacements.  If it appears a battery is nearing the end of its life cycle, it may be best to simply replace it so as not to risk a potential failure.  In addition to physically checking and monitoring UPS batteries, there are battery monitoring systems that can be used.  While physical checks are still critical, battery monitoring systems can provide helpful additional support that may prevent a UPS failure.  Schneider Electric describes how battery monitoring systems can be a useful tool, “A second option is to have a battery monitoring system connected to each battery cell, to provide daily automated performance measurements. Although there are many battery monitoring systems available on the market today, the number of battery parameters they monitor can vary significantly from one system to another.

A good battery monitoring system will monitor the battery parameters that IEEE 1491 recommends be measured. The 17 criteria it outlines include:

- String and cell float voltages, string and cell charge voltages, string and cell discharge voltages, AC ripple voltage

- String charge current, string discharge current, AC ripple current

- Ambient and cell temperatures

- Cell internal resistance

- Cycles

With such a system, users can set thresholds so they get alerted when a battery is about to fail. While this is clearly a step up from the scheduled maintenance in that the alerts are more timely, they are still reactive – you only get an alert after a problem crops up.”  Further, as you monitor your batteries it is important to collect and analyze the data so that you can make informed decisions about how to best maximize battery life.
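A threshold-alert layer like the one Schneider Electric describes might be sketched as follows. The parameter names and limits here are illustrative stand-ins for a subset of the IEEE 1491 measurements, not vendor values:

```python
from dataclasses import dataclass

# Hypothetical alert bands for a few IEEE 1491 parameters; real limits
# come from the battery datasheet and site policy.
THRESHOLDS = {
    "cell_float_v": (2.17, 2.25),            # volts per cell
    "cell_temp_c": (20.0, 30.0),             # degrees Celsius
    "internal_resistance_pct": (0.0, 25.0),  # % rise over baseline
}

@dataclass
class CellReading:
    cell_float_v: float
    cell_temp_c: float
    internal_resistance_pct: float

def alerts(reading: CellReading) -> list[str]:
    """Return the parameters that fall outside their threshold band."""
    out = []
    for name, (lo, hi) in THRESHOLDS.items():
        value = getattr(reading, name)
        if not (lo <= value <= hi):
            out.append(name)
    return out

print(alerts(CellReading(2.30, 24.0, 10.0)))  # ['cell_float_v']
```

Logging each reading alongside the alert output gives you exactly the historical data the article recommends collecting for battery-life decisions.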

Next, it is important to properly store your battery when not in use to maximize its lifespan and help it function properly when it is needed.  A UPS battery must be charged every few months while in storage or its lifespan will be diminished.  If you cannot periodically charge your UPS battery while in storage, most experts recommend storing it in cooler temperatures – 50°F (10°C) or less – which will help slow down its degradation.

To keep your UPS battery functioning in optimal conditions, ambient temperature should not exceed 77°F (25°C) and should generally stay as close to that as possible.  It is important not just to keep temperatures from exceeding that threshold but also to prevent frequent fluctuations, which greatly tax UPS batteries and reduce their life expectancy.  Your UPS should be kept in an area of your data center where temperatures are carefully monitored and maintained to help promote proper function in the event of an emergency.  Ideally, your UPS would be maintained in an enclosure with temperature and humidity control.


An increase in the number of annual preventive maintenance visits increases UPS reliability. Image Via: Emerson Network Power

While routine maintenance will require attention and dedication, it is not without merit.  In fact, Data Center Knowledge notes that there are statistics that back up the argument that routine maintenance really does prevent UPS failure, “In one study of more than 5,000 three-phase UPS units and more than 24,000 strings of batteries, the impact of regular preventive maintenance on UPS reliability was clear. This study revealed that the Mean Time Between Failure (MTBF) for units that received two preventive maintenance (PM) service visits a year is 23 times better than a UPS with no PM visits. According to the study, reliability continued to steadily increase with additional visits completed by skilled service providers with very low error rates.” Data centers must implement their own unique UPS maintenance strategy, tailored specifically to individual needs, and remain vigilant in their follow through.  Implementing UPS maintenance best practices, including maintaining proper temperatures, maintaining proper float voltage, avoiding over-cycling, properly storing batteries, utilizing UPS battery monitoring systems, and performing routine visual inspections, will help significantly decrease the risk of UPS failure.
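MTBF, the metric behind the study's "23 times better" finding, is a simple ratio of operating time to failures, which is worth seeing once. The fleet numbers below are invented for illustration and are not the study's data:

```python
def mtbf_hours(total_operating_hours: float, failures: int) -> float:
    """Mean Time Between Failures: operating hours divided by failure count."""
    if failures == 0:
        return float("inf")  # no observed failures yet
    return total_operating_hours / failures

# Illustrative only: a fleet of 100 UPS units run for one year
# (8,760 hours each) with 4 recorded failures.
fleet_hours = 100 * 8_760
print(mtbf_hours(fleet_hours, 4))  # 219000.0
```

Tracking this figure before and after adding preventive maintenance visits is how a program like the one in the study demonstrates its value.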


Private Cloud vs. Public Cloud vs. Hybrid Cloud

Cloud computing, in one form or another, is here and it is not going anywhere.  That is true for a good many reasons – it provides easy scalability, is less expensive than expanding infrastructure to add storage, is less expensive to maintain because it does not require additional power and cooling, makes project deployment easier and quicker, and makes it easy to create redundancy and reliability.  While these benefits are significant, they are just the surface of the advantages of utilizing the cloud, but there is debate over what type of cloud is best – public, private or hybrid.

Both the public cloud and private cloud offer a variety of advantages and drawbacks and what will work best for a data center will have to be decided on a case by case basis.  Cisco’s Global Cloud Index, which forecasts cloud usage for the years 2015-2020, provides some interesting insights into how cloud usage will transform data centers and enterprises going forward, “By 2020, 68 percent of the cloud workloads will be in public cloud data centers, up from 49 percent in 2015 (CAGR of 35 percent from 2015 to 2020)… By 2020, 32 percent of the cloud workloads will be in private cloud data centers, down from 51 percent in 2015 (CAGR of 15 percent from 2015 to 2020).”
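The CAGR figures Cisco cites follow the standard compound-growth formula. A small sketch, with invented workload counts chosen only to land near the 35 percent public-cloud figure (Cisco's underlying workload totals are not reproduced here):

```python
def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate over the period."""
    return (end / start) ** (1 / years) - 1

# Illustrative: a workload count growing from 50 million to 224 million
# over 5 years is roughly the 35% CAGR Cisco cites for public cloud.
print(f"{cagr(50, 224, 5):.1%}")  # 35.0%
```

The same function shows why a 35 percent CAGR compounds so dramatically: growth is multiplicative year over year, not additive.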

Private cloud is essentially an internal, enterprise cloud that is privately managed and maintained.  The data center is responsible for hosting the private cloud within its own intranet and it is protected by the data center’s firewall.  The private cloud provides all of the efficiency, agility, and scalability of the cloud, but also provides better control and security.  This is a great option for a data center that already has a robust infrastructure and enterprise set up, but it does demand more than the public cloud.  If a data center employs the private cloud, all management, maintenance and security falls squarely on the data center’s personnel.

One of the distinct advantages of the private cloud is how much control you have over how it works for your unique needs.  Private clouds can be configured to meet your needs, rather than you configuring your applications and infrastructure to meet the needs of the public cloud.  Many data centers have legacy applications that cannot always adapt well to the needs of the public cloud, but with the customizability of the private cloud, a data center can easily adapt the private cloud to meet the needs of the enterprise.

If your data center or enterprise prioritizes control, security, privacy and management visibility and worries about the security and privacy risks of shared resources in a public cloud, the private cloud may be the right fit for you because it will provide peace of mind that you know exactly where your data is and how it is being managed and protected at all times.  However, it is important to note that while having control of cloud management is seen as an advantage by many enterprises, the challenge of adequately managing the cloud can be significant, as noted by RightScale, which conducted a study and survey of cloud computing trends for 2016, “26 percent of respondents identify cloud cost management as a significant challenge, a steady increase each year from 18 percent in 2013…Cloud cost management provides a significant opportunity for savings, since few companies are taking critical actions to optimize cloud costs, such as shutting down unused workloads or selecting lower-cost cloud or regions.”

The level of security provided by utilizing the private cloud may be particularly important for those enterprises involved in healthcare or banking/finance because of the strict regulations and requirements placed on security and privacy.  If you possess and work with data that is restricted by security and privacy mandates like HIPAA, Sarbanes-Oxley, or PCI, you cannot use the public cloud to secure your data.  For such highly-sensitive information, you are required to store your data on the private cloud to remain compliant or else face high penalties.

The public cloud also provides the fundamental benefits of cloud computing, as well as its own advantages, but offers less control over maintenance, management and security.  Some enterprises may see the requirement of less management and maintenance as an advantage because they simply do not have the resources or personnel to manage the cloud themselves.  By opting for the public cloud, your data is stored in a data center and that data center is responsible for the management and maintenance of the cloud data.  For enterprises that do not have extremely sensitive data, the trade-off of security and control for less management and maintenance may be completely acceptable.  While you do not have as much control of management and security, data does remain separate from other enterprises in the cloud.

The public cloud does save on hardware and maintenance costs that would typically be incurred by your business.  You pay for the public cloud to use storage capacity and processor power so that you do not have to manage or pay for that capacity or power on your own.  Because you are paying for a service, it is easy to scale up or down quickly without much preparation or change on your end. The public cloud often functions on a “pay-per-use” model so you can quickly make changes, scaling up or down in literally a matter of minutes. For small businesses that do not work with highly sensitive data, the public cloud may be ideal.  But ultimately, it all comes down to how much control you need over the management and security of your data.

It is important to not forget that there is actually a third option – the hybrid cloud. The hybrid cloud is a blend of both private and public cloud, offering enterprises a solution that may provide the best of both worlds.  With the use of the hybrid cloud, enterprises can leverage the advantages of both the private and public cloud in partial ways that best suit the needs and resources of the enterprise.  By doing this, all sensitive data can be managed with the private cloud and the private cloud can be customized to suit any less-flexible applications.  Likewise, the public cloud can be used for information that is not as sensitive or governed by privacy and security mandates, and can also be used for on-demand scalability.

Hybrid cloud is a mix and match solution of the best elements of both private and public clouds for those enterprises with diverse needs.  This diversity is what will help many enterprises evolve and be flexible as IT innovations emerge.  What is interesting about a hybrid cloud solution is that it serves both large and small organizations well because it offers flexibility, scalability, and security on an as-needed basis.  It allows organizations to slowly “dip their toes” in the public cloud pool while maintaining control over sensitive data, via the private cloud, that they are not yet ready to put in the public cloud. RightScale’s survey of cloud computing trends for 2016 notes that hybrid cloud usage is on the rise, “In the twelve months since the last State of the Cloud Survey, we’ve seen strong growth in hybrid cloud adoption as public cloud users added private cloud resource pools. 77 percent of respondents are now adopting private cloud up from 63 percent last year. As a result, use of hybrid cloud environments has grown to 71 percent. In total, 95 percent of respondents are now using cloud up from 93 percent in 2015.”  Additionally, the hybrid cloud may be a very cost-effective solution, allowing enterprises to assign available resources to private cloud needs without having to retain vast additional resources that might be necessary if only using private cloud.

How enterprises use the cloud will depend heavily on resources, security and control needs, privacy restrictions, and scalability needs.  If you are struggling to decide what the right fit is for your enterprise, consider carefully what applications you intend to move to the cloud, how you currently use those applications, any regulatory concerns, scalability needs and your ability to adequately manage whatever your choice ultimately is.  If you have the infrastructure and resources to manage your cloud well, as well as security concerns, the private cloud may be the best option for your needs.  However, if you are a smaller organization with lower security concerns, offloading management responsibility by utilizing the public cloud may relieve a lot of the strain that managing a private cloud would place on your organization. Whether using public cloud, private cloud or a hybrid of the two, one thing is certain – almost everyone is using the cloud.  And, if they are not yet, they will likely be using it soon.



Do You Have a DCOI Compliance Strategy?


The recently established Data Center Optimization Initiative (DCOI) is an important mandate for federal data centers that promotes the sharing of information to encourage optimization of infrastructure and reduce inefficiency in data centers.  Nothing is more of a hot topic in data centers than the need to improve efficiency on all levels to remain sustainable and effective.  The White House describes the requirements of DCOI as follows, “The DCOI, as described in this memorandum, requires agencies to develop and report on data center strategies to consolidate inefficient infrastructure, optimize existing facilities, improve security posture, achieve cost savings, and transition to more efficient infrastructure, such as cloud services and inter-agency shared services.”

This initiative focuses on data center consolidation and optimization of existing data centers to reduce redundancy.  These measures will help make data centers more eco-friendly, which benefits the environment, and by improving efficiency they also provide significant cost savings.  Further, DCOI recognizes and encourages the utilization of the cloud to improve efficiency and scale operations without expanding physical footprint.  Undoubtedly, plans and goals will have to be established to meet the demands of DCOI, so early adoption is the best approach. Schneider Electric explains the importance of complying with DCOI, “One of the key requirements for existing data centers is to “achieve and maintain” a PUE score of 1.5 or less…new, proposed data centers must be designed and operated at 1.4 or less with 1.2 being “encouraged”.  Another key requirement is deployment of data center infrastructure management tools (DCIM) in all Federal data centers since manual collection of PUE data will no longer be acceptable.  If Agency CIOs fail to achieve these scores and implement DCIM by September 30th, 2018, “Agency CIOs shall evaluate options for consolidation or closure…”. In other words, comply or be assimilated. Fortunately for these CIOs, legacy data centers often have plenty of room to improve infrastructure efficiency by reducing power and cooling energy losses to bring PUE scores within these limits.  In addition, DCOI targets are expected to result in the closure of approximately 52% of the overall Federal Data Center inventory.  So it’s important to try to make as many improvements as is feasible even if you’re meeting the required 1.5 (or 1.4 for new) …i.e., increase your odds of survival by being as good as you can be. Agencies should start with an efficiency assessment of the site in question.  Find out where you’re at now and identify areas for improvement.”  DCIM technology should be implemented to monitor energy usage and improve energy consumption, and every effort should be made by federal data centers to comply with DCOI going forward.
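PUE, the metric DCOI targets, is simply total facility power divided by IT equipment power, so a score of 1.0 would mean every watt goes to IT load. A minimal sketch of checking a facility against the DCOI thresholds (the kilowatt figures are invented for illustration):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_equipment_kw

# Illustrative: a facility drawing 1,400 kW overall for a 1,000 kW IT load.
score = pue(1_400, 1_000)
print(score, score <= 1.5)  # 1.4 True
```

A 1.4 score would meet the existing-facility requirement of 1.5 and the new-build requirement of 1.4, though DCIM-based continuous measurement, not a one-off calculation, is what the mandate requires.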


3 Trends in Data Center Design

Every year we see certain trends arise in data center design and construction and, as 2016 winds down, we are able to take a look back at the year and anticipate what may be ahead in 2017.  Data center design is constantly evolving as infrastructure changes and storage needs shift.  It is an expanding facet of the data center industry because every data center must be flexible and constantly capable of adaptation and updating to stay current.  Technavio points out just how important data center design is and will be looking forward, “This is why the global data center design market, which was valued at just $516 million in 2015, is expected to top $1.2 billion by 2020, growing at a cumulative average growth rate of 19.03%.” A data center that can scale on demand will be the data center that flourishes in 2017.

When it comes to data center design, one of the most common problems that data centers encounter is outgrowing their existing space.  This is combatted in a number of ways, including increasing rack density, colocation, and usage of the cloud. Colocation will be a big trend moving forward for many businesses that have simply outgrown their enterprise data centers, or do not want to take on the task of managing IT infrastructure and protecting data security in a world where threats are constantly evolving.  When businesses opt for colocation, it often opens the door to on-demand scalability and peace of mind that IT experts are managing data security.  Additionally, we will see an increased use of the cloud in data centers to meet growing data demands. CloudTweaks offers a concise explanation of why the cloud will be transformative in data center design moving forward, “While organizations continue to consolidate facilities to save money, their need for effective data management and storage have increased exponentially. The volume of digital data is growing at an unprecedented rate, doubling every two years. Today’s IT execs are under phenomenal pressure to deliver value, while maintaining cost and efficiency. This is where the cloud can be most effective. Through economies of scale, cloud vendors are able to deliver the same, if not better, performance than in-house data centers at a lower cost. Furthermore, the cloud provides a centralized computing system that enables data and applications to be accessible from anywhere, anytime, yielding operational efficiencies.”  Lastly, we will see a continued trend of making data centers more “green.”  This means making efforts in design to more efficiently cool data centers and more efficiently supply necessary power.  These three trends are sure to be strong going into 2017 and will directly impact data center design.


Securing the Cloud in the Modern Data Center

Ask just about any client what one of the most important things they are looking for in a data center is and you will likely hear “security” over and over again.  Securing the traditional data center is a challenge unto itself, but now many data centers are a hybrid of traditional storage and cloud storage, which complicates security immensely.  Information Week describes the challenges faced in data center cloud security, as well as the strengths it will have moving forward, “Moving beyond traditional perimeter security into public, private, and hybrid cloud architectures stretches the capabilities of traditional security tools. It creates new security holes and blind spots that were not there previously. But cloud security is looking brighter by the day, and very soon cloud security tools will outmatch any type of non-cloud parameter security architecture. In many ways, cloud security is gaining in strength based on a seemingly inherent weakness. Cloud service providers are in a unique position to absorb vast amounts of data. Because large clouds are geographically dispersed in data centers around the globe, they can pull in all kinds of security intelligence as data flows in and out of the cloud. This intelligence can then be used to track security threats and stop them far more quickly.”

The problem is not a static one, it is a fast-paced, growing challenge.  The more heavily the cloud is used, the more information it stores, the more security is needed but also the more potential holes in security there are.  Security must scale at the same rate, or faster, than the growth of the data center.  Because it is relatively new, and rapidly evolving, there are not clear-cut standards for cloud security in place.  It is important for data centers to pay attention to what is happening in the industry and look to others, such as the U.S. government, for what is working in cloud security, which TechTarget elaborates on, “For example, cloud providers that handle confidential financial data should underscore their compliance with the Payment Card Industry Data Security Standard (PCI DSS) specification as proof of the integrity and security of their operations. PCI DSS does outline requirements related to cloud-specific aspects of security, stipulating that providers must segregate cardholder data and control access in addition to providing the means for logging, audit trails and forensic investigations. But the highly dynamic nature of most cloud-based applications — which often lack built-in auditing, encryption and key management controls — makes it expensive and impractical to apply the PCI standard to most cloud applications. Providers and enterprises seeking answers on cloud standards for security have found guidance from an unlikely source:  the U.S. government. Though not usually perceived as a leading-edge technology adopter, the public sector has been engaged in aggressive security standards development efforts to support its Cloud First initiative, which requires federal agencies to select a cloud service for new deployments when a stable, secure and cost-effective offer is available. 
The Federal CIO Council laid out 150 cloud security controls for its Federal Risk Assessment Program (FedRAMP), which provides a baseline for common security requirements that agencies can use to verify that a prospective cloud provider supplies adequate cloud application security. Compliance will be validated by third-party assessment organizations. Using cloud-specific security requirements created by the National Institute of Standards and Technology (NIST), FedRAMP offers agencies a common set of cloud standards they can use to sanction a cloud provider. If the particular agency has additional security requirements, then the provider can build on its baseline controls to address these needs.”  The cloud is a cost-effective way for many data centers to scale to meet customers’ needs, but security protocols must be in place to ensure that, as scaling occurs, data continues to be properly secured.




3 Trends in Data Center Cooling

Data center power consumption is evolving all the time, becoming more efficient but, generally, growing.  While many data centers are making green initiatives and finding ways to make their energy usage as efficient as possible, data demands are constantly growing, rack density is being increased, and the need for effective cooling is growing along with it.  There are many approaches to data center cooling, and even single data centers are implementing a variety of approaches to best cool their facilities.  Large data centers from companies like Yahoo or Apple are setting the trend in green data center cooling initiatives and small data centers are not only taking note but are also implementing those trends in their own data centers.  Below are three exciting trends in data center cooling.

  1. Liquid Cooling
    • Using liquid to cool, instead of air, is a great way to cool higher density racks and can be used in a variety of ways in the data center. TechTarget elaborates on the use of liquid cooling in data centers, “Now, new technologies can put 250 kW in a single rack, using liquid immersion cooling to play an important role for certain systems, such as high-performance computing, Cecci said. The pluses of liquid cooling include the ability to deploy it in specific areas — by row and rack — and it is very quiet and reliable, with few moving parts. Despite its benefits, liquid cooling is not in many data centers today, he said. “Most of these technologies — we will see them in the next two to three years,” Cecci said.”
  2. CRAC
    • CRAC (computer room air conditioner) cooling systems have been used for a considerable amount of time in data centers. While they may be the old standard, CRAC units have continued to evolve over time with new strategies to be an effective form of cooling in data centers, which TechTarget explains, “The easiest way to save money is to reduce the number of running CRAC units. If half the amount of cooling is required, turning off half the CRAC units will give a direct saving in energy costs — and in maintenance costs. Using variable-speed instead of fixed-speed CRAC units is another way to accomplish this, where the units run only at the speed required to maintain the desired temperature. The units run at their most effective levels only when they run at 100%, and some variable speed systems don’t run at a fully optimized rate when operating at partial load. Running standard, fixed-rate CRAC units in such a way as to build up “thermal inertia” can be cost-effective. Here, the data center is cooled considerably below the target temperature by running the units, and then they are turned off. The data center then is allowed to warm up until it reaches a defined point, and the CRAC units are turned back on.”
  3. Bypass Air
    • Bypass air is any conditioned air that does not pass through IT equipment before returning to the cooling unit. In essence, it is a waste of cooled air, which has led many data centers to make efforts to reduce the problem.  Data Center Dynamics explains the problem and how data centers are fixing it to improve data center cooling, “The velocity of the cool air stream exceeds the ability of the server fans to draw in the cool air; as a result the cool air shoots beyond the face of the IT rack.  Cool supply air can join the return air stream before passing through servers, weakening cooling efficiency. Eager to combat the inefficiencies above and keep pace with steadily climbing data center temperatures, businesses often adopt hot aisle/cold aisle rack orientation arrangements, in which only hot air exhausts and cool air intakes face each other in a given row of server racks. Such configurations generate convection currents that produce improved airflow. Although superior to chaos air distribution, hot aisle/cold aisle strategies have proven only marginally more capable of cooling today’s increasingly dense data centers, largely because both approaches ultimately share a common, fatal flaw: They allow air to move freely throughout the data center. This flaw eventually led to the introduction of containment cooling strategies. Designed to organize and control air streams, containment solutions enclose server racks in sealed structures that capture hot exhaust air, vent it to the CRAC units and then deliver chilled air directly to the server equipment’s air intakes.”
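The "thermal inertia" approach described for CRAC units in trend 2 is essentially a hysteresis controller: over-cool to a low setpoint, switch off, let the room warm to a high setpoint, then switch back on. A toy sketch, with setpoints invented for illustration:

```python
# Hypothetical hysteresis setpoints for fixed-rate CRAC units.
COOL_TO_C = 18.0   # keep cooling down to this temperature, then stop
WARM_TO_C = 24.0   # let the room warm to this temperature, then restart

def crac_should_run(room_temp_c: float, currently_running: bool) -> bool:
    """Decide the next on/off state for the CRAC units."""
    if currently_running:
        return room_temp_c > COOL_TO_C   # run until the low setpoint is reached
    return room_temp_c >= WARM_TO_C      # stay off until the high setpoint

print(crac_should_run(25.0, currently_running=False))  # True
```

The gap between the two setpoints is what prevents rapid on/off cycling; real deployments would also respect the equipment's allowable inlet-temperature range rather than these illustrative values.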

Importance of Renewable Energy in Data Centers


“Renewable energy.”  “Clean energy.”  These may sound like buzzwords – trendy little catchphrases meant to grab your attention and sound good – but they are far more than buzzwords; they are the reality and the future of data centers.  As we push to become sustainable in all industries, one of the biggest focuses will likely be sustainable, renewable energy in the data center.  Large data centers use enormous amounts of energy, equivalent to the energy some small cities use, so it stands to reason that there would be a push to make that energy usage as clean as possible.  So, just how critical will it be that data centers focus on sustainability through renewable and clean energy going forward? Very.  In fact, Data Center Knowledge notes that a recent study shows that what consumers want now and going forward are data centers focused on sustainability, “A recent survey of consumers of retail colocation and wholesale data center services by Data Center Knowledge, found that 70 percent of these users consider sustainability issues when selecting data center providers. About one-third of the ones that do said it was very important that their data center providers power their facilities with renewable energy, and 15 percent said it was critical. Most respondents said their interest in data centers powered by renewable energy would increase over the next five years. More than 60 percent have an official sustainability policy, while 25 percent are considering developing one within the next 18 months.”

As data center space across the globe continues to rapidly grow, so will the amount of energy used.  That energy use is often not only bad for the environment but quite costly.  We have already seen large companies like Google and Apple focus on renewable energy and, as we often see, smaller data centers will likely follow in their footsteps.  Small and large data centers are undertaking renovations and making changes towards renewable energy because even the tiniest improvements in efficiency and sustainability are saving big bucks.  What do these renewable energy efforts look like?  There are a vast array of options and approaches, but Data Center Frontier elaborates on a few, “In broad terms, “clean” or “green” energy comes from renewable sources such as the sun (solar), wind, the movement of water in rivers and oceans (hydroelectricity), biofuels (fuel derived from organic matter), and geothermal activity. Today, there are big trends showing that tech giants are moving towards renewable energy sources in their green data centers. Digital Realty, along with certain major technology companies and other pioneers, are showing that clean energy can be used to power even the largest and most high-performance data centers. And as more organizations consider moving from traditional to cleaner sources of power, they are also showing that renewable energy can be cost-effective.”  This is not a fleeting trend.  Renewable energy is here to stay, it is the future of data centers, and all data centers should be making efforts, small and big, towards renewable energy for the future.

