The UPS Debate: A Conversation on High Efficiency, Multi-Mode UPSs


High Efficiency, Multi-Mode UPSs

Uninterruptible power supplies (UPSs) are designed to protect against power surges and to keep data center equipment running long enough to power down safely during electrical outages. Given the normal expense of powering data centers and the cost, in both dollars and data, of downtime caused by power failure, high efficiency, multi-mode UPSs are fast becoming standard equipment in computer data centers.

How Does Multi-Mode Work?

A multi-mode UPS is capable of switching between two modes of operation. Premium efficiency mode, also called eco-mode, provides superior operating efficiency, while double conversion mode provides superior power protection. Multi-mode UPSs can switch between these two modes in a matter of milliseconds to compensate for power deviations and anomalies. In fact, most UPSs have transfer speeds of less than eight milliseconds and, in many cases, as few as two milliseconds, so as not to exceed the tolerance levels of data center equipment. Multi-mode UPSs have the ability to deliver continuous, computer-grade power while reducing both energy costs and environmental impact.

What Are the Benefits of Multi-mode UPS?

Uninterruptible power supplies come in the form of single-conversion systems, double-conversion systems and multi-mode systems. By incorporating the features of both single and double conversion, multi-mode UPS systems are able to offer the following three major benefits:

  • Maximized critical load protection without sacrificing operating efficiency.
  • Extended parts life of at least one to two years.
  • Immediate mitigation of output faults by upstream overcurrent protection.


The ability to switch between power protection and operating efficiency modes increases the energy efficiency of multi-mode UPSs to 98 or 99 percent. This automatic switching capability also reduces the load on heating, ventilation and air conditioning systems. Decreased energy consumption plus decreased load on HVAC systems equals decreased total cost of ownership.
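To put those efficiency figures in perspective, here is a minimal sketch of the annual cost of conversion losses, assuming an illustrative 100 kW load, a $0.10/kWh rate, and roughly 94% double-conversion versus 99% eco-mode efficiency (all numbers are assumptions for illustration, not vendor specifications):

```python
# Rough annual comparison of UPS conversion losses in double-conversion
# mode (~94% efficient) vs. eco-mode (~99%). All figures are
# illustrative assumptions, not vendor specifications.

HOURS_PER_YEAR = 24 * 365

def annual_loss_kwh(load_kw: float, efficiency: float) -> float:
    """kWh lost per year to UPS conversion at a constant load."""
    input_kw = load_kw / efficiency  # power drawn from the utility
    return (input_kw - load_kw) * HOURS_PER_YEAR

load_kw = 100.0   # assumed constant IT load
rate = 0.10       # assumed electricity cost, $/kWh

loss_dc = annual_loss_kwh(load_kw, 0.94)   # double conversion mode
loss_eco = annual_loss_kwh(load_kw, 0.99)  # eco-mode
savings = (loss_dc - loss_eco) * rate

print(f"Double conversion loss: {loss_dc:,.0f} kWh/yr")
print(f"Eco-mode loss:          {loss_eco:,.0f} kWh/yr")
print(f"Estimated savings:      ${savings:,.0f}/yr")
```

Even at this modest assumed load, the difference between modes is worth several thousand dollars a year before counting the reduced HVAC load.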

How to Choose the Right High Efficiency, Multi-Mode UPS

As with most products, not all UPS systems are created equal. When selecting a multi-mode UPS for your data center needs, Pedro Robredo at Eaton suggests asking these five questions:

  1. Does the UPS sacrifice protection to gain high efficiency?
  2. How does the UPS achieve its high efficiency?
  3. How efficient is the UPS when lightly loaded?
  4. How quickly does the UPS detect and respond to power events?
  5. What extras does the UPS offer for maximum protection?


The answers to these questions and an understanding of UPS technology will better equip managers to select the right high efficiency, multi-mode UPS system for their data centers.

Posted in Uninterruptible Power Supply, UPS Maintenance | Comments Off

How to Get the Best Return on Your DCIM

Reaping the Benefits of DCIM

DCIM (Data Center Infrastructure Management) software solutions maximize efficiency and performance, improve resource and capacity management and reduce risk in one centralized process. These solutions are an investment, so when choosing a Data Center Infrastructure Management solution, it is important to know what benefits the DCIM will bring to your organization.

Energy Efficiency and Performance

DCIM solutions maximize efficiency and performance by integrating IT and facilities management tools to provide a more complete picture of power utilization. This allows managers to better identify variances in power consumption and make adjustments based on DCIM information to simultaneously increase energy efficiency and boost equipment performance. DCIM solutions can also monitor and maintain the optimum room temperature of the data center. Immediate notification when there is an increase or decrease in temperature readings allows personnel to act quickly to ensure continued optimum performance.
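The notification logic described above can be sketched as a simple threshold check. The temperature band, sensor names, and readings below are invented for the example, not drawn from any particular DCIM product:

```python
# Minimal sketch of DCIM-style temperature alerting: compare sensor
# readings against an allowed band and report excursions. The band and
# sensor names below are illustrative assumptions.

ALLOWED_RANGE_F = (68.0, 80.0)  # assumed acceptable temperature band

def check_readings(readings: dict, low: float = ALLOWED_RANGE_F[0],
                   high: float = ALLOWED_RANGE_F[1]) -> list:
    """Return a human-readable alert for each sensor outside the band."""
    alerts = []
    for sensor, temp_f in readings.items():
        if temp_f < low:
            alerts.append(f"{sensor}: {temp_f:.1f}F is below {low:.1f}F")
        elif temp_f > high:
            alerts.append(f"{sensor}: {temp_f:.1f}F is above {high:.1f}F")
    return alerts

alerts = check_readings({"rack-a1": 72.5, "rack-b3": 84.2, "rack-c7": 65.0})
for alert in alerts:
    print("ALERT:", alert)
```

A real DCIM tool would feed these alerts into email, SMS, or dashboard notifications; the point here is only the compare-against-band step.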

Resource and Capacity Management

Resources include servers, storage, network switches, power distribution units, racks and the room that houses this equipment. Capacity management involves having the necessary elements in place to maximize the use and performance of those resources and includes the optimization of four distinct areas.

  1. Rack Space

Enough space to house additional assets as they become necessary is essential. DCIM manages rack space by showing which assets are no longer required, which could be combined to free up space for new equipment and which can be reconfigured to make room for additional ones.

  2. Network Connectivity

DCIM also tracks the availability and location of network ports. This eliminates having to physically locate an active network port, and it provides information on where new ports might need to be installed.

  3. Power

Consistent and accurate monitoring and reporting of power consumption by DCIM tools allows companies to operate data centers in the most energy efficient manner possible.

  4. Cooling

By identifying hot and cold spots, DCIM provides information necessary to address thermal and airflow behavior in the center. This allows managers to better utilize the current cooling system to avoid hot spots that can inhibit equipment performance.

Risk Reduction

The top benefit of DCIM solutions is reduced risk of downtime. With constant monitoring of the primary aspects of the data center, from power usage to rack space to temperature, DCIM can identify potential problems, thus allowing them to be repaired before they lead to system failure.

Data Center Infrastructure Management solutions provide a means of streamlining data center operations, which can ultimately lead to cost savings and reduced downtime.

Posted in Data Center Infrastructure Management | Comments Off

Which Infrastructure Convergence Is Best for You?

Infrastructure convergence is the grouping of several IT components into a single packaged system. With several possible components (servers, networking equipment, data storage centers, and software), there are also several different types of convergence. Take a look at the three current options. With more information, you may be better able to determine which option is best for your situation.

Unified Architecture

This convergence method is based on the traditional rack-mounted server situation. However, with a move toward virtualization, the modern blade-and-chassis environment is markedly different. Users are now able to tie directly into fabric interconnects. Users can create their own hardware and service profiles. These powerful systems maintain their agility but come with a hefty price tag. A high-end architecture, although expensive, might be the best option for large service providers with many different resources and hundreds or thousands of racks.

Converged Infrastructure

This type of grouping is a node-based unit with both storage and computing equipment in one location, sometimes called an appliance. When consumers want to expand, they simply add another node. This offers a lot of versatility and some financial savings as big workloads can be merged into smaller infrastructure nodes. Who would benefit the most from this type of setup? Users are often mid-sized organizations looking for a way to offload VDI and move toward a price-conscious option. Of course, the converged infrastructure is easy to upgrade, so downsizing initially won’t prevent an organization from growing in the future.

Hyper-Converged Infrastructures

While the other two groupings rely heavily on the hardware, this infrastructure converges all of the components of data processing in one single computer layer. The benefits are simplified storage, easier networking, and an abstract, software-controlled process. The hardware stack can be custom built or modified without fear of damaging the infrastructure. This could lead naturally to some important financial savings. A vital part of this hyper-converged virtual appliance is the hypervisor. The hypervisor allows control of resources, API integration, and the convergence of compute, storage, and networking all in one device. Users of this system may be those organizations that are growing rapidly with the need for quick changes, usage growth, and new additions.

Focus on an Agile Infrastructure

Is it possible that hardware stacks will go the way of the floppy disk? As this hardware becomes condensed and abstracted, it may be possible. If data, VMs, and applications are able to move quickly and easily from data centers to the cloud and among many users, the future of data centers will continue to evolve. As businesses rely more and more on data centers, the drive for change is sure to remain constant.

Posted in Data Center Build, data center equipment, Data Center Infrastructure Management | Comments Off

The Changing Face of the Colocation Industry

In the world of colocation data center services, there has been a lot of change. Some of these emerging trends are really affecting the way that business is done and in competitive ways. The biggest players in the market are making changes and purchasing acquisitions to maintain their place in their sector of the business world. What are the biggest trends?

High-End Luxuries or Affordably Pure

As providers incorporate higher-level technologies such as cloud, hosting, and interconnection services, other providers are turning to basic services. These providers can deliver low-cost space, power, and cooling to customers looking for good-quality, affordable options.

Customer Use Flexibility

Another trend is the desire for increased capacity and density options. Customers are becoming more familiar with the role that data centers play. They want flexibility in capacity, the ability to adjust the amount of colocation capacity, and the freedom to pay for only what they are using.

New Management Software

Data center infrastructure management (DCIM) software is finding its place and impacting both colo providers and customers. For example, the providers are enjoying increased efficiency, more information for decision making, and plenty of high-tech add-ons. Customers enjoy clear visibility into the levels of power they’re consuming, one of the things that most affects their colo costs.

Increasing Numbers of Acquisitions

Mergers and acquisitions have been a normal part of the colo industry, but lately there’s been an increase in acquisitions. These exchanges can lead to more opportunities and a broader reach, both geographically and in product selection. Big acquisitions can also lead to gorilla providers in control of pricing and availability.

New Business Relations

Today representatives of the colocation providers are discussing terms and pricing with different buyers than they did in the past. The rep from the data center may be speaking with cloud architects and others within the customer organizations. This trend is affected by the natural alliance between cloud and colo services.

Edge Market Services

Another important trend is the storage of a wide variety of data from edge markets in colo centers. To provide these services, edge data centers must also provide connectivity, internal interconnection, and WAN capacity, as well as attract content providers, long-haul carriers, and last-mile ISPs.

The Internet of Things

Both inside and outside of the IT world, the Internet of Things is a popular topic. The data necessary to run and track the use of everyday objects needs to have a home somewhere. This network connectivity allows companies and consumers to both send and receive data. It wouldn’t be possible without the colocation providers.

Posted in Data Center Infrastructure Management, data center maintenance, DCIM | Comments Off

Inefficient Cooling in Small Data Centers Impacts the Entire Industry

Some of the biggest costs for data centers come from overhead energy consumption, and a great deal of that energy goes toward cooling the centers. In fact, some statistics show that cooling systems take up about half of a data center’s energy intake. Really large data center operators can afford to run off of super-efficient designs, but what can smaller centers do? Inefficient cooling is a big problem for many smaller data centers.
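The claim that cooling accounts for roughly half of a center's energy intake maps directly onto the industry's PUE (Power Usage Effectiveness) metric. The arithmetic below is a sketch with made-up numbers, not measurements from any real facility:

```python
# Illustrative PUE arithmetic: PUE = total facility power / IT power.
# When cooling draws nearly as much as the IT load itself, PUE
# approaches 2.0. All numbers here are assumptions for illustration.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: lower is better, 1.0 is the ideal."""
    return total_facility_kw / it_load_kw

it_kw = 100.0      # assumed IT equipment load
cooling_kw = 90.0  # assumed cooling overhead
other_kw = 10.0    # lighting, power distribution losses, etc.

total_kw = it_kw + cooling_kw + other_kw
print(f"PUE = {pue(total_kw, it_kw):.2f}")  # prints "PUE = 2.00"
```

Hyperscale operators routinely report PUE figures near 1.1; for a small facility where cooling rivals the IT load, the ratio sits closer to 2.0, which is exactly the inefficiency this section describes.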

Small Data Centers Exist in Every Community

Visit a university campus IT center or a local government IT facility. These smaller centers don’t get a lot of attention, but they are home to a large amount of the world’s IT equipment. This also means that the smaller facilities are responsible for a big chunk of the energy used up by the data center industry. It follows logically that inefficient cooling remains a problem for the entire IT industry.

The Main Problem

The problem then is that these smaller data centers are operating with inefficient cooling systems that impact the entire industry, but they don’t have the resources to significantly improve. The small-town university IT department doesn’t have room in the budget for major infrastructure upgrades. Further compounding the problem is the fact that some of these data center teams don’t even see the energy bills and remain unaware of their role in this problem. Without that awareness, there’s no motivation to make an effort to reduce energy consumption or improve cooling efficiency.

Hot Spots and Redundancy

The heart of the problem is that too many data centers are being overcooled. This happens for two reasons. The first is that hot spots must be treated, and the rest of the center is overcooled as a byproduct of that intense cooling – basically a result of improper air management systems. The second reason is redundancy. This preventative step is necessary in any data center, but with the same solution (improved air management), inefficient cooling can be reduced.

The Search for a Solution

A solution has already been determined: Simply install the proper controls and increase knowledge of the centers’ actual cooling needs. This will keep redundant units in standby mode, only kicking them on when they become necessary. Sadly, too many smaller data centers don’t have those systems or the resources to implement them. Reliability must be the top priority for data centers, but the development of efficient cooling practices must become more important, because without some improvement from these smaller data centers, the entire industry will continue to be plagued by this problem.

Posted in Data Center Construction, Data Center Design | Comments Off

How Flash Storage Has Changed the Face of Enterprise IT

Enterprise-class IT, or enterprise IT as it’s commonly known, refers to a combination of the hardware and software systems that have been designed to fulfill the needs of large organizations. As its name suggests, this type of IT system is used in situations that require a great deal of processing power. Those qualities that describe any type of IT system are expected to be present and fully functional but to a greater degree than found in smaller systems. For large and complex organizations, the IT requirements for performance, security, compatibility, reliability, availability, and scalability are a driving force behind productivity.


The Evolution of VDI

The virtual desktop infrastructure (VDI) that was once the norm for the biggest projects and organizations has evolved to keep up with current advancements in the technology. As networks and storage have been changed by the rapid evolution of digital business technology, VDI offerings include the use of all-flash storage. With the introduction of flash storage, the end users will enjoy much higher levels of performance.


How Does Flash Storage Fit?

To truly understand the benefits of flash storage, it’s necessary to understand what it is. Most people are probably familiar with flash drives. Basically, flash storage is any kind of storage system or data repository that runs on flash memory. Flash is a form of electrically erasable programmable read-only memory and is non-volatile. This is an advantage because it means that no power is required to keep the stored data intact. Flash technology erases data in large blocks at once and can write new data without erasing the entire device.


The Benefits of Flash Storage

Another advantage of flash storage is that its increased performance does not require a corresponding increase in cost. High levels of data reduction and lower storage costs translate into savings at the data center in physical space, power, and cooling. The result is a storage solution that is very cost efficient. You’ll find that the systems provide higher levels of data control and management. New layers of data abstraction provide deduplication, acceleration, and data encryption. Flash storage offers seamless integration with the cloud and with virtualization layers. Plus, this storage solution comes in a tiny package.


Solutions for Enterprise IT

If your large business has outgrown your old IT system and you’re ready for enterprise IT, give a thought to flash storage. This vital part of large, capable hardware and software systems and modern storage management provides organization IT departments with a new and evolved solution.


Posted in data center equipment, DCIM | Comments Off

Data Centers: Securing the Cloud

In an increasingly digital world, where just about all of our personal and business-related information is stored, relayed and transacted online, security concerns continue to grow and grow.  We hear about hack after hack and the need for data centers to increase their security.  As more and more locations move towards cloud computing, how can they increase not only the security of their infrastructure but their overall security?  There are growing concerns that the public cloud may actually be more secure than the IT facility cloud.  Infoworld explains the concern and what the main contributing factors to the problem are, “What public clouds bring to the table are better security mechanisms and paranoia as a default, given how juicy they are as targets. The cloud providers are much better at systemic security services, such as looking out for attacks using pattern matching technology and even AI systems. This combination means they have very secure systems. It should be no surprise that the hackers move on to easier pickings: enterprise data centers. The on-premises systems that IT manages is typically a mix of technologies from different eras. The aging infrastructure is often less secure — and less securable — than the modern technology used by cloud providers simply because the old, on-premises technology was designed for an earlier era of less-sophisticated threats. The mixture of different technologies in the typical on-premises data center also opens up more gaps for hackers to exploit.”  So, does it just boil down to a narrowed focus paired with hyper-awareness of threats?  Is it just that the cloud can simply focus on its unique set of challenges whereas the traditional facilities have a wide range of weaknesses that pose potential threats and therefore security is spread thin across the board?

Cloud computing has more than proved its value, so it is certainly not going anywhere.  Facilities are getting on board with it and more are making the switch.  The problem is that they still have a wide range of infrastructure that must also be kept safe and protected, and traditional security approaches for facilities are different in the digital space.  What once worked for security may be so outdated that it is no longer effective, and hackers, acutely aware of the gaps, will swiftly find and attack those weak spots like heat-seeking missiles.  A breach is often the result of an untested system, so facility managers must get more vigilant about education and testing.  Ignorance is far from bliss in this case.  The threat landscape is constantly changing, so IT facilities can better protect themselves through a combination of education, real-time monitoring, protection of servers, and a dynamic multi-level approach to security.  Information must be protected within storage devices inside a facility, throughout information transmission between facility servers and clients, and throughout use within an application.  And, as mentioned above, a healthy dose of paranoia never hurt anyone when it comes to protecting secure information.  Through an extensive effort of limiting exposure on every possible front and a commitment to staying ahead of the hackers as much as possible, data center security can begin to reach the level of protection that customers expect.

Posted in data center equipment, Data Center Infrastructure Management, Data Center Security, DCIM | Comments Off

Data Centers Utilizing Wind Power

Eco-friendliness and energy efficiency remain the focus of data centers across the nation and around the world.  Every step a facility takes towards improvement is a step towards reduced energy consumption and significant savings.  Many facilities specifically choose to place their locations where the climate allows for natural cooling using outside air, which lowers the use of air conditioning systems.  Now, many facilities are making a move toward using wind power.  These locations are using utility power derived from wind generation.  This form of renewable energy is eco-friendly because it is sustainable and dramatically reduces the need for other sources of utility power.  In some cases, data centers are becoming 100% wind powered!

There are some restrictions in place on how businesses can source their wind power, but this move is incredibly positive and will certainly become more and more popular over time.  Facebook has utilized wind power for one previous location and has opted to design its newest facility to be 100% wind powered because they recognize that it is an inexpensive and effective form of clean energy.  Fortune elaborates on Facebook’s latest undertaking, “Facebook announced on Tuesday that it’s building a large $1 billion data center in Ft. Worth, Texas. The facility, which is already under construction, will be Facebook’s fifth data center, and will be built on land purchased from a real estate company run by the eldest son of former Presidential candidate Ross Perot. The data center will use wind power from a large wind farm that is also under construction on 17,000 acres of land in Clay County about 90 miles from the data center. By agreeing to buy the power from the 200-megawatt wind farm, Facebook helped bring the clean power project onto the grid. A report issued earlier this month from the European Commission Joint Research Centre found that there were about 370 gigawatts of wind turbines installed by the end of 2014. One gigawatt is the equivalent to a large coal or natural gas plant… Facebook will presumably buy the wind power at a fixed low rate over several decades.
If grid energy prices rise, the deal could actually save Facebook money on its energy bill.”  Additionally, Data Center Knowledge notes that it is not just IT facilities that are making this move but customers as well, “Salesforce has contracted for 40 megawatts of wind power from a West Virginia wind farm, becoming the latest cloud giant to enter into a utility-scale renewable-energy purchase agreement… The purchase covers more capacity than all of the cloud-based business software giant’s servers consume in data centers that host them.”  This shift in the industry shows that businesses, customers, and even employees are demanding more renewable energy sources for data centers and, in addition to being eco-friendly, they are significantly impacting companies’ bottom lines.

Posted in Computer Room Design, Data Center Battery, Data Center Build, Data Center Construction, data center cooling, Data Center Design, Datacenter Design, Power Management, Uncategorized | Comments Off

Tips to Prolong the Life of a Data Center UPS Battery

Data centers rely on their UPS battery to keep their infrastructure up and running.  Implementing uninterruptible power supplies with a good, reliable, long-lasting UPS battery is an expensive endeavor but one that is more than worthwhile if the power supply does what it is supposed to and provides protection.  We have discussed UPS system TCO in the past, and it is important to evaluate TCO when determining what Uninterruptible Power Supply system to implement, but TCO is only accurate when you take life-extending measures to keep your Uninterruptible Power Supply system running as it should, for as long as possible.  A neglected backup power source, or one that is not properly implemented, may have a dramatically reduced life, which is frustrating and costly.  It is important that a data center manager make prolonging the life of its backup power supply battery a priority so that the investment is maximized and power is properly protected.  Below are some tips to prolong the life of your battery without jeopardizing the uptime of your facility, so that you can have peace of mind that your facility is covered and you are maximizing the investment you have made.

  • Purchase the Correct UPS Battery for Your Unique Data Center
    • This is often where mistake #1 occurs. It is important to consider total cost of ownership when choosing the right backup power system and power unit for your data facility, but total cost of ownership is not necessarily the full picture.  Some high-rate discharge batteries have a shorter lifespan, so if a longer lifespan is a high priority it may be best to opt for a different kind of UPS battery.  A flooded or wet cell option will cost more than a VRLA battery, but it will be more reliable and have a longer lifespan.  With a good picture of your data center’s specific needs, and a proper analysis of TCO, you can narrow in on the proper continuous power unit to provide reliability and long lifespan within your budget.  And, once you have chosen the correct one, make sure it is installed properly.  An incorrectly installed backup power battery will often have a shorter lifespan.
  • Maintenance, Maintenance, Maintenance
    • If there is one thing that might make the biggest difference in prolonging the life of a UPS battery, it is maintenance. Maintenance must be performed routinely according to a pre-determined schedule so that you are certain your backup power supply is not being neglected.  Batteries are very sensitive to temperature, so it is important to have a monitoring system in place that alerts you if the temperature fluctuates outside of a certain range (keep it as close to 75 – 77 degrees Fahrenheit as possible).  By maintaining the correct temperature you can significantly prolong its life.  While automated monitoring of certain factors is important, a routine visual inspection should be part of your maintenance schedule as well, because you can look for obvious damage such as loose intercell connections, damaged post seals, corrosion or fires.
  • Do Not Use Your UPS Battery Beyond Its Capacity
    • Sure, a battery may be low on life, but if it is still functioning and your UPS is still doing its job, why replace it, right?  In fact, it is critical that you do not push your backup power battery beyond its capacity, or you greatly risk having no backup in the event of a power failure.  You should never use it beyond 80% of its rated capacity.  Once it hits 80%, it will begin to deteriorate more rapidly, putting your data center at risk.  For this reason, it is imperative you not exceed an Uninterruptible Power Supply battery’s capacity.
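The 80%-of-rated-capacity guideline from the last tip can be sketched as a simple headroom check. The ratings and loads below are invented for illustration:

```python
# Sketch of the 80% rated-capacity guideline: verify that a planned
# load leaves adequate headroom on a UPS battery. Ratings and loads
# below are illustrative assumptions.

def within_capacity_guideline(load_kw: float, rated_kw: float,
                              max_fraction: float = 0.80) -> bool:
    """True if the load stays at or below the recommended fraction."""
    return load_kw <= rated_kw * max_fraction

rated_kw = 50.0  # assumed UPS battery rating

print(within_capacity_guideline(38.0, rated_kw))  # 38 <= 40 -> True
print(within_capacity_guideline(44.0, rated_kw))  # 44 >  40 -> False
```

Running a check like this before adding equipment keeps the battery inside its safe operating band and preserves backup runtime.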
Posted in Data Center Battery, data center equipment, Data Center Infrastructure Management, data center maintenance, DCIM, Uninterruptible Power Supply, UPS Maintenance | Comments Off

Data Center Cyber Attack Prevention & Protection

Everywhere we turn we seem to be hearing about another cyber attack.  Sensitive customer information, compromised.  Angry businesses.  Concerned customers.  It is a major problem and it is one that data centers must be supremely aware of and vigilant in protecting against, since facilities have access to and store so much sensitive information.  Data centers must protect networks, applications and end points from highly sophisticated, ever-evolving threats.

Protection techniques vary by each location depending on the size of the facility, the information it stores, and the specific needs of their clients.  But, in the end, all data facilities need to be actively preventing cyber attacks through a variety of means and approaches.  There are intrusion prevention systems that can be implemented by any data center that are scalable and designed to protect against the most current threats.  Data Center Dynamics explains why security must be uniquely designed for the data site, technology-driven, and innovative to truly protect data centers from potential threats, “Many Internet-edge security solutions, like next-generation firewalls, are being inappropriately positioned in the data center where the need is visibility and control over custom applications, not traditional web-based applications, and the systems that keep them operational. Security must be integrated into the data center fabric, in order to handle not only north-south (or inbound and outbound) traffic, but also east-west traffic flows between devices, or even between data centers. Security also needs to be able to dynamically handle high-volume bursts of traffic to accommodate how highly-specialized data center environments operate today. And to be practical, centralized security management is a necessity. Today’s data center environments are highly dynamic and security solutions must be as well. 
As they evolve from physical to virtual to next-generation SDN and ACI environments, data center administrators must be able to easily apply and maintain protections… They must also be intelligent, so that administrators can focus on providing services and building custom applications to take full advantage of the business benefits these new environments enable, without getting bogged down in administrative security tasks, or risking reduced levels of protection… Traditional data center security approaches offer limited threat awareness – especially with regards to custom data center applications and the SCADA systems that keep them running 24×7. They typically deliver limited visibility across the distributed data center environment and focus primarily on blocking at the perimeter. As a result, they fail to effectively defend against the emerging, unknown, threats that are targeting them. What’s needed is a threat-centric approach to holistically secure the data center, that includes protection before, during, and after an attack – one that understands, and can provide protection for, specialized data center traffic and the systems that keep them running. With capabilities like global intelligence, coupled with continuous visibility, analysis, and policy enforcement across the distributed data center environment, administrators can gain automation, with control, for the protection they need. Advanced attackers are infiltrating networks and moving laterally to reach the data center. Once there, the goal is to exfiltrate valuable data or cause disruption. Data center administrators need technologies that allow them to be as ‘centered’ on security as attackers are on the data center.”  Protection must be multi-level and provide contingency and backup for multiple stages of a potential attack.
While implementation of such protection may be time-consuming and costly, it is far better than having a massive cyber attack that compromises sensitive customer information or that forces your data center into prolonged downtime.  Through better protection and an ever-evolving approach based on the most current information about cyber threats, customer trust is protected and business can continue to not only function but thrive.

Posted in Uncategorized | Comments Off