
How To Create a Successful Corporate IT Department

Corporate IT departments are complex and critical to the success of any business.  In today’s world, technology is constantly evolving, more and more work is being done online, and customers are demanding constant uptime.  A corporate IT department, run by a CIO, needs to stay on the cutting edge and meet those demands to remain effective and successful.  The successful CIO will undoubtedly be a trendsetter by nature, driven by technology and its constant changes.  Anything less leaves the CIO, and the entire IT department, reactive in its approach.

Because IT departments demand a lot of time and money, their expenses and budgets are often under great scrutiny.  Infrastructure changes frequently, which leads to constant expense, and in the corporate world that expense can be significant, particularly for companies with hundreds or thousands of employees.  One step toward a successful corporate IT department is to try to limit IT infrastructure expenses to around 50% of the budget, as opposed to the more common 70%.

Most companies today deal with a wide range of technology knowledge among their employees, but the great majority are relatively tech-savvy.  This means that when a corporate IT department does not perform as it should, employees notice.  That said, some individuals in the workforce are significantly less tech-savvy, so how does a corporate IT department manage such a wide range of knowledge and skill?  CIOs have the important task of choosing a product suite that suits the needs of their employee demographic while pushing technology and infrastructure forward to stay current.  Budget must be an important consideration as well, so that infrastructure costs do not grow wildly out of proportion to the rest of the spending plan.  As more IT departments and data centers move toward automation and cloud computing, infrastructure can be greatly reduced, which frees up money in the IT budget, something the CIO, the CFO and the rest of management can all be happy about.  To build a truly successful corporate IT department, the CIO must lead the ship and carefully direct the department’s path through careful evaluation of all products, well-executed management and training of corporate personnel, an eye on the bottom line, and, ultimately, a technology-driven, cutting-edge perspective focused on innovation.
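Put in concrete terms, tracking that infrastructure share is simple arithmetic.  The short Python sketch below uses purely hypothetical budget figures to show the kind of check a CIO might run against the 50% target; the function name and the numbers are illustrative, not taken from any real budget.

def infrastructure_share(infrastructure_spend, total_it_budget):
    """Return infrastructure spend as a fraction of the total IT budget."""
    if total_it_budget <= 0:
        raise ValueError("total IT budget must be positive")
    return infrastructure_spend / total_it_budget

# Hypothetical example: $3.5M of a $5M IT budget spent on infrastructure.
share = infrastructure_share(3_500_000, 5_000_000)
print(f"Infrastructure share: {share:.0%}")                      # 70% -- the common case
print("Over the 50% target" if share > 0.50 else "Within the 50% target")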


Data Center Automation

Data centers are supposed to be technology driven and on the cutting edge of technological advances.  But when technology changes in the blink of an eye, data center infrastructure and programs can very quickly become outdated.  A data center is the core of any IT department, and when things are outdated they no longer run smoothly or independently and become quite problematic.  To mitigate these issues, and hopefully avoid them altogether, many data centers are moving toward automation to support cloud computing.  When automation works as it should, processes run smoothly and, ideally, on their own, performing the desired tasks and keeping the data center functioning as it should.

When it comes to automation, testing, more testing and, yes, even more testing is critical.  Everything must be checked to ensure that it is running properly before full implementation.  When automation is done correctly, it allows a data center to be more scalable, so that load can be actively balanced, and more efficient, which improves response time.  Because efficiency is the name of the game, more and more data centers are finding ways to streamline infrastructure and ultimately reduce overhead and overall data center expenses.  Cloud computing, coupled with automation, provides an opportunity to do so, lowering expenses and remaining competitive without sacrificing quality of service.  By automating a data center, personnel are able to control all resources at both the software and hardware level.  Data Center Knowledge provides a great description of the different layers of automation available and what that means for data centers:

The automation and orchestration layers

  • Server layer. Server and hardware automation have come a long way. As mentioned earlier, there are systems now available which take almost all of the configuration pieces out of deploying a server. Administrators only need to deploy one server profile and allow new servers to pick up those settings. More data centers are trying to get into the cloud business. This means deploying high-density, fast-provisioned, servers and blades. With the on-demand nature of the cloud, being able to quickly deploy fully configured servers is a big plus for staying agile and very proactive.
  • Software layer. Entire applications can be automated and provisioned based on usage and resource utilization. Using the latest load-balancing tools, administrators are able to set thresholds for key applications running within the environment. If a load-balancer, a NetScaler for example, sees that a certain type of application is receiving too many connections, it can set off a process that will allow the administrator to provision another instance of the application or a new server which will host the app.
  • Virtual layer. The modern data center is now full of virtualization and virtual machines. In using solutions like Citrix’s Provisioning Server or Unidesk’s layering software technologies, administrators are able to take workload provisioning to a whole new level. Imagine being able to set a process that will kick-start the creation of a new virtual server when one starts to get over-utilized. Now, administrators can create truly automated virtual machine environments where each workload is monitored, managed and controlled.
  • Cloud layer. This is a new and still emerging field. Still, some very large organizations are already deploying technologies like CloudStack, OpenStack, and even OpenNebula. Furthermore, they’re tying these platforms in with big data management solutions like MapR and Hadoop. What’s happening now is true cloud-layer automation. Organizations can deploy distributed data centers and have the entire cloud layer managed by a cloud-control software platform. Engineers are able to monitor workloads, how data is being distributed, and the health of the cloud infrastructure. The great part about these technologies is that organizations can deploy a true private cloud, with as much control and redundancy as a public cloud instance.
  • Data center layer. Although entire data center automation technologies aren’t quite here yet, we are seeing more robotics appear within the data center environment. Robotic arms already control massive tape libraries for Google and robotics automation is a thoroughly discussed concept among other large data center providers. In a recent article, we discussed the concept of a “lights-out” data center in the future. Many experts agree that data center automation and robotics will eventually make their way into the data center of tomorrow. For now, automation at the physical data center layer is only a developing concept.
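The software layer described above, where a load balancer that sees too many connections triggers the provisioning of another application instance, boils down to a simple monitor-and-act loop.  The Python sketch below is a hedged illustration of that idea only; get_connection_count and provision_instance are hypothetical stand-ins for whatever monitoring and orchestration APIs a given environment actually exposes (a NetScaler integration, for example), and the thresholds are made up.

import random

THRESHOLDS = {"web-frontend": 500, "checkout-api": 200}   # max connections per app (hypothetical)

def get_connection_count(app):
    # Hypothetical stand-in: a real version would query the load balancer's monitoring API.
    return random.randint(0, 700)

def provision_instance(app):
    # Hypothetical stand-in: a real version would call the orchestration/provisioning layer.
    print(f"(simulated) provisioning another instance of {app}")

def watch_once():
    for app, limit in THRESHOLDS.items():
        current = get_connection_count(app)
        if current > limit:
            print(f"{app}: {current} connections exceeds {limit} -- scaling out")
            provision_instance(app)

watch_once()   # in practice this check would run on a schedule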

Introduction to VMware & Its Common Terms and Acronyms

Are you familiar with VMware?  If not, you should be.  VMware provides cloud computing and virtualization software that runs exclusively on x86 hardware.  By going virtual, IT-related expenses can be greatly reduced, and with VMware multiple virtual machines can run on a single physical machine.  VMware is a leader in the virtualization field, and with many IT departments choosing to use its technology, it is important to understand common VMware terms and acronyms so that everyone can be well informed and on the same page.  Recently, TechRepublic shared a list of 50 VMware terms and acronyms with which everyone in IT should familiarize themselves.

 

  1. VM: Virtual Machine. Okay, that’s easy enough!
  2. ESXi: The vSphere Hypervisor from VMware. For extra trivia points, know that Elastic Sky was the original proposed name of the hypervisor and is now the name of a band made up of VMware employees.
  3. vmkernel: Officially the “operating system” that runs ESXi and delivers storage and networking for VMs.
  4. VMFS: Virtual Machine File System for ESXi hosts, a clustered file system for running VMs.
  5. iSCSI: Ethernet-based shared storage protocol.
  6. SAS: Drive type for local disks (also SATA).
  7. FCoE: Fibre Channel over Ethernet, a networking and storage technology.
  8. HBA: Host Bus Adapter for Fibre Channel storage networks.
  9. IOPs: Input/Outputs per second, detailed measurement of a drive’s performance.
  10. VM Snapshot: A point-in-time representation of a VM.
  11. ALUA: Asymmetrical logical unit access, a storage array feature. Duncan Epping explains it well.
  12. NUMA: Non-uniform memory access, when multiple processors are involved their memory access is relative to their location.
  13. Virtual NUMA: Virtualizes NUMA with VMware hardware version 8 VMs.
  14. LUN: Logical unit number, identifies shared storage (Fibre Channel/iSCSI).
  15. pRDM: Physical mode raw device mapping, presents a LUN directly to a VM.
  16. vRDM: Virtual mode raw device mapping, encapsulates a path to a LUN specifically for one VM in a VMDK.
  17. SAN: Storage area network, a shared storage technique for block protocols (Fibre Channel/iSCSI).
  18. NAS: Network attached storage, a shared storage technique for file protocols (NFS).
  19. NFS: Network file system, a file-based storage protocol.
  20. DAS: Direct attached storage, disk devices attached directly inside a host.
  21. VAAI: vStorage APIs for Array Integration, the ability to offload I/O commands to the disk array.
  22. SSD: Solid state disk, a non-rotational drive that is faster than rotating drives.
  23. VSAN: Virtual SAN, a new VMware announcement for making DAS deliver SAN features in a virtualized manner.
  24. vSwitch: A virtual switch, places VMs on a physical network.
  25. vDS: vNetwork Distributed Switch, an enhanced version of the virtual switch.
  26. ISO: Image file, taken from the ISO 9660 file system for optical drives.
  27. vSphere Client: Administrative interface of vCenter Server.
  28. vSphere Web Client: Web-based administrative interface of vCenter Server.
  29. Host Profiles: Feature to deploy a pre-determined configuration to an ESXi host.
  30. Auto Deploy: Technique to automatically install ESXi to a host.
  31. VUM: vSphere Update Manager, a way to update hosts and VMs with latest patches, VMware Tools and product updates.
  32. vCLI: vSphere Command Line Interface, allows tasks to be run against hosts and vCenter Server.
  33. vSphere HA: High Availability, will restart a VM on another host if it fails.
  34. vCenter Server Heartbeat: Will keep the vCenter Server available in the event a host fails which is running vCenter.
  35. Virtual Appliance: A pre-packed VM with an application on it.
  36. vCenter Server: Server application that runs vSphere.
  37. vCSA: Virtual appliance edition of vCenter Server.
  38. vCloud Director: Application to pool vCenter environments and enable self-deployment of VMs.
  39. vCloud Automation Center: IT service delivery through policy and portals; get familiar with vCAC.
  40. VADP: vSphere APIs for Data Protection, a way to leverage the infrastructure for backups.
  41. MOB: Managed Object Reference, a technique vCenter uses to classify every item.
  42. DNS: Domain Name Service, a name resolution protocol. Not related to VMware, but it is imperative you set DNS up correctly to virtualize with vSphere.
  43. vSphere: Collection of VMs, ESXi hosts, and vCenter Server.
  44. SSH to ESXi host: The administrative interface you want to use for troubleshooting if you can’t use the vSphere Client or vSphere Web Client.
  45. vCenter Linked Mode: A way of pooling vCenter Servers, typically across geographies.
  46. vMotion: A VM migration technique.
  47. Storage vMotion: A VM storage migration technique from one datastore to another.
  48. vSphere DRS: Distributed Resource Scheduler, service that manages performance of VMs.
  49. vSphere SDRS: Storage DRS, manages free space and datastore latency for VMs in pools.
  50. Storage DRS Cluster: A collection of SDRS objects (volumes, VMs, configuration).
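To see how a few of these pieces fit together in practice (vCenter Server, ESXi hosts and the VMs they run), here is a minimal Python sketch that assumes the open-source pyVmomi SDK is installed (pip install pyvmomi).  The hostname and credentials are placeholders, and the unverified SSL context is a lab-only shortcut, so treat this as an illustration rather than production code.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab-only: skips certificate validation
si = SmartConnect(host="vcenter.example.com",     # placeholder vCenter Server address
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:                          # every VM this vCenter knows about
        print(vm.name, vm.runtime.powerState)
finally:
    Disconnect(si)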

Data Center Capacity Management

Data center capacity management is a hot topic among data center managers and staff.  Effectively managing data center capacity is a challenging undertaking, and one that is constantly evolving, which keeps everyone on their toes.  Without a capacity management plan that involves the entire data center team, as well as regular auditing of infrastructure, usage and capacity, a data center could quickly run into problems.

One of the best ways to keep data center capacity management under control is to be well informed.  How do a data center manager and team go about doing this?  Regular data center capacity audits.  As soon as you begin auditing you will establish a baseline, and as you evaluate your audits each month you will begin to see trends, spot ways to improve, and be able to anticipate whether, and when, you may exceed capacity.  Your baseline will include information about how your storage, server and network infrastructure are used.  With data center infrastructure management (DCIM) tools, everything is monitored and data is collected, allowing for improved operational support and more efficient capacity planning.  Capacity management is becoming more and more important as the emphasis is put on lowering data center expenses and improving efficiency.  The logical step for many data center managers is to add more to the existing load within a data center rather than expanding or changing locations.  But the more you add, the more your infrastructure must be capable of handling, and handling efficiently.  When you can assess trends and more accurately predict future capacity needs, you can budget more accurately.  This helps data center managers better manage expenses, investments, changes in infrastructure and more.  The approach improves operational transparency and allows data center managers to anticipate future needs rather than react to them as they occur.
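As a rough illustration of how audit data turns into a capacity forecast, the Python sketch below fits a simple average month-over-month growth rate to hypothetical utilization figures and projects when capacity would be exhausted.  The numbers and the linear-growth assumption are illustrative only; a real forecast would use your own audit history and whatever trend model fits it.

def months_until_full(monthly_utilization_pct, capacity_pct=100.0):
    """Estimate months until utilization crosses capacity, assuming the average
    month-over-month growth observed so far simply continues."""
    deltas = [later - earlier for earlier, later
              in zip(monthly_utilization_pct, monthly_utilization_pct[1:])]
    avg_growth = sum(deltas) / len(deltas)
    if avg_growth <= 0:
        return None                   # flat or shrinking usage -- no projected exhaustion
    headroom = capacity_pct - monthly_utilization_pct[-1]
    return headroom / avg_growth

# Hypothetical baseline: six monthly audits of rack power utilization (%).
audits = [62.0, 64.5, 66.0, 69.0, 71.5, 74.0]
print(f"Projected months until capacity is reached: {months_until_full(audits):.1f}")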

Of course, another part of capacity management is establishing plans.  It is great to use collected usage data to manage capacity and plan for future needs but, beyond that, data center managers must formulate plans for different potential scenarios and inform all data center employees of their roles in those plans.  You cannot plan for just one scenario because, unfortunately, in an ever-evolving world scenarios can change very quickly.  It is simply best practice to plan for multiple contingencies and involve the entire team, so that when it is time to execute a plan it can be done efficiently and effectively.  This kind of anticipation, planning and execution is exactly the capacity management that prevents downtime and allows a data center to run as it should at all times.


Should You Relocate Your Data Center?

A lot of data centers are doing it, but is it right for you?  Relocation: how do you really know when it is time?  For data centers, there are some tell-tale signs that it is time to relocate.  And if your business itself is relocating, a data center or computer room may simply have no choice but to move with it.  Relocating a data center is no small task and requires a good deal of planning and preparation so that it can be executed smoothly, downtime can be avoided and important data protected.

A data center manager needs to be on the lookout for signs that it may be time to relocate so that potential problems can be avoided.  One of the first signs is a wide array of power problems.  Whether your data center is no longer energy efficient (especially in spite of energy-efficiency improvements) or your power source is unstable or unreliable, it is probably time to relocate.  Energy usage, energy efficiency and the cost to power a data center are major topics of conversation for any facility, and the bottom line is that if you cannot achieve energy efficiency within the existing data center, it will dramatically impact the bottom line and cost more money than it should.  A data center designed for energy efficiency will not only be more eco-friendly but will save money, which is important to any business.  Additionally, an unreliable power supply is simply unacceptable in the data center business.  If the power supply goes down and your backup power fails for some reason, your data center could experience small amounts of downtime or, worst case, significant downtime.  Even small amounts of downtime come at significant expense, and prolonged downtime could mean lost revenue, frustrated customers and, ultimately, a business having to shut its doors.  If your data center has any form of consistent power problems, it is time to relocate.

The next sign is that there is no room to grow.  As data centers grow in capacity and add more server racks and other equipment, spare room can very quickly run out.  Once you have outgrown your space there is not much that can be done, which means it is probably time to relocate.  When choosing a new location, do not simply accommodate growth that has recently occurred; anticipate additional growth that could occur in the future so that you will not have to move again right away.

An additional reason to consider relocating, especially if you have any of the aforementioned concerns, is that your location is not right.  A wrong location can mean a few different things: it may be inconvenient or too remote, it may be in an area where real estate is too pricey, or it may not be near the target market you serve.  If any of these are true they are probably frustrating and expensive issues, and the best way to solve a location problem is to relocate.  Determine whether the benefits and potential savings outweigh the costs and difficulty involved in relocating a data center; a move may be just the right thing to mitigate problems for your data center.
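Weighing those costs and benefits can be as simple as a payback-period estimate.  The Python sketch below uses entirely hypothetical figures for the one-time cost of a move and the expected annual savings; it is meant to show the shape of the comparison, not real relocation economics.

def payback_years(one_time_move_cost, annual_savings):
    if annual_savings <= 0:
        return None                   # no ongoing savings -- relocation cannot pay for itself
    return one_time_move_cost / annual_savings

move_cost = 1_200_000                 # hypothetical: migration, fit-out, downtime buffer
annual_savings = 400_000              # hypothetical: lower power bills, cheaper real estate
years = payback_years(move_cost, annual_savings)
print(f"Estimated payback period: {years:.1f} years" if years else "No financial case for moving")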


Benefits of a Server-Room-In-A-Box for Healthcare Facilities

When you think of a healthcare facility and the IT challenges it may face, you probably first think of large hospitals.  And while they certainly face a number of IT challenges, many other healthcare facilities, such as clinics, outpatient practices, dental offices and more, face IT challenges as well.  Today, medical records, patient files, physician notes, prescription orders and more are all kept on secure servers and transmitted via secure electronic file-sharing methods.  Paper files are quickly becoming a thing of the past, and this means that all healthcare facilities, large or small, have to find efficient and effective ways to manage their IT.  Many small healthcare facilities are short on space as it is, so a dedicated server room is likely not practical or even possible.  Rather than having a dedicated server room, many healthcare facilities could simply opt for a server-room-in-a-box instead.

A server-room-in-a-box is small but still effectively houses IT equipment in one location that is designed for security and energy efficiency.  It can be housed in furniture that coordinates with the rest of the decor in a healthcare facility, so it will blend right in.  The housing can sit on caster wheels or be completely locked down, depending on each healthcare facility’s specific needs.  When you hear about a server-room-in-a-box you may worry about overheating and energy efficiency, but these units are self-cooled with fan options and ventilation, so you do not need any special equipment to keep them running properly.  Also, if you have been in a server room you know that it can be noisy at times.  A server-room-in-a-box is actually quiet because it muffles noise from the equipment; ideally you should aim to muffle as much noise as possible (approximately 90%) so that you can create an ideal healthcare setting and work without interruption.  The server-room-in-a-box you choose should have an integrated PDU to make power distribution easy.  If you are wondering whether the capacity will be enough, these units come in a variety of capacities ranging from 12U to 38U, and, if necessary, you can always add an additional unit should capacity needs increase.  Healthcare facilities face a number of challenges that a typical office setting might not; it is not just employees visiting the facility each day but patients and outside personnel as well.  With that in mind, as well as the reduced space requirements, a server-room-in-a-box is an ideal approach to managing IT needs in a healthcare facility.
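Sizing one of these enclosures is mostly a matter of adding up rack units.  The Python sketch below uses a hypothetical equipment list and a hypothetical set of enclosure sizes within the 12U to 38U range mentioned above to pick the smallest unit that fits with some room to grow; none of the figures come from a specific product line.

equipment_u = {"switch": 1, "firewall": 1, "server-1": 2, "server-2": 2,
               "storage": 4, "UPS": 3, "patch panel": 1}       # hypothetical gear and U heights
enclosure_sizes_u = [12, 18, 24, 38]                            # hypothetical sizes in the 12U-38U range
growth_allowance_u = 4                                          # spare capacity for future equipment

needed = sum(equipment_u.values()) + growth_allowance_u
fit = next((size for size in enclosure_sizes_u if size >= needed), None)
print(f"Need {needed}U; smallest suitable enclosure: {fit}U" if fit
      else f"Need {needed}U; consider splitting across two enclosures")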


Common Data Center Move Problems (And How to Avoid Them)

Today, many data centers are finding that, rather than trying to make an existing location work, it is better to consolidate data centers or move to a new location.  Moving to a new location provides the opportunity to have more room, better energy efficiency and an environment optimized for the data center’s needs.  Any time a data center faces a big move, it will undoubtedly cause some stress and concern.  After all, a move this large poses a lot of potential risks when it comes to data loss and downtime.  Whether a data center runs critical components of a retail location, retail websites, healthcare systems, warehouse systems or any important operations that would impact staff and customers if down for very long, downtime must be minimized to protect the business from significant lost revenue, frustration and major problems.

It can be difficult for a data center manager to properly assess the heating and cooling, power and capacity needs of an existing data center.  Because of this, anticipating possible problems or needs during a move, and being properly prepared for them, can prove incredibly challenging.  One problem many data centers encounter during a big move is miscommunication, or a lack of communication altogether.  Operational information silos are both unproductive and detrimental during a data center move.  If communication breaks down, important messages may not be relayed, things may not be done as they should, and downtime while everything is sorted out is likely to occur.  To avoid this, include everyone with a role in the data center move in every meeting.  Even if some people may not seem necessary for all meetings, have them present so that everyone is on the same page.  The next problem many data centers experience during a move is a lack of proper planning.  When planning a data center move it is important to anticipate all needs, as well as possible unforeseen ones.  Have contingency plans in place for the unexpected so that, should something unforeseen arise, you can quickly and efficiently mitigate the problem without loss of critical information or downtime.

Next, it is wise to test critical data center equipment and infrastructure to ensure that everything is current and in proper working order.  A breakdown in data migration due to faulty equipment or outdated products should never occur if proper preparation takes place.  Before the big move, test everything, replace anything that is outdated or not working, and then test again to ensure that data is able to migrate properly.  Lastly, when moving to a new data center, it is important to ensure that the new site can meet all heating and cooling needs and, most importantly, capacity needs with room to grow.  While energy efficiency and lower operating costs are important, the last thing you want when moving to a new data center is to have to move again in a few years because the data center’s needs were not properly anticipated.  Leave room to grow so that a data center move is not just a short-term solution but a long-term one.
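That last point, leaving room to grow, lends itself to a quick sanity check before committing to a new site.  The Python sketch below projects current load forward at an assumed growth rate and compares it against the new site’s capacity; the kilowatt figures, growth rate and planning horizon are all hypothetical.

def site_has_headroom(current_load_kw, annual_growth_rate, years, site_capacity_kw):
    """Project load forward with compound growth and check it fits the new site."""
    projected = current_load_kw * (1 + annual_growth_rate) ** years
    return projected <= site_capacity_kw, projected

current_kw, growth, horizon_years, new_site_kw = 350.0, 0.12, 5, 700.0   # hypothetical inputs
ok, projected_kw = site_has_headroom(current_kw, growth, horizon_years, new_site_kw)
print(f"Projected load in {horizon_years} years: {projected_kw:.0f} kW "
      f"({'fits within' if ok else 'exceeds'} the {new_site_kw:.0f} kW site)")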


How Do IT Upgrades Impact a Data Center’s Energy Usage & Costs?

To do an IT system upgrade or not to do an IT system upgrade, that is the question.  Except it is not really a question because, for data centers, IT upgrades are unavoidable.  Any time an IT system upgrade is required, whether small or large, it will inevitably require more energy.  This means that every time an IT upgrade occurs, operational costs will most likely increase.  With so much focus on being energy efficient, and with the rising cost of energy, many data centers are trying to find ways to avoid increasing operational costs through improved energy efficiency while still maintaining the capability to upgrade their IT systems as needed.  Data Center Knowledge points out statistics about just how much energy data centers are using and why every IT system upgrade, and the increased energy usage that comes with it, is a concern for many: “Although the dire predictions of the 2007 EPA report on data center energy consumption have not panned out, there are still ongoing energy consumption concerns and data centers are not off the hook. Earlier this year, a report by Greenpeace criticized big data centers for using dirty energy (coal, gas, nuclear) as opposed to clean energy (wind, solar). A more recent report from the Natural Resources Defense Council (NRDC) claims waste and inefficiency in U.S. data centers – that consumed a massive 91 bn kWh of electricity in 2013 – will increase to 140 bn kWh by 2020, the equivalent of 50 large (500 megawatt) power plants. However, the 2014 Uptime Institute annual data center survey reveals that data center power usage efficiency (PUE) metrics have plateaued at around 1.7 after several years of steady improvement.”

With IT system upgrades bringing increased energy consumption and increased operational expenses, an evaluation of overall energy usage must be completed.  You cannot endlessly upgrade and grow without evaluating and consolidating over time.  By reducing energy usage through a variety of other means you can lessen the impact of an IT system upgrade on the bottom line.  Whether by improving your heating and cooling systems or by consolidating servers, racks and more, you can cut energy usage in other areas of the data center to accommodate IT system upgrades.  The best way to maintain balance and, hopefully, keep improving is to routinely evaluate needs so that energy is not wasted on unused equipment or other factors.  A data center will run much more efficiently with a manager and team that understand how much of its capacity is truly being used, as well as where energy is being consumed in large amounts within the facility.  With a well-rounded understanding, and routine evaluations of data center energy usage, IT system upgrades can occur without drastically impacting a data center’s overall energy usage or operational costs.
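One concrete way to watch that balance is the power usage effectiveness (PUE) metric cited above: total facility energy divided by the energy consumed by IT equipment.  The Python sketch below uses hypothetical kWh figures to show how PUE before and after an upgrade can be compared; the numbers are illustrative, not measurements.

def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness: total facility energy over IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

before = pue(total_facility_kwh=1_700_000, it_equipment_kwh=1_000_000)   # 1.70, roughly the plateau cited above
after = pue(total_facility_kwh=1_850_000, it_equipment_kwh=1_150_000)    # hypothetical post-upgrade figures
print(f"PUE before upgrade: {before:.2f}, after upgrade: {after:.2f}")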


Why a Clean Data Center is Important

Just like a home or an office, a data center must be cleaned.  Cleaning is not just about making it visually appealing; it keeps a data center running properly and efficiently.  Data center cleaning may not sound like a high priority on the list of important things to consider; most people probably think first about rack density, preventing downtime, and heating and cooling issues.  But cleaning a data center properly and routinely is incredibly important.  When a site has visible dirt or chemical odors, it is a major concern because it could mean there is chemical buildup or problematic dust leading to corrosion of important components, and therefore damage to critical equipment.  This is not only frustrating but expensive on many levels.

The best way to ensure your data center stays clean is to limit and control what comes in from the outside.  Only a true “clean room” needs things like shoe covers, but that does not mean other rooms in a data center should be neglected just because they have not been designated clean rooms.  A data center should be cleaned only by professional data center cleaners.  The reason is that there are many critical things, both large and small, that could easily be damaged or compromised if an inexperienced cleaner is used.  A data center cleaner will clean everything, including floors, subfloors, cabinet racks, server racks, walls, ceilings and more.  In a data center, more than just dust accumulates; chemical residue tends to accumulate as well.  This can be seen or unseen, which is why it is important to use a professional.  Even microscopic dust and chemicals must be removed to ensure a clean data center.  Dust and particulates, when left to accumulate or simply unattended, can interfere with critical functions and lead to data loss and media errors.  Particulates can settle or migrate to different areas of the data center and cause major problems.  If they make their way into air ducts they will simply be recycled into the room and continue on their path of destruction.  Any time this happens, critical information could be lost and networks could experience downtime.  As we all know, downtime can be incredibly costly, even if it lasts only a matter of seconds, so regular cleaning is very important.  In addition to a yearly deep cleaning and regular maintenance cleaning, insist that those entering the data center wear clean clothing or cover-ups so that you can best protect your data center.


The Importance of Backing Up Your Backup

 


Data center downtime is one of the biggest fears of any data center manager.  Data centers can experience downtime for a number of reasons, some within their control and some completely outside of it.  For instance, poor weather ranging from a heavy storm to a natural disaster could knock out power in the area where the data center resides.  No matter what the source of downtime is, one thing is certain: data centers must have a backup plan in place.  But this is not news; every good data center has a backup.  The next question is: is that enough?  The answer is a resounding no.  To ensure that data center information and functionality are protected, it is critical that there is a backup for the backup.  Is this redundant?  Yes.  But ultimately it could save money, jobs and, in the end, the entire business.  Data Center Knowledge points out just how costly downtime is for a data center and, by extension, a business: “Unplanned data center outages are expensive, and the cost of downtime is rising, according to a new study. The average cost per minute of unplanned downtime is now $7,900, up a staggering 41 percent from $5,600 per minute in 2010, according to a survey from the Ponemon Institute, which was sponsored by Emerson Network Power. The two organizations first partnered in 2010 to calculate costs associated with downtime.”

When you evaluate a data center, along with its power and capacity needs, and endeavor to create true redundancy in backup power, you have to ensure that the redundancy is continuous.  If it is not, major problems can arise.  When creating the right system for your data center, you have to anticipate future growth and needs.  What worked for data centers yesterday no longer works today.  It is a continuously changing world and, for this reason, it is also important to audit your backup system regularly to ensure that true redundancy has been achieved and is still functioning as you move forward.  The key power components of a data center include the backup generator, uninterruptible power supply (UPS), internal power supplies, power distribution units (PDUs) and much more.  A fully redundant power system will have enough supply to completely support the data center and all of its components with no single point of failure.  Should a power outage occur, a data center with this redundant backup power system in place will remain completely functional.  Not every data center needs a backup for its backup this elaborate but, if the environment is running mission-critical workloads, this type of backup is absolutely necessary.  Backing up the backup may seem redundant, and it is, but that redundancy saves data centers from frustration, lost time and significant lost money.
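The Ponemon figures quoted above also make the financial case easy to illustrate.  The short Python sketch below applies the reported $7,900-per-minute average to a hypothetical outage length and confirms the roughly 41 percent rise from the 2010 figure; the outage duration is made up for the example.

COST_PER_MINUTE_2013 = 7_900          # reported average cost of unplanned downtime
COST_PER_MINUTE_2010 = 5_600          # reported 2010 figure

def outage_cost(minutes_down, cost_per_minute=COST_PER_MINUTE_2013):
    return minutes_down * cost_per_minute

print(f"Cost of a 90-minute outage: ${outage_cost(90):,.0f}")            # $711,000
increase = (COST_PER_MINUTE_2013 - COST_PER_MINUTE_2010) / COST_PER_MINUTE_2010
print(f"Increase since 2010: {increase:.0%}")                            # about 41%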
