
The Damage Caused by Downtime


Data center reliability has helped increase the operational effectiveness of thousands of companies around the globe. Improvements in server technology and design, in the facilities that house them, and in the training of the people who oversee them have allowed colocation centers to reasonably offer the holy grail of network reliability: the seldom-achieved 99.999% uptime.

Yet it’s often that exceptional reliability that leads organizations into trouble. 99.999% still isn’t 100%, and too many companies take for granted how difficult dealing with that remaining 0.001% can be. This is only compounded when corporations allow the security they feel in their data center’s reliability to convince them to divert resources normally dedicated to network maintenance to other areas. While this line of thinking may produce the desired results in the short term, it takes only one instance of peak-hour downtime to reveal just how flawed it truly is.
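To make that 0.001% concrete, here is a quick back-of-the-envelope sketch (the function and figures below are illustrative, not from the article) that translates an uptime percentage into the downtime it actually permits per year:

```python
# Translate an uptime percentage into the downtime it allows per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def annual_downtime_minutes(uptime_pct: float) -> float:
    """Minutes of downtime per year implied by a given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

print(round(annual_downtime_minutes(99.999), 2))  # five nines -> 5.26 minutes/year
print(round(annual_downtime_minutes(99.9), 1))    # three nines -> 525.6 minutes/year
```

Five nines sounds like perfection, but it still concedes roughly five minutes of outage a year, and a single peak-hour incident can consume that allowance many times over.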

Given the vast resources some organizations have in terms of staff and production capability, it’s remarkable how easily their operations can come to a grinding halt during network downtime. This is most often due to poor training of employees in handling manual processes and in what to do in the event of a server failure. While this typically isn’t an issue for organizations that have chosen a third-party colocation provider whose sole responsibility is server maintenance, it can spell disaster for those who operate an in-house data center, where limited resources often contribute to extended downtime.

The Difference 0.001% Can Make

This is an issue facing large and small businesses alike. To put a monetary value on exactly how much that cumulative 0.001% represents: downtime cost companies $26.5 billion in revenue in 2012, an increase of 38% over 2010. Among companies that reported server outages, that breaks down to roughly $138,000 lost per hour of downtime. And since those numbers reflect only the outages that were reported, one has to wonder whether the actual loss isn’t much higher. Experts estimate that were every data center in the world to go dark, $69 trillion would be lost every hour.
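Combining the article’s reported average of roughly $138,000 per hour with the five-nines downtime budget gives a rough sense of what even “acceptable” outages cost. A minimal sketch (the rate is the article’s aggregate average; the durations are illustrative assumptions):

```python
# Estimate revenue lost to an outage at the article's reported average rate.
HOURLY_COST_USD = 138_000  # average per-hour loss from the 2012 figures cited above

def outage_cost(hours: float, hourly_cost: float = HOURLY_COST_USD) -> float:
    """Estimated revenue lost for an outage of the given duration in hours."""
    return hours * hourly_cost

# Even the ~5.26 minutes/year that five-nines uptime concedes is not free:
print(round(outage_cost(5.26 / 60)))  # -> 12098 (about $12,000/year at the average rate)
# A single two-hour peak outage dwarfs that budget:
print(round(outage_cost(2)))          # -> 276000
```

The point is not the exact figures, which vary enormously by business, but that losses scale linearly with every hour a contingency plan fails to cover.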

That number points to an oddity in these figures. If data center reliability has actually increased, why the increase in lost revenue? One might expect the numbers to move the other way. The answer is that, despite the potential for network outages, demand for the services data colocation centers provide is growing at an astronomical rate. Data centers worldwide currently use as much energy annually as the entire country of Sweden, and that usage will only increase with time, as IP traffic is expected to hit 1.4 zettabytes by 2017.

Another contributing factor is the dramatic devaluation of data storage. In 1998, the average cost of 1 GB of hard drive space was $228; just nine years later, the same amount of space cost $0.88. As data storage and transmission needs continue to grow exponentially, enormous data warehouses are becoming more common. This in turn increases the need for colocation, as more companies must dedicate greater attention to server maintenance.

The Myth of 100% Network Reliability

In anticipation of this growing need for colocation services, many may assume the next logical step in data center evolution is to reach the 100% reliability plateau. Unfortunately, that goal is beyond reach (even five nines is only rarely sustained). The reason is that there are nearly as many causes of server failure as there are bits of data those servers carry. And chief among these is a problem for which there is no solution.

The most common cause of downtime is, by far, human error, accounting for almost 73% of all incidents. Whether from lack of operational oversight, poor server maintenance techniques, or simply inadequate training, people can’t seem to get out of their own way when it comes to guaranteeing the performance of their servers. Companies can and should invest resources in mitigating this risk, but eliminating it altogether remains an illusion.

Aside from human error, downtime can be caused by literally anything. Anyone doubting this need only do a quick internet search for “squirrels” and “downtime” to see just how often these furry little rodents can bring elements of the human business world to a standstill by chewing through cables and thus limiting a data center’s operational capacity.

Focus on Preparation, not Prevention

Given that downtime is an inevitability, an organization is much better served by directing its efforts toward dealing with downtime rather than trying to eliminate it. Most companies, however, aren’t doing all they can to prepare: 87% of companies admit that a data center crash resulting in extended downtime or data loss could be potentially catastrophic to their business, yet more than half of American companies admit they don’t have sufficient measures in place to deal with such an outage.

Given this information, organizations looking to either establish an on-site data center or rent space through a third-party colocation provider should consider two questions:

  • What degree of network reliability can the data center deliver? Taking into account what’s actually feasible, can it guarantee operational effectiveness to the highest of current standards?
  • What needs to be done to improve in-house performance in the inevitable event of downtime? Are sufficient efforts being placed on maintaining comparable operational capacity during an outage, as well as on running data recovery once the network is back up?

As corporations and organizations come to rely ever more on colocation services for optimal performance, understanding the limits of data center reliability and being prepared to deal with network outages is the key to avoiding huge revenue losses from downtime.
