Data Centers Move From Big Data to Smarter Data

Thirty-five zettabytes. That is the equivalent of 35 trillion gigabytes, and it is how much information the Computer Sciences Corporation estimates users will generate annually by the year 2020. The phrase “big data” simply refers to sets of information that have grown too large or too complex for older, standard tools to handle. Thanks to the industry’s exponential growth, there has been a push to find ways to manage and manipulate all this information, and classification is the key to making it work.


Analyzing Big Data


Broken down into its basic components, big data is just a collection of the simple pieces the everyday user knows well:


  • Spreadsheets
  • Pictures
  • E-mails
  • Everyday work documents


These files are created and shared, getting saved somewhere within the data storage environment in the process. This information is being generated at an unprecedented pace, causing unstructured growth. The boom, so to speak, has left many in the industry scratching their heads when trying to set policies around it or simply maintain it.


The Problem With Unstructured Growth


One of the reasons so many people in the information technology field are concerned about big data is that there are legal implications now that require a business to be able to store and retrieve certain information. Left unmanaged, these key pieces of data can become a compliance liability or land a business in legal trouble.


What’s more, improper storage of all this information is how security breaches occur. Hackers need only to find one small opening in order to compromise an entire data set, as exhibited by the countless issues major retail companies, for example, have experienced.


The Process of Storing Data


So how can we properly and securely store all this information? The first step is to classify it correctly. In a catch-22, we find that in order to classify data, we often need to have policies in place. However, it is difficult to create policies without first having classified the information.


That is why breaking big data down to the ground level is essential: it lets us figure out what kind of unstructured data is out there. Once we identify items at the file level, we can start to classify them, because we can determine where each file is located, who owns it and when it was last accessed.


Being Smart About Information


The classification process is exactly how we can take a step back and attack the storage problem. Gaining a deeper understanding of the files will enable us to improve the way our data works and is governed.


There are six basic classifications that a piece of data may fall under:


  1. Archive


Regulatory requirements automatically make certain pieces of information valuable to a business in the long term. Storage companies may be able to find these files by searching for keywords, identifying the owner or simply knowing the file type. Once located, the information can be placed into an archive to satisfy legal or other standards.


  2. Active


If something has been created in the last three years, it is considered active and is therefore most likely to be accessed again. It can be managed in place until it either ages out of the system or moves into another classification.


  3. Aged


Information often moves from being active to being aged. Items that have not been accessed for three years may represent as much as 40 percent of the data on a company’s network. Therefore, it is imperative for a business to take this information and move it either into an archive file, if it has value, or the trash can if it does not. Classification enables us to view who owns the document or search it by keyword to determine if it is something that should be saved.


  4. Redundant


You know the drill: You create a version of a document and share it with a co-worker, who makes a few changes and shares it with someone else, who also makes changes. You now have three copies of the same document floating around and taking up space. Through data profiling, we can attach a signature to a document that can help us determine if it is an exact copy of something else and can be deleted.


  5. Personal


Even companies with strict policies that restrict the use of machines for personal items will find that employees often store pictures or to-do lists on the system. Someone has a new baby and wants to share a picture with co-workers, and that image has now been saved somewhere in your data storage. While one photo may not be problematic, several photos from thousands of employees can be. Businesses can utilize data classification to identify personal information and ask employees to remove it from the network.


  6. Abandoned


This last group is likely the easiest to identify and manage, as abandoned information typically does not have value. Usually, this is data that former employees owned, and it has not been accessed in the three-year timeframe. It is still a good idea to ensure that the files do not contain important information that should be stored, however, just to cover any liabilities.
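Several of the classifications above lend themselves to automation. The sketch below is a minimal illustration, not Titan Power’s actual tooling: the three-year threshold and the bucket names come from the article, while SHA-256 as the content “signature” and last-access time as the age measure are assumptions for the example.

```python
import hashlib
import time
from pathlib import Path

# "Aged" threshold from the article: not accessed in three years.
THREE_YEARS = 3 * 365 * 24 * 3600

def sha256_of(path):
    """Content signature used to spot redundant (exact-copy) files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def classify(root, now=None):
    """Sort files under `root` into active / aged / redundant buckets."""
    now = now or time.time()
    seen = {}                     # signature -> first path carrying it
    buckets = {"active": [], "aged": [], "redundant": []}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        # Stat *before* hashing: reading the file may update its access time.
        age = now - path.stat().st_atime
        sig = sha256_of(path)
        if sig in seen:           # exact copy of a file we already saw
            buckets["redundant"].append(str(path))
            continue
        seen[sig] = str(path)
        buckets["aged" if age > THREE_YEARS else "active"].append(str(path))
    return buckets
```

One caveat on the design: on filesystems mounted with noatime, last-access times are unreliable, so modification time may be a better proxy for age in practice.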


Next Steps


Once the classification process is complete, businesses can easily manage the information by archiving it, deleting it or moving it to a less expensive data center. Policies are easily created once a business knows what kind of information it has, and these policies can be used as a legal defense for deleting a document.


At Titan Power, our goal is to help you run as efficiently as possible. Let’s combat the problems associated with big data by simply making our data smarter. Identify it, classify it and create a policy to give it structure.

Posted in computer room maintenance, Data Center Design

Consolidating Your Computer Power & Air Room While Increasing Its Capacity

Maintaining a computer room can be a comprehensive procedure requiring quite a bit of effort for those at the helm. Proper maintenance must entail a multi-level approach that takes many different components under consideration. These can include regular review of vital equipment, scheduled reviews of heating and cooling systems, and making the best use of the space available for storing essential items.


In many cases, consolidation can increase efficiency immensely, which is a huge concern for the IT industry. Because consolidation can be a rather involved process, IT managers must take a detailed approach to ensuring their data centers function at peak performance.


Whether designing or consolidating a computer room setup, Titan Power can provide the type of industry experience necessary to achieve the best results. The following tips further highlight the many benefits provided by computer and air room consolidation.


Have a Firm Understanding of Current Needs


Different companies may have differing needs when it comes to maintaining important computer equipment. To this end, data centers are categorized by tier, and each level entails that certain requirements are continually met. Understanding the differences between tiers can be helpful when designing or revamping a data center for more efficient output.


  • Tier 1 – This is the most basic level a data center can occupy. A tier 1 center must provide 99.671% availability, and surplus equipment or power sources are not required. Small companies usually fall within this tier.


  • Tier 2 – In a tier 2 center, key equipment should be able to be replaced or removed without causing any service interruptions. Availability must equal 99.741%.


  • Tier 3 – Tier 3 data centers are the most common of the four. Power supplies must always remain active and accessible, and certain components should have redundant devices in place. Availability must be at least 99.982%.


  • Tier 4 – At a tier 4 center, all equipment must have secondary options in case of failure. Availability at a tier 4 center must be 99.995%, with only 26 minutes of yearly downtime allowed. This is the preferred option for many multi-million dollar companies.
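The availability figures above translate directly into annual downtime budgets, which is a quick way to sanity-check them. A back-of-the-envelope calculation (the percentages are taken from the tier list above):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability_pct):
    """Annual downtime budget implied by an availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

# Tier percentages from the list above.
for tier, pct in [(1, 99.671), (2, 99.741), (3, 99.982), (4, 99.995)]:
    mins = allowed_downtime_minutes(pct)
    print(f"Tier {tier}: {mins:.0f} min/yr ({mins / 60:.1f} h)")
```

Running this confirms the article’s tier 4 figure: 99.995% availability leaves roughly 26 minutes of permissible downtime per year, while tier 1’s 99.671% allows nearly 29 hours.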


Eliminate Unnecessary Equipment


Unnecessary or unused equipment can actually cost a company quite a bit in terms of time and money. While it may seem like a good idea to keep extra equipment on hand to meet future needs, too many superfluous items can actually decrease efficiency in the workplace. That’s why consolidating equipment can make such a difference to a company’s daily work process.


It’s recommended that companies either discard or store items that are not used on a regular basis. Such equipment can take up valuable space within a computer room, which can then create difficulty in performing critical daily functions. Surplus equipment can also create higher expenditures by using power when it’s not absolutely necessary. Keeping costs low can be an important consideration for smaller companies, which may be working with less capital than their larger counterparts.


Submit Cooling Systems to Routine Review


Cooling systems are integral to a well-functioning computer room. Increased temperatures can greatly damage equipment, which can then result in the need for expensive replacements and repairs. Conversely, utilizing an expansive cooling system in a smaller area can incur exorbitant expenses on monthly utilities.


Reviewing one’s cooling systems is crucial in this respect. Initiating a routine review can serve multiple purposes within a computer room, from ensuring equipment remains functional for the duration to limiting the amount of monthly expenditures needed for cooling procedures. When performed on a regular basis, such review can help those in charge of maintenance better meet needs as they arise.


Make Use of Available Space


An efficient use of space can go a long way where computer rooms are concerned. This is particularly relevant to those working within smaller areas, where a bit of finesse may be required to ably make room for all mandatory equipment. In this case, proper placement can be a great way to ensure a more productive and efficient process overall.


A smooth-running computer room should be completely free of clutter. Needless items will only serve as impediments to daily work duties. Additionally, equipment should be clearly labeled, especially when used in sizable data centers with numerous items in use. These methods can prevent interruptions while also optimizing the daily processes inherent to computer room operation.


Regularly Audit Existing Processes


Over time, the needs of a company may change a great deal. What was once highly useful to a business may no longer be required, while items that weren’t needed at the inception may now prove beneficial. This is especially true when it comes to the maintenance of computer and air rooms, which can require a great deal of specialized service to ensure prime functionality.


For these reasons, it’s imperative that a company does a regular audit of their existing processes to determine whether they make sense going forward. Such audits should include the current equipment setup, hardware needs, and regular checks of cooling devices to ensure they are in proper working order. Devising a checklist can be helpful in this instance. Possible items can include:


  • Checking power supplies
  • Replacing malfunctioning components
  • Engaging in specialized testing of important equipment
  • Securing service personnel to undertake maintenance procedures
  • Ensuring that cooling/heating systems are operating as expected
  • Testing backup power devices (such as generators), as well as batteries


Call Now to Optimize Your Business

Titan Power can help your business optimize its current computer room setup. Our highly-skilled technicians have the experience necessary to help companies meet a variety of IT needs. For more information, please contact us today at 1-800-509-6170.


Posted in computer room maintenance, Data Center Design

Five Strange and Weird Data Center Outages

Data storage is a very real concern in this day and age. Much of the world relies on data centers for everything from completing crucial work tasks to accessing entertainment. Unfortunately, even the most dependable data storage plans can fall prey to bizarre occurrences. These incidents can ruin vital equipment and cut off web access.

The following are five very strange stories of data center outages. While some instances are simply unavoidable, there are many measures a company can take to ensure that data is stored in a reasonable manner. For those seeking reliable data storage, Titan Power can offer clients numerous safe and secure options.

Electrified Squirrel Wreaks Havoc on Yahoo

While squirrels may appear cute and cuddly from a distance, these critters are often to blame for a number of power outages due to their predilection for chewing on everything in sight. This was the case in 2010, when a lone squirrel effectively took out half of a Santa Clara data center used by Yahoo while chewing through a communications line.

Surprisingly, squirrels account for a great portion of data center outages due to their love of chewing. According to Level 3 Communications, squirrels accounted for a whopping 17% of their damaged cables in 2011 alone. Damage caused by squirrels is an experience shared by many, from homeowners to multi-national corporations.

Some wager that squirrels are so keen on chewing things like wires and hoses because the sturdy texture helps keep their rapidly growing teeth at bay. Others claim that squirrels are just ravenous little creatures, and that the materials used to create wires may be tasty to them. Despite the reason, many companies invest untold money in squirrel-proofing expensive equipment, with varying levels of success.

Hurricane Sandy Fells Numerous Data Centers

While New York City is well-known for its culture and nightlife, many people tend to forget that this bustling metropolis is surrounded by water. Accordingly, large-scale natural disasters can pose quite a threat to both property and populace.

This was never more evident than when Hurricane Sandy hit in 2012. Sandy brought the city’s many data centers to their figurative knees thanks to surging flood waters, which then caused unprecedented outages. Compounding the storm were the questionable data storage plans that many high-profile entities within the city utilized.

Leading data centers located in New York City (including DataGram, host to such media bigwigs as Gawker, Buzzfeed, and The Huffington Post), chose to store essential electrical components below ground level. Due to these poor decisions, a large portion of property was damaged beyond repair, and many popular websites were forced to go black for an extended period of time.

Ship Anchors Sever Undersea Cables

In many instances, undersea cables are responsible for ferrying data between continents. The idea is that undersea placement will keep vital components intact and internet service uninterrupted.

However, this isn’t always the case, as illustrated in 2008 when ships erroneously dropped anchor on three cables located in the Mediterranean Sea. The resulting service issues were witnessed within the Middle East, along with effects occurring in parts of Asia.

Many initially speculated that these disruptions were caused by intentional sabotage. However, satellite imagery later showed evidence of ships dropping anchor in the wrong area. While this scenario may seem unlikely, similar cuts happened to undersea cables earlier that same year, fueling some of the conjecture that these disruptions were indeed deliberate.

Leap Second Results in Multiple Disruptions

While most are familiar with the concept of a leap year, a leap second is a far more mysterious occurrence. In order to keep in line with changes in the earth’s rotation, the International Earth Rotation and Reference Systems Service (IERS) must occasionally add a second to Coordinated Universal Time (UTC). Adding these leap seconds ensures that time as it is commonly understood remains in sync with the movement of the earth.

In 2012, such a time adjustment occurred. This caused a range of IT problems commonly known as the Leap Second Bug. Numerous Australian airlines experienced flight delays as a result of the additional second, as well as issues checking in passengers. The Leap Second Bug also caused problems with a variety of highly-trafficked sites, including Reddit and Mozilla.
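Rather than inserting the extra second all at once, some large operators now “smear” it across a window of slightly lengthened clock ticks, so no application ever sees a 61-second minute. The sketch below is a toy linear smear for illustration only; the 24-hour window is an assumption, not any particular provider’s published scheme.

```python
def smeared_offset(seconds_into_window, window=86400.0):
    """Fraction of the leap second already applied this far into the
    smear window (linear smear over `window` seconds; 24 h assumed)."""
    return min(max(seconds_into_window / window, 0.0), 1.0)
```

Halfway through the window, a smeared clock has absorbed half of the leap second; by the end of the window, it has absorbed the whole second and is back in step with UTC.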

Acquisition Takes Web Sites Offline for 72 Hours

Business acquisitions are often fraught with tension for all involved. In the case of Alabanza, customers experienced a 72-hour blackout after an acquisition resulted in the company moving integral equipment to another location. This caused a whopping 175,000 sites to go offline, much to the consternation of Alabanza’s numerous customers.

In 2007, Alabanza was acquired by the web hosting company NaviSite. This required a relocation of servers containing vital account information that were housed in Alabanza’s dedicated data center in Baltimore. Initially, all accounts were slated to be moved by data transfer. However, issues with the file transfer tool led NaviSite to physically move the servers over 400 miles to their data facility in Andover, MA. The result was that many customers lost access to their sites, without any indication that such an interruption was likely to occur.

All sites were eventually brought back online, but the customer dissatisfaction was palpable. Instant access is essential to a successful hosting service, which both Alabanza and NaviSite had to learn the hard way.

Reliable Data Storage Is a Phone Call Away

Unexplained and unnecessary outages can make a client think twice about future data storage. Thankfully, Titan Power has the means to ensure that all data remains secure no matter what occurs. For more information on the many great services we provide, please contact us today at 1-800-509-6170.


Posted in Back-up Power Industry, Technology Industry

Small Business IT Needs Versus Small Office/Home Office IT Needs

Many of the components that make up an effective Information Technology system—whether for a tiny home-based office or a small business with dozens of users—are quite similar from one to the next. However, there are some critical differences. Here is an overview of considerations to be made when implementing or expanding your IT investments.

IT Needs for a Small/Home Office


Your Network


When it comes to your IT needs for your home/small office, one of the first things you’ll want to focus on is your computer network. In addition to allowing your devices to access the Internet, this system also allows you to communicate with other people or computers on the network. Even if you are a one-person company and don’t need to worry about sharing files or communicating with other employees, your network is a critical part of your small or home office. For additional flexibility, you may want to consider going wireless to avoid messy cable setups and to save the time and energy wasted every time a modification needs to be made.


Your Networking Standard


If you opt to pursue a wireless network, you’ll need to settle on a networking standard. In short, a network standard is a means of ensuring that all devices can properly work with one another on the network. Older components that do not utilize the same networking standard as the rest of your computer system may still be able to connect, but may wind up slowing down the entire system. When possible, it is best to upgrade all devices and adapters to the latest networking standard to ensure a smooth and speedy connection.


Choosing a Wireless Router


Your router is, in short, the device that connects your network to the Internet. It also enables all computers that are connected to it to share this connection. In addition, your router will often also act as a DHCP server for your network, which allows each connected device to maintain a separate, private IP address. Routers often have security already built in, like firewalls, though your router’s firewall should not be the only means of defense your network utilizes.

IT Needs for a Small Business


One of the biggest differences when considering IT needs for a small business, as opposed to a small/home office, is the number of people who need to be connected to one another from day to day. While a small/home office may only need to accommodate one or two people, a small business may need to provide dozens of employees with access to various communication systems. Here are some additional thoughts to keep in mind:


  1. Security. Typically, small businesses are likely to have more customers and more sales activity than a home office; therefore, it is imperative that additional security precautions are taken. A recent study by AVG Technologies found that 52% of small business owners do not currently utilize any kind of IT security policy, which is extremely dangerous. Typical consumer resources for increasing IT security include installing firewalls and anti-virus software, but a professional IT company can add multiple layers of additional security to help ensure you never have to deal with a breach.


Taking precautions now can help defend your website against hackers later. Against an improperly defended website, hackers may be able not only to illegally access and crash your site if they so choose, but also to steal the financial and personal details of your customers. The ramifications of such an incident can be devastating for nearly any small business.


  2. Employee education. It is critical that your employees understand the techniques that malicious entities use when trying to access data. These days, unfortunately, all it takes is for one person to inadvertently click a bad link, or install a questionable app on a company phone, to trigger a wave of chaos that can have devastating effects on your business. After installing your new IT hardware and software, you should hold a companywide meeting to discuss the changes that have been made, as well as provide information on how to stay safe.


  3. Interoffice communication and beyond. How do your employees need to communicate effectively with one another, both in the office and outside of it? Does the majority of your staff stay in the office all day, or do you have employees who are constantly working offsite? Are voice calls required, or do text messages often suffice? Depending on the needs of your staff, an IT specialist can make recommendations on how to make communication more stable, reliable, secure, and effective.


  4. Power. What kind of power needs do you have for your small business? What are your plans to cope in the event of a power outage? You may wish to consider having a company install and maintain an uninterruptible power supply, emergency generator, or battery backup system onsite to ensure that you and your data remain protected.

Bring in the Pros


There are lots of “DIY Information Technology” guides on the Internet. These guides can help you with many different aspects of installing or upgrading your IT equipment. The problem is that most individuals simply do not have the education or professional experience to know which approach will work best for their particular situation. With so many different kinds of software, hardware, methods of connection, security features, and connected devices to consider, it is no wonder that many home-based offices and small businesses alike leave implementing or upgrading their IT systems to the professionals. Bringing in the pros right from the start can ensure you won’t have to duplicate your steps, return or exchange incorrect or inappropriate gear, or suffer extended periods of downtime.

Posted in Back-up Power Industry, Mission Critical Industry

The Computer Room in Your Closet

The Computer Room in Your Closet: How Risks of Power Loss Can Impact Business Operations



As business owners plan a new business or make changes to a current one, they’ll do well to plan for blackouts and brownouts. Not only that, but it’s also a good idea to plan for both short-term and long-term outages in the event that they are without power for longer than expected. It’s not unheard of for blackouts to last several days or even several weeks. Here at Titan Power, we want you to be fully aware of just how much you and your business stand to lose during a power outage.


Lack of Incentive


One of the main reasons that businesses will want to come up with a thorough, well-thought-out plan for a blackout is that there is currently little incentive to put money into a steadily aging grid infrastructure. Not only that, but energy from unpredictable renewable sources doesn’t fit seamlessly into 50- or 60-year-old electricity grids.


More and more electricity grids are becoming interconnected, which means that should a blackout occur in one region, there’s a very strong chance it will cascade outward to other grids and lead to a supra-regional blackout. Other threats to electrical grids include terrorist attacks, solar flares, and cyber attacks. Business leaders might not think a short blackout can have a large impact on their business, but research has revealed that a blackout lasting half an hour can result in a financial loss of roughly $15,700 on average for medium and large industrial clients. Should the blackout last eight hours, it could very easily result in nearly $94,000 in losses. Blackouts can be devastating no matter how long or how short they last.
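Taken together, the half-hour and eight-hour figures above imply a rough cost model for an outage of any length. Treating the relationship as linear between those two data points is purely an illustrative assumption; real losses depend heavily on the business.

```python
def estimated_loss(hours):
    """Linear fit through the article's two data points:
    roughly $15,700 at 0.5 h and $94,000 at 8 h of blackout
    (averages for medium and large industrial clients)."""
    slope = (94_000 - 15_700) / (8 - 0.5)   # about $10,440 per additional hour
    intercept = 15_700 - slope * 0.5        # about $10,480 of fixed loss
    return intercept + slope * hours
```

Under this assumption there is a sizable fixed cost the moment the lights go out, plus roughly ten thousand dollars for every additional hour, which is one way to read the article’s warning that even short blackouts are devastating.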


Downtime Risk


There are certain misconceptions among IT staff and senior executives that can add to expensive downtime. For that exact reason, one of the very first things business owners should fully comprehend is the current state of their systems’ availability.


Business owners should ask themselves how many blackouts or downtime events they have experienced in the past two years, how long those events lasted on average, whether their own equipment caused the failure, and what steps they can take to increase availability.


In order to truly boost their availability, businesses have to invest in advanced and reputable infrastructure technologies. One thing to keep in mind with this advice is that keeping up with the most advanced infrastructure technology can be difficult. Likewise, maintaining an in-house staff with the proper facility management, operational, and security proficiency can be a challenge, since such expertise is often expensive and hard to come by.


The Frequency of Power Outages


Business owners may not be able to control when blackouts and brownouts happen, but they can control how they react to these power outages. One of the reasons it’s so vital for them to determine how big of an impact downtime events will have is so that they can determine whether or not they have the proper equipment and controls in place to help them “staunch the bleeding.”


The most prevalent blackouts and brownouts seem to take place in economies that are just starting to emerge and haven’t properly invested in their energy infrastructure. Downtime is also common in areas with natural hazards and inclement weather conditions.


In some countries, consumers are being asked to use less electricity in response to the immediate failings of national electrical grids. After several blackouts during a 2011 heat wave, the government of South Korea unveiled a contingency plan to take some of the pressure off the national grid, which called for a ten percent cut in demand from major manufacturers and limits on temperatures for neon signs and commercial buildings.


The Detrimental Effects of Power Outages


There are several ways that a power outage can negatively impact a business, including:


  • Phones – A majority of businesses are essentially unable to do anything without their phones. Not only are they unable to communicate with customers and clients, they may not be able to communicate with each other within the business.
  • Internet – While a valuable tool, the internet is also a huge vulnerability. Without it, businesses won’t be able to send or respond to email, process orders, or access accounting systems. If order processing is down, customers may well take their business and money elsewhere and leave frustrated with the company.
  • Essential Applications – Technology has changed the way we do business, and without the applications created by that technology, a business might not be able to operate at all.
  • Data Corruption or Loss – Should a blackout occur, it’s entirely possible for a business to lose valuable data, such as financial records, its customer database, and potential sales leads. Even after power is restored, a business can still be taking a hit for several days or even weeks afterward.
  • Legal Consequences – The ripples created by a blackout or brownout can be legal as well as financial. Regulatory fines can significantly increase how much a business ultimately pays for a power outage. Should word of these legal consequences reach the public, there’s a very good chance that customers will take their business elsewhere.


Business owners may be tempted to put off planning for a power outage, or to devote the money and resources required for proper preparation elsewhere. The best time to plan for an emergency is well before one happens. Doing so can save a business and its reputation, and keep customers happy even through hard times. For more information about how to plan and prepare your business for power outages, get in touch with the experts here at Titan Power.

Posted in Uninterruptible Power Supply

The Dos and Don’ts of Data Centers

Companies and organizations that use data centers to store vital data should make sure they are well aware of the best practices for protecting both their data and the data center itself from any digital mishaps, financial pitfalls and maintenance issues that might arise. You never know how much time and money you could be wasting by neglecting your data center and your data center strategy. Brush up on the latest dos and don’ts of data centers before you find yourself in the middle of a data center disaster.


DO Understand the Importance of Data Center Maintenance


Much like automobile maintenance, data center maintenance is one of those things that’s very tempting to put off until a more convenient time. Unfortunately, disaster seems to strike at the most inconvenient time. Think of how much time and money you can potentially lose should your data center ever incur downtime. Without keeping a close eye on them and performing regular maintenance, your servers, cooling systems and other vital components of your data center can suddenly fail, bringing your entire operation to a standstill. You should also keep a close eye on performance degradation, since it can lead to a system-wide crash if your system isn’t kept in peak condition.


DON’T Forget About Water


While you might think that water is the last thing that you need in a roomful of sensitive electronics, it’s actually vitally important for the proper operation of data centers. The reason for this is that data centers generate a lot of heat and need to be kept cool so that they can operate smoothly. Some of the largest data centers utilize evaporative cooling in order to keep the center from overheating.


That being said, you also want to do what you can to conserve water and money on cooling costs. Remember that every single watt of power that’s used in your data center is converted into heat that has to go somewhere. Only use as much energy as your data center needs so that you only use as much water as you need.


DO Take a Close and Realistic Look at Your Consolidation Plan


If you’ve got a data center consolidation project in the works, you’ll want to make sure that you have a realistic plan in place rather than a plan that’s a little too well thought out. What this means is that you want to have a sound project plan, funding perspective and staffing projections, but you don’t want to over-plan so much that you wind up crippling your progress.


An abundance of checkpoints, refining processes and controls can lead to several roadblocks. When consolidating your data center, plan from the lessons that you’ve learned in the past as well as the most effective and current data center practices. Sometimes it’s best to simply jump in and see what happens.


DON’T Neglect the Best Tools


There are certain tools that you can put to good use when it comes to making sure that your data center is operating at a hundred percent. For instance, infrared scans are a good way to pinpoint high temperatures and several other data center problems. You should also consider implementing computational fluid dynamics (CFD) so that you have a way to model the distribution of heat and airflow, both of which allow you to make adjustments to your IT infrastructure. One of the more basic tools that will serve you well is disk space: the right amount of disk space can go a long way toward helping you avoid some of the more serious data center issues.


DO Consider “Free” Cooling


Running a data center isn’t exactly the least expensive business operation, which means that you’ll want to save money and utilize free resources where you can, even if they aren’t quite as free as you might hope. Updated humidity and temperature guidelines make it possible for data centers to operate at higher temperatures, which means they won’t need as much cooling as they did before. It also means that data centers are one step closer to having free cooling in the future. With free cooling, centers won’t need as large a cooling infrastructure, so both their capital costs and their energy consumption will be lower. Keep in mind that free cooling won’t be completely free, but it will most certainly be less expensive than the more traditional cooling methods.


DON’T Over- or Under-Utilize Tools


Some clients love using as many data center tools as they can, but more often than not this does them more harm than good because of duplicate data, the lack of a formal support infrastructure, and a patchwork of interfaces. With several overlapping tools in use, no single set of underlying data can be trusted as authoritative, which increases the risk of data center error.


On the other hand, you don’t want to fall into the trap of thinking that data center tools are overrated and unnecessary. Even though the basic tools will most certainly get the job done, they aren’t optimized for a large scale or specially designed data center. In this case, it’s best that you find a well-balanced middle ground where you only have two or three tools that complement each other.


Taking good care of your data center all starts with setting a good foundation. Figure out what the basic demands of your particular data center are and work your way up and out from there from the root of your data center to the fruit of your data center.

Posted in Datacenter Design, Facility Maintenance | Comments Off

Ten Popular Myths About Power Protection Put to Rest

Just because a notion has been around for a long time doesn’t guarantee that the idea is true. Unfortunately, this is the case with many of the ideas that affect the way data centers measure, use, and conserve power. The most common myths regarding data center power protection are guilty of leading to increased spending and reduced efficiency. Here are ten of the most common myths and some clarifications:


  1. Utility Power Is a Clean, Dependable Source of Energy


The truth is that the voltage of delivered utility power can vary quite a bit. Under current U.S. standards, voltage is allowed to vary by roughly 5.7 to 8.3 percent. A data center manager may believe that the facility is receiving 208-volt power, but the actual figure can range between roughly 191 and 220 volts. Utility power is also prone to outages: a data center might suffer power outages totaling up to nine hours every year.
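As a quick sanity check of that range, assuming a nominal 208 V service and the tolerance percentages above (the exact band depends on the applicable standard):

```python
# Sketch: compute the delivered-voltage band for a nominal 208 V feed,
# using the roughly 8.3% under-voltage and 5.7% over-voltage tolerances
# described above. These percentages are taken from the article's figures.
NOMINAL_V = 208.0
UNDER_PCT = 0.083  # allowed sag below nominal
OVER_PCT = 0.057   # allowed swell above nominal

low = NOMINAL_V * (1 - UNDER_PCT)
high = NOMINAL_V * (1 + OVER_PCT)

print(f"Delivered voltage may range from {low:.0f} V to {high:.0f} V")
# → Delivered voltage may range from 191 V to 220 V
```

The result matches the 191-220 V spread quoted above, which is why nominally identical "208 V" feeds can behave quite differently in practice.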


A good solution to this problem is to develop a Combined Heat and Power facility to improve the quality of data center power and increase dependability.


  2. Brief Power Outages Aren’t a Problem


Experienced IT professionals understand the significant risks that power outages pose, even when the power is out for less than a second. That brief interruption can make IT equipment unavailable for a much longer period of time, maybe even stretching into days. During this time, the data center can lose system integrity and be at risk for significant financial losses.


Data centers generally have a backup plan in place from the very beginning. Redundancy in power supply and a dependable power source are two factors that can help prevent outages. An uninterruptible power system (UPS) may be another valid solution.


  3. The Data Center Is Protected Well Enough by a Single Backup Power Supply


While it may be true that a backup power generator is a critical element of protecting the center from power outages, this one source of power is not enough. In that brief moment when the power flickers off before switching to the generator, damages could occur and hours of productivity could be lost to the reboot cycles. Another scenario involves the UPS without a generator. Any extended loss of power could put the data center at risk once the batteries of the UPS run down. The clear solution to this problem is to maintain both the generator and the UPS.


UPS Myths


Several myths relate directly to the UPS of the data center:


  4. Every UPS has the same battery runtime and service life.
  5. The UPS load doesn’t affect efficiency.
  6. A working UPS doesn’t need servicing.


There are a few other similar, but misguided, ideas surrounding the use of the UPS. Too many data center managers develop a false sense of security once the UPS is in place. Instead of getting too comfortable, managers should include the UPS in a bigger backup plan, monitor and service the UPSs consistently, and keep accurate records, such as the capacity of each power bus and how much of that is being used.


  7. Data Centers Can Afford to Lose a Few Points When It Comes to Energy Efficiency


Sometimes data center managers believe that improving their energy efficiency by just a few points isn’t going to save the center much money. However, modern systems are both energy-efficient and very scalable, which means that replacing older technology with new technology, and gaining those few points, can save the center kilowatts of power and thousands of dollars while reducing the amount of emissions that the center gives off. Look at a 1-MW data center with an aging UPS. A 10-year-old UPS might waste at least 120 kW while also adding unnecessary heat to the facility. Replacing that system with new UPSs will recover the power, reduce the wear and tear on cooling systems, and cut both costs and emissions.
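To put a rough number on that 1-MW example, assume the 120 kW loss runs around the clock and a hypothetical utility rate of $0.10 per kWh (the rate is an illustration, not a figure from the article):

```python
# Back-of-the-envelope savings math for the aging-UPS example above.
# Assumption (not from the article): electricity at $0.10 per kWh.
wasted_kw = 120              # power lost in the aging UPS, per the example
hours_per_year = 24 * 365    # 8,760 hours
price_per_kwh = 0.10         # hypothetical utility rate

wasted_kwh = wasted_kw * hours_per_year      # energy wasted per year
annual_cost = wasted_kwh * price_per_kwh     # dollars wasted per year
print(f"{wasted_kwh:,} kWh wasted per year, roughly ${annual_cost:,.0f}")
```

At that assumed rate, the waste alone comes to well over a hundred thousand dollars a year, before counting the extra cooling load the lost power creates as heat.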


  8. Power Usage Effectiveness (PUE) Is an Ineffective Measure of Efficiency


The most common objection to using PUE for the data center is that measuring power usage can be difficult. In fact, there is more than one way to obtain these measurements. Most data centers resort to measuring just a few points (possibly the servers and the UPSs) without measuring the power consumed by cooling or lost in distribution. To drive PUE toward the ideal of 1.0, measurements must be taken at every major point of consumption.


  9. If Low Temperatures Are Good, Even Lower Temperatures Must Be Better


Data center managers closely monitor the temperature of the center. The equipment, hardware, and technology should be used within a specific temperature range; otherwise the hardware could fail altogether. However, when the temperatures fall too low, there are no further savings, no increased efficiency, and no prolonged life. By reducing the temperatures beyond the recommended point, data center managers are simply wasting money.


  10. The Use of Solid-State Disks (SSDs) Can Reduce the Consumption of Power

This myth came about because SSDs generally do consume less power than a traditional hard disk. However, an SSD’s maximum draw falls right around 10 watts, so under load it does not use a great deal less power than a 15,000 rpm drive, even if its typical draw is closer to 4 watts. It is also important to remember that the disks cost much more per gigabyte (roughly 10 times more), making it hard to justify their purchase on power savings alone.


Instead, IT managers should choose to maintain and update their other equipment to improve efficiency.


Investments and Rewards


Data centers, just as any other business, invest a great deal in the power needed to remain operational. Anything that puts that power at risk might undermine the well-being of the entire center. With efficient UPS hardware, redundancy in the backup plan, and careful record-keeping, data center managers should be able to avoid these popular myths and operate a successful center.



Posted in Back-up Power Industry, Mission Critical Industry | Comments Off

Top 10 Ways to Ensure Your IT Equipment Is Managed Effectively

Data centers all over the world have filled a very important role in this modern computer-linked society. From the modest server room to the corporate-level data center, companies of all sizes use these centers to store sensitive information and critical data. For those IT professionals who are responsible for running the IT equipment, staying on top of efficiency might take up a good portion of their time. In general, effective management of the equipment comes down to consistent monitoring and accurate measurements:


  1. Install a Reliable Network of Temperature Sensors


Any amount of IT equipment will generate some heat. A small laptop at home adds to the room’s temperature just as surely as a much larger IT installation does. As IT utilization has grown, so has the need for cooling systems. Now, in addition to handling the IT equipment, IT professionals must also be aware of the associated cooling systems. Temperatures between racks at a data center, or around different types of equipment, can vary widely. A network of temperature sensors is a good way to monitor and manage the temperature of equipment, ensuring that the equipment stays within ASHRAE (American Society of Heating, Refrigerating, and Air-Conditioning Engineers) recommendations and protecting the equipment from overheating.
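A minimal sketch of such a threshold check, assuming the commonly cited ASHRAE-recommended envelope of roughly 18-27 °C for data center equipment (the sensor names and readings below are hypothetical):

```python
# Sketch: flag sensor readings outside the ASHRAE-recommended envelope.
# The 18-27 °C band is the commonly cited recommendation; exact limits
# depend on the equipment class, so treat these as illustrative.
ASHRAE_LOW_C = 18.0
ASHRAE_HIGH_C = 27.0

def out_of_range(readings_c):
    """Return (sensor_id, temp) pairs that fall outside the envelope."""
    return [(sid, t) for sid, t in readings_c.items()
            if not (ASHRAE_LOW_C <= t <= ASHRAE_HIGH_C)]

# Hypothetical readings from three rack-mounted sensors
readings = {"rack-1-top": 24.5, "rack-2-top": 29.1, "rack-3-mid": 17.2}
print(out_of_range(readings))
# → [('rack-2-top', 29.1), ('rack-3-mid', 17.2)]
```

In practice a monitoring system would feed checks like this continuously and raise an alert rather than print, but the comparison at the core is this simple.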


  2. Monitor Power Usage at Various Points


There has been a great deal of discussion concerning the efficacy of power usage effectiveness (PUE) measurements. These measurements can be difficult and time-consuming to take, leading some professionals to believe that the measurement is not important. PUE is typically found by dividing the total facility energy by the IT equipment energy; the best possible PUE is 1.0. This measurement answers the question, “How well is the equipment using the amount of power being delivered to it?” Without taking a PUE measurement, it can be very hard to determine whether power and equipment are being used efficiently.
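The calculation itself is simple; here is a minimal sketch of the formula described above (the example figures are hypothetical):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy. Ideal is 1.0."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical example: the facility draws 1,800 kWh in a period during
# which the IT gear itself consumes 1,200 kWh.
print(round(pue(1800, 1200), 2))  # → 1.5
```

The hard part, as the article notes, is not the division but capturing *all* of the facility energy in the numerator, including cooling and distribution losses.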


  3. Automate the Collection of UPS and PDU Data


With an awareness of the difficulties present in obtaining the PUE measurement, use an automated method of collecting data from the UPS and the power distribution unit (PDU). Energy efficiency monitoring equipment, both hardware and software, can reduce the consumption of energy while at the same time improving the productivity of the IT equipment.


  4. Prevent Threats to Rack-based Equipment


In a data center with multiple racks or even a large IT department with at least one rack, there are some common risks. These include:


  • Malicious tampering
  • Accidental damage
  • Too much humidity
  • Inappropriate temperatures
  • The presence of smoke


Maintaining a constant visual watch over the racks may not be possible, but a monitoring system can alert IT professionals to open rack doors, rising temperatures or humidity, and the presence of other threats. Connect the monitoring unit to the central monitoring system for faster response times.


  5. Detect the Presence of Liquids and Fluid Leaks


Just about any owner of electronics knows how dangerous just a small amount of liquid can be. A few drops of water on the motherboard can render a computer completely useless. One water leak in a data center can lead to thousands of dollars lost due to downtime and equipment damage. Both of these situations can lead to data loss, frustrated customers, and decreased productivity. A system for monitoring leaks should be established around water lines, glycol piping, drain lines and condensate drains, unit drip pans, and the humidifier supply. This system can be set up with the central monitoring system or as a stand-alone system.


  6. Install Intelligent Control of Precision Cooling


Maintaining a precise temperature inside an IT department or data center is crucial. Without intelligent controls, maintaining the precise temperature and level of humidity would be difficult and impractical. An intelligent control system will help the units to work uniformly instead of in competition with each other.


  7. Include Intelligent Control of Critical Power


The same reasons for monitoring temperatures with intelligent controls may be used for monitoring critical power with the controls. The digital controls can be used to optimize the specific performance of uninterruptible power systems (UPSs). The intelligent controls can also manage the switch between traditional operation and the backup systems used when outages or overloads occur. Additionally, the controls might monitor the conditions at the site.


  8. Utilize Alerts and Alarms


Protecting the IT department from any downtime, even a fraction of a second, is crucial for equipment efficiency. Even the briefest power outage can disrupt systems, prompt a reboot, and lead to several days of reduced productivity. With a reliable system that provides notifications of any event that might lead to a power failure, IT professionals can respond to those sources of trouble before an outage occurs.


  9. Keep an Eye on Battery Levels


Data centers routinely use a battery monitoring system to protect against power outages. Without this monitoring, the center runs the risk of a UPS failure at the very moment backup power is most critical. Use a monitoring system to track the health of the batteries in all UPS units.


  10. Consider Remote Management or Monitoring of IT Equipment


By shifting the burden from internal IT personnel to a remote source, a devoted organization can bring combined resources and expertise to the task of measuring and monitoring the use of IT energy. As troublesome issues arise, the remote organization can handle the IT crises while other professionals can focus on responding to trouble on location. This suggestion may not be appropriate for all IT departments, but might be an option to consider if the IT staff is already struggling to complete necessary tasks.


Whatever the size of your IT department, keep in mind the common adage, “if you can’t measure it, you can’t manage it.” This holds true for the monitoring and measuring involved with IT equipment efficiency. With reputable and accurate monitoring systems, the IT professionals can better track and maintain the efficiency of their equipment.



Posted in computer room maintenance, data center maintenance | Comments Off

Becoming Energy-Efficient Through a Data Center Energy Audit

Green is in, and there is little doubt that companies that can show they are striving to be energy efficient are more productive. To stay competitive, older data centers need to find ways to become more energy efficient, and newer data centers need to implement energy-efficient protocols from the outset. The numbers show that proper energy management can save data centers millions of dollars each year. To find these energy management opportunities, data centers need an energy audit. There are different types of auditing processes for determining data center energy usage, and understanding the standards, pitfalls, and successes of the audit process makes clear why it matters.


Types of Energy Audits


There are three different ways to carry out an energy audit: a walk-through audit, a comprehensive audit, and an investment-grade audit.


A walk-through is by far the least expensive and time-consuming of the three. However, a simple walk-through audit can easily overlook opportunities for energy savings. Sample readings and measurements recorded generally include:


  • Power
  • Lighting levels
  • Thermal comfort


Normally, an auditor walks through the facility and reviews energy data for the past one or more years to determine opportunities for energy savings. A comprehensive audit may be recommended if the auditor feels that the facility could benefit from one.


A comprehensive energy audit is considered the standard. It usually involves gathering energy data of a facility for three or more years and conducting a detailed investigation into the facility’s energy use process. Since more data is collected and examined, more energy management opportunities may be discovered than a simple walk-through would provide. Since this audit takes more time and manpower to analyze data, it is more expensive than a walk-through.


Investment-grade audits are an even more exhaustive process that leaves no energy-saving stone unturned. Everything involving the facility’s energy usage and footprint for the past several years is assessed to find opportunities for energy savings. Detailed energy models are created to illustrate the effect energy-efficient actions could have for the facility. It is the most detailed and time-consuming of the three audit processes, and therefore the most expensive.


Assessing Data Center Energy Usage


The process for assessing data center energy usage starts with data collection. The amount of data collected depends on the type of audit being performed. It is common for data centers to undergo a comprehensive audit that examines three years or more of existing energy usage information. An energy efficiency model may start to form from the data collection alone. The auditor will also need to visit the site to document in person any energy-efficiency measures that may need to be taken. It is common for the auditor to assess areas such as:


  • IT equipment and software
  • HVAC
  • Electrical power chain
  • Air Management
  • On-site energy generation


The product of any type of energy audit is the energy audit report, which examines what the current cost of running the data center is and compares it to the projected cost after energy efficient actions are implemented. The report includes steps to take to lower energy costs. Some energy efficient implementations cost money at first, but ultimately serve to save data centers millions over the years.


Standards for Assessing Energy Usage


Ultimately, the goal of the energy audit process is to produce a number that evaluates the data center’s Power Usage Effectiveness (PUE). PUE is quantified by taking total facility energy and dividing it by the IT equipment energy. The resulting number can be anywhere from 1.0 upward. A score of 1.0 means that a data center is operating at 100% energy efficiency, with every watt drawn reaching the IT equipment. Typically, older data centers generate a PUE of 2.0 to 2.5, meaning only 40 to 50 percent of the power drawn actually reaches the IT equipment. However, there are many data centers (some in that older category) that have managed to get their PUEs to 1.6 or better. Newer data centers that focus on optimal IT power delivery and cooling are more likely to come close to a score of 1.0.
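Since efficiency in this sense is simply the inverse of PUE, the relationship is easy to tabulate:

```python
# Efficiency here means the fraction of total facility power that
# actually reaches the IT equipment, i.e. the inverse of PUE.
def efficiency_pct(pue: float) -> float:
    return 100.0 / pue

# Tabulate a few PUE values, including those mentioned in the article
for p in (2.5, 2.0, 1.6, 1.2, 1.0):
    print(f"PUE {p} -> {efficiency_pct(p):.1f}% of power reaches IT equipment")
```

This is why shaving a PUE from 2.5 down toward 1.6 is such a large win: it moves the facility from 40% to over 62% of its power doing useful IT work.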


Common Downfalls in Energy Efficiency During an Energy Audit Assessment


Although PUE assessment is considered standard across the board, it is not an infallible process. There are certain factors that can directly affect the score that a data center receives during an audit, including:


  • The age of the data center
  • Type of processing
  • Type of cooling
  • Temperature levels
  • Location of CRAC units
  • Ventilation


PUE can diverge from a data center’s actual total energy usage. When a data center experiences downtime on IT devices, total energy usage also goes down. However, unless there is a proportional drop in infrastructure and cooling energy, the PUE number actually goes up, indicating worse energy efficiency.
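A small worked example makes this counterintuitive effect concrete, assuming a fixed infrastructure overhead (the figures are hypothetical):

```python
# Cooling and distribution overhead assumed fixed at 600 kWh regardless
# of IT load -- a simplification for illustration.
OVERHEAD_KWH = 600

def pue_at_load(it_kwh: float) -> float:
    """PUE = (IT energy + fixed overhead) / IT energy."""
    return (it_kwh + OVERHEAD_KWH) / it_kwh

print(round(pue_at_load(1200), 2))  # normal IT load           → 1.5
print(round(pue_at_load(800), 2))   # IT downtime, worse PUE   → 1.75
```

Total energy fell from 1,800 to 1,400 kWh, yet PUE worsened from 1.5 to 1.75, because the fixed overhead is now spread over less useful IT work.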




It might sound complicated, but plenty of data centers have already undergone energy audits and seen success with energy efficiency protocols. Google and eBay are towering success stories, with reported PUE levels of 1.2 and 1.4 respectively. More recently, Digital Realty Trust (DRT) has released information that it has dropped its PUE down to 1.6, saving the company $6-10 million per year at two data centers.


The energy audit process for a data center uses careful analysis of energy usage to determine where energy costs can be efficiently reduced. The ultimate goal is to find ways for a data center to drop its PUE level to as close to 1.0 as possible. Significant benefits generally await those data centers that can find ways to be more energy efficient. For those companies looking to stay in the business effectively, a comprehensive energy audit may be in order.

Posted in Data Center Design, data center equipment | Comments Off

Is it a Data Center Emergency or Just A System Notification?

In the office of today, a lot of machines beep, ring, and create various alarm notifications. Nearly everything from the copy machine to the coffee machine has an alert to bring attention to things such as low toner, a finished cycle, or some other status. Unlike the pre-computer offices of the twentieth century, where the only bell usually heard was from the typewriters as the secretaries reached the end of the line, modern offices have a lot of minor alarms and, of course, a few big and very important alarms as well. For instance, just one device, the Uninterruptible Power Supply (UPS) that helps keep a steady supply of power to the company data centers, has both small, less-important notifications and more urgent alarms. Knowing the difference between a status update or alert and a system-critical alarm that requires user action is one key to keeping an office operating smoothly.


What is That Alarm About?


For workers who are not in the IT department, the computer systems of the office can be both familiar and intimidating. On the one hand, nearly everyone in the company uses the workstations, tablets, laptops, and other devices that are part of operations. On the other hand, the complicated programs, algorithms, functions, and electronics behind the familiar facade of the keyboards and screens that all workers use every day remain mostly a mystery to the average employee.


When the company’s data center UPS activates, for instance, in the event of a power loss for any reason, it will trigger a notification. Some people become alarmed and feel a need to call the experts when they see such a situation. In most cases, though, the truth is that the system worked just as it should, and there is not a problem to report. The UPS is there to keep power supplied in the event of power interruption in the system. If there is an unusual event where the main power will remain unavailable for a longer time, it may be necessary to take immediate action when the UPS system kicks in. Generally, though, full power will soon return, and the system can return to normal operation. In that case, there is no need to call the IT experts, as the system is working properly and doing its job.


Power to Electronics


The computers and devices that smooth the flow of much information and work today naturally require a steady and predictable supply of electricity. Electronic machines can be very sensitive to sudden power bursts or fluctuations. Despite significant advances in power grid technology, the fact remains that in the real world, power systems do not provide an exactly constant, perfect flow of electricity. In reality, the amount of electricity ebbs and flows and surges, depending on needs, demands, and production in the system. While the slight variations in power supply are not harmful to most electrical appliances around the home or office, the very sensitive and sophisticated computer equipment can be damaged or ruined by power supply fluctuations.


Realizing the risks to equipment and data that can come from power surges or sudden unexpected power outages, electronics manufacturers created solutions in the form of UPS devices as well as Power Distribution Units (PDUs). For computer-dependent companies, a PDU helps smooth the flow of electricity to various devices, ensuring that none gets hit with a spike of energy that can burn out electronics and that none lacks the power it needs to do its job. Many large industries rely on PDUs to moderate the flow of power to a variety of electronic devices. Naturally, data centers are among the primary users of PDUs, but they are also utilized by government operations, hospitals, and many other data-intensive industries.


The More You Know. . .


While not every employee in every company can be a computer expert with a thorough working knowledge of the ins and outs of the system, it helps to have employees who are well trained on the operations they will encounter in their work. This is especially true for parts of the system that are not used every day but that become vital during power problems, such as the UPS and, if the company uses one, the PDU. Thorough training in the basic workings of these devices, and in the various alerts and notifications they may provide during normal operation, is well worth the effort.


Workers who know what to watch for, and who are trained on the various levels of warnings and notifications that will inevitably come from the computer and its power backup systems, are generally more confident and less prone to call for expert help that is often unnecessary and costly. One of the key functions of a company’s IT operations is to minimize downtime and promote efficient, productive use of the company’s electronic devices. The better all employees understand the more mysterious inner workings of the machinery that tends to be out of sight and out of mind, the more likely they are to be productive and proactive in resolving small issues in-house in the simplest way possible. When workers know which notifications indicate a system working well and resolving internal problems on its own, and which alarms require quick action to maintain data stability and computer operations, they can perform their own jobs better, knowing the computers are there for them when they need them.


Posted in data center maintenance, Facility Maintenance | Comments Off