
Effective Grounding Methods in Mission Critical Facilities

Whenever a critical facility is being designed, it’s imperative that a great deal of attention is given to the grounding system in order to reduce the chances of overvoltages, to isolate faults and to improve uptime. The experts at Data Center Journal recommend several effective grounding methods, including corner grounded delta, low-resistance grounded and solid grounded.

 

Corner Grounded Delta

 

While it was once one of the most common grounding methods for mission critical facilities, the ungrounded method comes with a number of problems and isn’t very effective or useful for a majority of contemporary electrical systems. From the failings of the ungrounded method came the corner grounded delta method (CGD). Advantages of this particular grounding method include:

 

  • Affordability
  • A ground reference for every current carrying conductor
  • The elimination of problems associated with the ungrounded method, such as transients and overvoltages

 

That being said, the CGD method isn’t without its pitfalls, mainly:

 

  • The requirement to mark grounded phase through the entire distribution system
  • The inability to use cheaper slash-rated circuit breakers
  • Ground faults on the grounded phase usually cannot be detected by ground fault sensing

 

Low-Resistance Grounding

 

The main reason a mission critical facility might use the low-resistance grounding method is to limit the damage that can result from the intense currents that flow during ground faults. This method can be utilized on low voltage systems as well as medium voltage systems. Even with the added resistance, the overall magnitude of the fault current is still quite large, which means the source has to be disconnected immediately. Because of this, the low-resistance grounding method might not be a viable option for mission critical applications.

 

Solid Grounded

 

The solid grounded method connects the system neutral directly to ground, which holds the neutral at ground potential. With a grounded neutral, the line-to-neutral voltages are locked to a fixed reference to ground. The advantage of this locked reference is that it keeps phase-to-ground overvoltages from taking place. One word of caution: if the neutral-to-ground bond points are not located carefully, ground fault sensing systems can be disrupted. Because of the substantial fault current that can flow during a ground fault on a solidly grounded system, the source has to be disconnected as soon as a ground fault is detected.

 

For a professional suggestion about which grounding method is the best fit for your mission critical facility, get in touch with us here at Titan Power Inc.

Posted in Data Center Construction, Mission Critical Industry | Comments Off

Vital Safety Tips for Computer Data Rooms


No matter how safe and harmless your computer data room might look, you must always keep safety front of mind so that your computer room can remain fully functional and continue to operate at peak efficiency. Data Center Journal has a few essential safety tips that you can put to good use to reduce safety hazards and avoid lost wages, injuries, medical expenses and reduced productivity.

 

The Importance of Proper Training

 

If you have employees working in your computer data room, it’s essential that you make sure they are properly trained. Require your employees to attend and complete certified safety classes, make sure new employees are either trained or accompanied by more experienced employees and keep an organized log in order to make sure that everyone is following the proper safety measures. You might also want to consider appointing an employee who is in charge of walking through the computer data room to ensure that insulated tools are being used and that the programs are running correctly.

 

Inside of the Computer Room

 

It’s recommended that you check your fire and flow alarms once a month and make sure they have working batteries. If there are any floor openings someone could fall into, you are required by OSHA to install either a toe board or a railing around the opening. All cabinets and server racks should be properly secured and grounded; otherwise they might fall over during the loading or unloading of heavy equipment. If you have any racks or cabinets on casters, make sure the casters can adequately handle the weight you’ll be adding to them. Since casters raise the enclosure’s center of gravity, the enclosure is more likely to tip over.

 

Pay Attention to the Outside of the Computer Room

 

Make sure the exterior of the computer room is just as safe as the interior of the room. Specifically, all of your battery rooms should have hydrogen gas detectors as well as fans so that there is proper air exchange. All of your battery rooms should also have deluge shower and eye wash stations for contamination incidents. It’s important that the shower stations have a water flow alarm that can be observed in a master control room or guard station. There should also be a fully stocked first aid kit outside of the computer room.

 

If you’re looking for ways to make your computer data room operate more efficiently, consider brushing up on your safety tips. Reach out to Titan Power Inc. for more tips and advice.

Posted in computer room maintenance | Comments Off

These 10 Tips Can Help Boost Power System Availability in Any Data Center

Power systems are a crucial component of every data center. Not only do these systems supply energy to important devices, they can also prevent catastrophe from occurring in the event of an outage. Implementing the following guidelines can help companies avoid the perils of insufficient power systems, which can have calamitous effects on future success.

  1. Consider Utilizing an Uninterruptible Power Supply

Unlike a backup generator, which often entails a delay between when an outage occurs and when standby power is available, an uninterruptible power supply (UPS) provides immediate accessibility when the main power supply fails.

In many cases, UPSs are considered a short-term solution used to provide technicians a window of opportunity to properly shut down equipment, while also affording time to save significant data. UPSs can also perform other essential functions within a data center:

  • Ensure Operational Continuity – Even the smallest electrical anomaly can have a dire impact on data storage, as well as equipment. A UPS enables operational continuity by offering a constant source of backup power.

 

  • Mitigate Impact of Power Surges – A power surge can greatly damage computer equipment, resulting in exorbitant costs for repairs and replacements. In the event of a power surge, a UPS will revert to battery power to avoid damaging expensive devices.

 

  2. Perform Regular Audits of Power Components

 

Regularly scheduled auditing of power systems is highly important. Failure to do so can lead to many unexpected occurrences if the main power supply becomes unavailable for an extended period of time.

 

That’s why IT managers must be fully aware of the limitations of their current power systems. This includes keeping abreast of product specifications, as well as submitting to routine maintenance, which can help expose issues before they become a larger concern.

 

  3. Implement a Dependable Cooling Device

 

All data centers should implement efficient cooling solutions to maintain equipment. Generally, IT managers will have a few configurations to choose from, each of which offers distinct benefits depending on the circumstances.

 

When dealing with extreme heat loads in their computer rooms, a room and rack cooling solution may prove ideal. These systems can handle high-intensity loads of up to 200 watts per sq. foot or more. In situations where equipment is frequently moved around a room, computer room air conditioning can be a good option. This can keep temperature and humidity fluctuations to a minimum, which is important for protecting vital computer components.

 

  4. Don’t Let Administrative Hurdles Impede Progress

 

Conflicting administrative departments, each with their own opinions on the way things should be, can sometimes impede necessary modification of power supplies. Implementing procedures for such decision-making can reduce the time spent debating the best course of action; such debates usually result in interminable delays and lost business.

 

  5. Choose the Right Power Distribution Unit for Your Needs

 

Power distribution units (PDUs) work downstream of a UPS to regulate how power is delivered to individual devices. While PDUs can vastly increase power availability within an IT environment, it’s beneficial to be aware of some important distinctions.

 

Rack-based PDUs convert incoming power to the level a piece of equipment requires, which prevents circuit overloads from occurring. Floor-mounted PDUs receive power from a main source, then distribute this power to smaller devices based on a prescribed process. While rack-based units can be moved freely around a data center, floor-mounted units are usually permanently placed.

 

  6. Document Power Equipment Changes

 

Tracking progress can be a great way for a data center to implement new solutions by focusing on what works and what doesn’t. This is particularly relevant as it pertains to the purchase of new equipment, which may not always perform as well as initially expected.

 

Documenting such changes can offer a concrete assessment of successes and failures relating to power systems. Such documentation can be highly useful in the long term, especially when illustrating to management the need for new or upgraded devices.

 

  7. Employ Contingency Plans

 

A suitable contingency plan is fundamental to make certain important processes stay on track. Considerations can range from retaining enough fuel for backup generators to ensuring cooling devices remain operational. Anticipating future needs will prove quite useful should disaster strike.

 

 

  8. Replace Outdated Systems

 

Given the rapid pace of technological changes, it’s important for equipment to be updated regularly. This will ensure optimal performance from your power supply, which is key for the proper maintenance of a data center or computer room.

For instance, UPSs only recently began offering users both high availability and efficient power consumption in the same unit. As a result, companies that purchased previous models may greatly benefit from upgrading to this improved technology. Not only can such upgrades enable better performance from equipment, they can also cut costs drastically by reducing unnecessary energy consumption.

  9. Prioritize Value Over Equipment Cost

 

Disputes over cutting costs on equipment are common within the IT industry, as purchasing the necessary devices can be quite expensive. While some may prefer a fiscally conservative approach when it comes to power systems, this can actually harm a business greatly. For this reason, those at the helm of an IT department must take the long view regarding overall costs in order to ensure best practices.

 

  10. Apply Specialized Testing Procedures

Advanced data center testing techniques can shed light on existing vulnerabilities, while also uncovering emerging issues that can become detrimental if left unchecked. These specialized testing methods can also help staff determine whether power supplies are functioning at peak performance.

Such testing can include a number of sophisticated procedures, from facility-rollover testing, which establishes how well emergency power will perform, to infrared thermography, which can identify hot zones within equipment components.

We Can Help You Boost Power and Increase Efficiency

Thanks to our years of experience in the IT industry, Titan Power can help optimize your current power system to allow for more efficient processes overall. For more information on all that we can do for your business, please call today at 1-800-509-6170.

Posted in Uninterruptible Power Supply | Comments Off

Reactive vs. Preventative Maintenance: Why Data Facilities Need a Cultural Shift

Today it is not uncommon for data centers, computer rooms, and control facilities to employ an entirely reactive maintenance regimen. At Titan Power, we know that fixing something that isn’t broken can be a difficult sell for companies and facilities managers in any industry. Still, for data centers and related facilities, simply waiting for problems to unfold can be a damaging strategy.

 

Preventative maintenance may seem like an unnecessary luxury, but in reality, it represents the best possible use of assets. Reactive maintenance introduces unnecessary costs and risks that no business needs to face. Over time, an insistence on reactive maintenance can cost a facility resources and opportunities.

 

Why Focus on Prevention?

 

In a comparison of preventative and reactive maintenance, preventative maintenance holds all of the advantages. When you employ this type of maintenance effectively, you will notice tangible gains in your business and operations, including:

 

  1. Reduced maintenance costs. Preventative maintenance is almost always more cost-effective than reactive maintenance, which often involves fixing multiple problems that arose from one initial problem. When you factor in the cost of hiring emergency help or placing rush orders, the potential for cutting expenses through preventative maintenance is even greater.

 

  2. Lower overall operating costs. Preventative maintenance can keep your equipment running at peak efficiency, reducing energy usage and associated expenses. Proper maintenance will also extend the lifespan of many of your systems, from generators to blowers. This allows you to save money upfront and long-term.

 

  3. A safer work environment. Equipment malfunctions or failures can create safety hazards that threaten even the most well-trained staff. Although some incidents simply cannot be foreseen, preventative maintenance can address safety risks associated with wear and tear, improper device use, and inadequate safety settings. Preventing work-related accidents, in turn, can improve productivity and reduce your insurance costs.

 

  4. A better reputation. Equipment problems that result in downtime or data loss can be costly for any computer room, data center, or similar facility. Winning back client confidence or finding new business after these issues occur can be difficult. Preventative maintenance makes it easier for your business to deliver on its promises, keeping your current client base happy.

 

Making the shift from reactive to preventative maintenance can be difficult, especially for businesses that are operating on tight budgets or struggling to keep up with current maintenance needs. However, allocating more resources for preventative maintenance, planning ahead for scheduled downtime or repairs, and promoting a culture of prevention can go a long way toward making your business more profitable and successful.

 

Simple but Effective Starting Points

 

What areas should you focus on to reduce your need for reactive repairs? It is often easiest to enroll in maintenance programs from established companies or contractors. These programs usually involve regular inspections of key facility systems and components, such as uninterruptible power sources, generators, automatic transfer switches, and HVAC or CRAC systems. During these inspections, experts can identify necessary maintenance based on observed performance issues and manufacturer-recommended maintenance schedules.

 

Batteries are one of the system components that are worth focusing on during preventative maintenance. Many facility managers are surprised to learn that batteries are a top contributor to system downtime and power loss. Without high-quality, operational batteries, your entire uninterruptible power source or generator can fail. Alarmingly, just one old or bad battery can wreak havoc on a system. Besides scheduling inspections, you may want to partner with specialists who can offer advice on choosing the most reliable replacement batteries.

 

The Power of Preventative Tests

 

After your batteries and other system components are in top working order, system testing is a smart next step. Performance testing ensures that the whole system functions as you would expect it to, given the condition of the individual components. Some tests worth considering are:

 

  • Load bank testing. This ensures that backup generators can produce and maintain their stated power capacity for a minimum of two hours. This testing cleans the generator out and combats functional issues associated with generator underuse. Most facilities should have this testing done annually.

 

  • Facility rollover testing. This is a broad test to verify that all backup equipment functions as needed following a power failure. This assessment measures how quickly and effectively emergency systems take over supplying power. Annual testing is the bare minimum; for some facilities, more frequent testing is beneficial.

 

  • UPS acceptance testing. This checks whether your uninterruptible power supply meets manufacturer performance specifications. The tests may involve electrical, mechanical, and load assessments. This testing should be performed within a few days of equipment installation to establish a baseline, and it can be done subsequently as part of a regular maintenance package.

 

  • Infrared thermography. This measures infrared energy to detect electrical components with abnormal temperatures and a potentially high risk of failure. Infrared thermography can be an effective way to identify various issues, including excessive or uneven loading, loose connections, or short circuits. We offer this service as needed but recommend it as part of normal maintenance.

 

Although regular inspections and maintenance are often enough to prevent equipment or system issues, these tests can help reveal any overlooked work that may be critical now or in the near future.
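To make the cadence of these checks easier to track, here is a minimal Python sketch of a schedule helper. It simply encodes the annual frequencies discussed above; the data structure, function name, and the example log dates are illustrative assumptions rather than part of any Titan Power program.

```python
from datetime import date, timedelta

# Illustrative intervals, in days, reflecting the annual cadence discussed above.
TEST_INTERVALS = {
    "load bank testing": 365,
    "facility rollover testing": 365,
    "UPS acceptance testing": 365,   # after the initial post-installation baseline
    "infrared thermography": 365,
}

def tests_due(last_performed, today=None):
    """Return the tests whose interval has elapsed since they were last performed."""
    today = today or date.today()
    return [
        test
        for test, last in last_performed.items()
        if today - last >= timedelta(days=TEST_INTERVALS[test])
    ]

# Hypothetical maintenance log: dates each test was last performed.
log = {
    "load bank testing": date(2014, 3, 1),
    "facility rollover testing": date(2015, 1, 15),
}
print(tests_due(log, today=date(2015, 4, 1)))   # -> ['load bank testing']
```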

 

Making the Switch to Prevention

 

Performing the right maintenance and repairs at the right time can help you save money, time, and stress. Responding to problems as they arise, meanwhile, can leave your business vulnerable to financial loss, poor solutions, and disruption. At Titan Power, we have years of experience identifying and heading off potential problems, and we are ready to help you tap into the many benefits of preventative maintenance. To learn more about our maintenance programs or schedule an inspection, call us today at 800-509-6170.

 

 

Posted in Data Center Design, data center maintenance | Comments Off

Difference between Real, Reactive and Apparent Power

Do you know the difference between VA and watts? It is important to understand the different types of power in order to achieve efficiencies with all of your electronic and computer room/data center equipment.

 

One of the easier ways to describe power is to break it down into types: real power, reactive power, and apparent power. Real power is the portion of power flow that results in the consumption of energy, and it is measured in watts. Reactive power is measured in volt-amps reactive, or VAR. Apparent power, measured in volt-amps (VA), combines the two, and the ratio of real power to apparent power is the power factor quoted in a UPS spec. This power factor is very important to the efficiency of the UPS, as it affects power costs, power losses, and overall effectiveness of the system.
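As a rough illustration of how these quantities relate, the Python sketch below applies the standard single-phase relationships: apparent power is voltage times current (VA), real power is apparent power times the power factor (W), and reactive power is what remains (VAR). The example voltage, current, and power factor values are hypothetical.

```python
import math

def power_breakdown(voltage_v, current_a, power_factor):
    """Return (apparent VA, real W, reactive VAR) for a single-phase load,
    using S = V * I, P = S * PF, and Q = sqrt(S^2 - P^2)."""
    apparent_va = voltage_v * current_a
    real_w = apparent_va * power_factor
    reactive_var = math.sqrt(max(apparent_va ** 2 - real_w ** 2, 0.0))
    return apparent_va, real_w, reactive_var

# Hypothetical example: a 208 V feed drawing 30 A at a 0.9 power factor.
s, p, q = power_breakdown(208, 30, 0.9)
print(f"Apparent: {s:.0f} VA  Real: {p:.0f} W  Reactive: {q:.0f} VAR")
```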

 

To learn more about the types of power, see our latest video post here. It describes in a little more detail the types of power and how they affect your UPS system in a computer room environment. The video even has a fun beer analogy to help demonstrate the differences. Cheers to that!

Posted in Back-up Power Industry, Titan Power, Titan Power Videos, Uninterruptible Power Supply | Comments Off

Prevent Data Center Downtime With Generator Load Bank Testing

Ensuring an uninterrupted supply of power is an essential task for anyone running a data center. Power disruption can have various negative outcomes, from downtime to data loss. Given these risks, it is not surprising that back-up generators are standard features in data centers. Still, generators that are untested or poorly maintained can prove ineffective or useless when they are needed most. We at Titan Power are all too familiar with this problem, which is why we recommend regular load bank testing of backup generators. This is a crucial service for any data facility that needs to ensure consistent performance and protect against worst-case scenarios.

 

How Load Bank Testing Works

 

The goal of load bank testing is to place peak power demand on a generator and then determine whether the generator can produce and maintain the necessary kilowatt output. A load bank is a device that can create an electrical load and pass it on to the generator in a steady and controlled manner. This presents a useful alternative to the load that real-life power demand creates, which often fluctuates chaotically.

 

During load bank testing, the load device is typically placed within 20 meters of the generator and connected to it via cable. During the two-hour test, various measurements of the generator and its output are taken to yield insights into performance. These insights include:

 

  • Whether the system can provide the necessary amount of power
  • How efficiently the system functions at different load capacities
  • Whether the generator can maintain a stable voltage throughout the test
  • What levels the oil and fuel pressure reach

 

There are several types of load banks that can be used to perform this test, including resistive, capacitive, and inductive. When especially high-voltage loads must be produced, reactive load banks — which use inductive or capacitive loading or a combination of the two — are typically used to deliver the necessary load. We generally use resistive or reactive load banks, depending on each generator’s maximum power output.
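As a loose illustration of how the readings gathered during a test might be evaluated, the Python sketch below checks kilowatt output and voltage stability against a generator’s rating. The tolerance values and the shape of the readings are assumptions made for the example, not manufacturer specifications or part of a formal test standard.

```python
def evaluate_load_bank_test(rated_kw, readings, kw_tolerance=0.05, volt_tolerance=0.05):
    """Flag samples where a generator failed to hold its rated output or a stable voltage.

    `readings` is a list of (kw_output, voltage) samples taken during the test;
    the first sample's voltage is treated as the nominal reference.
    """
    nominal_voltage = readings[0][1]
    failures = []
    for i, (kw, volts) in enumerate(readings):
        if kw < rated_kw * (1 - kw_tolerance):
            failures.append(f"sample {i}: {kw} kW is below the rated {rated_kw} kW")
        if abs(volts - nominal_voltage) > nominal_voltage * volt_tolerance:
            failures.append(f"sample {i}: {volts} V drifted from the nominal {nominal_voltage} V")
    return failures

# Hypothetical samples from a 500 kW generator during a two-hour test.
samples = [(498, 480), (502, 479), (471, 455)]
for issue in evaluate_load_bank_test(500, samples):
    print(issue)
```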

 

A Controlled Environment and Useful Data

 

Although a basic test of a standby generator’s capabilities occurs any time that a data center’s power fails, load bank testing offers a few advantages. The data collected during load bank testing can provide a precise, carefully measured, and more comprehensive look at the generator’s ability to perform as needed. Additionally, this testing provides a controlled environment in which common risks associated with generator overload or failure — such as harm to other system components and business disruption, to name a few — can be effectively mitigated.

 

Proper load bank testing is an invaluable resource because it provides an accurate indicator of the way that the generator will perform in a real-life situation. Other methods of testing may not fully simulate the type of demand that the generator will need to meet. Even full-time generators that are used on a daily basis may perform differently than expected when faced with an emergency load, since most generators typically produce output that is far below their capacity ratings during everyday use. Load testing can provide useful insights and peace of mind, and it can also yield functional improvements.

 

The Negative Effects of Generator Underuse

 

Load bank testing can help improve the performance of regular generators as well as standby generators that are subject to regular no-load testing. When a diesel generator is quickly tested, used temporarily as an emergency power source, or simply under-loaded on a regular basis, it cannot reach its optimal temperature. This increases the risk that some fuel products will fail to burn off and instead accumulate in the exhaust, which is a phenomenon known as “wet stacking.” After a while, wet stacking can noticeably affect a generator’s performance. It can also reduce the device’s longevity.

 

Yearly load bank testing can counter these detrimental effects. The two-hour testing process can help a generator reach and sustain its peak power output, which means the generator will also achieve its ideal operating temperature. Any previously unburned fuel will be effectively burned off. After the test, the generator will be cleaner, primed to run more efficiently, and less likely to fail in the future. These benefits underscore why annual load bank testing is so advisable.

 

Planning for a Successful Test

 

In general, the only timing requirement for load bank testing is that it should be performed every year; however, the initial test should not be scheduled too soon after the generator’s initial installation. The generator batteries need some time to charge and reach voltage equilibrium in order for the test to yield accurate results. Conducting the testing about a week after commissioning will ensure reliable results, while any tests performed earlier may be inconclusive.

 

When you are planning for a generator load bank test, it’s important to remember that load devices can generate significant heat, even with the cooling systems they are outfitted with. These devices also produce moderate noise. It is often important to conduct testing away from employees and any building alarm systems that may activate easily. This ensures the test can be completed without unnecessary disruption to your regular business and building operations.

 

Schedule Your Test Today

 

Load bank testing is an affordable investment in your data center operations, especially when compared to the financial cost of emergency repairs and the less tangible cost of losing power at a critical time. Titan Power can incorporate load bank testing into your regular annual data center maintenance, and we also offer it as a standalone service. Don’t leave your data center susceptible to power failures and all of the associated complications. Call us today at 800-509-6170 for a free consultation about our load bank testing services.

Posted in Data Center Design, data center maintenance | Comments Off

Critical Steps to Take After a Data Center or Computer Room Disaster

Both natural and man-made disasters can have a significant impact on all the servers and storage at your data center. When that happens, do you know what your next steps are? Here is a guide to help you get through the critical period immediately following any kind of disaster that impacts your data flow.

 

Outline a Plan and Anticipate Problems

 

Before discussing the steps to take after a disaster, it’s important to review what you should do long before any problems occur. Nobody wants to have problems in their data center, but the list of potential issues at a data center is long and varied, depending on several different factors. Understanding the types of threats you might face and creating a plan that includes contingencies for several different threats is critical. Some common threats to anticipate include:

  • Malicious data center attacks (cyber-attacks)
  • Weather events related to your location, such as tornadoes, hurricanes, floods, earthquakes, or wildfires
  • Power failure or power surges
  • Fire or flooding from within the building
  • Security concerns
  • Limited resource availability, such as droughts that impact water usage

 

Set Up and Run the Data Center Efficiently

 

While the setup of your data center will likely take place long before a disaster strikes, the way that you have your servers, cables, backup, and cooling system laid out can have a big impact on how well and how quickly you can recover following a problem. Avoid common problems that could impact your ability to get to your servers quickly following a disaster scenario.

  • Map out the data center so that all machines are accessible without tripping over wires and cables.
  • Keep the wiring clean and easy to follow so that if you need to unplug or re-route a single cable, you won’t have to dig through a tangled mess of wires, guessing which one is the right intranet cable and crossing your fingers when you unplug it.
  • Keep entry and exit doors clear and ensure proper security so only employees with proper clearance can access the room.
  • Enforce a zero-tolerance policy banning food and drink inside the data center to prevent unnecessary disasters from spilled drinks or food.
  • Spread out the electrical controls and keep them covered so it is difficult or impossible to “accidentally” shut off the power.
  • Have a secure off-site facility with backups for your most critical data in case all the servers and information in your data center are destroyed.

 

Remain Calm and Execute Your Plan

 

Even with plenty of safeguards in place and a plan outlined that anticipates potential problems, every data center is still at risk. When the servers go down and the room is dark following any type of natural or man-made disaster, it can be difficult not to panic. Before you start running around like a crazy person, take a deep breath and go about the recovery process methodically.

 

1: Get Your Backup Power Systems Running

Every data center should have a reliable uninterruptible power supply (UPS) system so when the power goes out the servers can continue running. Have dependable UPS batteries, and check them and replace them regularly. The wrong time to remember that you should have replaced your batteries is when they don’t work following a disaster.

 

2: Start with Mission-Critical Servers

Your data center might be filled with several rows of servers, so you need a map of which ones are considered “mission critical,” meaning that your business cannot operate without them. When these servers go down and information is unavailable to your employees and customers, the cost to your business is high, and every minute counts. It is helpful to have mission-critical servers clearly labeled and located in the same area of the data center so you know you’re working on the right ones and you won’t spend extra time running back and forth through rows of servers to find them. Keep a printed version of the data center map somewhere that is easily accessible (if it’s stored on a hard drive and the computers are down it won’t be much use).

 

3: Recover Second-Level Systems

After mission-critical servers are restored, move on to the systems that support your day-to-day business operations and make work easier and more convenient for employees, such as reporting, forecasting, and other similar tools. These are not mission-critical and will have minimal long-term financial impact on the company but should be restored as quickly as possible to keep operations running smoothly.

 

4: Consider Interdependent Systems

If you have interdependent systems that rely on each other to function, you will need to restore the entire system before you move on to another part of the recovery plan. Even some modular software systems must be completely intact to function correctly, so knowing which systems are interdependent and looking at a big-picture recovery plan can help you focus on the ones that need your attention first.
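To make the ordering concrete, here is a minimal Python sketch of a restore plan that puts mission-critical systems first and keeps interdependent systems grouped together. The system names, tiers, and group labels are entirely hypothetical and would come from your own data center map.

```python
# Tier 1 = mission critical, tier 2 = day-to-day support, tier 3 = non-essential.
# Systems that depend on each other share a group label so they are restored together.
SYSTEMS = [
    {"name": "customer-db",  "tier": 1, "group": "orders"},
    {"name": "order-api",    "tier": 1, "group": "orders"},
    {"name": "reporting",    "tier": 2, "group": None},
    {"name": "forecasting",  "tier": 2, "group": None},
    {"name": "test-sandbox", "tier": 3, "group": None},
]

def restore_order(systems):
    """Yield system names lowest tier first, with interdependent group members kept adjacent."""
    for system in sorted(systems, key=lambda s: (s["tier"], s["group"] or "")):
        yield system["name"]

print(list(restore_order(SYSTEMS)))
# -> ['customer-db', 'order-api', 'reporting', 'forecasting', 'test-sandbox']
```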

 

5: Avoid Unnecessary Work in the Recovery Process

As you begin rebuilding after a disaster it can be difficult to sort out what is necessary and what is not, but spending hours restoring non-critical data can slow down your entire recovery process and be frustrating for management and customers. Your disaster plan should include a list of non-essential systems, such as historical data, test systems, the employee intranet, and other non-critical libraries that can be omitted from the initial restoration process to save time.

 

6: Spend Your Resources Wisely

If your backup systems only have the ability to support a portion of your data center, make sure the most important parts are hooked up to the UPS power supply. Don’t waste limited capacity on non-essential systems.

 

7: Cross-Train Other Employees to Start the Recovery

In some cases a natural disaster, illness, injury, or other problem may prevent your IT people from getting to the data center to begin the recovery process. For this reason your business should have several other individuals outside of IT who are cross-trained on the basics for restoring critical systems so they can get the process rolling even without an IT person available.

 

You can’t predict when a disaster will strike, which is why it’s so important to have a plan in place, and to review and practice your plan regularly so that when something does happen you are ready. Talk to Titan Power today to find out more about creating and maintaining your data center so it’s ready for any disaster.

Posted in computer room maintenance, Data Center Design, data center maintenance | Comments Off

Are Your UPS Batteries the Weak Link in Your Backup System?

Every data center faces some of the same potential disasters, and near the top of the list is the risk of power failure. When that happens, data centers need to have the proper procedures in place to ensure smooth transition to uninterruptible power supplies (UPS), but do you know how to prevent vulnerabilities in your UPS system? UPS batteries are often the most vulnerable part of your system and can lead to serious headaches, costly downtime, and lost revenue if you are not maintaining and managing them correctly.

 

Data center experts estimate that batteries are to blame for system downtime and UPS load loss in about 9 out of 10 cases, but fortunately these failures can be prevented with the right preparation and maintenance.

 

The Purpose of UPS Batteries

 

The uninterruptible power supply (or power source) at a data center is tasked with taking over in emergency situations when the main power fails. In order to have a properly functioning UPS system, you need to have UPS batteries in place that can handle your entire data load without failing—after all, a backup system without proper battery life won’t provide much security in a power failure situation.

 

To begin, make sure your data center has the right UPS batteries to meet the needs of your systems. UPS batteries vary in terms of cost and overall reliability: valve-regulated lead acid (VRLA) batteries are the most affordable but also the least reliable, while flooded or wet cell batteries provide better protection and a longer life at a higher price. Factoring in things like life expectancy, capacity, voltage, and convenient access points can help you choose the battery that will offer the best protection when you need your backup system.

 

Determining a Battery’s Useful Life

 

The way that you use, maintain, and charge a UPS battery will have an impact on its useful life. Batteries in storage will naturally decline over time, particularly if they are not used or charged regularly or if they are not recharged after a power failure that discharges some of the battery life. While in storage batteries should be charged about three times a year; without these frequent recharges the UPS battery will likely only last between 18 months and two years, much shorter than the usual three to five year lifespan.

 

A UPS battery has reached the end of its “useful life” when capacity falls below 80 percent of its rated ampere-hours. After this the battery will begin to steadily and quickly decline and should be replaced as soon as possible. Batteries generally last between three and five years, although the specific amount of time your battery lasts will vary depending on usage, storage, and maintenance.
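A small Python sketch of that 80 percent rule follows; the rated and measured ampere-hour figures are hypothetical, and real capacity readings would come from a discharge test.

```python
def battery_needs_replacement(rated_amp_hours, measured_amp_hours, threshold=0.80):
    """Apply the 80 percent rule: once measured capacity drops below 80 percent
    of the rated ampere-hours, the battery has reached the end of its useful life."""
    return measured_amp_hours < rated_amp_hours * threshold

# Hypothetical reading: a 100 Ah battery that now delivers only 76 Ah under test.
if battery_needs_replacement(100, 76):
    print("Capacity is below 80% of rating -- schedule a replacement.")
```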

 

Before purchasing any battery, make sure it has not been sitting on a warehouse shelf for an extended period. Since UPS batteries require frequent recharging, a battery that is more than one year old and has not been recharged will give you a shorter lifespan than a brand new one.

 

Causes of UPS Battery Deterioration

 

In any backup system the UPS batteries might fail for one of several reasons, including:

  • Infrequent or nonexistent maintenance
  • Less-than-ideal storage conditions
  • High humidity or fluctuating temperatures
  • Loose connections or inter-cell links
  • Dried out or damaged cases that lead to electrolyte loss

 

With so many different things that could go wrong when it comes to your UPS battery, having a well-defined maintenance schedule is critical to keeping them in top shape for when they are needed.

 

Ensuring a Proper Storage Environment

 

One of the best ways to prevent degradation for UPS batteries is to store them properly. The battery manufacturer should specify the ideal environment for keeping and storing UPS batteries, but generally speaking, the storage environment should be:

  • Indoors
  • Protected from weather, humidity, sunlight, etc.
  • In a dry location
  • Around 77 degrees, but if you cannot achieve exactly that temperature try to at least keep it between 60 and 85 degrees Fahrenheit
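As a simple way to put those numbers to work, the Python sketch below checks a logged storage-room reading against the ranges above. The example reading and the humidity bound are illustrative assumptions; your battery manufacturer’s specifications take precedence.

```python
IDEAL_TEMP_F = 77
TEMP_RANGE_F = (60, 85)          # acceptable range noted above
MAX_HUMIDITY_PCT = 60            # assumed bound for a "dry" location

def check_storage_conditions(temp_f, humidity_pct):
    """Return a list of warnings for a battery storage environment reading."""
    warnings = []
    if not TEMP_RANGE_F[0] <= temp_f <= TEMP_RANGE_F[1]:
        warnings.append(f"Temperature {temp_f} F is outside {TEMP_RANGE_F}; aim for ~{IDEAL_TEMP_F} F.")
    if humidity_pct > MAX_HUMIDITY_PCT:
        warnings.append(f"Humidity {humidity_pct}% exceeds the assumed {MAX_HUMIDITY_PCT}% limit.")
    return warnings

# Hypothetical reading from a storage room sensor.
print(check_storage_conditions(temp_f=90, humidity_pct=40))
```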

 

Proper UPS Battery Maintenance

 

Once installed, it’s important to keep track of all your battery maintenance. Perform visual inspections on a regular basis and log any information about potential problems that you see during the inspections, such as damaged cases or leaks. If left unaddressed these things could cause corrosion, fire, and other damage to your data center.

 

Next, perform regular readings on the batteries to determine if they are wearing out too fast—this is especially critical if you have a series of batteries wired together, as a failure in one of them could impact the entire string.

 

If you do have a power failure and must rely on the batteries for backup power, make sure to recharge the batteries within 48 hours of discharge to prevent extensive and irreparable damage.

 

Write down the replacement cycle for each battery so you don’t lose track, especially if you purchase batteries at different times over the course of several years, or if you add batteries to your storage slowly over time.

 

Sealed UPS batteries are often referred to as “maintenance-free” but don’t be fooled by the name—they still require regular maintenance checks like any other battery. The “maintenance-free” part only means that you won’t need to replace the fluid inside the battery.

 

Feel Confident With UPS Battery Maintenance

 

At Titan Power we understand the importance of preventive maintenance for UPS batteries. Being prepared for any contingency is critical to your success and continued operations, and we can help you get the right maintenance plan for your UPS batteries so you feel confident in case of an emergency.

Posted in Data Center Battery, Data Center Design, data center maintenance | Comments Off

The 7 Things Your Data Storage Company Hasn’t Told You

Properly storing your data is a key part of protecting your business. Every day, companies rely on information to get the job done, producing items in the form of graphics, spreadsheets, presentations, e-mails and more. Additionally, government regulations are now requiring organizations to maintain certain pieces of data that may have otherwise been deleted.

 

No matter what size business you own, professional data storage is almost always a necessity. Once you have made the wise decision to utilize a data storage company, you may be tempted to treat the information as “out of sight, out of mind.” And if you are working with the right data partner, you can rest assured that your information is safe and secure.

 

However, there are a number of truths surrounding your information that you should know. Here are seven things about data storage that could eventually affect your business.

 

  1. It’s hard to keep up with the boom

 

Data is now measured in zettabytes. One zettabyte is the equivalent of 1,000 exabytes or 1 trillion gigabytes. By the year 2020, the Computer Sciences Corporation estimates that there will be 35 zettabytes of information circulating. The agency states that the explosion of information is largely due to the fact that businesses are switching from analog to digital technology.

 

Corporations are not alone in driving this growth, however, as individuals have taken on a substantial portion of creating and storing information. Users are regularly producing photos and videos and utilizing social media accounts, causing the amount of data to grow by leaps and bounds.

 

The big boom has caused unstructured growth among data companies, which are trying extremely hard to keep pace. While the industry may not come out and say it, you can be sure that the storage company you use is constantly thinking of ways to accommodate more information.

 

  2. We are constantly backing up information, but we aren’t always sure when it will be needed.

 

Most data storage facilities are backing up information every night. The backups then sit somewhere offsite, where they are often left untouched. However, that information cannot be destroyed because it’s always possible that someone will need to access it. Sometimes, it can feel like they are preparing for a big event that may never happen.

 

  3. Your data is here … somewhere

 

Consider that a data storage company is responsible for more gigabytes than you could ever imagine. Additionally, the amount of information that needs to be stored is growing by as much as 60 percent every year. According to the Computer Sciences Corporation, the volume of data generated annually will grow by as much as 4,300 percent by the year 2020.

 

So if you ask someone where exactly your information is, don’t be surprised if they aren’t sure if it is located in offsite storage or in a data center. And further, if you find you need something quickly, be sure to give the company enough lead time to produce it. Depending on how old the information is, it could take weeks to locate it.

 

  4. It’s not easy to access our backups.

 

Think about how quickly technology evolves. Even cellphones come out with new designs, models and upgrades every few months. Certain applications are updated even more often. Translate that kind of change into the data world, and you are looking at an industry where software is constantly upgraded, switched over and then discarded. Therefore, if you need to access something on a tape from 10 years ago, it may take some time to find, and then someone is going to have to locate the right program to open it because the software is almost certainly extinct.

 

  5. We prefer to add more storage than sort through old files.

 

Could you imagine having to go through millions and millions of files to determine if they could be kept or deleted? Sure, it may be better in the long run to trash old data, but to better keep pace with ever-expanding storage needs, businesses simply add more room instead. People in information technology are keenly aware of the programs that could help users identify which files can be trashed, but explaining those programs is often more of a bother than a help because it is so time-consuming.

 

  6. Brace yourself, because costs could rise

 

Data storage companies need to take into account all the information you have archived, everything you are currently producing and everything you might create. Based on projections for companies to generate more and more data, it’s likely that you will see an increase in charges. It can be difficult to estimate exactly how much space a business will need in the long term, but typically, people are producing more, and not less.

 

  7. Communication issues are a problem.

 

Thanks to increased federal regulations, there are legal requirements for what must be stored and retrievable. If data companies had a better picture of which files are important and which can be deleted, we could conceivably cut our storage significantly. However, the legal resources available do not always understand that finding those files can take an extremely long time, and most of the time, associates would rather not deal with the issue and just add more storage.

 

The team that is in place at Titan Power understands the challenges facing the data storage industry. We are constantly creating innovative ways to plan, design and engineer your data storage center. For companies who are weighing their risks and want to get a better look at the bigger picture, understanding these truths is a good place to start.

Posted in Data Center Design, data center maintenance | Comments Off

Data Centers Move From Big Data to Smarter Data

Thirty-five zettabytes. That is the equivalent of 35 trillion gigabytes, and that is how much information the Computer Sciences Corporation estimates users will generate on an annual basis by the year 2020. The phrase “big data” simply refers to sets of information that have grown too complex or too large to fit into the older, standard tools we used to use. Thanks to the exponential growth in the industry, there has been a push to figure out a way to manipulate and manage all this information, and classification is the key to making it work.

 

Analyzing Big Data

 

Broken down into its basic components, big data is just a collection of the simple pieces the everyday user knows well:

 

  • Spreadsheets
  • Pictures
  • E-mails
  • Everyday work documents

 

These files are created and shared, getting saved somewhere within the data storage environment in the process. This information is being generated at an unprecedented pace, causing unstructured growth. The boom, so to speak, has left many in the industry scratching their heads when trying to set policies around it or simply maintain it.

 

The Problem With Unstructured Growth

 

One of the reasons so many people in the information technology field are concerned about big data is that there are legal implications now that require a business to be able to store and retrieve certain information. Left unmanaged, these key pieces of data can become a compliance liability or land a business in legal trouble.

 

What’s more, improper storage of all this information is how security breaches occur. Hackers need only to find one small opening in order to compromise an entire data set, as exhibited by the countless issues major retail companies, for example, have experienced.

 

The Process of Storing Data

 

So how can we properly and securely store all this information? The first step is to classify it correctly. In a catch-22, we find that in order to classify data, we often need to have policies in place. However, it is difficult to create policies without first having classified the information.

 

That is why breaking big data down to the ground level is essential: it lets us figure out what kind of unstructured data is out there. Once we identify the items at a file level, we can start to classify them, because we can determine where each file is located, who owns it and when it was last accessed.

 

Being Smart About Information

 

The classification process is exactly how we can take a step back and attack the storage problem. Gaining a deeper understanding of the files will enable us to improve the way our data works and is governed.

 

There are six basic classifications that a piece of data may fall under:

 

  1. Archive

 

Regulatory requirements will automatically mean that certain pieces of information are valuable to a business in the long term. Storage companies may be able to find these files by searching for keywords, figuring out who owns it or simply knowing the type of file. Once located, the information can be placed into an archive to satisfy legal or other standards.

 

  2. Active

 

If something has been created in the last three years, it is considered active and therefore is most likely to be accessed again. It can be managed in place until it either ages out of the system or moves into another classification.

 

  3. Aged

 

Information often moves from being active to being aged. Items that have not been accessed for three years may represent as much as 40 percent of the data on a company’s network. Therefore, it is imperative for a business to take this information and move it either into an archive file, if it has value, or the trash can if it does not. Classification enables us to view who owns the document or search it by keyword to determine if it is something that should be saved.

 

  4. Redundant

 

You know the drill: You create a version of a document and share it with a co-worker, who makes a few changes and shares it with someone else, who also makes changes. You now have three copies of the same document floating around and taking up space. Through data profiling, we can attach a signature to a document that can help us determine if it is an exact copy of something else and can be deleted.

 

  5. Personal

 

Even companies with strict policies that restrict the use of machines for personal items will find that employees often store pictures or to-do lists on the system. Someone has a new baby and wants to share a picture with co-workers, and that image has now been saved somewhere in your data storage. While one photo may not be problematic, several photos from thousands of employees can be. Businesses can utilize data classification to identify personal information and ask employees to remove it from the network.

 

  6. Abandoned

 

This last group is likely the easiest to identify and manage, as abandoned information typically does not have value. Usually, this is data that former employees owned, and it has not been accessed in the three-year timeframe. It is still a good idea to ensure that the files do not contain important information that should be stored, however, just to cover any liabilities.
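To show how these six buckets might be applied in practice, here is a rough Python sketch that classifies a file from its metadata: a content hash flags redundant copies, keyword and ownership checks route archive, personal, and abandoned material, and the three-year window separates active from aged data. The field names, keyword list, and ordering of the checks are illustrative assumptions, not a description of any particular classification product.

```python
import hashlib
from datetime import datetime, timedelta

THREE_YEARS = timedelta(days=3 * 365)

def classify(file_meta, seen_hashes, current_employees, retention_keywords, now=None):
    """Return one of the six classifications for a single file's metadata.

    `file_meta` is assumed to carry: path, owner, last_accessed (datetime),
    content (bytes), and a personal flag.
    """
    now = now or datetime.now()

    digest = hashlib.sha256(file_meta["content"]).hexdigest()
    if digest in seen_hashes:
        return "redundant"                      # exact copy of a file already profiled
    seen_hashes.add(digest)

    if any(kw in file_meta["path"].lower() for kw in retention_keywords):
        return "archive"                        # regulatory or long-term business value
    if file_meta.get("personal"):
        return "personal"

    idle_time = now - file_meta["last_accessed"]
    if file_meta["owner"] not in current_employees and idle_time > THREE_YEARS:
        return "abandoned"                      # former employee, untouched for 3+ years
    if idle_time > THREE_YEARS:
        return "aged"                           # candidate for archiving or deletion
    return "active"
```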

 

Next Steps

 

Once the classification process is complete, businesses can easily manage the information by archiving it, deleting it or moving it to a less expensive data center. Policies are easily created once a business knows what kind of information it has, and these policies can be used as a legal defense for deleting a document.

 

At Titan Power, our goal is to help you run as efficiently as possible. Let’s combat the problems associated with big data by simply making our data smarter. Identify it, classify it and create a policy to give it structure.

Posted in computer room maintenance, Data Center Design | Comments Off