A Power Distribution Unit Can Optimize Computer Operations

Computers are at the center of many successful business operations, and there are no signs that this trend will be abating any time soon. Because computers are so integral to the business world, it’s imperative that precautions are put in place to ensure that daily operations can continue unimpeded, no matter what may occur.


A power distribution unit, also known as a PDU, is one such technology, designed to simplify operations for businesses that run large-scale computer networks. This is especially true in settings like data centers, which often contain an array of devices necessary to perform key operations. A PDU is tasked with providing power to these many disparate devices from a central location. As a result, PDUs must contain certain protections to ensure all devices remain sufficiently powered for everyday functions to continue.


What Is a PDU?


As briefly mentioned above, a PDU is responsible for distributing electrical power to a number of different devices. As a PDU allocates power usage, it also ensures that unexpected power outages do not damage important hardware or result in a loss of pertinent company data.


While PDUs can be used in a number of industries, they are most commonly seen in data centers. This is because data centers often comprise numerous computer devices, all of which require a reliable source of power to maintain integral daily operations. A PDU can efficiently power many devices concurrently, while also ensuring that these devices remain in operation in the event of an outage.


Types of PDUs


  • Basic Units – The ubiquitous power strip that is now a staple of every home and office is merely a simplified version of a PDU. Power strips serve as a centralized location to disperse electricity to a number of devices. Many power strips even offer protection in the form of surge suppression, which cuts or diverts power before an unanticipated spike can damage connected devices. While other types of PDUs are far more sophisticated, the principle remains the same.


  • Rackmount Units – Rackmount PDUs also disperse electricity to many different devices. However, these larger models can often accomplish other tasks that make operation safer and more efficient. Unlike standard power strips, a rackmount PDU allows for intelligent power dispersal. This permits power to be used more efficiently by connected devices, which not only improves operation but also cuts back on wasted energy. A rackmount model can also be accessed remotely, making it beneficial for companies with offices spread out over a great distance.


  • Cabinet Units – Cabinet PDUs are most often found in data centers, which often contain a large amount of hardware. As a result, cabinet PDUs are typically far more complex than their less powerful counterparts. A cabinet PDU may require multiple high-current power feeds in order to successfully supply many different devices. Cabinet PDUs also include features such as circuit breakers, as well as control panels dedicated to monitoring power distribution.



How Do PDUs Work?


Electrical currents are supplied to the PDU, which then distributes this power to devices accordingly. In some cases, PDUs can be accessed remotely, which allows the operator to change the way power is distributed to different devices. This can be helpful in the event that the number or type of electronic devices changes, or as a means to deal with impending outages or other issues that may affect computer hardware.
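As a rough illustration of this power-budgeting role, the sketch below models a PDU as a fixed capacity shared among named outlets that can be switched remotely. The device names, wattages, and methods are invented for illustration; real units expose this functionality through vendor-specific interfaces such as SNMP agents or web consoles.

```python
from dataclasses import dataclass, field

@dataclass
class Outlet:
    name: str
    draw_watts: float
    on: bool = True

@dataclass
class PDU:
    capacity_watts: float
    outlets: list = field(default_factory=list)

    def total_draw(self) -> float:
        # Sum the draw of every outlet that is currently switched on.
        return sum(o.draw_watts for o in self.outlets if o.on)

    def headroom(self) -> float:
        # Remaining capacity before the unit is overloaded.
        return self.capacity_watts - self.total_draw()

    def set_outlet(self, name: str, on: bool) -> None:
        # Stand-in for a remote on/off command to a named outlet.
        for o in self.outlets:
            if o.name == name:
                o.on = on

pdu = PDU(capacity_watts=3600.0, outlets=[
    Outlet("web-01", 450.0), Outlet("db-01", 800.0), Outlet("switch-01", 120.0),
])
print(pdu.total_draw())  # 1370.0
pdu.set_outlet("db-01", on=False)
print(pdu.headroom())    # 3600.0 - 570.0 = 3030.0
```

The same model explains why remote access matters: shedding a named load from a distance changes the headroom calculation without anyone visiting the rack.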


PDUs can also differ in the amount of power they can handle, ranging from smaller units to those tasked with allocating power to numerous devices. Many PDUs include means of mitigating the effects of large-scale power outages, which is useful for protecting both data and vital computer equipment.


Who Needs PDUs?


While many businesses can make use of some form of PDU, for certain industries these devices are crucial. The following are just a few industries and organizations that require PDUs as a part of daily operations:


  • Government Entities/Organizations – Government-run computer networks need to be accessible at all times. For this reason, proper allocation of power is crucial to maintaining reliable operation.


  • Schools – Schools often house large banks of computer systems necessary for day-to-day assignments. As a result, a centralized source of power can be a smart choice for keeping all devices up and running, no matter the demand.


  • Hospitals – As technology advances, so do the means by which patients are treated for health issues. Because many hospitals now rely on computerized systems, a reliable power source is important for maintaining the health and well-being of patients.


  • Brokerage/Trading Firms – PDUs can ensure that workstations crucial to accessing market information remain online no matter what. Even a few hours of downtime can result in financial devastation for a brokerage firm.


Benefits of PDUs


PDUs can be beneficial to companies for a number of reasons. While they primarily allow for a more efficient dispersal of power to many devices, they can also serve as a fail-safe in the event that power is lost unexpectedly. PDUs can also be helpful for those companies that wish to closely monitor their power consumption. Thanks to the monitoring features included within many models, operators can devise ways to more efficiently use allotted power, resulting in far less waste. In many cases, PDUs offer convenience and efficiency in one package.


Whether you are at the helm of a large-scale data center or a small business, a PDU can prove extremely useful in a number of circumstances. For those companies that rely heavily on technology for daily functions, PDUs are a natural choice.


Comprehensive Data Center Spring Cleaning Checklist


There are numerous mission critical elements in every data center, and regular maintenance is integral to minimizing downtime and keeping all mission critical systems running. Arguably, it is best to take a top-down approach when itemizing which data center components are most integral to successful operations. Redundancies, the many automated facility support systems, and interdependent operations all need to be assessed for viability before downtime, or in some instances permanent data loss, occurs.

It is also important to note that optimal data center construction and maintenance can deliver some of the best results for data center owners. Recent research by technology research firm Gartner suggests that most current data centers will be obsolete in less than 10 years. Others argue that the lifespan of the average fully functional data center may be as little as five years. Maintenance, testing, and general spring cleaning can help increase the lifespan of almost any data center substantially.

Spring Cleaning Checklist For Data Centers

Below is a simple, itemized checklist for what items should be addressed when maintaining data centers.

✓ Specialized Testing of the Facility

It is almost always more cost-effective to invest in specialized testing of data center systems, so that mechanical devices and hardware integrated with critical operations can be repaired or maintained properly, than to wait until several operations have been damaged or have failed.

Specialized testing can also help identify if additional redundancies are necessary due to data center growth or potential overuse of the larger power grid the data center is connected to. In addition, facility roll-over testing can accurately identify if all backup systems will be fully functional in the event of a power surge or full power outage.

Generators and Power Supply – Power surges and power outages can be adequately addressed with redundancies. However, all automated generators that provide an alternative power source need to be fully-functional and work in conjunction with alarm systems. In addition, load bank testing can ensure that all generators are functioning at a normal capacity.

Infrared Thermography Testing and Troubleshooting – Infrared thermography testing can identify atypically warm or cool areas throughout a data center. The cause of the temperature change can then be traced back to improperly installed components, electrical faults, overloads, or loose connections.

Specialized HVAC Testing and Installation – HVAC testing and design for data centers is integral to supporting basic functions and extending the lifespan of all mechanical components. Optimal temperatures ensure that hardware does not melt or crack, and optimal humidity reduces the risk of corrosion as well as other types of damage.

Testing and troubleshooting is the first step to identifying which data center components need to be repaired or replaced.  Other components can be replaced or removed for increased space.

✓ Replacements and Installations

Routine replacement of inadequate or semi-functional components and installation of new components can help extend the lifespan of data centers as well as offer increased security. In addition, routine replacements and installations performed by data center experts further ensure that downtime due to various types of interruption is less likely.

Battery Replacement – Battery failure is the leading cause of system downtime due to UPS load loss. A single bad battery in an entire string can cause system failure during a power surge or power outage. Low quality batteries can have substantial consequences for data centers, and they are one of the most important elements to regularly replace.
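Because UPS batteries are wired in series strings, a single degraded block can compromise the whole string. A minimal monitoring sketch might flag blocks drifting below spec; the nominal voltage and tolerance here are assumptions for illustration, not vendor figures.

```python
def weak_batteries(voltages, nominal=12.0, tolerance=0.10):
    """Return indices of battery blocks more than `tolerance` below nominal."""
    floor = nominal * (1 - tolerance)
    return [i for i, v in enumerate(voltages) if v < floor]

# One degraded block in an otherwise healthy four-block string.
string_a = [12.1, 12.0, 10.2, 12.2]
print(weak_batteries(string_a))  # [2] -- this one block endangers the whole string
```

Catching the weak block during routine checks is far cheaper than discovering it during an outage, when the string is asked to carry the full load.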

Additional Power Redundancies – Large data centers located away from main power grids may find they need new generators and other redundant power sources. New developments can put strain on an existing power grid, or planned infrastructure could jeopardize the viability of a data center's current power arrangements. Data centers that consume a notable amount of electricity often need to stay in contact with utility companies in order to continually and accurately assess available resources.

✓ Optimizing Usable Space

Secure data disposal is necessary after a certain point in time. Data racks should not be cleared haphazardly, because physically altering the design of the data center can interrupt optimal airflow and raise other concerns. One of the easiest ways to increase usable space is to remove trays that are no longer in use. It is of the utmost importance to dispose of stored data that is no longer needed securely and properly, in order to protect the integrity of the data center as well as any personal information it holds.

Different Needs For Tier 1 Data Centers and Tier 4 Data Centers

Spring cleaning for data centers can be highly dependent on the type of data center as the consequences of downtime can vary substantially. Similarly, the consequences of a breach can also vary substantially.  A Tier 4 data center on a military base is a prime example of a data center that would likely need extensive testing and maintenance for optimal security and function.

However, a small municipality might have a Tier 1 data center due to limited needs and limited funds. Downtime for a Tier 1 data center may not be especially detrimental, and off-site servers can be used as a form of redundancy to ensure that critical operations, such as emergency response for citizens, are not interrupted.

How to Approach Data Center Spring Cleaning

Work with a team of experts to best approach data center spring cleaning from all necessary angles. Instead of guessing what might be fine for another year, be confident that all systems are fully-functional.  Optimize space, install necessary new components, and maintain existing components to prolong the lifespan of data centers.


Making Sense of Data Hosting Solutions

Data hosting is a serious concern for businesses everywhere. Choosing the right service can hold back your organization or give it a push in the right direction. Without data hosting services adequate to your current needs and adaptable enough to offer you solutions for future growth, the technology your organization relies on will not be up to the challenges posed by a rapidly changing industry.


Today there is a wide range of hosting services available for organizations of all sizes. Each type of data hosting has particular advantages as well as potential drawbacks. Understanding the unique features of each of these hosting options allows you to maximize the advantages of the hosting service you ultimately select. This information will help you zero in on the hosting solution that is the best fit for your organization.


In-House Server


In-house server options are arguably the most straightforward server solutions that an organization can select. All the equipment is purchased, maintained, and upgraded by the organization using it. The server equipment is kept somewhere on site. There is typically a team or department tasked with maintaining the server function and keeping the system running smoothly.


This server option gives you the ultimate flexibility in terms of structure and resource deployment. Members of your organization have the final say with regards to what equipment is selected and how it is taken care of. Security is also under the direct control of the organization. Though this element of control is highly attractive to some businesses, in-house servers require a considerable investment. Physical space has to be dedicated to server storage; improved cooling and ventilation solutions may have to be added to this space to create a climate optimal for computer function. This can be a problem for organizations with limited available space.


Dedicated Hosting Service and Shared Servers


Dedicated hosting service lets clients lease a server that is dedicated to their needs alone. It is not shared with any other client served by the data hosting company. This allows the lease holder to exercise full control over hardware, operating system, and other technical aspects. The hosting company is responsible for physical security, system maintenance, and even IT support services. Though an organization would give up some degree of control, this hosting option offsets many of the necessary investments that an organization would have to make in order to accommodate in-house servers. Though organizations can comparison shop for the most competitive price, clients are typically locked into a contract for a certain period of time.


Shared server solutions have many of the same qualities as dedicated hosting. The difference is that instead of leasing an entire server, the client leases space on a server shared by other clients. Each client is allotted a certain amount of server space which can then be used for website hosting or other data storage.


Managed Dedicated Server


This option is tailored to organizations that do not have the space to house complex server equipment or the means to develop their own server management team. Managed dedicated server hosting services bundle together everything necessary for hosting and then offer these bundled products to their clients. This hosting solution can cost more than other options but makes many specialized services available to clients. Technical services such as the following are offered by many managed dedicated server host companies:


  • Database management
  • Server monitoring
  • Firewall, backup, and recovery
  • Virus and spam protection
  • Software updates
  • Security audits


Virtual Private Server


This is similar to dedicated and shared server hosting except that in this case, a virtual machine is used instead of a physical one. These virtual machines provide many of the same benefits as a physical hosting service, though may not be able to deliver the same kind of performance power since host-end processing has to be shared among numerous virtual private servers. Virtual private servers are quite cost competitive compared to other hosting solutions. This is a popular choice among organizations looking for website hosting services; in fact, many web hosts offer virtual private server space for their clients to use.


Cloud Hosting


Cloud hosting services are a relatively new innovation, and opinions about their usefulness and ease of adoption remain divided. However, cloud-based solutions have certain characteristics that make them an excellent fit for some businesses.


Cloud hosting is made possible by cloud computing. Computers in a cloud configuration work together to provide the services essential for modern business function. This collective effort means that a single machine can fail without compromising the overall function of the cloud; the services can continue uninterrupted while the broken element is addressed. Cloud hosting therefore can provide highly reliable server functions; this has significant advantages in terms of security and stability.


Some businesses are hesitant to adopt cloud hosting solutions for their server needs because this approach is relatively new. There are concerns about the long-term usefulness of this server solution as well as overall data security. Because cloud hosting has received so much industry attention since its introduction, it is fairly easy for business owners to find authoritative analyses of the viability of cloud hosting. Though cloud hosting offers a great deal of flexibility and a number of useful features, the concerns particular to each individual organization have to be considered before selecting this, or any other, data hosting option.


Selecting a hosting solution for your organization is a very important decision and should be made only after a thorough examination of all available options. With a little effort you can narrow down your options to the ones that make the most sense for your organization.




Planning for a Data Center Infrastructure Management Solution

Traditionally, organizations have separated data center functionality into two groups:

  • Physical assets controlled by facility management
  • IT domains maintained by the information technology (IT) department

Analysts discovered that this scheme overlooks many opportunities for increased efficiency that come from supporting overlapping functions between the two groups. Data center infrastructure management (DCIM) combines the two groups into a single system. In OSI (Open Systems Interconnection) terms, the management of physical layer 1 is included in the supervision of the remaining six layers for a “full stack” solution. Complete management systems include the ability to centralize monitoring, maintenance, and expansion activities.

Components of DCIM

The components of a DCIM system include hardware, specialized software, and sensors. The hardware specifications remain the same as for a traditional implementation. The sensors provide a bridge between the hardware and software. These sensors convert physical information such as ambient temperature, humidity, and power supply integrity into digital signals suitable for a computer interface. DCIM software integrates traditional functionality with sensor monitoring capability, in order to allow the software to supervise the hardware.
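The supervision loop described above can be sketched as a simple threshold check on digitized sensor readings. The temperature and humidity bands below are assumptions loosely based on common data center guidance, not values from any particular DCIM product:

```python
# Acceptable operating bands (assumed): temperature in deg C, relative humidity in %.
THRESHOLDS = {"temp_c": (18.0, 27.0), "humidity_pct": (40.0, 60.0)}

def check(readings):
    """Compare digitized sensor readings against their bands; return any alarms."""
    alarms = []
    for key, value in readings.items():
        low, high = THRESHOLDS[key]
        if not (low <= value <= high):
            alarms.append(f"{key}={value} outside [{low}, {high}]")
    return alarms

print(check({"temp_c": 29.5, "humidity_pct": 45.0}))
# ['temp_c=29.5 outside [18.0, 27.0]']
```

In a real DCIM deployment this comparison runs continuously, and an out-of-band reading would trigger alerts or automated responses rather than a printed list.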

As companies shop the DCIM marketplace, they will find that providers fall into two categories:

  • Suite vendors – sell complete DCIM solutions
  • Specialists – sell enhancements to suite solutions that are often viable stand-alone products

Performance of DCIM Systems

Performance metrics for DCIM are evolving as quickly as the systems themselves. Some metrics derive from existing industry standards, including the following:

  • PUE (power usage effectiveness) – a metric for energy efficiency for a data center
  • CUE (carbon usage effectiveness) – measures the carbon emissions associated with a data center's energy consumption
  • DCeP (Data Center Energy Productivity) – relates the net value of the data center service to the consumption of energy resources
  • PAR4 – Server Power Usage – tracks server power during four states: system off, idle, loaded, peak
  • DCPM (Data Center Predictive Modeling) – a model to predict future performance, including energy use, energy efficiency, and cost
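Several of these metrics reduce to simple ratios. As a sketch, with purely illustrative power and emissions figures:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    # PUE = total facility power / IT equipment power; 1.0 is the ideal.
    return total_facility_kw / it_load_kw

def cue(total_co2_kg: float, it_energy_kwh: float) -> float:
    # CUE = carbon emitted by the facility per kWh consumed by the IT equipment.
    return total_co2_kg / it_energy_kwh

# A 1,500 kW facility carrying a 1,000 kW IT load.
print(pue(1500.0, 1000.0))  # 1.5
```

A PUE of 1.5 means that for every watt delivered to IT equipment, another half watt goes to cooling, power conversion, and other overhead.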

Many companies contract an industry expert to come into the firm, analyze the existing systems, and compile the business requirements. This exercise often concludes with a series of seminars to educate management and the IT department. Armed with this knowledge, the company is ready to work with the contractor to choose the appropriate DCIM solution.

DCIM Tracking Parameters

DCIM systems manage resources through the following capabilities:

  • Visual representation of the physical structure – some advanced systems offer a 3D virtual fly-through of the data center
  • Capacity planning
  • Modeling and Simulation – allows managers to model proposed changes in order to analyze performance and cost impact
  • Change management – controls the change order process
  • IP camera management – captures motion alarms and coordinates physical access control
  • Environment monitoring
  • Power management – monitors the entire power chain, from the server chip sets to the data center power generators
  • Asset management (cost containment)
  • Monitoring – the system monitors all electrical, mechanical, and power equipment, including servers, routers, switches, and virtual machines (VM).
  • Dashboards – quick-view screens display a summary of system integrity
  • Reports – systems record data to document tracking, identify trending, and perform predictive analysis

Two of these capabilities, capacity planning and cost containment, are examined in more depth.

Capacity Planning

Capacity planning for DCIM allows managers to allocate resources against present and future needs. Following is a suggested outline of steps to generate an effective capacity plan:

  • Step 1: Compile a complete inventory of the data center resources. Identify which are the critical infrastructure assets. List the mission interdependencies for each.
  • Step 2: Generate a comprehensive monitoring report for each of the critical assets in the data center. Perform an analysis of the critical performance parameters. Use the analysis to identify bottlenecks in the system.
  • Step 3: Ensure the DCIM solution will include data for all of the space, power, and cooling attributes of the environment. Verify that snapshot data is readily available to the supervisor in real-time, and that alarms are in place to indicate out-of-spec conditions.
  • Step 4: Monitor daily workloads for a month or other suitable period. Make sure the system provides the flexibility to deal with on-demand needs as they arise.
  • Step 5: Validate system performance against service level agreements (SLAs).
  • Step 6: Use DCIM trend analysis to complete capacity planning against future needs.

These steps will provide the current state of the system, and allow scenario planning for future expected expansions.
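Steps 1 and 2 above amount to building an inventory and flagging near-capacity assets. A minimal sketch, with invented asset names, figures, and a planning threshold chosen for illustration:

```python
# Illustrative inventory of critical assets: capacity vs. current load.
assets = [
    {"name": "ups-a",      "capacity": 500.0, "load": 460.0},  # kW
    {"name": "crac-1",     "capacity": 350.0, "load": 200.0},  # kW of cooling
    {"name": "rack-row-3", "capacity": 42,    "load": 40},     # rack units
]

def bottlenecks(inventory, threshold=0.85):
    """Flag assets whose utilization exceeds the planning threshold."""
    return [a["name"] for a in inventory if a["load"] / a["capacity"] > threshold]

print(bottlenecks(assets))  # ['ups-a', 'rack-row-3']
```

Assets flagged this way become the focus of the monitoring, alarm, and trend-analysis steps that follow.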

Cost Containment

As part of a critical business analysis, companies use a DCIM system to provide the information necessary to generate an ROI (return on investment) on the data center performance. This ROI is based on a careful analysis of all cost contributors. Cost monitoring starts by analyzing the power and other resource consumption of all key assets from the capacity plan. It continues by monitoring dynamic use on a real-time basis.

One of the strategies of cost containment is to avoid chasing uptime. This pursuit can be a very expensive undertaking. Like many business processes, spending on additional uptime tends to show decreasing returns on investment at high levels. Companies should perform a marginal cost analysis to find the optimal operating point.

To perform a marginal analysis, the company can start by considering price points for Tier-1, Tier-2, Tier-3, and Tier-4 solutions. These points would be calculated for a fixed power budget (in megawatts) and an available data center floor space (in square feet), producing one cost value for each of the four performance tiers. Then, the company should complete pro forma calculations of these values over a suitable period, perhaps 30 years, yielding a marginal cost per year for each tier. Taking the ratio of each value to the marginal uptime of its tier yields four values for marginal cost per hour of uptime. These values climb steeply as performance increases. This analysis presents a clear cost vs. benefits result to the company.
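The marginal analysis above can be sketched numerically. The availability figures follow the commonly cited Uptime Institute tier targets; the annual costs are invented placeholders that a company would replace with its own pro forma results:

```python
HOURS_PER_YEAR = 8760

# Commonly cited availability targets per tier.
availability = {1: 0.99671, 2: 0.99741, 3: 0.99982, 4: 0.99995}

# Hypothetical annual cost per tier for a fixed power budget and floor space.
annual_cost = {1: 1_000_000, 2: 1_200_000, 3: 2_200_000, 4: 4_000_000}

def marginal_cost_per_uptime_hour():
    """Cost of each additional hour of yearly uptime when stepping up a tier."""
    results = {}
    for lo, hi in [(1, 2), (2, 3), (3, 4)]:
        extra_hours = (availability[hi] - availability[lo]) * HOURS_PER_YEAR
        extra_cost = annual_cost[hi] - annual_cost[lo]
        results[f"Tier {lo}->{hi}"] = extra_cost / extra_hours
    return results

for step, cost in marginal_cost_per_uptime_hour().items():
    print(f"{step}: ${cost:,.0f} per additional hour of uptime per year")
```

With these placeholder costs, each step up a tier costs more per hour of uptime gained, and the jump from Tier 3 to Tier 4 is by far the steepest, which is exactly the cost vs. benefits picture the analysis is meant to expose.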

The Power of DCIM

Data center infrastructure management solutions offer powerful control to data center enterprises. DCIM allows demonstrable improvements in efficiency, performance, and cost control.


Energy-Efficient Big Data Businesses See a Better Bottom Line



One of the biggest costs for a big data center is power. In fact, according to Greenpeace, data centers typically require close to 100 megawatts of energy. Put in simpler terms, that’s enough energy to power 80,000 homes. While high-capacity systems are inevitable, there are ways for these businesses to reduce their energy consumption and improve their bottom line.


The Cold, Hard Facts


The Internet is energy-intensive. One of the biggest areas where data centers stand to improve their carbon footprint, according to Greenpeace, is where that energy comes from. Some companies are working on just that, branching out to solar power options or using air from the great outdoors to cool data centers.


There is still much that can be done. For example, there are still states in which Internet companies largely rely on fossil fuels for energy. The Greenpeace report noted that in North Carolina, only 4 percent of the energy powering the state’s data centers comes from renewable resources. When looking at the big picture, data centers account for more than 2 percent of all of the power that is generated in the U.S. Therefore, improvements in data centers can actually have a ripple effect, boosting efforts across the country.


Why It Matters


How does that 100 megawatts of energy translate into dollars? Think millions. These centers can consume millions of dollars’ worth of energy every year. It makes sense, as there are servers that need to be running around the clock. Therefore, simply powering down every night is not an option for most companies. The fact remains that preserving fossil fuels and utilizing cleaner sources can actually help grow revenue by minimizing expenses. It goes without saying that when you can maximize efficiencies in terms of power, utility bills shrink.


Where Issues Arise


The ideal power usage effectiveness (PUE) for a data center is 1.0, which would indicate that all of the power the center uses goes toward processing and storing data, with no energy spent cooling servers. For smaller companies, the ideal PUE is not out of reach. In fact, most start with stellar ratings.


What happens is that the business starts to grow, increasing the number of computers it needs and adding more racks. To keep things cool, it increases air conditioning to accommodate the additional equipment. It grows again; it cools more intensely. The cycle continues and can easily spiral out of control, especially if no one is keeping tabs on whether power is being used efficiently. So what can be done?


  1. Take Stock


Experts note that the companies that succeed in reducing expenses are the ones that view power as more than just a cost; it is something that can be controlled. The first step any data center should take is to do an audit of how energy is put to use. Most utility companies can provide this service, and some will even do it for free. The trick is to balance reducing power needs with still maintaining the quality output you need. There are energy-efficiency firms that will partner with companies to piggy-back on top of what a utility company can do.


Depending on the result of the audit, big data centers may note that making small changes can produce big results. For example, in 2013, Forbes Magazine reported on one retailer that was consuming $7 million of power annually and wanted to make a change. They brought in experts, made changes to things like the racks they used, and selected a smart backup power supply that would keep them afloat if something happened.


  2. Give Air Conditioning the Boot


Many data centers rely on a traditional method of cooling servers, which involves air conditioning. These units typically run around the clock to ensure equipment does not get overheated. There is an alternative way to generate cool air inside the center, however, known as evaporative cooling. The process involves installing a rooftop tower that cools the water that keeps servers at a safe temperature. The aforementioned retailer now uses air conditioning only a very small fraction of the time, reducing the power needed to cool the center by 93 percent.


  3. Rethink the Configuration


One of the most compelling aspects of what the retailer did was to put the focus on the equipment itself. Typically, a data center utilizes an uninterruptible power supply (UPS). An inefficient system will actually produce wasted heat, which means energy is being used to generate unnecessary warmth that in turn requires more cool air to prevent overheating.


The solution? Combine racks to shrink the amount of space that is used. The uninterruptible power supply load is then reduced, thus minimizing the energy that is used and the bill that hits the doorstep every month.


The Cost of Energy Efficiency


Upgrading equipment and software comes with a cost. Depending on the size and scope of the center, that cost could run to six or even seven figures. However, the savings can kick in almost instantly. In the Forbes report, the retailer saw a return on its energy investment in less than a year, and now operates at the energy levels it was using half a decade ago despite having added more than 30 stores.


Big data centers will inevitably devote a decent part of their budgets toward power, but the kind of power they use and the cost are flexible. By taking the right steps, companies that make the investment can notice a return on their investment in a short period of time, thus improving their bottom line for years to come.


Helping Data Centers Survive a Cold Snap



For all of their power and capabilities, data centers are still susceptible to the disastrous effects of water, fire, lightning, earthquakes and extreme winter cold. It seems that winter has received an upgrade, which means that it’s more important than ever to make sure that data is being well-protected while it’s being housed in a data center.


The Frigid Fallout


Should a data center succumb to an arctic chill, it can cause problems with cooling and fuel while making it more difficult for equipment to work at peak efficiency. There’s also the fact that data center employees might have difficulty getting to work on time if they’re able to make it there at all, which can create an all new set of problems and setbacks.


It’s beneficial to have contingency plans in place that can mitigate any damage and make sure that operations proceed as normally as possible should something go wrong with the data center. It’s always better to be acting in these situations as opposed to reacting.


Know Your Enemy


In order to properly prepare a data center for the cold, you need to understand how the cold can affect it. The cold can add undue stress to the data system, and that’s especially true if the frigid air outside is being used to cool the center. The center’s drain lines can freeze over, along with heating coils, fuel systems, humidification units and cooling towers. You never realize how much effort goes into keeping a data center at the proper temperature until a cold snap blows through town.


Frozen air conditioning units might spring leaks, and snow might find its way into the intake vents, which can make it all but impossible for air to circulate properly and lead to a system-wide shutdown. One of the best ways to prevent these mishaps, or at least keep them from completely crippling a data center, is to be diligent about maintenance and upkeep. Make sure all battery warmers, block heaters and engine oil heaters are in working condition.


Location, Location, Location


Prevention and contingency plans will also depend on where the data center is located. If the data center is in an area that receives especially bitter winters, it might be necessary to set aside the funds needed to keep the center well-heated at all times. It's a good idea to start setting this money aside well in advance so that finances don't take a bigger blow than necessary. You know that winter is coming, so you might as well prepare for it in the spring.


Generators should be insulated and heated, and you’ll want to keep humidity in mind whenever you’re dealing with electricity and cold.


It's not unusual for anti-static wrist straps to be used to make sure that equipment is properly protected during the winter. Ultrasonic humidification can also yield much better energy savings than conventional humidification methods.


The weather forecast should also be monitored so that there's ample time to call in extra employees if necessary and to gather any extra supplies that may be needed.


Consider Employees


Employees should be just as well-protected from the extreme winter cold as the data center itself. Even if a data center is being built in an area that isn't known for its punishing winters, there should still be plenty of room in storage for salt in case things take a turn for the worse. It's also a good idea to install stairs instead of ladders for employees to move between levels, since ladders can be a huge hazard if they ever ice over.


The areas employees frequent for extended periods of time should be kept warm so that employees remain comfortable enough to be able to focus on their jobs. Snow drifts are something else that should be taken into account whenever a data center is being built since they can damage the structure of the center and lead to numerous other complications.


Aside from keeping employees comfortable, there's also the fact that employees might not be able to make it in to work due to road closings. Individuals who are at the data center when inclement weather hits need to be well-supplied so that they can continue working as normal and productivity doesn't dip.


There should also be proper employee protection, such as gloves and heavy coats, for when maintenance needs to be done on outside equipment. Even with the right winter protection, there’s still a chance that maintenance might take longer than it usually does since employees might need more breaks and have to deal with equipment that could be iced over.


Proper Preparation


In order to protect a data center from inclement winter weather, certain steps need to be taken, including:


  • Making sure that all alarms and remote monitoring panels are operational
  • Keeping a close and constant eye on outside equipment
  • Stocking up on all necessary parts in case they should fail
  • Ensuring that there are plenty of de-icing products on hand and that temperature probes have been calibrated
  • Keeping an eye on the forecast so that maintenance can be postponed if necessary


The worst case scenario always needs to be taken into account when it comes to preparing data centers for a cold snap. Be proactive and make sure that employees are fully aware of what they should do if the weather takes a frigid turn.

Posted in Mission Critical Industry | Comments Off

Hack Your Way to the Truth about Hacking


It doesn't matter how big or how small your business is, there's always a possibility that your website could be hacked. And the worst part is that several weeks or months might go by before you even realize that something is amiss with your site. No matter what business you're in or how small your budget is, you always want to make sure that you, your website and your customers are properly protected against digital attacks. Besides firewalls and antivirus programs, knowledge is another effective tool for protecting yourself against hackers.


The Cost of Hacking


One thing to always keep in mind about your business website is that it's always online, so there's always the possibility that it could be hacked. If you have a small- to medium-sized business, you can expect to pay more than $180,000 on average if your website is attacked.


Not only is there the financial cost that comes with hacking, there’s also the fact that future jobs could be lost as well. It’s estimated that at least 500,000 American jobs are lost every year because of the costs of hacking and cyber espionage. So not only do businesses have to pay for a hacker’s misdeeds, potential future employees do as well.


Beyond businesses and potential future employees, there's also the possibility that the web designer will pay a price, since they designed the website that was hacked in the first place. Should clients spread the word, the designer could lose out on business just as the company they originally designed the website for does.


The Ripples Caused By Hacking


Should your site ever become compromised, you can immediately expect a loss in profit and revenue because you have to shut down while you get everything taken care of. Even if you open up a temporary website while you’re building a new website, there’s still the possibility that your customers will be uneasy about doing further business with you, and that’s especially true if their credit or debit card information was stolen during the hack.


Just like you have to do everything that you can to protect your business reputation after a cyber-attack, search engines also have to protect their reputation as well as anyone who uses their search engine. What this means is that your search engine rankings can take a serious hit after it’s been discovered that you’ve been hacked. Should your site be blacklisted by search engines, you might show up significantly lower on search engine results than you did before the hack.


Businesses that have been hacked also have to consider the very real possibility that some of their customers might take legal action if their financial information was stolen during the hack. Now you have to take time away from rebuilding your business in order to go to court and you might even have to spend money to hire a lawyer and pay for legal fees.


And to add insult to injury, if your business has a credit card issuer, they might fine you hundreds of thousands of dollars for the breach in security.


After the Dust Settles


You can still be feeling the effects of getting hacked several months and possibly even several years after it happened. Should the media get ahold of the information, they might always throw in the fact that you were hacked whenever they mention your name in the future.


If you have sensitive employee information that was also hacked, social security numbers, health care information and even home addresses might have been compromised. Now your employees have to worry about their identities being stolen at any time in the future, which might cost them additional frustration on top of being associated with a business that’s been hacked.


How it Happens


There are several ways that your website can be hacked, including:


  • Content management systems (CMS) are often vulnerable through back doors created by the permissions a CMS needs to operate. Outdated or unpatched CMS versions are a very common entry point for hackers.
  • Plugins are often used in a CMS to make adding content, custom code or images easier for the user. Some plugins, however, can leave you vulnerable to a hack.
  • Insecure passwords. There are programs designed to cycle through combinations of characters until they find the password that grants access to your site.
  • Old code. Outdated or poorly written code can also act as a gateway for hackers. You might want to think about updating any old plug-ins or themes on your website.
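The password point above is easy to quantify. The sketch below estimates worst-case brute-force time; the one-billion-guesses-per-second rate is an assumption for illustration, and real attacks usually start with dictionaries rather than exhausting every combination:

```python
# Back-of-the-envelope estimate of worst-case brute-force time.
# ASSUMPTION: one billion guesses per second, for illustration only.

def brute_force_seconds(charset_size: int, length: int,
                        guesses_per_second: float = 1e9) -> float:
    """Seconds needed to try every possible password."""
    combinations = charset_size ** length
    return combinations / guesses_per_second

# 8 lowercase letters vs. 12 characters drawn from letters,
# digits, and printable symbols (94 characters).
short_weak = brute_force_seconds(26, 8)
long_strong = brute_force_seconds(94, 12)

print(f"8 lowercase chars: {short_weak / 60:.1f} minutes")
print(f"12 mixed chars: {long_strong / (3600 * 24 * 365):.2e} years")
```

The takeaway matches the bullet above: length and character variety grow the search space exponentially, which is why short, simple passwords fall quickly to automated guessing.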


Some of the warning signs that your site might have been hacked are:


  • Sudden surges in traffic from odd locations
  • Massive uploads of spam
  • Malware warnings
  • Several 404 error messages
  • A sluggish site
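Several of these warning signs can be checked mechanically. As one illustrative example, the sketch below tallies 404 responses per path from web server logs in Common Log Format; the sample lines and the paths in them are made up:

```python
# Count 404 responses per requested path from access-log lines.
# ASSUMPTION: logs follow Common Log Format; sample data is fabricated.
import re
from collections import Counter

LOG_LINE = re.compile(r'"\w+ (?P<path>\S+) [^"]*" (?P<status>\d{3})')

def count_404s(log_lines):
    """Return a Counter mapping each path to its number of 404 hits."""
    misses = Counter()
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m and m.group("status") == "404":
            misses[m.group("path")] += 1
    return misses

sample = [
    '1.2.3.4 - - [10/Oct/2023:13:55:36] "GET /index.html HTTP/1.1" 200 512',
    '1.2.3.4 - - [10/Oct/2023:13:55:37] "GET /wp-admin.php HTTP/1.1" 404 0',
    '1.2.3.4 - - [10/Oct/2023:13:55:38] "GET /wp-admin.php HTTP/1.1" 404 0',
]
print(count_404s(sample))   # Counter({'/wp-admin.php': 2})
```

A sudden spike in 404s against admin-looking paths like the one above is a common sign of automated probing.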


If you even suspect that your site might have been hacked, act as quickly as possible to mitigate any damage that might have been done. Get in touch with your hosting provider and inform them of your suspicions before you change all of your passwords. You can also hire someone to professionally "scrub" your website, which might cost upwards of $200.


To protect yourself from being hacked, it’s a good idea to change your passwords often, install security plug-ins and make sure that your website is always up to date. At the end of every month you’ll want to take a close look at your website and get rid of any themes or plug-ins that you aren’t using so that they don’t become a security liability in the future.


Think of business website protection like insurance—while you might not ever need it, it’s still a good thing to have and one of the best ways to save yourself time and money in the future. There’s no way to make your site completely hack-proof, but you can most definitely make it hard for hackers to make off with ill-gotten gains.



Posted in Technology Industry | Comments Off

Considering the Fire Concerns of a Data Center

Data centers have become the new warehouses of the 21st century. Literally millions of bits of information flow through their servers daily, and corporations and organizations rely on them for optimal performance. A corporation's data is among its most valuable capital and should be protected accordingly. When considering data protection, the old adage that "it's better to be safe than sorry" certainly applies.

Any data loss or network interruption can be catastrophic to an organization. Yet many think that once they have their data flowing through a server farm, concerns about its safety are unwarranted. That attitude can be dangerous, as there are still a number of ways outside of conventional channels through which data can be lost. Often a single layer of protection isn't sufficient, and there are some things that even a firewall can't stop. Thus, it's worth considering any and all aspects of your data's safety, even when it's under the care of a data center. This is certainly one area of business where overkill is underrated.

A Threat from the Center Itself?

When considering the security of their data, most focus on protecting it from intrusion. And with good reason; cyber crime and the theft of intellectual property can be disastrous. Yet many overlook where their data is being stored, and the potential risk that the data center itself can pose.

Try leaving a refrigerator door open for more than 5 minutes and see what happens. Most are surprised to find that the room actually gets warmer. That's due to the increased energy the refrigerator has to use to keep its contents cool once outside air is introduced. That energy is given off by the unit as heat, and its effects are noticeable.

The same principle of thermodynamics applies to a data center. Servers that are in constant use expend a lot of energy. This energy causes the machine housings to heat up. Now, imagine many of those servers stacked one atop the other from floor to ceiling, arranged in row after narrow row, filling the room. Couple that with the miles of connecting wiring, comprised mostly of copper and other alloys that are terrific heat conductors, and one can imagine the immense heat that builds up inside the rooms of a data center. The introduction of even a small amount of combustible material into such a hot environment could easily produce a literal fireball.

Most data centers have measures in place to combat these potential hazards. The rooms housing the servers must be well-ventilated to help transfer some of the heat outside of the room. Yet, in the event of a fire, these ventilation systems can also delay response. The rapid airflow caused by a ventilation system can carry smoke away from the source before a smoke detector can trigger any sort of fire suppression system.

Aside from fire, water is the next most harmful substance to a server. Yet most data centers employ some form of sprinkler system as part of their fire suppression. Water raining down on the servers in the event of a fire can often cause more damage to a server farm than the fire itself. Often, the suppression systems themselves are the actual hazard. Accidents have happened at data centers where alarms, triggered incorrectly or by an inconsequential amount of smoke, doused servers with the sprinkler system and damaged or destroyed all of a center's assets, causing data loss and greatly limiting the operational effectiveness of entire companies and organizations.

How to Mitigate These Risks

Corporations are faced with a bit of a dilemma: finding data centers to house their information and run their servers that don't in and of themselves pose a threat. The answer only comes from placing an extensive amount of time into researching how a data center handles the risk of fire, and what methods it employs to extinguish one should it occur.

Because the mission-critical nature of data centers requires that extended downtime be avoided at all costs, simply turning servers off to let them cool isn't a practical way to mitigate the risk of fire. Given that a center can't totally eliminate the risk, the key to better protecting a center and its servers from fire is to find a way to immediately identify a fire at its source and to suppress it without damaging the servers in the area.

Recent advances in fire suppression systems have turned such ideas into possibilities. To better identify the source of fires, photoelectric smoke detectors have been created that better detect combustion particles in the air. The detectors can be placed on ceilings or within air ducts. These new systems also help prevent false alarms from causing sprinkler damage, as they use microprocessing devices that determine in real time whether whatever the alarm is sensing is a genuine fire.

New suppression solutions have also been created to replace conventional sprinkler systems. Rather than using water, these systems deploy gas-based, waterless agents aimed at extinguishing a fire in its incipient stage. These agents work either by absorbing the heat at the source of the fire or by depleting the oxygen in the area and essentially choking the fire out. Once settled, no residue is left on the machines and their performance is unhindered.

Data centers have helped countless organizations increase their operational effectiveness and protect their data. Yet there are safety trade-offs to consider when renting space within a center. A little research into a data center's safety practices can help one identify the providers whose effectiveness can be enjoyed without the safety risks.


Posted in Uncategorized | Comments Off

Looking Into The Future Automation Of Data Centers

Data storage demand is as intense as ever, pushing server providers to seek innovative solutions that still barely keep up with the rapid growth. Server providers have the delicate task of maintaining fast, safe, and reliable user access, not to mention basic concerns such as preventing equipment from overheating. Automation appears to be the future of servers, as purely manual control of this incredible amount of data will likely no longer be reasonable. The solution is expected to be a mix of high-tech artificial intelligence in computers and equipment best described as robots. While current technology and engineering are still not quite up to the task, the vast resources being poured into research and development will soon bring this tech revolution to reality.

What is Fueling the Incredible Growth of Data Storage and Server Demands?

In 2012 the world's stored digital data was estimated at just under three zettabytes, a number expected to soar past eight zettabytes by 2015. A zettabyte is over one billion terabytes; many new computers feature one or two terabytes of storage, an amount that was at one time limited to supercomputers. Data storage has grown at such a fast clip that the tech world has had to rush to coin names to quantify storage amounts. As a frame of reference, a zettabyte is capable of storing two billion years of music! So what is fueling this frantic data growth?

  • Growing Worldwide Access to the Internet. Roughly 40% of the global population now has access to the web, representing 250% growth since the mid 2000s. Continued economic development in the developing world will add more and more users over the next decade.
  • Vast Amounts of Video. By 2016 over half of web traffic will involve internet video, further creating a massive storage demand. YouTube users alone upload somewhere in the area of fifty hours of video a minute, 24 hours a day.
  • Increase in Mobile Device Use. Smartphones, tablets, and other mobile devices are quickly becoming internet users' primary method of web surfing, dramatically raising the average amount of time spent on the web.
  • Booming Online Commerce. Online retail purchases are set to exceed in-store purchases over the next ten years, further boosting data demands.

The above are just a few of the many contributors to increasing data demands. Government and business internet usage also adds to total digital data, leading both sectors to develop their own storage server facilities.
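The zettabyte figures above can be sanity-checked with simple arithmetic. This is a sketch; the music estimate assumes roughly 1 MB per minute of compressed audio, which is an approximation:

```python
# Sanity-checking the storage scale described above.
# ASSUMPTION: ~1 MB per minute of compressed audio.

TB = 10 ** 12          # bytes in a terabyte
ZB = 10 ** 21          # bytes in a zettabyte

terabytes_per_zettabyte = ZB // TB
print(f"1 ZB = {terabytes_per_zettabyte:,} TB")   # one billion terabytes

bytes_per_minute_of_music = 10 ** 6
minutes = ZB / bytes_per_minute_of_music
years = minutes / (60 * 24 * 365)
print(f"1 ZB holds roughly {years / 1e9:.1f} billion years of music")
```

The result lands close to the article's "two billion years of music" figure, so the orders of magnitude hold together.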

How Will Servers Keep Up With Storage Demands?

Google, Facebook, and other organizations that handle massive amounts of data continue to build bigger and more advanced data centers throughout the globe, and are still struggling to keep up. Building more and more data centers is not a feasible long-term solution to data demand; the development of new technology is a necessity. The following are some of the current and anticipated trends that will move data storage forward:

  • Automated Monitoring and Fixing of Processes. Identifying the root of server performance problems can be a time intensive task, as is fixing them. Newer computer technologies that automatically find and fix faulty processes are growing in popularity. Developing programming and artificial intelligence technology will build on current systems to allow for more complex fixes and for performing daily tasks that currently have to be done manually.


  • Increased Energy Efficiency. Energy use has been a consistent thorn in the side of server operators, with some experiencing as much as a 90% waste rate. The problem is not only expensive but has drawn the ire of environmental advocates, who are also upset that most data centers rely on gas-powered generators in the event of a power outage. Google, Facebook, and other internet giants have turned to building servers in areas like Sweden and Greenland to take advantage of natural cooling and nearby hydroelectric power.


  • All-in-One, Portable Data Centers. AOL and other online operations are experimenting with small, portable, unmanned data centers that can supplement high-demand areas or step in if a main data center is damaged. Their smaller size means they must be able to function on their own as much as possible. Advances in robotics and AI could fuel the widespread use of small, self-reliant data centers.
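The first trend above, automated find-and-fix, can be sketched in a few lines. Everything here is illustrative: the service names are made up, and the check and restart callables are stand-ins for real health probes and remediation hooks:

```python
# A minimal sketch of an automated monitoring-and-remediation pass.
# ASSUMPTION: service names and probes are simulated stand-ins.

def run_remediation_pass(services, check_health, restart):
    """Check every service once; restart any that fail.
    Returns the list of services that were remediated."""
    remediated = []
    for name in services:
        if not check_health(name):
            restart(name)
            remediated.append(name)
    return remediated

# Example with simulated probes: one service starts unhealthy.
status = {"web-frontend": True, "storage-node": False, "index-worker": True}
fixed = run_remediation_pass(
    services=list(status),
    check_health=lambda name: status[name],
    restart=lambda name: status.update({name: True}),
)
print(fixed)                  # ['storage-node']
print(all(status.values()))  # True
```

A production system would run a loop like this continuously and escalate to humans only when a restart fails, which is exactly the division of labor the lights-out concept below depends on.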

The Dawn of the “Lights-Out” Data Center

AOL's vision of a "lights-out" data center has attracted the attention of other major web players. A "lights-out" data center is one designed to be completely human-free in day-to-day operation, save for significant equipment breakdowns. Data center robotics currently focus on a rail system that allows robotic equipment to move throughout a data center to relocate servers, perform minor repairs, clean, and use integrated software to handle processing problems. Current data centers require administration and engineering staff to be responsible for a high number of servers; the use of robotics has the potential to cut operating costs and reduce employee workloads.

The development and use of robotics is currently too expensive for widespread use, as the upfront costs of building a lights-out center make it an uneasy investment. In the near future, however, those upfront costs will likely be more than made up for by higher efficiency, decreased labor costs, and the ability to control robotics remotely from anywhere in the world.

Given the staggering growth in data needs, server operators have no choice but to adapt in order to meet demands and provide the performance that users expect. Robotic technology, innovative software, and artificial intelligence will likely be the foundation of data center improvements, potentially revolutionizing one of the most important resources in the world.

Posted in Data Center Construction | Comments Off

The Damage Caused by Downtime

Data center reliability has helped increase the operational effectiveness of thousands of companies around the globe. Improvements in technology and the design of server farms, as well as in the facilities that house them and the training of the people who oversee them, have allowed colocation centers to reasonably offer the holy grail of network reliability: the seldom-achieved 99.999% uptime.

Yet it's often that exceptional reliability that leads organizations into trouble. 99.999% still isn't 100%, and too many companies take for granted how difficult dealing with that remaining 0.001% can be. This is only compounded when corporations let the security they feel in their data center's reliability convince them to shift resources normally dedicated to network maintenance to other areas. While this line of thinking may produce the desired results in the short term, it takes only one instance of peak-hour downtime to reveal just how flawed the idea truly is.

Given the vast resources some organizations have in staff and production capability, it's remarkable how easily their operations can grind to a halt during network downtime. This is most often due to poor training of employees on manual processes and on what to do in the event of a server failure. While this typically isn't an issue for organizations that have chosen a third-party colocation provider whose only role is server maintenance, it can spell disaster for those who operate an in-house data center, where limited resources often contribute to extended downtime.

The Difference 0.001% Can Make

This is an issue facing large and small businesses alike. To put a monetary value on exactly how much that cumulative 0.001% represents: downtime cost companies $26.5 billion in revenue in 2012, an increase of 38% over 2010. Among companies that reported server outages, that breaks down to roughly $138,000 lost per hour of downtime. And since those numbers only reflect outages that were reported, one has to wonder whether the actual loss isn't much higher. Experts estimate that were every data center in the world to go dark, $69 trillion would be lost every hour.
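Those percentages and dollar figures are easy to combine. The sketch below converts 99.999% uptime into the downtime it actually permits per year and prices that downtime at the reported $138,000-per-hour average:

```python
# What "five nines" actually allows, priced at the reported
# average of $138,000 per hour of downtime.

HOURS_PER_YEAR = 24 * 365          # 8,760 hours
uptime = 0.99999                   # five nines

downtime_hours = HOURS_PER_YEAR * (1 - uptime)
downtime_minutes = downtime_hours * 60
cost = downtime_hours * 138_000

print(f"Allowed downtime: {downtime_minutes:.2f} minutes per year")
print(f"Cost at $138,000/hour: ${cost:,.0f} per year")
```

Five nines permits only about five minutes of downtime a year; even that sliver carries a five-figure price tag at the average hourly loss, which is why the gap between 99.999% and 100% matters so much.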

That number points to an oddity in these figures. If data center reliability has actually increased, why the increase in lost revenue? One might expect the numbers to go the other way. Yet despite the potential for network outages, demand for the services provided by colocation centers is growing at an astronomical rate. Currently, data centers worldwide use as much cumulative energy annually as the country of Sweden. That usage will only increase with time, as IP traffic is believed likely to hit 1.4 zettabytes by 2017.

Another contributing factor is the virtual devaluing of data storage. In 1998, the average cost of 1 GB of hard drive space was $228. The same amount of space was valued at $0.88 just nine years later. As data storage and transmission needs continue to grow exponentially, enormous data warehouses are becoming more common. Eventually, this leads to an increased need for colocation, as more companies must dedicate increased attention to server maintenance.

The Myth of 100% Network Reliability

In anticipation of such an increased need for colocation services, many may think that the next logical step in data center evolution is to reach the 100% reliability plateau. Unfortunately, such a standard is beyond reach (even five nines is achieved only a handful of the time). The reason is that there are almost as many causes of server failure as there are bits of data carried. And chief amongst these is a problem for which there is no solution.

The most common cause of downtime is, by far, human error, accounting for almost 73% of all instances. Whether from lack of operational oversight, poor server maintenance techniques, or just poor overall training, people can't seem to get out of their own way when it comes to guaranteeing the performance of their servers. Companies can and should invest resources in mitigating this, but eliminating it altogether remains an illusion.

Aside from human error, downtime can be caused by literally anything. Anyone doubting this need only do a quick internet search for “squirrels” and “downtime” to see just how often these furry little rodents can bring elements of the human business world to a standstill by chewing through cables and thus limiting a data center’s operational capacity.

Focus on Preparation, not Prevention

Given that downtime is an inevitability, an organization is much better served by turning its efforts to dealing with downtime rather than trying to eliminate it. Most, if not all, companies aren't doing all they can to adequately prepare for network downtime: 87% of companies admit that a data center crash resulting in extended downtime or data loss could be potentially catastrophic to their businesses, yet more than half of American companies admit they don't have sufficient measures in place to deal with such an outage.

Given this information, organizations looking to either establish an on-site data center or rent space through a third-party colocation company should consider these two aspects:

  • What degree of network reliability can the data center deliver? Taking into account what’s actually feasible, can it guarantee operational effectiveness to the highest of current standards?
  • What needs to be done to improve in-house performance in the inevitable event of downtime? Are sufficient efforts being placed on maintaining comparable operational capacity, as well as on running data recovery once the network is back up?

As corporations and organizations continue to rely more and more on their colocation services for optimal performance, understanding the limits of data center reliability and being prepared to deal with network outages is the key to avoiding huge revenue losses from downtime.

Posted in Uninterruptible Power Supply | Comments Off