
What The 2013 Data Center Census Means For The Industry


The 2013 data center census offers a glimpse into the future of the data center industry. Although no census can completely predict the future, it can give some telling indications about where data storage is headed. The census identifies where data centers will be spending their money, what new technology will be developed, how the cloud will affect the need for data centers, and answers to many other questions important to anyone working in the industry. While it is impossible to foresee every disruption that may affect the industry, the census gives some insight into how data centers will change in 2013.

 

The Implications Of The Cloud

The cloud has become a popular way for companies to store important information directly to the web, rather than being forced to hire a data center to handle the storage of all important information. Concerns within the industry were that this new technology would make data centers obsolete and old fashioned, and that companies would turn to the cloud as the more effective way of handling information. As the data center industry worked to evolve with this new technology, many in previous years felt that a significant amount of their time and money would be spent developing new technologies that conform to the idea of the cloud. In reality, very few data centers actually invested money in this idea in 2012, choosing instead to focus assets on everyday items like cooling, cabling, and power supplies.

According to the data center census, 2013 may just change all that. From the information in the census, it can be assumed that cloud infrastructure architecture will be a main focus of data centers throughout the world. Some countries are predicted to see close to a 138% increase in cloud architecture uptake. Countries with a lower predicted uptake are generally those that saw higher uptake in previous years, suggesting they have already implemented the new technology. No matter which country is examined, it’s clear that the cloud, and the changes it demands, will be an important part of every data center’s future plans.

 

Data Center Infrastructure Management

DCIM is a fairly new idea that works to merge the fields of network management and physical management within the data center, creating new systems that make the center more energy efficient.

DCIM has been considered the solution to the problem of energy efficiency within the data center. Implementing new data center infrastructure management should ideally make each data center more efficient and more cost effective. In the previous year DCIM did not perform as well as expected, but it appears that 2013 may change all of that. Regions such as Latin America and Russia predicted a high uptake in DCIM throughout 2013, and many that had the lowest uptake in 2012 also expected a higher uptake in 2013.

The most important pieces of information to take from the census are that the cloud will be an important aspect of every data center going forward, that data centers will spend a large amount of money working to make the centers cloud-friendly, and that DCIM will be implemented at a higher rate than before in order to make data centers more energy efficient.

The focus on energy efficiency, and on developing ways to make data centers more efficient, may stem from the fact that the 2012 census revealed staggering figures for the total energy consumed by data centers throughout the world. Creating more energy-efficient data centers saves money and is easier on the environment, while producing measurable outputs that can be examined across every aspect of a center’s energy usage.

 

Census Details

The census looks at information given by over 10,000 participants all over the world regarding important and relevant topics within the industry. The census also supports the industry’s philanthropic efforts, donating five dollars to the Engineers Without Borders organization for each survey that is completed. The previous year’s census amassed a fund of $40,000, which was then donated to UNICEF to help children throughout the world.

 

Practical Applications For The Census

For any individual working in the data center industry, there are practical applications to the information gleaned from the census. With such a high number of participants, the information can be assumed to depict fairly accurately what the next year will be like for the industry.

Data center professionals can recognize from the census that the competition is focused on new technology, and on creating an infrastructure that is compatible with the cloud and allows for customers to utilize this valuable new tool. Data centers that are hoping to stay relevant within the industry will be rewarded by moving assets toward the development of a new infrastructure that includes the cloud.

Data center professionals can also assume that energy efficiency will be a big topic this year in the industry. When a center is run more efficiently and costs are lowered, the savings can either be passed on to the customer or used to improve the service the customer receives. Data center management must realize that the competition is focused on lowering energy costs, both as a way to improve how consumers view the center and to free up money for more valuable developments and tools. Ignoring the need for a data center that consumes less energy can make a center seem outdated and inefficient to the average consumer.

 

The Forecast For The Future

 Each year, individuals within the data center industry can focus their efforts on important updates and changes that make the center more functional and more successful. In order to determine where money should be spent in order to accomplish these goals, it’s important to pay close attention to the census information that is released each year. This can give each data center important clues as to where the industry is headed, and how quickly they need to get there.


Setting Up A Disaster Plan For A Data Center

In terms of disasters, few people are prepared for the chaos that can follow. Floods, fires, tornadoes, hurricanes, and even heavy rainstorms can damage structures and belongings beyond repair. Often there is little or no warning that a disaster will occur, and minimizing the damage is difficult without a disaster recovery plan in place. This precaution is especially important for a data center, where large amounts of expensive equipment and irreplaceable information may be stored. Creating a basic disaster plan for your data center is a simple process if you know where to start.

 

Assess The Risks

What types of risks does your data center face on a daily basis? A center in the middle of Arizona isn’t likely to deal with a hurricane, but a fire or monsoon is a likely possibility. California data centers may not see a heavy amount of snowfall, but must be prepared for floods and earthquakes. Before you can prepare for any disaster, you must determine which disasters your data center is at risk from.

Along with natural disasters, there are man-made disasters that can happen with little warning. Fires may result from an electrical short, equipment may be damaged in a theft or burglary, and any number of other man-made incidents may occur. Data centers in all parts of the world should be prepared for these untimely events.

 

Within an operational risk assessment, examine the following information:

• The location of the building

• Access routes to the building

• Proximity in relation to highways, airports, and rail lines

• Proximity to storage tanks for fuel

• How power to the data center is generated

• Details of the security system

• Any other critical systems that may shut down in the event of a disaster

 

Assessing the risks is the first step in creating a contingency plan that protects the building, the information, the equipment, and the employees when the unthinkable happens.

 

During the risk assessment, do the following things.

• Include all IT groups to guarantee that all departments have their needs met in the event of an emergency.

• Obtain a list of all data center assets, resources, suppliers and stakeholders.

• Create a file of all important documents regarding the infrastructure, such as floor plans, network diagrams, etc.

• Obtain a copy of any previous disaster plans used for the particular data center.

 

Once all relevant information has been gathered regarding the data center, the design process can begin.

 

Preliminary Steps For Disaster Planning

The first step in creating a disaster plan for a data center is to consult with all management within the center to flesh out the most serious threats to the center. These can include human error, system failure, a security breach, fire, and many other things depending on the individual center.

The second step is to determine, with the help of other management professionals, where the most vulnerable areas of the data center are located.

Next, study the history of any malfunctions the data center has faced and how each disaster was handled.

It’s also important to determine exactly how much time the data center can handle being without power before the situation becomes critical.

Next, review the current procedures for how an interruption to the data center power supply should be handled, and obtain information regarding when these procedures were last tested by the appropriate individuals.

Single out emergency teams for the building, and review their training in regards to emergencies to determine if additional training or updates need to be implemented.

Finally, identify the response capabilities for emergencies for each of the center’s vendors.
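One of the steps above, determining how long the center can survive without power, comes down to simple arithmetic on battery capacity and load. A minimal sketch with illustrative figures (real runtime curves are non-linear and should come from the UPS vendor's discharge tables):

```python
def ups_runtime_minutes(battery_kwh, it_load_kw, inverter_efficiency=0.95):
    """Estimate how long a UPS battery bank can carry the IT load.

    All figures are illustrative; actual runtime depends on battery age,
    temperature, and the non-linear discharge behavior of real cells.
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    usable_kwh = battery_kwh * inverter_efficiency
    return usable_kwh * 60 / it_load_kw

# A hypothetical 100 kWh battery bank carrying a 200 kW IT load yields
# roughly 28.5 minutes: enough to ride through a short outage or start
# generators, but not a prolonged utility failure.
print(round(ups_runtime_minutes(100, 200), 1))
```

Comparing this figure against the generator start time tells you whether the critical threshold identified in this step is actually covered.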

 

Developing A Data Center Disaster Recovery Plan

When compiling information in regards to risk assessment, no stone should be left unturned. The more information, the more accurate and successful the disaster recovery plan will be. Disaster recovery plans cannot be created without a good level of organization and information, and will be extremely ineffective if information is inaccurate or incomplete.

The next part in a disaster recovery plan involves compiling a gap analysis report that determines the differences between the current emergency plan and what the new emergency plan needs to be. During this process, all changes should be clearly identified and listed in order to more efficiently address potential problems. Include the total investment required to make the changes, along with recommendations from the proper professionals on how to implement each change. Once the report is complete, have each member of management read it and choose which recommended actions will be put into place. Each management member should have input into which changes are made, and coming to an agreement may require more time spent at the drawing board.
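The gap analysis itself can be sketched as a simple comparison between the current plan and the target plan. All plan items and values below are hypothetical placeholders, not recommended settings:

```python
def gap_analysis(current_plan, target_plan):
    """Return the changes needed to move from the current emergency plan
    to the target plan: items to add, items to update, items already met."""
    gaps = {"add": [], "update": [], "met": []}
    for item, target in target_plan.items():
        if item not in current_plan:
            gaps["add"].append(item)
        elif current_plan[item] != target:
            gaps["update"].append(item)
        else:
            gaps["met"].append(item)
    return gaps

# Hypothetical example: comparing backup and generator provisions.
current = {"backup_interval_hours": 24, "generator_fuel_hours": 12}
target = {"backup_interval_hours": 4, "generator_fuel_hours": 12,
          "offsite_replication": True}
print(gap_analysis(current, target))
```

Listing the "add" and "update" items with their estimated costs gives management the clearly identified change list the report calls for.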

Once the recommendations are in place, and each member of management has agreed that the needs of their individual department are met, it’s time to implement each of your changes for your critical assets. Hardware and software, networks, and data storage should all be addressed within this step to ensure that equipment is protected and that information can be recovered in the case of a disaster. Once changes are implemented, tests should be run to determine if system recovery assets and plans are properly functioning.

If it is determined that the updates are functional and successful in recovering and saving equipment and data, it’s now time to update all documentation for disaster recovery in company handbooks or policy manuals. Because technology is constantly changing and the needs of data centers are always evolving, disaster plan updates should be made regularly. In order to do this successfully, there must be an accurate record kept of former procedures and how well they worked as intended.

 

The Next Disaster Recovery Plan Update

Once a new recovery plan is in place, there is no time to relax. Changes in the plan should constantly be on the minds of management personnel, and the next update to the system and process should be scheduled before the committee adjourns.

When designing a disaster recovery plan, keep the information as simple as possible in order to stay organized and to avoid overlooking important details. It’s not necessary to completely overhaul a system to update a plan; constant, incremental changes should be made to protect the equipment and information housed by a data center.

 


Facebook Opens First Data Center Outside of United States

 

The city of Luleå, Sweden is situated in the frigid northern part of the nation. Just sixty miles from the Arctic Circle, temperatures in Luleå can sink as low as -41°C. Luleå has a big new resident; social media giant Facebook recently opened a brand new data center in this icy section of Sweden. The Facebook Luleå data center is the company’s first data center located outside of the United States, a sign of its commitment to better serving its European user base. While Luleå might seem like an odd choice for such a facility, northern Sweden looks to play host to servers from a variety of web operations, including Google. Facebook has over one billion monthly users, creating a significant amount of data for the social media giant to manage. Facebook users generate over ten billion messages a day, store in excess of three hundred billion photos, and create billions of “likes” a day. That adds up to a lot of data to store.

Why Luleå?

Facebook’s choice of Luleå has been met with both praise and skepticism from users, environmental groups, and privacy advocates. The choice was based on meeting two goals. Facebook’s first goal was to improve performance for its European user base, which required a new data center location that could increase speed and performance for those users. Facebook has more users outside the US than inside, making the development of data centers outside the US a logical decision. The Luleå data center brings user speed up to or beyond Google levels, a significant improvement. The new data center will be largely dedicated to storing “unused” data such as photographs that are infrequently accessed. Photographs account for a significant amount of Facebook traffic; according to the company, over eighty percent of traffic goes to less than ten percent of its photos. The Luleå storage center is capable of storing the equivalent of 250 million DVDs.

Facebook’s second goal was to build an environmentally friendly center, keeping with its growing commitment to energy efficiency. The new data center is powered by hydro-electricity via a nearby dam that generates twice the power of Hoover Dam. In the event of a power outage the data center will depend on a stockpile of diesel generators. Lulela’s frigid environment was another selling point for Facebook; the eight months of icy winter will serve as a chiller for the five acres of server equipment. The facility uses the cold air to maintain a water evaporation driven cooling system. Surplus energy from the coolant system and the servers will be used to heat the data center’s office space. The “cold” system of data servers is rising in popularity due to its cost effectiveness and energy efficiency. Facebook’s new center operates on cutting edge energy efficiency levels; Facebook claims that the new data center is “the most efficient and sustainable data center in the world.” Facebook also chose a minimalist approach to their server equipment, forgoing the use of unnecessary plastic and metal cosmetic materials.

Facebook’s decision to build in Luleå was also bolstered by a perceived corporate-friendly environment in Sweden and a readily available skilled workforce.

A Valuable Backup

Besides providing a boost for European users, Facebook’s new data center handles live traffic from around the world. Facebook’s other data storage centers are located in California, Oregon, and Virginia, with another center in development in North Carolina. Noting the server problems that other web and communication providers have experienced, the new data center also serves as a backup in the event of server crashes at Facebook’s other locations. After some difficulties following its Initial Public Offering, the social media giant has doubled down on efforts to keep its user base satisfied. Facebook’s continued efforts to effectively monetize the site demand a high level of performance and consistency. Maintaining and storing users’ messages, photos, and other material, whether used or not, is an important part of Facebook’s effort to keep users logging on regularly.

Security and Privacy Concerns

Some users have responded favorably to the new data center and its location, assuming that its European location will keep their valuable personal information and correspondence more secure. Recent news about the National Security Agency possibly monitoring private communication and social media information has caused some Europeans to question the safety and privacy of their Facebook accounts. Facebook has repeatedly denied providing information to the NSA or other government agencies, or participating in warrantless exchange of user information. This is not the first time Facebook has been accused of dispersing user information; there have been allegations that the site has allowed advertisers and other entities access to user information as well. Facebook accounts are rife with personal information, including personal data, correspondence, pictures, shopping habits, interests, and more. This wealth of user information, and its value to advertisers, is one of Facebook’s most valuable assets. Facebook continues to face constant threats from malicious third-party applications, phishing software, fake accounts, and data manipulation.

Those who espouse the data center’s location as offering a security boost might be surprised to find out that Sweden is not necessarily a haven for web privacy. A law passed in 2008 allows the Swedish government to monitor and record any web traffic that crosses its borders without a warrant. This means that all of the live traffic and data going through the Luleå data center can be legally accessed by the Swedish government at any time, for any reason.

The Future of Facebook and Data Storage

Facebook’s increasing emphasis on user customization has the potential to dramatically increase the amount of personal information stored in its data centers. Barring a massive decline in its user base, Facebook will require additional data storage at some point in the future. Facebook has experienced a recent decline in downloads of its mobile applications, leading the company to consider expanding its partnerships with mobile device providers. It is too early to tell if the planned emphasis on mobile devices will have a significant impact on Facebook’s data storage requirements. Facebook looks to continue to develop new data storage technology. While Facebook handles the majority of data center design and development, it is not opposed to working with other tech and web companies to further innovation.


A Guide To Modular Data Centers

Modular data centers became popular when the economy started to swirl the drain. Businesses needed new ways to secure funding in small amounts while simultaneously decreasing the risks that came with creating a data center. Two of the main gripes with traditional data centers are the speed at which they can be deployed and their cost: it takes an abundance of both time and money to construct a building to house a traditional data center. Advancing technology also encourages businesses and organizations to shift their focus toward scalable, rapidly deployed modular designs. Beyond these reasons, there are several others that make a modular data center preferable to a traditional one.

 

The Overall Design of Modular Data Centers

 

From the original order to deployment, modular solutions offer an extremely fast timetable, because they are designed to be personalized, ordered, and shipped to data center sites in a matter of months or less. Modular data centers also allow for parallel rather than linear construction dependencies. Since the design can easily be standardized and repeated, it’s no problem to match infrastructure scale to demand. The only limits on modular scale are the foundational infrastructure at the data center site and the open area that’s available. Another aspect of scalability is the ease with which modules can be quickly and efficiently replaced whenever the technology needs to be upgraded. What this means for businesses and organizations is that they only need to predict shifts in technology a matter of months in advance.

 

Scalability for modular data centers is not only determined by how quickly the proper environment can be set up. An agile data center foundation means being able to swiftly satisfy the needs of a growing and shifting business, whether that means creating a revolutionary new service or cutting down on downtime. It’s all about agility. Some businesses want a data center for the sole purpose of capacity planning, while others like modular data centers because they offer some of the best disaster recovery options.

 

Something else to think about with modular data centers is that they can be delivered anywhere in the world the end user desires. Rather than being delivered all at once, a modular center can be shipped in sections and rapidly re-assembled once it arrives at its final destination. Such mobility can be one of the top selling points for businesses and organizations that make disaster recovery a top priority, since modular data centers can be shipped to a recovery site, put back together, and have the organization running again in very little time.

 

The Disadvantages of Modular Data Centers

 

One of the disadvantages of modular data centers is that some of their standard configurations have limited value for organizations and businesses that want high-performance computing, along with the heavy cooling and power requirements that come with it. A cost analysis may show little to no savings when a module is installed anywhere outside an existing data center: site preparation work still needs to be done, including trenching and bringing utilities over, both of which cost money. Viewed this way, it can actually be more affordable to have a traditional data center constructed.
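The trade-off above can be sketched with a toy total-cost model. Every figure here is an illustrative placeholder, not industry data; the point is only that fixed site-preparation costs can erode the modular advantage:

```python
def total_cost(base_cost, site_prep_cost, per_module_cost, modules):
    """Simple total-cost model: fixed build cost, plus site preparation
    (trenching, utility runs), plus incremental capacity modules.
    All inputs are hypothetical figures for illustration only."""
    return base_cost + site_prep_cost + per_module_cost * modules

# Hypothetical comparison for four capacity increments (figures in $M):
modular = total_cost(base_cost=2.0, site_prep_cost=1.5,
                     per_module_cost=1.2, modules=4)
traditional = total_cost(base_cost=8.0, site_prep_cost=0.5,
                         per_module_cost=0.0, modules=4)
print(modular, traditional)
```

With these made-up numbers the two approaches land within a few percent of each other, which is why the article recommends running the analysis before assuming modular is cheaper.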

 

Being locked in with a single vendor is something else organizations worry about when it comes to modular data centers. A single-vendor contract may limit the available choices in models and brands of internal components, as well as the terms of service if something were ever to go wrong with the data center. Being stuck with one vendor also means that data center owners can’t shop around for lower-cost repair and maintenance services.

 

Those looking into modular data centers also have to bear in mind how well they’ll work with the resources already in place. A data center’s infrastructure management applications use a main console to keep a digital eye on a vast network of resources, such as virtual and physical servers, power distribution, and cooling systems. If a modular data center has DCIM capabilities of its own, it will most likely work through standard interfaces for exchanging information with the organization’s existing DCIM systems management applications. To avoid having to supervise modules separately rather than as part of the whole, it’s best that buyers ask for a rundown of specifics on how open the modular unit is.
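The "part of the whole" idea can be illustrated with a small sketch that aggregates per-module readings into one console view. The module names, metric fields, and fetcher functions here are hypothetical stand-ins for whatever standard interface a vendor actually exposes (SNMP, Redfish, a REST API):

```python
def aggregate_module_metrics(fetchers):
    """Combine per-module DCIM readings into a single view.

    `fetchers` maps a module name to a callable returning that module's
    metrics dict. In practice each callable would wrap the vendor's
    standard interface; here they are plain functions for illustration.
    """
    readings = {name: fetch() for name, fetch in fetchers.items()}
    total = sum(m["power_kw"] for m in readings.values())
    return {"modules": readings, "total_power_kw": total}

# Hypothetical stand-ins for two vendor module interfaces:
fetchers = {
    "module_a": lambda: {"power_kw": 120.0, "inlet_temp_c": 22.5},
    "module_b": lambda: {"power_kw": 95.0, "inlet_temp_c": 24.1},
}
print(aggregate_module_metrics(fetchers)["total_power_kw"])  # 215.0
```

If a module's interface is closed, no such fetcher can be written for it, and it ends up supervised separately, which is exactly the situation the article warns buyers to ask about.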

 

If you want a data center that’s based on open standards, it’s best to show the data center company a list of your primary standards, since not everyone has the same idea of what open standards are. This will keep you from wasting time, and possibly money, clearing up confusion before you and your modular data center are up and running.

 

Advantages of Modular Data Centers

 

Modular data centers are an engineered product, which means their internal subsystems are tightly integrated to make both power delivery and cooling of the module more efficient. Pure-IT and first-generation modules will more than likely not see the same efficiency gains as comparable containment solutions within a traditional data center. To save money on distribution gear and to avoid power loss over distance, it’s suggested that you place the data center’s power plant relatively close to the IT servers. You’ll also find opportunities to make use of energy management platforms inside modules, since every subsystem is created as part of a single integrated piece.

 

Do your homework and plenty of research before making a final decision on whether you should get a modular data center or a traditional one. If you’re looking for efficiency, easy setup, resiliency and scalability, you’ll more than likely benefit from a modular data center.

 

 


Data Center Regulations And Construction Requiring PUE Surveys

Possible Future Ramifications For Data Center Regulations And Construction Requiring PUE Surveys

There are three main categories in which PUE surveys would likely be used in the future construction and maintenance of data centers: future government regulations pertaining to greenhouse gas emissions; standardization of data center construction; and using PUE surveys to optimize return on investment, given the predicted short “shelf life” of data centers.

New data centers have been estimated to be obsolete in under a decade by research firms including Gartner (7 years) and the International Data Corporation (9 years).  Many others speculate that the shelf-life of data centers is closer to five years.  Thus, data center construction and maintenance must take cost-benefit into account to a greater degree than many other structures built to support infrastructure or service providers.
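The effect of shelf life on cost-benefit can be shown with a simple annualized-cost sketch. The build cost below is a hypothetical figure, and the model deliberately ignores discounting, salvage value, and operating costs:

```python
def annualized_cost(build_cost, shelf_life_years):
    """Straight-line annualized capital cost over the facility's useful
    life. Intentionally simple: no discounting or operating costs."""
    return build_cost / shelf_life_years

# The same hypothetical $50M build looks very different over the
# 9-year (IDC), 7-year (Gartner), and 5-year shelf-life estimates:
for years in (9, 7, 5):
    print(years, round(annualized_cost(50_000_000, years)))
```

Moving from a nine-year to a five-year shelf life nearly doubles the annual capital burden, which is why the cost-benefit question weighs more heavily on data centers than on longer-lived infrastructure.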

Data center construction heavily relies on a variety of specific needs.  For example, data centers must be in a position to effectively re-route many utility lines during construction; design and build firms must work closely with government agencies and service providers to ensure that a large data center will not overwhelm an existing power or utility grid; data centers must have complex HVAC systems; and, any loss of power or lack of maintenance could be catastrophic for data centers catering to individual clients.  The potential for lost data and the lost ability for customers to have their websites live could easily result in various lawsuits citing lost earnings and corresponding legal fees.

To make a new data center successful, it is imperative to stay informed on all pertinent news and abreast of likely future trends in regulation regarding construction methods and energy usage, especially hot topics such as greenhouse gas emissions. Given the costs associated with building and maintaining a data center, knowing the nuances of data center maintenance, such as Power Usage Effectiveness (PUE) surveys, can be the difference between success and failure.

PUE survey services

PUE surveys can show whether or not a data center is working to its full potential. Energy is one of the most considerable costs of running a data center. Taking preventative measures to ensure that your data center is functioning optimally can save money now and help prepare for possible future regulations addressing a facility’s energy consumption and greenhouse gas emissions. Three common types of PUE surveys include:

  • Thermal imaging surveys
  • Power quality surveys
  • HVAC and thermal imaging surveys

Examining the overall power consumption of a data center is helpful only to an extent. A sudden, unexplainable spike in energy consumption should raise concern, but overall consumption reveals little about which areas are not performing with optimal efficiency, or where to start remedying the problem. Without specific PUE data, trying to optimize efficiency and address problem areas can be like searching for a needle in a haystack.
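For reference, PUE itself is a simple ratio: total facility energy divided by the energy delivered to IT equipment, with 1.0 as the theoretical ideal. A minimal sketch with illustrative figures:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy divided by the
    energy delivered to IT equipment. 1.0 is the theoretical ideal;
    everything above it is cooling, distribution, and lighting overhead."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,500 kWh to deliver 1,000 kWh to IT gear:
print(pue(1500, 1000))  # 1.5, i.e. half a unit of overhead per IT unit
```

The surveys below exist to explain where that overhead fraction goes, rather than leaving it as one opaque facility-wide number.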

Thermal imaging surveys

Routine maintenance is highly recommended; it is not only conceivable but common for measurement tools to need re-calibration. In addition, PUE surveys are designed primarily as a preventative measure. Instead of waiting until there is a noticeable problem, thermal imaging surveys can detect atypical transfers of energy, such as heat. Thermal imaging can be especially effective in a data center environment, since data centers rely heavily on equipment functioning in an artificial climate.

Thermal imaging technology can provide more accurate PUE data thanks to the consistent temperature within a data center, as opposed to a structure with a less advanced (and less predictable) HVAC system. Thermal imaging can target specific rooms or smaller areas. Furthermore, it can reveal unseen faults in electrical systems, help prevent fires, and determine to what extent a data center is in jeopardy of losing data to an unknown electrical problem or electrical fire.
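The value of targeting small areas, rather than reading one facility-wide number, can be sketched as a toy hotspot scan over a grid of temperature readings. The threshold and readings are illustrative; a real survey uses calibrated thermal cameras:

```python
def find_hotspots(temperature_grid, limit_c=35.0):
    """Flag cells in a grid of temperature readings that exceed a limit,
    localizing the anomaly instead of averaging it away."""
    hotspots = []
    for row, readings in enumerate(temperature_grid):
        for col, temp in enumerate(readings):
            if temp > limit_c:
                hotspots.append((row, col, temp))
    return hotspots

grid = [
    [24.1, 25.0, 24.8],
    [24.5, 41.2, 25.1],  # a failing electrical connection might look like this
    [24.9, 25.3, 24.7],
]
print(find_hotspots(grid))  # [(1, 1, 41.2)]
```

Note that the grid's mean is barely elevated; only the localized view exposes the single cell that could become an electrical fault or fire risk.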

Power quality surveys

Power quality surveys gather data on power consumption to a much more specific degree than facility-wide figures allow. They can investigate flicker, voltage sag, and other similar phenomena. In addition, power quality surveys can ensure that enough supply is available to meet demand. As many data centers have power redundancies (e.g. generators) in addition to a connection to the existing power grid, it is essential to know to what extent power supply is readily available, to prevent an outage or a crash of the power grid within the data center itself. Ideally, electricity consumption should be dispersed throughout the entire data center rather than risking a disproportionate amount of power going to a specific area.
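A toy version of sag detection over a series of RMS voltage readings, assuming a hypothetical 230 V nominal supply and a 10% tolerance (real power quality analyzers classify many more event types, and the thresholds vary by standard):

```python
def detect_sags(voltage_samples, nominal=230.0, tolerance=0.10):
    """Return the indices of readings that dip below
    (1 - tolerance) * nominal, i.e. simple voltage-sag events."""
    floor = nominal * (1 - tolerance)
    return [i for i, v in enumerate(voltage_samples) if v < floor]

# Illustrative readings: two dips below the 207 V floor.
samples = [230.1, 229.8, 198.5, 231.0, 204.0, 230.4]
print(detect_sags(samples))  # [2, 4]
```

Correlating sag timestamps with equipment logs is what turns a facility-wide "power problem" into a specific circuit to investigate.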

HVAC and thermal imaging surveys

HVAC systems are imperative to the functioning of almost all data centers. Troubleshooting HVAC problems, along with collecting data that reveals inefficiencies in the system, is recommended routine maintenance that saves money and helps avoid a system meltdown. Thermal imaging is a natural fit for evaluating HVAC performance: some of the first signs of HVAC malfunction are uneven distributions of heat exchange, and excess power is wasted supporting a sub-optimal HVAC system.

How PUE surveys may affect the future of data center construction and maintenance

There are three main factors in determining the future of data centers: government regulations, the “shelf life” of new data centers, and the necessary return on investment from construction to when the data center is rendered obsolete.  In short, data centers need to be designed for the future and always strive to operate at optimal levels of efficiency.  Thus, PUE surveys may impact future data centers in the following ways:

  • Mandatory laws regarding PUE surveys and increased government regulation
  • More frequent PUE surveys to optimize overall efficiency, since shorter data center shelf lives leave less time to recoup the return on investment
  • Standardized construction methods to promote longevity of data centers and preservation of resources

After investing in a data center, ensure that PUE surveys are conducted regularly to save money now and stay in compliance with possible future government regulations.  Aside from early detection of possible catastrophes, PUE surveys can help prepare for a profitable future along with preempting possible future regulations pertaining to energy consumption and greenhouse gases.

Posted in Data Center Construction | Comments Off

A Look At Google’s Data Center And What It Takes To Keep It Running Smoothly

Google currently has 13 data centers located in North America, South America, Asia and Europe. These data centers house the thousands of machines needed to run Google’s operations. Whether a customer is using Google to send an email, make an online transaction, search the internet or do business with Google Apps, all the information is processed through a Google data center.

Employees at the data centers work hard to keep Google running smoothly. In addition, they work to assure that customers’ information is kept safe and secure. The following details the facets of keeping Google’s data centers working and constantly improving.

Energy

Google is the first of the major internet services companies to be certified for the high environmental standards employed in their US data centers. They are dedicated to being green through the wise use of energy and are working hard to conserve and reduce energy in all of their data centers. Most data centers spend 80 to 90 percent more energy cooling their machines than running them; Google’s cooling costs are only 12 percent higher than machine operating costs. Google’s data centers use only half as much energy as other data centers. Thus far, these efforts have saved Google over a billion dollars in energy costs.

At Google data centers throughout the world, a wide variety of methods are being used to keep energy costs down while protecting the environment. Google buys electricity from wind farms near their data centers. Additionally, 33 percent of the energy they use is renewable energy. They also have solar panels on the roofs of their data centers.

In an effort to improve the environment by reducing the number of vehicles on the road, Google has created a bike to work program. They also have a shuttle program to transport employees to and from work.

Inside Google’s data centers, the temperature is raised to 80 degrees Fahrenheit. Outside air is then used for cooling, further reducing energy costs. Google’s servers are specifically designed to use as little power as possible; removing unnecessary parts and minimizing power loss makes the servers greener.
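The decision to use outside air can be sketched as a simple comparison against the room setpoint. The 5-degree margin below is an assumed value for illustration; the article does not describe Google’s actual control logic.

```python
SETPOINT_F = 80.0    # room setpoint mentioned above
MARGIN_F = 5.0       # assumed margin needed for outside air to do useful cooling

def use_outside_air(outdoor_temp_f: float) -> bool:
    """Use free (outside-air) cooling when the outdoor air is cool enough
    to hold the room at its setpoint; otherwise fall back to chillers."""
    return outdoor_temp_f <= SETPOINT_F - MARGIN_F

print(use_outside_air(68.0))  # True: economizer mode
print(use_outside_air(91.0))  # False: mechanical cooling
```

Raising the setpoint widens the range of outdoor temperatures in which free cooling works, which is exactly why the 80-degree figure matters for energy costs.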

Not only is Google continually looking for ways to reduce energy and improve the environment, they are also helping others to make an impact on the planet. Email hosted on local servers can leave a carbon footprint more than 80 times that left by a Gmail user. Companies that use Gmail decrease their environmental impact up to 98 percent.  Through Google’s desire to produce renewable energy via solar panels and wind farms, they are actually able to produce more energy than they need. Over 500,000 homes could be powered with the excess energy Google produces.

Reusing and recycling are an important part of Google’s business. When machines become outdated, they are repurposed and continue to be used in Google’s data centers. After a machine is no longer usable, all data is completely erased and parts are either reused or sold. By repurposing machines, Google has been able to eliminate the need of purchasing over 300,000 new servers.

Data Security

When it comes to security, Google does not take any chances; they are committed to protecting the proprietary information of their customers. From physically securing data centers to meticulously building and monitoring servers, employees are working hard to keep customer and company information safe. As technology continues to evolve, Google persists in their efforts of improving security measures to ensure the ongoing safety of its customers’ information.

Google builds their own custom servers at each of the data centers. The servers automatically back up data, which protects customers in the event of their own system failure. These servers do not leave the data centers until they are non-functional. Then the data is completely erased and they are broken down and sold as parts. In addition, when hard drives become unusable, the data on them is deleted in a thorough process. Then they are either crushed or shredded and then sent to a recycling facility.

To prevent hacking, Google stores each customer’s information in chunks across many data centers. These information chunks are unreadable to humans and are named randomly. As malware is a legitimate threat, Google makes every effort to prevent and eliminate it. However, if a security incident were to occur, Google’s security team makes it their priority to resolve the issue as soon as possible.
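A minimal sketch of the chunking idea follows, assuming fixed-size chunks, random hexadecimal names, and round-robin placement; none of these details are claimed to be Google’s actual scheme.

```python
import secrets

def shard(data: bytes, chunk_size: int, centers):
    """Split data into fixed-size chunks, give each an opaque random
    name, and assign it to a data center round-robin. Chunk size and
    placement policy here are illustrative only."""
    chunks = {}
    for i in range(0, len(data), chunk_size):
        name = secrets.token_hex(8)          # random, human-unreadable ID
        chunks[name] = (centers[(i // chunk_size) % len(centers)],
                        data[i:i + chunk_size])
    return chunks

placed = shard(b"customer record ...", 4, ["us-east", "eu-west", "asia-1"])
for name, (center, blob) in placed.items():
    print(name, center, blob)
```

Because each chunk name reveals nothing about its contents or its neighbors, an attacker who obtains one chunk from one site learns very little.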

There are specific plans in place in the event of a disaster of any kind. If there were a disaster, including a natural disaster, fire or security breach, data would automatically be transferred to a server at another data center. If a power failure occurs at any of the data centers, there are backup generators to keep everything running. To avoid potential power outages, Google has an air cooling system to keep machines at a constant temperature, which keeps them from overheating.

Physical Security

Each data center employs a security team around the clock, dedicated to maintaining security at the facility. Security guards, surveillance cameras and fencing help keep the facility secure. Improved technology, including thermal imaging cameras, helps security team members look for suspicious activity on the premises. Some data centers use biometric devices to further ensure a safe and secure facility.

Only authorized personnel are allowed on the data center grounds. No public tours are permitted, and security guards at guard stations allow only authorized employees past security checkpoints. In order to keep the inside of the facility safe, video monitoring of all areas allows the security team to view all areas of the data center.

Employee Security

All employees must undergo an extensive background and reference check. They are trained in procedures of security and ethics. Google limits employee access based on their position. This is just another way Google is working to keep its customers’ data secure.

Google’s data centers contain vital customer information, and Google is dedicated to securing and protecting it. From custom machine production to detailed security procedures, Google ensures its data centers are running effectively and efficiently. Using renewable energy allows them not only to save money, but also to improve the environment.

Posted in Mission Critical Industry | Comments Off

The Pros And Cons Of Different Cooling Methods For Data Centers

Unique changes in technology and the energy used to run these important pieces of equipment have created a need for new and innovative cooling methods for data centers.  Cooling methods were first designed over three decades ago, when it was next to impossible to predict the needs of the current time.  When a data center is being designed, there are several different options for cooling methods.  Because cooling can consume almost forty percent of a data center’s total energy, it’s essential to find the most energy-efficient method when designing a data center.

Data Center Cooling Issues

Most data centers face several problems when creating a cooling system.

First, it is necessary to understand the power density needs for the IT equipment in every data center.  IT equipment is changing regularly and has updated needs for cooling depending on the equipment.  When designing a cooling system, it’s necessary to create a system that can handle the current equipment, and account for changes in future equipment.  Cooling systems should be flexible enough that they can adapt to future upgrades to your IT systems.

Second, cooling is typically the second-largest consumer of energy in a data center.  When designing a center and choosing a cooling system, it’s essential to choose one that uses energy efficiently to cool the equipment and the center overall.

Finally, it is necessary to completely understand the airflow in the space and to be able to control that airflow at all times.  The purpose of a cooling system is to avoid hotspots and increase efficiency by ensuring that the right environmental conditions are constantly in place.  It is also necessary to understand that the server’s heat load and heat rejection are part of one process, not two separate needs.

When searching for the right cooling system for a data center, there are three key components to keep in mind: scalability, agility, and environmental friendliness.

Chilled Water System

For data centers with availability requirements that run from moderate to high, a chilled water system is available in three different types:

  • Glycol-cooled chillers
  • Air-cooled chillers
  • Water-cooled chillers

Each method is different based on how the particular system rejects heat using water or air.  A chilled water system pumps chilled water from the chilling area to the computer room air handler (CRAH) units throughout the entire data center.

The pros of a chilled water system are its high level of reliability and the cost savings to the data center.  These systems allow centers to run air conditioning units when power is less expensive.  During the day, when energy rates are higher, the center can tap into the chilled storage rather than running the air conditioning system full time.
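The off-peak strategy above can be sketched as a simple tariff-aware source selection. The peak-hour window below is an assumed example, not a figure from the article.

```python
PEAK_HOURS = range(8, 20)   # assumed peak-tariff window (8am to 8pm)

def cooling_source(hour: int, storage_available: bool) -> str:
    """Run chillers off-peak (recharging storage at the same time);
    draw on stored chilled water during expensive daytime hours."""
    if hour in PEAK_HOURS and storage_available:
        return "chilled-water storage"
    return "chillers"

print(cooling_source(3, storage_available=True))    # chillers (off-peak)
print(cooling_source(14, storage_available=True))   # chilled-water storage
```

The savings come entirely from shifting the same cooling load to cheaper hours, not from reducing the load itself.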

Air Cooled System

Other data centers use air to keep the equipment cool in the center.  With these systems, an air conditioner is combined with a condenser.  Systems for air-cooling are divided into two pieces.  Half the components are located in the computer room air conditioner, and the rest are located in the condenser that is placed outside the facility.

The advantages to using an air-cooled system in a data center are many.  Cooling with air is more environmentally friendly, can greatly lower cooling costs, and is proven to be safer than many other cooling options.  Energy is saved because air conditioning units don’t have to constantly be running, and can be turned off for intervals in order to preserve energy and save money.

With an air-cooled system, the mechanical designs prevent water from coming through openings in the building, increasing the safety of the cooling system.  Air cooling systems also utilize filters to clean air from outside before it enters the building.  Many systems for cooling that use air can also be conditioned by using humidification.  All these factors work to increase the safety of the cooling system as it cools the computer room and all your important equipment.

Cooling Design Basics

Along with the type of medium that is used to cool the center, each data center design must choose to use room, row, or rack-based cooling to more effectively cool equipment.  Depending on how the room is designed, the air will be pushed through in different ways.

Room-Based Cooling

In room-based cooling, a center may employ one or more air conditioners that supply cool air.  This air is not restricted by dampers, vents, or ducts.  With room-based cooling, the supply and return may also be partially constrained by an overhead return or raised floor system.

With room-based cooling, the design is often constricted by the unique measurements of the room.  As more power is used, it may be more difficult to predict performance and maintain uniformity in cooling levels throughout the center.

Row-Based Cooling

In row-based cooling, the air conditioning units are connected to a specific row and are dedicated to this particular row for cooling purposes.  This creates paths for airflow that are more clearly defined and shorter distances.  This also allows for a much more predictable airflow, allowing the cooling system to achieve a higher power density.  Row-based cooling systems are also more efficient due to the reduction in the length of the airflow path.

Rack-Based Cooling

In this type of cooling for data centers, the cooling unit is associated with a particular rack that is holding equipment.  Each unit is mounted directly within or to each individual IT rack.  With rack-based cooling, the airflow paths are even shorter and better defined than with row-based cooling.  Because of this, airflows are not affected by any room constraints or variations in installation.  Rack-based cooling allows for the highest power density to be utilized within the cooling system.

Rack-based cooling is also the most efficient, as the airflow paths are the shortest length of all three types of cooling.  The cooling specifications can also be designed to the equipment on each rack, rather than for an entire room or row of equipment.
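The trade-off among the three designs can be sketched as a granularity choice driven by per-rack power density. The kilowatt break points below are illustrative assumptions, not industry-standard thresholds.

```python
def cooling_granularity(rack_kw: float) -> str:
    """Pick room-, row-, or rack-based cooling from per-rack power
    density. The break points are assumed values for illustration."""
    if rack_kw <= 5:
        return "room-based"
    if rack_kw <= 15:
        return "row-based"
    return "rack-based"

for kw in (3, 10, 25):
    print(kw, "kW/rack ->", cooling_granularity(kw))
```

The pattern matches the discussion above: as density climbs, shorter and better-defined airflow paths become worth the extra per-rack equipment.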

The Importance Of Cooling

Keeping IT equipment cool is an essential part of the design of any data center.  With data centers using so much energy, choosing the most efficient cooling system can not only save money, it can also decrease harmful effects on the environment and support your goal of running your center on less energy.

Posted in data center cooling | Comments Off

Uninterruptible Power Supply Maintenance Planning and Execution

UPS Maintenance

Uninterruptible Power Supply Maintenance

How An Uninterruptible Power Supply Is Maintained

Maintenance is a must for all computers and all computer network components.  Without maintenance, developing problems are left unchecked, updates and upgrades don’t get installed, and an accurate picture of network and server operation never develops.  Planning for the maintenance of any network component can be a challenge, and planning for the maintenance of your uninterruptible power supply (UPS) system is perhaps the most challenging.  Regardless of the apparent challenges, this important task needs to be addressed.

What Maintenance Achieves and Why it’s Important

In short, UPS system maintenance helps ensure that your system is going to perform dependably regardless of changing circumstances.  The impact of variables such as power interruptions, variances in voltage and frequency, and other disruptions are minimized through the regular attention given to each data center component.  Regular maintenance checks reduce the possibility of unplanned downtime and help ensure that all data center components are operating at peak efficiency.  A well-functioning data center provides secure and stable computing power for all employees.  Since well-maintained data centers and UPS systems function more effectively, they contribute to overall productivity – and thus to your bottom line.

What is the Maintenance Process?

Maintaining a UPS system is a multi-step process that includes inspection, analysis, and testing.

  • First a visual check is conducted on the physical components.  Anything found to be worn, loose, burned, or otherwise compromised will be removed and replaced with new components.
  • Next all equipment housings are cleaned and vacuumed by hand.  Removing dust and minute debris helps maintain the optimal operating temperature of each component; environmental systems work more effectively in spaces where air is able to freely circulate and keep electronics cool.  Even though there are few moving parts inside a UPS system, dust can still penetrate module chassis and interfere with necessary function.
  • Batteries and capacitors have the potential to leak fluid onto surrounding components.  A visual check of each battery will be performed during maintenance.
  • The HVAC system and climate controls tasked with maintaining a stable environment inside the space housing the UPS must itself be checked.  Depending on the nature of this system and the extent of its infrastructure, this stage may take some extra time to complete.
  • An operational test will be run on the entire system, including batteries.  The report generated at the end of this test will allow technicians to analyze functioning parameters and gauge the remaining longevity of all battery strings and cells.
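The steps above can be represented as an ordered checklist that stops on the first failure; this is a hypothetical sketch, not a vendor procedure.

```python
# Hypothetical representation of the inspection sequence described above.
MAINTENANCE_STEPS = [
    "visual inspection of physical components",
    "clean and vacuum equipment housings",
    "check batteries and capacitors for leaks",
    "verify HVAC and climate controls",
    "run full operational test, including batteries",
]

def run_checklist(perform):
    """Execute each step in order; stop and report on the first failure."""
    for step in MAINTENANCE_STEPS:
        if not perform(step):
            return f"FAILED: {step}"
    return "all checks passed"

print(run_checklist(lambda step: True))  # all checks passed
```

Recording the sequence this way makes it easy to log which step a maintenance visit reached and why it stopped.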

Detecting the First Sign of Trouble

All electrical devices generate some degree of heat.  This heat must be safely channeled away from the device through the use of fans, heat sinks, outside cooling agents, and a stable environment.  Some of the steps mentioned above are done to help create a stable environment optimal for UPS systems.  This stability is created through temperature moderation, humidity control, and air circulation.  When the HVAC system is working correctly and all heat regulation components within the system are also functioning, there should be no heat spikes.  The only way to detect spots of irregular heat is with a thermal scan.  This scan gives particular attention to all the electrical connections present in the system.  These connection points are apt to generate heat when not working correctly; this heat can indicate an existing problem or one that is in development.  A thermal scan will detect these early warning signs and give technicians a chance to deploy early intervention strategies.
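A thermal scan’s output can be reduced to a hotspot check like the sketch below; the baseline and delta temperatures are assumed values, since real criteria depend on the equipment and the manufacturer’s guidance.

```python
def hot_spots(scan, baseline_c=40.0, delta_c=15.0):
    """Flag connection points whose temperature exceeds the baseline by
    more than delta_c degrees Celsius. Both thresholds are assumptions."""
    return [point for point, temp in scan.items()
            if temp - baseline_c > delta_c]

scan = {"breaker-1": 42.0, "busbar-3": 61.5, "transfer-switch": 44.2}
print(hot_spots(scan))  # ['busbar-3']
```

Anything the check flags becomes a candidate for the early intervention the paragraph above describes, well before the connection fails outright.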

Assessing Power Generation and Use

Another important part of UPS maintenance is the testing of the system that manages power transfer throughout the rest of the center and its modules.  This test assesses the circuit breakers and transfer switches within the UPS; these components are responsible for regulating the flow of power and, if not working correctly, will supply too much or too little to the other components.  The maintenance bypasses must also be checked at this stage to make sure that they’re working within their optimal operating parameters.

Most UPS systems are designed to function for a short, albeit critical, period of time.  This length of time allows for maintenance intervention during planned or unplanned interruptions in power before the system is restored to its usual source.  Some organizations require a backup power system that is capable of generating its own electricity in the case of a supply disruption or stoppage.  UPS systems connected to a backup generator are likely to require additional maintenance checks to assess the generator’s function.

The Ideal Maintenance Schedule

Not every step in this maintenance list will be conducted during every check.  Doing so would cause unnecessary interruption to the organization’s operations and demand the presence of skilled technicians when they aren’t really required.

A visual inspection of component integrity should be conducted once each quarter.  This inspection can be completed in a short period of time and requires little special training; with some instruction any technician can perform this check.  The check of the climate control system and visual check of the batteries should take place once every six months; vacuuming can take place at the same time.

An operational test and complete thermal scan should be conducted once each year.  This may require you to arrange a maintenance visit from your UPS company if you don’t have qualified technicians on staff.  Every two years, test the power system, battery backups, and any generators.
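The schedule above can be sketched as a small due-date calculator. The 30-days-per-month approximation is a simplification; a production scheduler would use calendar months.

```python
from datetime import date, timedelta

# Intervals from the schedule above, in months.
INTERVALS = {
    "visual inspection": 3,
    "climate control + battery check": 6,
    "operational test + thermal scan": 12,
    "power system, battery backup, generator test": 24,
}

def next_due(last_done: date, months: int) -> date:
    """Rough next-due date using a 30-day month approximation."""
    return last_done + timedelta(days=30 * months)

last = date(2013, 1, 15)
for task, months in INTERVALS.items():
    print(task, "->", next_due(last, months))
```

Keeping all four intervals in one table makes it easy to see which tasks can be bundled into a single site visit.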

If possible, plan these checks well in advance.  Good advance planning allows for adequate preparation for any system downtime that may be required.  Because downtime is costly in terms of time and lost productivity, advance preparation is necessary to minimize any negative impacts this period of time may have on your organization.

Between Each Maintenance Check

As thorough as all these maintenance checks sound, there is actually additional work that should be done in between checks.  This work will make maintenance simpler and help your team respond to unexpected circumstances.  Make an inventory list of all spare parts and materials; keep a running tally of materials used so new items can be ordered and kept on hand.  Coordinate the maintenance schedule with larger workplace operations so conflicts in labor and budget allowances are avoided.  And finally, make an effort to keep up with the latest developments in the UPS industry.  Understanding the larger trends will help your organization adapt to the many changes likely to impact your operations.

Posted in computer room maintenance | Comments Off

Uninterruptible Power Supply Industry Changes in Technology

UPS Technology

Important Changes Taking Place In The Uninterruptible Power Supply Industry

It’s taken for granted that the technology field is changing every day.  Because data centers and the uninterruptible power supply systems connected to them are found in an increasing number of organizations, you’ll need to keep up with the tech news related to these subjects if you’re going to stay competitive.  Uninterruptible power supplies (also called UPS) are a vital part of an organization’s informational infrastructure; they power the networks and servers the organization requires.  One of the latest UPS developments to be experimented with is called the N+1 strategy.  This strategy makes redundancy an asset rather than a liability and introduces a new way of thinking about the overall design of UPS systems.

Rethinking the Role of Redundancy

Redundancy is typically thought of as unnecessary repetition; it’s often considered a negative because it adds waste to an otherwise efficient system.  While redundant steps in protocol and redundant stages in management are to be avoided, there are instances where excess can be helpful, even necessary.  The additional repetition of elements within an uninterruptible power supply system may be the key to avoiding some of the most complex and costly problems associated with these systems.  The N+1 strategy first calculates the number of power modules that are absolutely necessary to run a given UPS system.  That number (represented here by the variable N) is then increased by one, hence the name N+1.

The large individual UPS modules that are present in the data centers of most modern enterprises are vulnerable to failure; this inherent vulnerability is a primary drawback of these modules even though their use is standard across industries.  If a failure occurs, there is no backup available to continue the data center’s normal operation.  In an N+1 system numerous smaller modules are grouped together and given their own batteries.  The streamlined operation of the discrete large modules is duplicated by these smaller grouped modules; under normal circumstances grouped modules will behave the same way as an individual one.  The similarity in operation makes it simple to integrate new module formations into an existing data center.
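The N+1 sizing rule described above reduces to a one-line calculation: round the load up to whole modules, then add one. The load and module figures below are made-up examples.

```python
import math

def n_plus_one(total_load_kw: float, module_kw: float) -> int:
    """Smallest module count that carries the load, plus one redundant
    module, so any single module can be taken offline for service."""
    n = math.ceil(total_load_kw / module_kw)
    return n + 1

# Example: 450 kW of IT load served by 100 kW modules.
print(n_plus_one(450, 100))  # 6 modules: 5 to carry the load, 1 spare
```

Because the spare is always present, any one module can fail or be pulled for maintenance without dropping below the N modules the load requires.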

The Primary Benefits of the N+1 UPS System Design

Though the large UPS modules perform admirably in normal circumstances, if a failure occurs the entire system goes down.  System down time is very costly and can cause severe information compromise if the down time is not being managed.  Even the managed down time required for maintenance can cost a lot in terms of lost money and productivity.  The main advantage of the N+1 system is that there are always enough modules in operation to sustain the system’s power, thus fulfilling the promise of an uninterruptible power supply.  The redundant module built into the system allows for any individual module to be serviced without sacrificing any data center function.  Once that module has been addressed, it can be brought back online and another module removed for servicing.  All necessary maintenance and service can be taken care of without planning for downtime or anticipating any of the problems that can happen as a result.

The Anticipated Growth of UPS System Usage

As interesting as the development of N+1 systems is, the news that has really gotten people talking is a recent study conducted by Pike Research that gives developers a solid prediction regarding the growth of energy storage use in commercial buildings.

Commercial buildings, energy storage, and UPS systems are closely connected and are anticipated to become increasingly reliant on each other.  Because commercial buildings house agencies whose work demands a stable and secure source of power for their computer servers, energy storage is now an anticipated architectural consideration during preliminary building planning.  Those agencies without current UPS needs occupying buildings with the capacity for these systems are likely to express interest in taking advantage of this potential.  As an agency grows it can either make use of existing facilities or relocate to a building that can better accommodate them.  Companies creating commercial buildings need to anticipate the changing needs of growing agencies and build in the resources that will be required at different stages of development.

Some Useful Facts and Figures

In 2011, UPS systems were estimated to have generated $3.4 billion worldwide; in 2013, the income generated is predicted to be nearly $4 billion.  By 2016 analysts are predicting that UPS will mature into a $4.8 billion industry worldwide.  Feeling comfortable with these systems and the way that they dovetail with other related fields is going to help organizations strategize for short and long term growth.  How so?

  • Understanding these related concerns will help with selecting the facilities that will house your organization and any affiliated groups housed in distant locations.
  • Computer technicians, network administrators, and computer system designers can provide valuable input with regards to space selection to help decision makers select appropriate facilities.
  • Knowing what energy supply infrastructure is present in a building will help decision makers create more accurate budgets for the purchase of necessary UPS components and for future operation and maintenance costs.
  • Vetting potential UPS suppliers will be simpler because decision makers are already familiar with both the power supply needs of their organization and with the infrastructure available on the property.

Other Industry Changes Due to the Growth of UPS Demand

As demands for UPS systems accelerate and multiply, more companies offering supplies, maintenance, and installation will come into existence.  This increased competition means more options for the consumer.  It also means that the consumer needs to be more aware of the ins and outs of their data center and the power supply that’s connected to it.  Though the Pike Research study didn’t explicitly state the intangible impacts of increasing UPS usage, it’s safe to assume that this increasing demand will translate into an increasing literacy among employees working at all levels inside an organization.  Just as computer literacy has become so widespread as to be assumed, familiarity with the basics of UPS can be expected to spread in a similar, though rather more limited, manner.

The growth of UPS system demand and its impact on related fields and the development of N+1 redundancy planning are just two examples of the changes taking place in the data center industry.  As the need for these systems continues to grow, it’s certain that many more exciting changes will take place.

Posted in Uninterruptible Power Supply | Comments Off

Places Most In Need Of The Benefits Offered By A Data Center

Business Data Centers

There are many businesses that benefit from the usage of a data center.  A data center is essentially a computer center where a company will hold all servers and equipment necessary to keep the technology side of the business running at full capacity.  Having all equipment and servers under the same roof is not the only benefit that comes from using a data center.  Other benefits include:

  • Easy maintenance for all equipment
  • Presence of routers and switches to help servers communicate
  • Continuity of business in the event of a failed server
  • Ability to accommodate a growing business


Perhaps the biggest benefit of using a data center is the assurance that downtime will be minimal.  Data center design allows for an uninterruptible power supply that immediately takes over in the event of a power outage.  This power supply differs from a back-up generator or emergency power supply in that it uses batteries to run the equipment.  If a server fails, professionals are on hand to fix the problem quickly, often before your customers even notice there is a glitch.  You will not be left with a network that is down for several days, disrupting your business, when you use the services of a data center.

Although almost all businesses can benefit from using a data center, there are certain businesses and complexes that should always be using the data center services.  Companies with larger networks and more servers are obviously more likely to benefit from the accommodations in a center.

High Security Requirements Of Data Centers Benefit Banks

In today’s economy and volatile banking climate, many banks find it hard to operate.  With regulations increasing, continuous cost pressure, high-stakes mergers and acquisitions, and changing business models, it becomes incredibly hard for a bank to operate profitably.  In order to remain competitive in today’s world, banks must employ any solution that is efficient, flexible and smart.

Banks benefit from the strong security systems used by data centers to protect not only their equipment, but also the information that is sent through each server.  A bank must be able to guarantee to their customers that their personal and financial information is handled with the utmost security and care, and data centers allow them a higher degree of safety.  Because many banks also offer online services, their information must be available at all hours of the day.  There is no allowance for a failed server or a power outage.  Data centers use techniques and tools that can quickly recover power or fix a failed server.

Government Buildings And Military Complexes Benefit From Data Centers

Any entity that needs to move a lot of information regularly and quickly can benefit from the use of a data center, and government buildings are no exception.  The more information that is handled electronically, the more servers needed.  When you add more equipment and more servers to effectively handle all the needed information, you then must provide a large space to hold all of the equipment.  Rather than keeping it stored in a basement where environmental conditions are a concern, many government buildings opt to create a data center to keep their equipment safe.

 

Military complexes process a large amount of confidential information that must be kept from the general public and computer hackers.  The military also processes information that must be retrievable in the event of a disaster.  Storing the information in a data center ensures that it is afforded the high security level of the center, both physically and electronically, and that information can be recovered in the event that it is lost or destroyed.

 

With the large amounts of information that are processed through both government buildings and military complexes, data centers offer a fast way to process and move information from one place to another, to ensure that it is available to all necessary parties at the right time.

 

Power Plants Benefit From The “Always On” Mentality Of Data Centers

 

There is no shut-off button for a data center.  And when your plant is working to provide power to a certain geographic area, there isn’t an option to shut down for a period of time.  In the event of a power outage, restoring power quickly is essential to the millions of people who are counting on you to light their homes and keep their food cold or frozen.

 

Data centers offer the ability to restore power to your network within a second of the original failure, allowing you to pinpoint where the problem is and fix it quickly.  And with a data center, your equipment and information are easily reached for simple maintenance tasks.  Power plants use data centers because this storage system helps you pass that reliability on to your customers quickly and efficiently.

 

Casinos Value The Ability To Process Large Volumes Of Information

 

Every time a slot machine is pulled, that pull is recorded and produces a small amount of data, which is then sent to the server.  At that point, employees with the right clearance know exactly what is held within that slot machine at that exact moment.  Algorithms are then used to determine gambling outcomes and generate random combinations, numbering in the millions, on that particular machine.  You can see how such large amounts of information can quickly become overwhelming and bog down an average server.
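The flow described above — each pull produces a small record, the server keeps a running total per machine, and an algorithm draws a random combination — can be sketched in a few lines of Python. This is a minimal illustration only; the class, method, and machine names are hypothetical, not the API of any real casino system.

```python
import random
from dataclasses import dataclass, field


@dataclass
class SlotServer:
    """Hypothetical sketch of the per-machine bookkeeping described above."""
    # Running hold per machine ID, updated on every recorded pull.
    totals: dict = field(default_factory=dict)

    def record_pull(self, machine_id: str, wager: float, payout: float) -> None:
        # Each pull sends a small record; the server updates the running balance.
        held = self.totals.get(machine_id, 0.0)
        self.totals[machine_id] = held + wager - payout

    def current_hold(self, machine_id: str) -> float:
        # What that machine holds at this exact moment.
        return self.totals.get(machine_id, 0.0)


def spin(rng: random.Random, reels: int = 3, symbols: int = 10) -> tuple:
    # One random combination out of symbols**reels possibilities
    # (10**3 here; real machines draw from far larger spaces).
    return tuple(rng.randrange(symbols) for _ in range(reels))


server = SlotServer()
combo = spin(random.Random())          # the random draw for this pull
server.record_pull("floor-7-machine-12", wager=1.0, payout=0.0)
print(server.current_hold("floor-7-machine-12"))  # → 1.0
```

Even this toy version shows why volume matters: with thousands of machines each producing a record per pull, the server must absorb a constant stream of small writes while keeping every machine's balance queryable at any moment.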

 

The main benefit casinos gain from using a data center is the speed with which they can process transactions.  On any given night, a casino faces a large number of transactions that must be handled, recorded, and then kept on the server for future reference.  Staying on top of all transactions, and of what is in each slot machine after every pull, is a job for an organized and intricate data center.

 

Data Centers For Every Business

 

It’s simple to see that any business that processes large amounts of information on a regular basis can benefit from a data center.  The less obvious benefits are the high level of physical and electronic security, the ability to make quick repairs, and the option to control the environmental conditions for each and every piece of equipment.  Any business can benefit from electronic equipment that runs more efficiently and helps to maximize profits and performance.

Posted in Mission Critical Industry