Understanding Data Centers and Data Center Tier Certifications

Data centers have been around for ages. Early on, they were enormous computer rooms that were complex and costly to maintain and operate. They often required specially controlled environments, extensive cabling, and constant cooling to keep the systems from overheating, and they consumed a great deal of power. Although early data centers served various purposes, their main objective was to support military efforts. Much has changed since those early days. Today's data centers are more complex, use better technology, and require tier certification to operate properly. Learn more about these data centers and the tier certification processes that go along with them.


Today’s Data Centers


The data centers used today house computer systems along with the components that support them, including telecommunications and storage systems. These centers have backup (redundant) power supplies and redundant data communications connections. They also use environmental controls, such as fire suppression and air conditioning, to protect the equipment and data inside. Many of these centers are extremely large and consume as much electricity as a small town. For this reason, specialized knowledge is required to operate them properly. This is one reason for data center tier certification.


What Is Tier Certification?


Data center tier certification began a few decades ago as an effort to help data centers across the globe. The certification process establishes standards used worldwide and offers ways to measure how centers are performing, including whether they are delivering a return on investment. Facilities can receive a “Tier Certification of Design Documents” or a “Tier Certification of Constructed Facility”. Certification covers only the physical topology of a data center’s infrastructure, that is, what directly affects the operation of the computer rooms. Certifications are awarded at four levels, or tiers:


• Tier 1 is for the basic infrastructure of the site. This is a non-redundant certification.

• Tier 2 is for redundant components of the site infrastructure.

• Tier 3 is for site infrastructure that is concurrently maintainable.

• Tier 4 is for site infrastructure that is fault tolerant.


Why Certification?


Like many industries, including law, accounting, real estate, and many trades, the data center industry relies on certification to operate. Certification establishes worldwide standards that hundreds of companies and organizations rely on for their operations. These standards affect the infrastructures of various networks, including those of large corporations, governments, and other organizations. Through certification, problems with infrastructure are identified, quantified, and then improved. This promotes a better long-term exchange of information.


About Tier 1


The first tier in data center certification is the most basic. It is essentially a server room. In this type of facility, a single path delivers power and cooling to the equipment, and none of the components are redundant. Therefore, any power outage, whether planned or unplanned, will have a negative effect on operations. A Tier 1 data center typically offers 99.67% availability, so some downtime is expected each year and must be planned around to keep the applications in the server room functioning.


About Tier 2


The second tier of data center certification adds some redundant components; however, it still delivers cooling and power through the same single path. With this type of system, certain components can be taken offline for planned service without disrupting any of the data processing equipment. On the other hand, an unplanned power outage or a disruption to the service path will still cause problems with the data processing equipment. A Tier 2 data center generally offers 99.75% availability, making it somewhat more reliable than Tier 1, and this type of data center is sufficient for many companies and organizations.


About Tier 3


The infrastructure for the third tier allows facilities to be available at nearly all times because it provides more redundancy and reliability. The center uses redundant components and relies on several separate cooling and power distribution paths to serve the data processing equipment. Interestingly, only one path is active at a time. For this reason, regular maintenance and some unforeseen power outages will not affect the equipment in the center. Most Tier 3 facilities operate at 99.98% availability.


About Tier 4


Very few organizations need the reliability of Tier 4 infrastructure, so these standards are usually reserved for those for whom a lack of availability would have a substantial impact. For example, many financial institutions operate at the Tier 4 level. This level provides multiple layers of redundant components along with several independent cooling and power distribution paths. All of these paths are active and support the equipment in the processing center, so any equipment failure or power outage, regardless of size or type, will not affect operations. Tier 4 centers are the most available and redundant in the world, proven to be 99.99% available, which is nearly perfect.
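The availability percentages quoted for each tier translate directly into hours of permitted downtime per year, which makes the gap between tiers concrete. The short sketch below uses the figures from this article (the Uptime Institute's official numbers differ slightly, e.g. 99.671% for Tier 1):

```python
# Annual downtime implied by each tier's availability percentage.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

tiers = {
    "Tier 1": 99.67,
    "Tier 2": 99.75,
    "Tier 3": 99.98,
    "Tier 4": 99.99,
}

def annual_downtime_hours(availability_pct: float) -> float:
    """Hours per year a facility may be down at the given availability."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for tier, pct in tiers.items():
    print(f"{tier}: about {annual_downtime_hours(pct):.1f} hours of downtime per year")
```

Run this and the jump from Tier 2 to Tier 3 stands out: roughly 22 hours of allowable downtime per year shrinks to under 2, which is why the concurrently maintainable design matters so much.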


What Is Required Of Data Centers?


Many businesses and organizations rely on data centers daily, so it is not surprising that certain requirements are imposed on these centers to protect those organizations from risk. The Telecommunications Industry Association has set standards covering matters such as room sizes and data center topology. There are also environmental requirements for these centers. The hope is to modernize many facilities so they are more energy efficient, although this requires newer equipment. Furthermore, it is vital that infrastructures standardize their processes so that their systems can be automated. Finally, it is especially important that these centers are secure enough to protect the data they house.


Although data center tier certification may sound confusing, it is simply a way of categorizing the capabilities of data centers across the globe.



What is data virtualization?

The amount of data available in the world today is staggering: a 2011 Digital Universe Study from IDC and EMC estimated it at around 1.8 zettabytes (1.8 trillion gigabytes) and projected it to double every year. In addition, the study highlights that the costs to create, capture, store, and manage data are about one-sixth of what they were in 2005, which means companies and enterprises large and small will continue to capture and store more and more data at lower costs.


So how much data is out there? According to data gathered from IBM and compiled on the social media blog ViralHeat in October 2012, nearly 3 million emails are sent every second around the world, 20 hours of video are uploaded to YouTube every minute, and 50 million tweets are sent every day. In addition, Google processes 24 petabytes (1 petabyte is equal to 1 quadrillion bytes, or 10^15 bytes) of data every day.


With so much data around, it can be difficult to access the information you need, then harness it in a way that can benefit your organization. Data virtualization is the process of bringing together information from several different systems—including databases, applications, files, websites, and vendors—into a universal format that can be accessed from anywhere in the world, without the need to know where the original file is located or how it is formatted. Effective data virtualization tools transform several disparate systems into one cohesive and usable format, giving you the ability to access, view, and transform data that would have otherwise been impossible to aggregate.


Data virtualization systems use a virtual layer, also called an abstraction layer, between the original source of the data and the consumer, which eliminates the need to physically consolidate and store the information on your own servers. It also allows on-demand access to the most up-to-date information from the data sources (including transaction systems, big data, data warehouses, and more) without running consolidation or batch processes within each data repository.
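To make the abstraction-layer idea concrete, here is a minimal Python sketch. The source classes, field names, and record shapes are invented for illustration; a real data virtualization platform would connect to live databases, files, and web services instead of in-memory toys:

```python
# Two "sources" with different native formats, exposed to consumers
# through one uniform query interface (the virtual/abstraction layer).

class CsvLikeSource:
    """Delivers data as positional rows, like a flat file."""
    def fetch(self):
        return [["1001", "Acme", "open"], ["1002", "Globex", "closed"]]

class JsonLikeSource:
    """Delivers data as keyed records, like an API or document store."""
    def fetch(self):
        return [{"account_id": "1003", "name": "Initech", "status": "open"}]

class VirtualLayer:
    """Maps each source's native shape to one common record format."""
    def __init__(self, sources):
        self._sources = sources

    def query(self, status=None):
        unified = []
        for src in self._sources:
            for item in src.fetch():
                if isinstance(item, dict):
                    record = {"id": item["account_id"], "name": item["name"],
                              "status": item["status"]}
                else:  # positional row
                    record = {"id": item[0], "name": item[1], "status": item[2]}
                if status is None or record["status"] == status:
                    unified.append(record)
        return unified

layer = VirtualLayer([CsvLikeSource(), JsonLikeSource()])
print(layer.query(status="open"))  # one "open" record from each source
```

The consumer queries only the `VirtualLayer` and never needs to know where a record came from or how it was natively formatted, which is the core promise described above.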


Regulating Data with Virtualization


Data is a wonderful tool for businesses, but with the volume of digital information that exists today, companies can quickly become overwhelmed if they do not have a way to manage that information. Many companies have multiple data repositories where they collect and store information internally, including individual files and computers, servers, and databases, as well as access to external information from data warehouses, transaction systems, and more. For large corporations, the information on their internal servers and computers alone could equate to millions of gigabytes of data.


In order to use this information effectively, there must be a way to aggregate it into one system that is useful and accessible to everyone. Prior to data virtualization, you had to access the direct source of the data, which presented some challenges. When accessing data remotely, there could be downtime while waiting to download the information you need, and data could become corrupted or inconsistent when integrated into one system from several disparate sources. In addition, allowing several people to access and manipulate the original source of data opens the door to the possibility that someone could corrupt the original files. Since virtualization provides a map to the data through the virtual (abstraction) layer, downtime is virtually non-existent, you get access to the most up-to-date information, and you reduce or eliminate the risk of ruining the original files.


The Costs of Data Virtualization


In order to have an effective data virtualization system, companies need the right middleware platforms to provide support and functionality while reliably providing instant access to all the data available. These platforms include three key components:

• An integrated environment that grants access to all the key users, and outlines security controls, data standards, quality requirements, and data validation.

• The data virtualization server, where users input their queries and the system aggregates all the information into a format that is easy for the user to view and manipulate. This requires the ability to collect and transform the information from several different systems into a single format for consumption. These servers must also include validation and authentication to ensure data security.

• The ability to manage the servers and keep them running reliably all the time. One of the keys to quality data virtualization systems is access to high quality information in real time, which means there must be tools in place to support integration, security, and access to the system, as well as monitor the system logs to identify usage levels and key indicators to improve access.


While the costs to set up this type of system can be high initially, the return on investment a company can achieve through strategic use of the data gathered can more than outweigh the initial costs.


Case Studies in Data Virtualization


There are hundreds of examples of companies today, from large corporations to small- and medium-sized businesses, that are using data virtualization to improve the way they collect, maintain, and utilize information stored in databases and systems throughout the world.


For example, Chevron was recently recognized for implementing data virtualization in a project; by adding the virtual layer to several systems that had to be aggregated, project managers were able to cut the total time to migrate systems almost in half, and lower the risk of losing critical data.


Companies like Franklin Templeton, which rely on data to deliver results to investors, use data virtualization to manage databases more efficiently, eliminate data duplication within the system, and shorten the amount of time it takes to bring new products to the market, increasing their competitive edge.


For large corporations that aggregate data from several different data marts and high-volume warehouses, the ability to consolidate that information into a usable format that drives sales and customer retention strategies is a critical competitive advantage. Companies like AT&T are using data virtualization to consolidate hundreds of data sources into one real-time system that informs everything from R&D to marketing and sales.


Whether you are a small- or medium-sized business that is struggling with time-consuming routine IT tasks, such as manually managing several systems and databases, or you are a large corporation trying to access and aggregate billions of pieces of information every day, data virtualization can help you view and manipulate information that will give you the competitive edge you need. Every company has data, but without the ability to safely and reliably access and organize the key pieces that you need from all your disparate systems, you will never realize all the benefits that information can offer.


Data encryption involving authentication, authorizations, and identity management


The Importance of Secure Data Protection For Individuals and Businesses


Protecting data is becoming increasingly important in our internet-savvy world. With millions of people using the internet daily, and an unfortunate number of them using it for less-than-honest purposes, securing your website data is a huge priority for consumers and business owners alike.


This is especially true if you run a business or organization that keeps a database of sensitive client information such as telephone numbers, social security numbers, credit card numbers, and home addresses. It’s important to use the most up-to-date data security and encryption processes to minimize the chance of anything being intercepted as it is transferred over the internet.


Protecting Your Data With Encryption


There are a number of ways to protect online information, with data encryption being the most common way to protect information transferred between servers and clients, or held in data storage centers. A well-trained data security team will be able to advise you on the best way to secure your information and keep it safe. Here are several important ways to protect your company data, user information, and financial accounts.


Use of Encryption


Encryption transforms data, making it unreadable to anyone without the decryption key. By encrypting data as it is exchanged between web browsers and servers, personal information such as credit card numbers, social security numbers, and addresses can be sent securely over the internet with much less risk of being intercepted during the process.
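As a toy illustration of that idea, the sketch below uses a simple XOR cipher. This is not a real encryption algorithm and should never be used to protect actual data; it only demonstrates that ciphertext is unreadable without the key and that the key holder can recover the original:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key.
    Applying it twice with the same key restores the original.
    For illustration only -- real systems use vetted ciphers like AES."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"card number 4111-1111"
key = b"secret"

ciphertext = xor_cipher(message, key)
assert ciphertext != message                    # unreadable without the key
assert xor_cipher(ciphertext, key) == message   # key holder recovers it
```

Production systems rely on standardized, reviewed algorithms and libraries rather than anything hand-rolled like this, but the principle is the same: without the key, the transformed bytes reveal nothing useful.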


Two protocols commonly used to encrypt data in transit are:

● Secure Shell (SSH) Encryption Protocol – This protocol encrypts all data exchanged between the client and the server, and is commonly used for secure remote logins and file transfers.

● Secure Sockets Layer (SSL) Encryption Protocol – This protocol encrypts all data in the transaction between the web browser and the web server before any data is transferred, protecting secure data as it travels across online connections.
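In practice, developers rarely implement these protocols themselves. In Python, for example, the standard-library `ssl` module applies TLS (the modern successor to SSL) to an ordinary socket. This sketch only builds the client-side context and wraps a socket without connecting anywhere; `example.com` is a placeholder hostname:

```python
import socket
import ssl

# A default client context enables certificate verification and
# hostname checking, the protections described in this section.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)   # True
print(context.check_hostname)                     # True

# Wrapping a socket associates it with the TLS context; the actual
# handshake would happen on connect (no connection is made here).
raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tls = context.wrap_socket(raw, server_hostname="example.com")
tls.close()
```

The key point is that the application hands its plaintext to the wrapped socket and the library handles key exchange, encryption, and certificate checks transparently.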


Types of Data Encryption Used To Protect Your Information


Authentication is the process used to prove that a computer user is who they say they are. It identifies who the system (or person) is and then verifies that they are “authentic”. Servers use authentication to find out exactly who is accessing their website or online information, and clients use it to be sure the server is the system it claims to be. Authentication generally involves a username and password, but it can also be accomplished through voice recognition, fingerprints, employee ID cards, or even something as sophisticated as retina scans.
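A common way to implement password-based authentication is to store a salted hash of the password rather than the password itself, so a stolen database does not reveal credentials. This sketch uses Python's standard-library PBKDF2 implementation; the iteration count and the example password are illustrative choices, not a recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    """Derive a hash from the password. Store only the salt and hash,
    never the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` can leak timing information that helps an attacker guess the hash byte by byte.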


Web servers issue authentication certificates to clients as well, which are proof that the system truly belongs to the entity it is claiming. These certificates are often processed through third party authentication, such as Thawte or Verisign. You can check which authentication is used by a company by looking on their website for a seal or link to the third party provider they use.


Authorization is usually coupled with the authentication process and determines whether the client has permission to access the resource or file they are requesting. Once the server identifies who you are through authentication, it checks a list of authorized users to see if you are allowed to visit the website, open the file, or use the resource you are attempting to access. This may or may not involve a password.


Authorization usually only grants or revokes access based on your identity as you log in to the file or website. Most internet web pages are open to the public and do not require authentication or authorization to access. Private sites, company-restricted information, and other private data are generally protected with authentication and authorization controls.
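At its simplest, an authorization check is a lookup of an authenticated identity in a permission table. In this sketch the usernames and resource paths are invented for illustration:

```python
# Authentication establishes who the user is; authorization then
# consults a permission table for that identity.
PERMISSIONS = {
    "alice": {"/reports/q3.pdf", "/payroll"},
    "bob": {"/reports/q3.pdf"},
}

def is_authorized(user: str, resource: str) -> bool:
    """Grant or deny access based purely on the authenticated identity.
    Unknown identities get no access at all."""
    return resource in PERMISSIONS.get(user, set())

print(is_authorized("alice", "/payroll"))    # True
print(is_authorized("bob", "/payroll"))      # False: authenticated, not authorized
print(is_authorized("mallory", "/payroll"))  # False: unknown identity
```

Real systems layer roles, groups, and inheritance on top of this, but the separation holds: proving who you are (authentication) is distinct from what you may touch (authorization).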


Identity Management describes the process of managing authentication and authorization for the people within an organization. An identity management system keeps track of privileges across an entire entity, increasing security and productivity for a business. It can be implemented with active directories, identity providers, access control systems, digital identity managers, and password authentication. By tracking how users receive an identity, protecting that identity, and granting appropriate access, an identity management system saves money, reduces repetitive tasks, and minimizes system downtime.


A Back-Up Plan In Case Data Is Breached


What do you do if, despite your best efforts, sensitive company or client data is breached? It’s important to have an emergency plan that outlines the proper steps to take in this unfortunate situation. In order to act appropriately, be aware of government regulations and rules for handling this kind of incident.


According to the Better Business Bureau, some important steps to take to prepare for and react to a data breach include:

1. Create a “Data Breach Notification Policy” to let your consumers know how you will handle the situation if data compromise has occurred.

2. Train your employees to identify possible data breaches and how to report them.

3. When a data breach has occurred, immediately gather the facts so you know what was accessed, how it was accessed, and how you can prevent more data from being compromised.

4. Notify any financial institutions involved. For instance, if bank account numbers were accessed, notify the relevant banks immediately so they can watch accounts for suspicious activity. If credit card numbers were affected, credit card companies can change card numbers and invalidate the old ones. This will minimize damage.

5. Seek outside counsel from a lawyer, a risk consulting company, or a relevant government agency. They can help you identify the laws involved and whether you need to alert clients, consumers, or the government of the incident.


The Importance of Taking Precautions to Secure Data with Credentials


The importance of securing your data with authentication protocols and credentials cannot be overstated. Making sure that secure data is viewable and accessible only by those with the proper credentials is important for the management of any business. Find a data security partner who shares your vision for the protection of company documents, user identification information, and other private information. Take every precaution necessary to make sure your customers, employees, and business data are protected against hackers, thieves, and anyone wishing to do harm to your business, clients, and employees.



The Benefits And Risks Of Cloud Computing

Cloud computing has become increasingly popular over the past several years. It enables hosted services to be delivered over the internet rather than physically storing information on local computers, and the data center is a critical part of service delivery. Cloud computing has both benefits and risks.


Service Models


There are three different cloud service models: IaaS, PaaS, and SaaS. IaaS (Infrastructure as a Service) supplies computing resources such as network capacity, storage, and servers. PaaS (Platform as a Service) gives users access to software and services so they can create their own software applications. SaaS (Software as a Service) connects users to the cloud provider’s own software applications.


Public or Private


There are several different types of clouds. A public cloud sells service to anyone on the internet, and a number of well-known, established companies provide public cloud computing services. A private cloud supplies hosted services to a defined group of people through a proprietary network. Another variation is a hybrid cloud, which combines two or more public or private clouds that are separate entities but intertwined by technology. A community cloud is designed to be used by multiple organizations in support of a defined community; community clouds can be maintained by a third party and located on or off the premises. No matter the type of cloud, the goal is to supply access to straightforward, scalable computing resources and IT services.


Low capital expenses


The computing and capacity needs of organizations are constantly in a state of flux. Cloud computing can be a very cost effective way to access additional computer resources from a state of the art data center. It provides flexibility that many businesses desire. There is no upfront investment in equipment. With cloud service, the needed equipment is already available and ready to use. Businesses can rent the latest equipment, such as servers, when required, rather than having to purchase or upgrade their own servers.


It’s elastic


Another benefit of cloud computing is that it is elastic: a user has access to as much or as little service as they need at any point in time, much like turning on a faucet when you want a drink and turning it off when finished. The easy scalability of cloud computing eliminates concerns a customer may have about provisioning additional capacity on short notice; the cloud provider handles these issues for its customers.


Provider managed


Cloud services can be fully managed by the provider. Many businesses find this attractive because it can help lower IT operating costs: a business using cloud computing does not need expensive IT staff to install, maintain, and monitor equipment and applications.




Lack of transparency


One of the negatives of cloud computing is that services are not typically delivered with much transparency. It can be difficult to know exactly what is being done and how it is executed. Cloud customers cannot monitor availability or do anything themselves to resolve a service interruption or an outage. Loss of data is another concern: even if a cloud provider states it has redundant backup capabilities, you will not know for sure until there is a problem. It is always a good idea for a company to back up its own data for extra protection.


Data ownership


Cloud customers frequently don’t own their own data. There are a number of public cloud providers whose contracts contain language clearly stating any data stored is owned by the provider, not the customer. Cloud vendors believe having ownership of the data provides them more legal cover if something were to go wrong. If a vendor has access to customer data, they are also able to search and mine the data to develop more potential revenue streams.


Shared access


A potential risk of cloud computing is that it involves sharing access with other users. One of the reasons the price for cloud services can be so attractive is that vendors realize economies of scale to drive down costs: multiple, typically unrelated customers share access to the same computing resources, including memory, storage, and CPUs.


Advantages of building a private cloud


It may make the most sense for a company to build its own cloud rather than rely on other options. If a company prefers to manage all its computer resources and IT operations in house, then a private cloud is a good choice. This solution can also give a company complete control over its technology. A private cloud can be an optimal way to solve an organization’s business and technology challenges. It can provide IT as a service. The results can include lower costs, increased efficiency and business innovation. A private cloud can be used for employee collaboration from any location on any device. It will also help ensure the best possible network and application security.


Things to consider when building a cloud


There are many things to consider when building a private cloud, including infrastructure, the types of applications that will be run, access methods, traffic patterns, and security. When determining infrastructure, it is important to have an experienced data center partner involved in the process to help ensure quality of service.

Mobile device-based access has become increasingly important, and virtual desktops are another important access component. Many companies need traffic endpoints that are location independent. Security is always a crucial piece of cloud design: the appropriate users and devices must be able to access the needed information at the right time, while the system is secured against attacks and breaches. Different types of data and traffic carry different levels of importance, and all must be supported across the network. There is no one-size-fits-all private cloud model; every organization has different priorities and needs that should be taken into account and reflected in the cloud architecture.


Cloud computing is directly influencing the future of technology. There are several service and delivery models for cloud computing. Each one has its own pros and cons. Building a private cloud may ultimately be the best option for an organization that wants to have complete control and manage everything in house.





What The 2013 Data Center Census Means For The Industry


The 2013 data center census offers a glimpse into the future of the data center industry. Although the census cannot completely predict the future, it gives some telling indications about where data storage is headed. It identifies answers to questions about where data centers will spend their money, what new technology will be developed, how the cloud will affect the need for data centers, and many other questions important to anyone working in the industry. While it is impossible to predict exactly what will happen or foresee any disasters that may affect the industry, the census gives some insight into how data centers will change in 2013.


The Implications Of The Cloud

The cloud has become a popular way for companies to store important information directly to the web, rather than being forced to hire a data center to handle the storage of all important information. Concerns within the industry were that this new technology would make data centers obsolete and old fashioned, and that companies would turn to the cloud as the more effective way of handling information. As the data center industry worked to evolve with this new technology, many in previous years felt that a significant amount of their time and money would be spent developing new technologies that conform to the idea of the cloud. In reality, very few data centers actually invested money in this idea in 2012, choosing instead to focus assets on everyday items like cooling, cabling, and power supplies.

According to the data center census, 2013 may just change all that. From the information in the census, it appears that cloud infrastructure architecture will be a main focus of data centers throughout the world. Some countries are predicted to see close to a 138% increase in uptake of cloud architecture. Countries forecasting a lower uptake are generally those with higher uptake in previous years, suggesting they have already implemented the new technology. No matter which country is examined, it’s clear that the cloud, and the changes it necessitates, will be an important part of every data center’s future plans.


Data Center Infrastructure Management

DCIM is a fairly new idea that works to merge the fields of network management and physical management within the data center, creating new systems that make the center more energy efficient.

DCIM has been considered the solution to the problem of energy efficiency within the data center. Implementing new data center infrastructure management should ideally make each data center more efficient and more cost effective. DCIM did not perform as well as expected in the previous year, but 2013 may change that. Regions such as Latin America and Russia predicted high DCIM uptake throughout 2013, and many markets with the lowest uptake in 2012 also expected higher uptake in 2013.

The most important pieces of information to take from the census are that the cloud will be an important aspect of every data center going forward, that data centers will spend a large amount of money working to make the centers cloud-friendly, and that DCIM will be implemented at a higher rate than before in order to make data centers more energy efficient.

The focus on energy efficiency may stem from the fact that the 2012 census reported staggering numbers for the total energy consumption of data centers worldwide. Creating data centers that are more energy efficient saves money and is easier on the environment, while producing measurable outputs that can be examined across every aspect of a center’s energy usage.


Census Details

The census draws on information from over 10,000 participants all over the world on important and relevant topics within the industry. It also supports the industry’s philanthropic efforts: five dollars is donated to the Engineers Without Borders organization for each survey that is completed. The previous year’s census helped organizers amass a fund of $40,000, which was then donated to UNICEF to help children throughout the world.


Practical Applications For The Census

For any individual working in the data center industry, there are practical applications for the information gleaned by the census. With such a high number of participants, it can be assumed that the information fairly accurately depicts what the next year will be like for the industry.

Data center professionals can recognize from the census that the competition is focused on new technology, and on creating an infrastructure that is compatible with the cloud and allows customers to utilize this valuable new tool. Data centers hoping to stay relevant within the industry will be rewarded for moving assets toward the development of a new infrastructure that includes the cloud.

Data center professionals can also assume that energy efficiency will be a big topic this year in the industry. When a center is run more efficiently, and costs are lowered, the savings can either be passed on to the customer, or used to improve the service that the customer receives. Data center management must realize that the competition is focused on lowering energy costs both as a way to improve the way consumers look at the center, and as a way to free up money for more valuable developments and tools. Ignoring the need for a data center that consumes less energy can make a center seem outdated and inefficient to the average consumer.


The Forecast For The Future

Each year, individuals within the data center industry can focus their efforts on important updates and changes that make the center more functional and more successful. To determine where money should be spent to accomplish these goals, it’s important to pay close attention to the census information that is released each year. This can give each data center important clues as to where the industry is headed, and how quickly it needs to get there.


Setting Up A Disaster Plan For A Data Center

In terms of disasters, few people are prepared for the chaos that can come as a result of any type of disaster. Floods, fires, tornadoes, hurricanes, and even heavy rainstorms can damage structures and belongings beyond repair. Most of the time, there is little to no warning that a disaster will occur, and minimizing the damage becomes difficult without a disaster recovery plan in place. This precaution is especially important for a data center, where large amounts of expensive equipment and irreplaceable information may be stored. Creating a basic disaster plan for your data center is a simple process if you know where to start.


Assess The Risks

What types of risks does your data center face on a daily basis? A center in the middle of Arizona isn’t likely to deal with a hurricane, but a fire or monsoon is a likely possibility. California data centers may not see a heavy amount of snowfall, but must be prepared for floods and earthquakes. Before you can prepare for any disaster, you must determine which disasters your data center is at risk from.

Along with natural disasters, there are man-made disasters that can happen with little warning. Fires may result from an electrical short, equipment may be damaged by a theft or burglary, and any number of other man-made disasters may occur. Data centers in all parts of the world should be prepared for these untimely incidents.


Within an operational risk assessment, examine the following information:

• The location of the building

• Access routes to the building

• Proximity in relation to highways, airports, and rail lines

• Proximity to storage tanks for fuel

• How power to the data center is generated

• Details of the security system

• Any other critical systems that may shut down in the event of a disaster


Assessing the risks is the first step in creating a contingency plan that protects the building, the information, the equipment, and the employees when the unthinkable happens.


During the risk assessment, do the following things.

• Include all IT groups to guarantee that all departments have their needs met in the event of an emergency.

• Obtain a list of all data center assets, resources, suppliers and stakeholders.

• Create a file of all important documents regarding the infrastructure, such as floor plans, network diagrams, etc.

• Obtain a copy of any previous disaster plans used for the particular data center.


Once all relevant information has been gathered regarding the data center, the design process can begin.


Preliminary Steps For Disaster Planning

The first step in creating a disaster plan for a data center is to consult with all management within the center to identify the threats that are most serious to the center. These could include human error, system failure, a security breach, fire, and many other things depending on the individual center.

The second step is to determine, with the help of other management professionals, where the most vulnerable areas of the data center are located.

Next, study the history of any malfunctions the data center has faced and how each disaster was handled.

It’s also important to determine exactly how much time the data center can handle being without power before the situation becomes critical.

Next, review the current procedures for how an interruption to the data center power supply should be handled, and obtain information regarding when these procedures were last tested by the appropriate individuals.

Single out emergency teams for the building, and review their training in regards to emergencies to determine if additional training or updates need to be implemented.

Finally, identify the response capabilities for emergencies for each of the center’s vendors.
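One of the steps above, determining how long the center can ride out a loss of utility power, comes down to simple arithmetic: stored energy divided by full facility load. The sketch below is a rough back-of-the-envelope estimate; the overhead factor and all figures are illustrative, not measured values.

```python
def outage_runtime_minutes(ups_capacity_kwh, generator_fuel_kwh,
                           it_load_kw, overhead_factor=1.5):
    """Estimate ride-through time: total stored energy divided by the
    full facility load (IT load times an assumed overhead factor for
    cooling and power distribution)."""
    facility_load_kw = it_load_kw * overhead_factor
    total_energy_kwh = ups_capacity_kwh + generator_fuel_kwh
    return total_energy_kwh / facility_load_kw * 60

# 200 kWh of UPS batteries, 1,000 kWh of diesel, 400 kW of IT load:
print(round(outage_runtime_minutes(200, 1000, 400)))  # → 120
```

A real assessment would also account for generator start-up time, fuel delivery contracts, and load shedding, but even this crude number tells management whether the critical window is measured in minutes or hours.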


Developing A Data Center Disaster Recovery Plan

When compiling information in regards to risk assessment, no stone should be left unturned. The more information, the more accurate and successful the disaster recovery plan will be. Disaster recovery plans cannot be created without a good level of organization and information, and will be extremely ineffective if information is inaccurate or incomplete.

The next part in a disaster recovery plan involves compiling a gap analysis report that identifies the differences between the current emergency plan and what the new emergency plan needs to be. During this process, all changes should be clearly identified and listed in order to more efficiently address potential problems. Include the total investment required to make the changes, along with recommendations from the proper professionals on how to implement each change. Once the report is complete, have each member of management read the report and choose which recommended actions will be put into place. Each management member should have input into which changes are made, and coming to an agreement may require more time spent at the drawing board.

Once the recommendations are in place, and each member of management has agreed that the needs of their individual department are met, it’s time to implement each of your changes for your critical assets. Hardware and software, networks, and data storage should all be addressed within this step to ensure that equipment is protected and that information can be recovered in the case of a disaster. Once changes are implemented, tests should be run to determine if system recovery assets and plans are properly functioning.

If it is determined that the updates are functional and successful in recovering and saving equipment and data, it’s now time to update all documentation for disaster recovery in company handbooks or policy manuals. Because technology is constantly changing and the needs of data centers are always evolving, disaster plan updates should be made regularly. To do this successfully, there must be an accurate record of former procedures and how well they worked.


The Next Disaster Recovery Plan Update

Once a new recovery plan is in place, there is no time to relax. Changes in the plan should constantly be on the minds of management personnel, and the next update to the system and process should be scheduled before the committee adjourns.

When designing a disaster recovery plan, keep the information as simple as possible in order to stay more organized, and to avoid going overboard and overlooking important minute details. It’s not necessary to completely overhaul a system to update a plan; constant changes should be made to protect the equipment and information housed by a data center.



Facebook Opens First Data Center Outside of United States


The city of Luleå, Sweden, is situated in the frigid northern part of the country. Just sixty miles from the Arctic Circle, temperatures in Luleå can sink as low as -41°C. Luleå has a big new resident: social media giant Facebook recently opened a brand new data center in this icy section of Sweden. The Luleå facility is the company’s first data center located outside of the United States, a sign of its commitment to better serving its European user base. While Luleå might seem like an odd choice for such a facility, northern Sweden looks set to play host to servers from a variety of web operations, including Google. Facebook has over one billion monthly users, who create a significant amount of data for the social media giant to manage. Facebook users generate over ten billion messages a day, store in excess of three hundred billion photos, and create billions of “likes” a day. That adds up to a lot of data to store.

Why Luleå?

Facebook’s choice of Luleå has been met with both praise and skepticism from users, environmental groups, and privacy advocates. The location was chosen to meet two goals. Facebook’s first goal was to improve performance for its European user base, which required a new data center location that could increase speed and performance for those users. Facebook has more users outside the US than inside, making the development of data centers outside the US a logical decision. The Luleå data center brings user speeds up to or beyond Google levels, a significant improvement. The new facility will be largely dedicated to storing “unused” data such as photographs that are infrequently accessed. Photographs account for a significant amount of Facebook traffic: according to the company, over eighty percent of traffic goes to less than ten percent of its photos. The Luleå storage center is capable of storing the equivalent of 250 million DVDs.

Facebook’s second goal was to build an environmentally friendly center, in keeping with its growing commitment to energy efficiency. The new data center is powered by hydroelectricity from a nearby dam that generates twice the power of the Hoover Dam. In the event of a power outage, the data center will depend on a stockpile of diesel generators. Luleå’s frigid environment was another selling point for Facebook; the eight months of icy winter serve as a chiller for the five acres of server equipment. The facility uses the cold air to drive an evaporative water cooling system, and surplus heat from the cooling system and the servers is used to warm the data center’s office space. The “cold” approach to data centers is rising in popularity due to its cost effectiveness and energy efficiency. Facebook’s new center operates at cutting-edge efficiency levels; the company claims that it is “the most efficient and sustainable data center in the world.” Facebook also chose a minimalist approach to its server equipment, forgoing unnecessary plastic and metal cosmetic materials.

Facebook’s decision to build in Luleå was also bolstered by a perceived corporate-friendly environment in Sweden and a readily available skilled workforce.

A Valuable Backup

Besides providing a boost for European users, Facebook’s new data center handles live traffic from around the world. Facebook’s other data storage centers are located in California, Oregon, and Virginia, with another center in development in North Carolina. Noting the server problems that other web and communication providers have experienced, Facebook’s new data center also serves as a backup in the event of server crashes at the other locations. After some difficulties following its Initial Public Offering, the social media giant has doubled down on efforts to keep its user base satisfied. Facebook’s continued efforts to effectively monetize the site demand a high level of performance and consistency. Maintaining and storing users’ messages, photos, and other material, whether or not it is frequently accessed, is an important part of Facebook’s effort to keep users logging on regularly.

Security and Privacy Concerns

Some users have responded favorably to the new data center and its location, assuming that its European location will keep their valuable personal information and correspondence more secure. Recent news that the National Security Agency may have been monitoring private communications and social media information has caused some Europeans to question the safety and privacy of their Facebook accounts. Facebook has repeatedly denied providing information to the NSA or other government agencies, or participating in warrantless exchanges of user information. This is not the first time Facebook has been accused of dispersing user information; there have been allegations that the site has allowed advertisers and other entities access to user information as well. Facebook accounts are rife with personal information, including personal data, correspondence, pictures, shopping habits, and interests. This wealth of user information, and its value to advertisers, is one of Facebook’s most valuable assets. Facebook also faces constant threats from malicious third-party applications, phishing software, fake accounts, and data manipulation.

Those who tout the data center’s location as a security boost might be surprised to learn that Sweden is not necessarily a haven for web privacy. A law passed in 2008 allows the Swedish government to monitor and record any web traffic that crosses its borders without a warrant. This means that all of the live traffic and data going through the Luleå data center can be legally accessed by the Swedish government at any time, for any reason.

The Future of Facebook and Data Storage

Facebook’s increasing emphasis on user customization has the potential to dramatically increase the amount of personal information stored in its data centers. Barring a massive decline in its user base, Facebook will require additional data storage at some point in the future. Facebook has experienced a recent decline in downloads of its mobile applications, leading the company to consider expanding its partnerships with mobile device providers. It is too early to tell whether the planned emphasis on mobile devices will have a significant impact on Facebook’s data storage requirements. Facebook intends to continue developing new data storage technology; while it handles the majority of data center design and development itself, it is not opposed to working with other tech and web companies to further innovation.


A Guide To Modular Data Centers

Modular data centers became popular when the economy started to circle the drain. Businesses needed new ways to secure funding in small amounts while simultaneously decreasing the risks that came with creating a data center. Two of the main gripes with traditional data centers are deployment speed and cost: it takes an abundance of both time and money to construct a building to house a traditional data center. Something else to bear in mind is that advancing technology encourages businesses and organizations to shift their focus to scalable, rapidly deployed modular designs. Besides these two reasons, there are several other reasons that a modular data center is preferable to a traditional data center.


The Overall Design of Modular Data Centers


From the original order to deployment, modular solutions offer an extremely fast timetable, because they’re designed to be personalized, ordered and shipped to data center sites in a matter of months or less. Modular construction can also proceed along parallel, rather than strictly linear, dependency paths. Since the design can easily be standardized and repeated, it’s no problem to match infrastructure scale to demand for modular data centers. The only limits on modular scale are the foundational infrastructure at the data center site and the open area available. Another aspect of scalability is the ease with which modules can be quickly and efficiently replaced whenever the technology needs to be upgraded. What this means for businesses and organizations is that they only need to predict shifts in technology a matter of months in advance.


Scalability for modular data centers is not only determined by how quickly the proper environment for a data center can be set up. An agile data center foundation means being able to swiftly satisfy the needs of a growing and shifting business. Such needs might include rolling out a revolutionary new service or cutting down on downtime. It’s all about agility. Some businesses want a data center for the sole purpose of capacity planning, while others like modular data centers because they offer some of the best disaster recovery options.


Something else to consider is that modular data centers can be delivered anywhere in the world the end user desires. Rather than being delivered all at once, a modular data center can be shipped in pieces and re-assembled rapidly at its final destination. Such mobility can be one of the top selling points for businesses and organizations that make disaster recovery a top priority, since a modular data center can be shipped to a recovery site, put back together, and have the organization running again in very little time.


The Disadvantages of Modular Data Centers


One of the disadvantages of modular data centers is that some standard configurations have limited value for organizations that want high-performance computing, along with the heavy cooling and power requirements that come with it. If owners look at the cost analyses, they might see little to no savings from installing a module anywhere outside an existing data center. Site preparation work would still need to be done, which requires trenching and bringing utilities to the site, both of which cost money. In such cases, it can actually be more affordable to construct a traditional data center.


Being locked in with a single vendor is something else organizations worry about when it comes to modular data centers. With a single-vendor contract, an organization may have fewer choices in the models and brands of internal components, and fewer options in terms of service if something ever goes wrong with the data center. Being stuck with one vendor also means that data center owners can’t shop around for lower-cost repair and maintenance services.


Those looking into data centers also have to bear in mind how well they’ll work with the resources they already have. A data center’s infrastructure management applications have a main console for keeping a digital eye on a vast network of resources, such as virtual and physical servers, power distribution, and cooling systems. If a modular data center has DCIM capabilities of its own, it should offer standard interfaces for swapping information with the organization’s existing DCIM or systems management applications. To keep from having to supervise modules separately rather than as part of the whole, buyers should ask for a rundown of specifics on how open the modular unit is.


If you want a data center that’s based on open standards, it’s best to show the data center company a list of your primary standards, since not everyone has the same idea of what open standards are. This will keep you from wasting time, and possibly money, clearing up confusion before you and your modular data center are up and running.


Advantages of Modular Data Centers


Modular data centers are an engineered product, meaning their internal subsystems are tightly integrated to make the module more efficient at both power delivery and cooling. Pure-IT and first-generation modules will more than likely not show the same efficiency gains as modules that incorporate containment solutions similar to those used in a traditional data center. To save money on distribution gear and to avoid power losses over long cable runs, it’s suggested that you place the module’s power plant relatively close to the IT servers. You’ll also find opportunities to use energy management platforms inside modules, since every subsystem is engineered as part of a single piece.


Do your homework and plenty of research before making a final decision on whether you should get a modular data center or a traditional one. If you’re looking for efficiency, easy setup, resiliency and scalability, you’ll more than likely benefit from a modular data center.




Data Center Regulations And Construction Requiring PUE Surveys

Possible Future Ramifications For Data Center Regulations And Construction Requiring PUE Surveys

There are three main categories in which PUE surveys will likely be used in the future construction and maintenance of data centers.  These include future government regulations pertaining to greenhouse gas emissions; standardization of data center construction; and using PUE surveys to maximize return on investment, given the predicted short “shelf life” of data centers.

New data centers have been estimated to be obsolete in under a decade by research firms including Gartner (7 years) and the International Data Corporation (9 years).  Many others speculate that the shelf-life of data centers is closer to five years.  Thus, data center construction and maintenance must take cost-benefit into account to a greater degree than many other structures built to support infrastructure or service providers.

Data center construction must satisfy a variety of specific needs.  For example, data centers must be in a position to effectively re-route many utility lines during construction; design and build firms must work closely with government agencies and service providers to ensure that a large data center will not overwhelm an existing power or utility grid; data centers must have complex HVAC systems; and any loss of power or lack of maintenance could be catastrophic for data centers catering to individual clients.  The potential for lost data, and for customers’ websites going offline, could easily result in lawsuits citing lost earnings, plus the corresponding legal fees.

To make a new data center successful, it is imperative to stay informed on all pertinent news and stay abreast of likely future trends in regulation regarding construction methods and energy usage, especially hot topics such as greenhouse gas emissions.  Due to the costs associated with building and maintaining a data center, knowing the nuances of data center maintenance, such as Power Usage Effectiveness (PUE) surveys, can be the difference between success and failure.

PUE survey services

PUE surveys can show whether or not a data center is working to its full potential.  One of the most significant costs of running a data center is energy.  Taking preventative measures to ensure that your data center is functioning optimally can save money now and help prepare for possible future regulations addressing the greenhouse gas emissions attributable to a facility.  Three common types of PUE surveys include:

  • Thermal imaging surveys
  • Power quality surveys
  • HVAC and thermal imaging surveys

Examining the overall power consumption of a data center is helpful to an extent.  A sudden and unexplainable spike in energy consumption should be cause for concern.  However, overall energy consumption reveals little about the specific areas that are not performing with optimal efficiency, and offers no starting point for remedying the problem.  Without specific PUE data, trying to optimize efficiency and address problem areas can be like searching for a needle in a haystack.
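The metric behind these surveys is simple: PUE is total facility energy divided by the energy delivered to IT equipment, so a value near 1.0 means almost no overhead for cooling and power distribution. A minimal sketch, with illustrative figures:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy divided by
    the energy delivered to IT equipment. 1.0 is the ideal; typical
    facilities land well above it."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,500 kWh to deliver 1,000 kWh to IT gear:
print(pue(1500, 1000))  # → 1.5
```

Tracking this ratio per room or per survey, rather than only facility-wide, is exactly what turns a vague "our bill went up" into a target area to investigate.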

Thermal imaging surveys

Routine maintenance is highly recommended, and it is not only conceivable but common for measurement tools to need re-calibration.  In addition, PUE surveys are designed primarily as a preventative measure.  Instead of waiting until there is a noticeable problem, thermal imaging surveys can detect an atypical transfer of energy, such as heat.  Thermal imaging can be especially effective in a data center environment, as data centers rely heavily on equipment functioning in an artificial climate.

Thermal imaging technology can provide more accurate PUE data due to the consistent temperature within a data center as opposed to a structure with a less advanced (and less predictable) HVAC system.  Thermal imaging can target specific rooms or smaller areas.  Furthermore, thermal imaging can provide data that can help prevent the chance of fires, unseen faults in electrical systems, and determine to what extent a data center is in jeopardy of loss of data due to an unknown electrical problem or electrical fire.

Power quality surveys

Power quality surveys gather data about power consumption at a much finer grain than a facility-wide total.  They can investigate flicker, voltage sags, and other similar phenomena.  In addition, power quality surveys can ensure that enough power supply is available to meet demand.  As many data centers have additional power redundancies (e.g. generators) in addition to being connected to an existing power grid, it is essential to know to what extent power supply is readily available to prevent a power outage or a crash of the power grid within the data center itself.  Ideally, electricity consumption should be dispersed throughout the entire data center, rather than risking a disproportionate amount of power going to a specific area.
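The events such surveys look for, sags and swells, are commonly defined as short excursions of RMS voltage below or above nominal. The classifier below is a simplified sketch; the nominal voltage and the ±10% thresholds are illustrative choices, not a substitute for a proper power quality standard.

```python
NOMINAL_V = 230.0  # assumed nominal supply voltage for this sketch

def classify_samples(rms_voltages, nominal=NOMINAL_V):
    """Label each RMS voltage sample: 'sag' below 90% of nominal,
    'swell' above 110%, otherwise 'normal' (simplified thresholds)."""
    labels = []
    for v in rms_voltages:
        if v < 0.9 * nominal:
            labels.append("sag")
        elif v > 1.1 * nominal:
            labels.append("swell")
        else:
            labels.append("normal")
    return labels

print(classify_samples([231.0, 198.0, 255.0]))  # → ['normal', 'sag', 'swell']
```

A real power quality analyzer would also track event duration and waveform distortion, but even this rough labeling shows how per-sample data reveals problems a monthly kWh total would hide.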

HVAC and thermal imaging surveys

HVAC systems are imperative to the functioning of almost all data centers.  Troubleshooting HVAC problems, and collecting data that reveals inefficiencies in the system, is recommended as routine maintenance in order to save money and avoid a system meltdown.  Thermal imaging is well suited to diagnosing HVAC performance; some of the first signs of HVAC malfunction are uneven patterns of heat exchange.  In addition, excess power is wasted when trying to prop up a sub-optimal HVAC system.

How PUE surveys may affect the future of data center construction and maintenance

There are three main factors in determining the future of data centers: government regulations, the “shelf life” of new data centers, and the necessary return on investment from construction to when the data center is rendered obsolete.  In short, data centers need to be designed for the future and always strive to operate at optimal levels of efficiency.  Thus, PUE surveys may impact future data centers in the following ways:

  • Mandatory laws regarding PUE surveys and increased government regulation
  • More frequent PUE surveys to maximize efficiency and return on investment over ever-shorter data center shelf lives
  • Standardized construction methods to promote longevity of data centers and preservation of resources

After investing in a data center, ensure that PUE surveys are conducted regularly to save money now and to stay in compliance with possible future government regulations.  Aside from enabling early detection of possible catastrophes, PUE surveys can help prepare a data center for a profitable future while preempting possible regulations pertaining to energy consumption and greenhouse gas emissions.




A Look At Google’s Data Center And What It Takes To Keep It Running Smoothly

Google currently has 13 data centers located in North America, South America, Asia and Europe. These data centers house the thousands of machines needed to run Google’s operations. Whether a customer is using Google to send an email, make an online transaction, search the internet or do business with Google Apps, all the information is processed through a Google data center.

Employees at the data centers work hard to keep Google running smoothly. In addition, they work to assure that customers’ information is kept safe and secure. The following details the facets of keeping Google’s data centers working and constantly improving.


Energy Efficiency


Google is the first of the major internet services companies to be certified for the high environmental standards employed in its US data centers. The company is dedicated to being green through the wise use of energy, and is working hard to conserve energy in all of its data centers. Most data centers spend 80 to 90 percent more energy cooling their machines than running them; Google’s cooling costs are only 12 percent higher than machine operating costs. Google’s data centers use only 50 percent as much energy as a typical data center. Thus far, these efficiency efforts have saved Google over a billion dollars in energy costs.


At Google data centers throughout the world, a wide variety of methods are being used to keep energy costs down while protecting the environment. Google buys electricity from wind farms near their data centers. Additionally, 33 percent of the energy they use is renewable energy. They also have solar panels on the roofs of their data centers.


In an effort to improve the environment by reducing the number of vehicles on the road, Google has created a bike to work program. They also have a shuttle program to transport employees to and from work.


Inside Google’s data centers, the temperature is raised to 80 degrees Fahrenheit. Outside air is then used for cooling, thus further reducing energy costs. Google’s servers are specifically designed to use as little power as possible. Removing all unnecessary parts and minimizing power loss makes the servers more green.
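The outside-air ("free cooling") decision described above reduces to a comparison between the outdoor temperature and the room setpoint. The sketch below is illustrative only: the setpoint mirrors the 80°F figure in the text, but the safety margin is an assumption, and a real economizer controller would also weigh humidity and air quality.

```python
SETPOINT_C = 26.7  # roughly the 80°F room setpoint described above

def use_outside_air(outside_temp_c, setpoint_c=SETPOINT_C, margin_c=2.0):
    """Return True when outside air is cold enough (setpoint minus an
    assumed safety margin) to cool the room without running chillers."""
    return outside_temp_c <= setpoint_c - margin_c

print(use_outside_air(10.0))   # → True  (winter air: chillers off)
print(use_outside_air(30.0))   # → False (hot day: mechanical cooling)
```

Raising the setpoint widens the range of outdoor temperatures for which this check passes, which is precisely why running rooms warmer cuts cooling energy.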


Not only is Google continually looking for ways to reduce energy use and improve the environment, it is also helping others make an impact on the planet. Email hosted on local servers can leave a carbon footprint more than 80 times that of a Gmail user; companies that use Gmail decrease their environmental impact by up to 98 percent.  Through its production of renewable energy via solar panels and wind farms, Google is actually able to produce more energy than it needs. Over 500,000 homes could be powered with the excess energy Google produces.


Reusing and recycling are an important part of Google’s business. When machines become outdated, they are repurposed and continue to be used in Google’s data centers. After a machine is no longer usable, all data is completely erased and parts are either reused or sold. By repurposing machines, Google has been able to eliminate the need to purchase over 300,000 new servers.


Data Security


When it comes to security, Google does not take any chances; they are committed to protecting the proprietary information of their customers. From physically securing data centers to meticulously building and monitoring servers, employees are working hard to keep customer and company information safe. As technology continues to evolve, Google persists in improving security measures to ensure the ongoing safety of their customers’ information.


Google builds their own custom servers at each of their data centers. The servers automatically back up data, which protects customers in the event of a system failure on their end. These servers do not leave the data centers until they are no longer functional; then the data is completely erased and the machines are broken down and sold for parts. Likewise, when hard drives become unusable, the data on them is deleted in a thorough process, and the drives are either crushed or shredded and then sent to a recycling facility.


To prevent hacking, Google stores each customer’s information in chunks spread across many data centers. These chunks are named randomly and are unreadable to humans. As malware is a legitimate threat, Google makes every effort to prevent and eliminate it. If a security incident does occur, Google’s security team makes it their priority to resolve the issue as soon as possible.
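The chunking idea described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Google's actual storage format: the names `chunk_and_scatter` and `reassemble` and the in-memory `store` dictionary are invented for the example, and a real system would also encrypt each chunk.

```python
import uuid

def chunk_and_scatter(data: bytes, chunk_size: int = 16):
    """Split data into fixed-size chunks, give each chunk a random,
    meaningless name, and record the order needed to reassemble it."""
    manifest = []   # ordered list of chunk names, kept separately
    store = {}      # stand-in for storage spread across many data centers
    for i in range(0, len(data), chunk_size):
        name = uuid.uuid4().hex            # random identifier, reveals nothing
        store[name] = data[i:i + chunk_size]
        manifest.append(name)
    return manifest, store

def reassemble(manifest, store) -> bytes:
    """Rebuild the original data; without the manifest, the stored
    chunks are just anonymously named fragments."""
    return b"".join(store[name] for name in manifest)

manifest, store = chunk_and_scatter(b"an example customer record")
assert reassemble(manifest, store) == b"an example customer record"
```

The key point the sketch makes is that an attacker who obtains individual chunks learns nothing useful without the separately held manifest that maps random names back to their order.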


There are specific plans in place in the event of a disaster of any kind. If a disaster occurred, whether a natural disaster, a fire, or a security breach, data would automatically be transferred to a server at another data center. If a power failure occurs at any of the data centers, backup generators keep everything running, and an air cooling system keeps machines at a constant temperature so they do not overheat.
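The automatic failover described above rests on a simple principle: data is written to more than one site, so reads can fall back to any healthy replica. The toy model below illustrates that principle only; the `DataCenter` class and the `replicate` and `read` helpers are invented for this sketch and are not Google's actual infrastructure.

```python
class DataCenter:
    """Toy model of one site holding replicated key-value data."""
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.data = {}

def replicate(key, value, centers):
    # Write to every replica so any single site can serve the data alone.
    for dc in centers:
        dc.data[key] = value

def read(key, centers):
    # Fail over: serve the read from the first healthy replica.
    for dc in centers:
        if dc.healthy:
            return dc.data[key]
    raise RuntimeError("all replicas are down")

centers = [DataCenter("us-east"), DataCenter("eu-west")]
replicate("doc-1", "customer data", centers)
centers[0].healthy = False                 # simulate a disaster at one site
assert read("doc-1", centers) == "customer data"   # served by the other site
```

Because every write lands on every replica before a failure, losing one site costs no data, only the read is redirected.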


Physical Security


Each data center employs a security team around the clock, dedicated to maintaining security at the facility. Security guards, surveillance cameras, and fencing help keep the facility secure, and improved technology, including thermal imaging cameras, helps the security team look for suspicious activity on the premises. Some data centers use biometric devices to further ensure a safe and secure facility.


Only authorized personnel are allowed on the data center grounds. No public tours are permitted, and security guards at guard stations allow only authorized employees past security checkpoints. Inside the facility, video monitoring allows the security team to view all areas of the data center.


Employee Security


All employees must undergo an extensive background and reference check, and they are trained in security and ethics procedures. Google limits employee access based on position. This is just another way Google is working to keep its customers’ data secure.


Google’s data centers contain vital customer information, and Google is dedicated to securing and protecting it. From custom machine production to detailed security procedures, Google ensures its data centers run effectively and efficiently. Using renewable energy allows them not only to save money, but also to improve the environment.

