Data Protection Solutions and Virtualization

Protecting data from hackers and viruses has never been more important. There are many data protection products on today’s market, each with its own features, which can make it difficult to select the one that is right for you.

User Friendly?

When it comes to data protection products, the bottom line is ease of use. This is important because the person responsible for data protection may not be very knowledgeable or available to manage the product every day.

When selecting a data protection product, look for a system that can be managed from a single user interface. The interface should be simple enough that daily tasks take only a couple of minutes. Another must is that the product can send alerts to your mobile devices and inbox with a simple, straightforward report on recent activities and their status.

The ease-of-use requirement coincides well with historical reporting. If the system user understands what the solution is doing daily, then he or she will also be able to understand what the system has been doing over the previous days, weeks, and months. This allows for future planning with very little hands-on work. For example, given a growth report for the last four months, the user can project when the system will run out of space and know when to purchase more tapes. Although simple, this example shows that data protection products should help plan future usage and budgets in addition to operating well in the present.
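To make that planning example concrete, here is a minimal sketch of the arithmetic such a report enables; the monthly growth figures and tape capacity below are illustrative assumptions, not numbers from any particular product.

```python
# Hypothetical figures: four months of backup usage (TB) and total tape capacity (TB).
monthly_usage_tb = [42.0, 45.5, 49.2, 52.8]   # last four monthly reports
capacity_tb = 80.0

# Average month-over-month growth taken from the report history.
growth_per_month = (monthly_usage_tb[-1] - monthly_usage_tb[0]) / (len(monthly_usage_tb) - 1)

# Months of headroom left if the current growth rate continues.
months_remaining = (capacity_tb - monthly_usage_tb[-1]) / growth_per_month
print(f"Roughly {months_remaining:.1f} months until more tapes are needed")
```

With these made-up numbers the projection comes out to roughly seven and a half months, which is exactly the kind of heads-up a good historical report should give an administrator.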

Another feature that can dictate whether the solution will be easier or harder to manage is a single-server footprint versus a master/media footprint. The same goes for automatic client software updates, because manually updating systems across an entire infrastructure takes time away from IT administrators.

A good data protection product should have administrative time-savings built in to help reduce the cost of operations.

The Data Life Cycle

All data is not created equal, and it shouldn’t all be stored the same way. IT organizations sometimes drive up the cost of storage because they treat all data the same and store it on the same media. Hierarchical storage management (HSM) and long-term archive management allow data to be stored flexibly on different tiers under specific policies. These systems let administrators migrate and store data on the tier that is most appropriate for it. Older, less frequently accessed data can be moved to a slower, much less expensive platform such as tape, while newer, frequently accessed data stays on faster, more expensive storage. Automated data archiving can also help organizations comply with data retention policies while reducing the cost incurred by that compliance.

Look for data storage systems that reduce overall cost through policy-based, automated data life-cycle management. Moving data to the most appropriate tier helps an organization stay cost effective while still meeting service-level requirements.

Hierarchical Storage Management (HSM)

Hierarchical storage management (HSM) is a technique for storing data by automatically moving it between high-cost and low-cost storage media based on how the data is used. High-speed storage devices (hard disk drive arrays) are much more expensive than slower devices (optical discs, magnetic tape drives). In an ideal world, all data would be available on high-speed devices all of the time, but in the real world this is extremely expensive. HSM offers a practical compromise by storing the bulk of the organization’s data on slower devices and copying it to faster drives when needed. HSM monitors how, and how frequently, data is used and ‘decides’ which data should be moved to slower drives and which should stay on the faster drives. Files that are used frequently stay on fast drives but will eventually be migrated to a slower medium (like tape) if they aren’t used for a certain period of time. The biggest advantage of HSM is that the total amount of data stored can be much larger than the capacity of the high-speed disk storage available.
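As a rough illustration of the "decision" HSM makes, here is a minimal sketch of an age-based migration policy; the 90-day threshold, the tier names, and the catalog entries are assumptions for the example, not the behavior of any specific product.

```python
import time

# Hypothetical catalog entries: file name and last access time (epoch seconds).
catalog = [
    {"name": "q3_report.pdf",   "last_access": time.time() - 5 * 86400},    # touched 5 days ago
    {"name": "2019_archive.db", "last_access": time.time() - 400 * 86400},  # idle for over a year
]

STALE_AFTER_DAYS = 90  # assumed policy threshold

def choose_tier(entry):
    """Keep recently used files on fast disk; migrate stale files to tape."""
    idle_days = (time.time() - entry["last_access"]) / 86400
    return "tape" if idle_days > STALE_AFTER_DAYS else "fast-disk"

for entry in catalog:
    print(entry["name"], "->", choose_tier(entry))
```

A real HSM product applies far richer policies (file size, ownership, business value), but the basic move of comparing usage history against a policy threshold is the same.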

Virtualization

The technology surrounding virtualization has helped IT organizations of every size drastically reduce their costs by cutting application provisioning times and improving server utilization. These cost reductions, however, can disappear quickly in the face of virtual machine sprawl. The connection between physical and logical devices also becomes very hard to track and map, creating a virtual environment that is more complex than ever before. In these complex virtual environments, backing up and restoring data can become very difficult. For example, backing up or restoring data for a group of virtual machines that reside on one physical server can bring all other operations running on that server to a halt, including data protection services.

Data Reduction

Data reduction technology is the first line of defense against rapidly expanding data volumes and costs. Solutions include progressive-incremental backup, data compression, and data deduplication, which together can help organizations reduce their backup storage capacity requirements by as much as 95 percent. Efficient tape management and utilization can also help reduce storage capacity requirements.
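To show the idea behind data deduplication, here is a minimal sketch that stores each unique block only once, keyed by its content hash; the block size and the in-memory block store are assumptions made purely for illustration.

```python
import hashlib

BLOCK_SIZE = 4096          # assumed fixed block size
block_store = {}           # content hash -> block data (stands in for backup storage)

def deduplicate(data: bytes) -> list:
    """Split data into blocks, store each unique block once, and return block references."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)   # only previously unseen content consumes space
        refs.append(digest)
    return refs

refs = deduplicate(b"A" * 16384)               # four identical 4 KB blocks
print(len(refs), "blocks referenced,", len(block_store), "block stored")
```

Four blocks are referenced but only one is physically stored, which is the mechanism behind the large capacity reductions described above.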

 

Selecting a data protection service can be arduous, but if you understand the services offered and which are most useful to you, the selection will be much easier.


Ways to Ensure Security Within a Data Center

If you are designing a data center, one of the most critical issues to focus on is security. There is an abundance of threats that the data center will face, some of which can be prevented with proper planning. Hackers, disgruntled or careless workers, and even weather-related disasters can all wreak havoc on a data center. Not only can this lead to data loss, but also to theft of sensitive information and many other types of damage. In order to prevent these intrusions, it is a good idea to act before something occurs. When the data center is prepared for an emergency scenario, the damage will likely have less of an impact. Although it might cost more to do everything you can to increase security, the investment could pay off very well in the long run by addressing (or even preventing) complications.

Risks

The risks that data centers face include everything from seasoned criminals breaking in to employees who don’t pay attention on the job. By taking into account the multitude of problems that could threaten the data center, you will be less likely to experience a devastating intrusion or accident. When workers are aware of the various problems, accidental damage becomes less likely and they will be more vigilant. And when you do everything you can to keep criminals out, they may decide not to target the facility at all; if they do, you will be ready.

Landscaping and Windows

If you want the data center to have a quieter presence, you can employ some landscaping tactics to blend in better. By strategically planting trees or placing boulders, you can make the building less noticeable. Not only will this deter unwanted attention, but it will complement the facility as well. After all, who doesn’t appreciate nice landscaping?

Although windows might seem like a good idea, they should actually be avoided in a data center. The center shouldn’t be designed like a typical office, where windows are often important. If you absolutely have to include windows, use glass that is laminated and blast-resistant. Weak windows are a huge vulnerability that any potential intruder is likely to notice.

Entry Points

Every point of entry and exit should be watched at all times. Make sure that you know exactly who is going through entry points and record their movements. Not only could this come in handy in the event of a fire, but if a security issue ever arises, you will know which people were in different parts of the building at a certain time. Furthermore, you should reduce the number of entry points. Not only will this be less expensive, but much easier to control as well. For example, it could be a good idea to only have a front and back entrance.

Authentication

Make sure you have top-notch authentication. For example, with biometric authentication, such as scanning a fingerprint, the building will be much more secure and the likelihood of unauthorized access will be significantly reduced. In parts of the building that are not as sensitive, simpler means of authentication (such as a card) could work. The level of authentication should depend on how sensitive a particular part of the building is. If a room houses highly important information and equipment, it is wise to employ strict authentication.
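As a sketch of how authentication strictness can scale with room sensitivity, the zone names and required factors below are purely illustrative assumptions, not a prescription for any particular facility.

```python
# Hypothetical mapping of building zones to the credentials they require.
ZONE_REQUIREMENTS = {
    "lobby":       {"badge"},
    "office":      {"badge", "pin"},
    "server_room": {"badge", "pin", "fingerprint"},
}

def may_enter(zone: str, presented: set) -> bool:
    """Allow entry only if the zone is known and every required factor was presented."""
    required = ZONE_REQUIREMENTS.get(zone)
    return required is not None and required <= presented

print(may_enter("office", {"badge", "pin"}))        # True
print(may_enter("server_room", {"badge", "pin"}))   # False, fingerprint missing
```

The point is simply that access rules can be written down as policy: the more sensitive the zone, the more factors it demands.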

Video Cameras

Fortunately, many people understand the benefits of using video cameras to improve security. Not only are they worth the cost, but they work well. If something happens to the data center, it is extremely helpful to have the ability to see what happened. In order to make security cameras more effective, strategically place them throughout the premises. Plus, you can consider hidden and motion-sensing cameras to further enhance security. It is smart to keep track of all the footage at another location.

Keep it Clean

Although security and cleanliness don’t always appear to go together, people have a tendency to spill food and drinks, and spills on computer equipment can damage it. Make sure everyone understands the importance of eating in a designated area. There are already too many things that can go wrong in a data center; spilled food shouldn’t be one of them.

Doors and Walls

Another way to reduce the likelihood of an intrusion is to use doors that only have handles on the inside. Exits are essential, in case there is ever a fire, but keep the handles on one side. Also, set up an alarm system so that if one of these doors is opened, security will be aware.

Although some people might not devote much attention to the walls, it is crucial to ensure that nothing remains hidden in them (or in the ceilings). Make sure there are no points of access that are not visible and that walls extend from beneath the floor all the way to the slab ceiling.

Get Started

If you want to ensure that everything is done to protect your data center, you should always focus on the numerous ways to improve security. If you haven’t thought about security as much as you should have, start researching and preparing as much as possible. Even after the data center has been operating for years, there are still ways to make the facility safer and more secure. Because there are countless threats that will never go away, security is a constant concern. By remaining vigilant, taking advantage of new technology and tactics, and doing everything you can to prevent and deal with threats, your data center is less likely to suffer from a future attack.


Should You Build Your Own Data Center or Partner With Someone Who Already Has One?

There’s nothing quite like the satisfaction that comes with doing things on your own, but you might be better off working with someone else for certain tasks, such as data storage. Even if you have a small business, there are circumstances in which you’ll want to either pay for data center services or partner with someone who already has a well-established data center. So how do you decide between maintaining your own data center and partnering up with someone who already has one?
How Much Data Do You Have?
If you have a massive amount of data to keep up with, then you might be better off partnering up with someone who has a data center and an equally massive amount of data that needs to be kept safe. By pairing up with someone who already has the same needs as you, you won’t have to worry about doing a lot of trial and error as you’re trying to figure out all of the requirements necessary for keeping such a large amount of data safe, and that’s especially true if you have sensitive information from your customers and clients.
If you don’t have very much data that you need to store in a data center, then you’ll probably be fine with building your own data center. Just make sure that that data center has all of the security features necessary to truly keep your information safe and that it can grow as your business grows.
How Sensitive is the Information You’re Storing?
Sensitive information calls for an equally sensitive data center. If you have credit card numbers, bank accounts, social security numbers or any other kind of sensitive financial or personal information from your customers and clients, then you might want to strongly consider partnering with someone who has an extremely well-protected data center. Not only can cyber criminals do a lot of damage to your customers and clients if they manage to get ahold of their information, they can do a lot of damage to your reputation as well. How many people are going to want to do business with you if they can’t trust you to keep their information safe?
Even if cyber criminals don’t steal financial and personal information, they might infect your data center with a virus that could wipe everything out or corrupt your data, which also won’t do any favors for your reputation. Identity theft and viruses can cost you a lot of money, between informing all of your customers and clients and paying professionals to clean up the damage done by cyber criminals. Eliminate that risk by partnering with someone who runs a secure data center.
What is the Upkeep and Run-Time of Keeping Your Data Safe and Secure?
If your data requires an abundance of upkeep and run-time, you have to ask yourself whether you have the time and resources necessary to take care of all of that. Individuals who aren’t very technologically savvy, or who have a limited amount of time on their hands, might want to let someone else handle the run-time and upkeep of the data center. One mistake in either could cost you time and money. Something else to consider is that as time passes and your needs grow, you might need more upkeep and run-time to keep your data center properly up and running. If your business is growing along with your run-time and upkeep, you might not be able to keep up with everything on your plate. Don’t just consider your present needs when thinking about data center requirements; think about the future and your individual goals as well when deciding whether you’d prefer to have your own data center or partner with someone who already has one.
Do You Plan on Building a Larger Database?
If your current data center is more of a starter data center, you should look into co-location. One of the many joys of co-location is that you can find a facility that is built specifically to your individual needs and requirements. Something else to think about with co-location is that data centers have their own air conditioning, security systems, generators and constant professional monitoring, all of which can cost millions of dollars if you were to buy them on your own.
Another good thing about co-location is that it’s a great way to manage your risks if your main data is kept at your main office. If anything unfortunate should ever happen to your office, all of your data will still be safe and sound at the co-location facility. One good approach is to use your main office for backups and recovery while using the co-location center as the primary location where your data is kept.
With co-location, you can rent out property for a long period of time while being able to upgrade to building a bigger and better business. If you choose to build your own data center, the needs of your business might quickly outpace your data center capabilities, which can potentially cost you several lucrative business opportunities.
There are more resources than ever that allow you to take care of your data storage needs yourself, but unless you have the education and experience necessary to handle each and every one of your data needs, you’ll be better off partnering with someone who already has a data center and the experience and resources necessary to take good care of your data and your customers.


Google’s Going Green

Non-renewable Resources

Non-renewable resources such as petroleum, uranium, coal, and natural gas provide us with the energy used to power our cars and houses, offices, and factories.  Without energy we wouldn’t be able to enjoy the luxuries that we are used to living with in our everyday lives.  Our factories would shut down, our cars wouldn’t work, and our houses would remain unlit.  The problem with using non-renewable resources is that they can run out, and if they do we will have to find other ways to get power.

As of 2010 the world’s coal reserves were estimated to last until May 19, 2140, and petroleum reserves were estimated to hold out until October 22, 2047. While these dates may seem far off, they are approaching quickly. If the petroleum reserves run out in 34 years, as predicted, many of us will feel the effects. Not only will we not have gas for cars, trucks, and planes, but commodities such as heating oil, paint, photographic film, certain medicines, laundry detergent, and even plastic will disappear with them.

While there has been a push to preserve petroleum by converting cars to run on natural gas, this isn’t a perfect solution, as the natural gas reserve is only expected to last until September 12, 2068.  Although coal energy will last us a little bit longer, in less than 150 years it is predicted that this resource will run out as well, at which point we will be forced to make a change in our energy use, unless we make that change now.

Green Power

Many big businesses are making the shift to renewable resources such as water, solar, and wind energy in an attempt to preserve these quickly depleting resources. Not only are these alternative power sources easily accessed and sustainable, but they also have less environmental impact and can be domestically produced, lessening our dependence on foreign energy sources. Google is one company that is making the transition to “green power.”

Green power, according to the EPA, is any form of energy obtained from an indefinitely available resource whose generation will have no negative environmental impact.  This includes wind power; wave, tidal, and hydropower; biomass power; landfill gas; geothermal power; and solar power.  According to Google Green, Google’s goal is to power their company with 100% renewable energy, both for environmental reasons, and for the business opportunity provided in helping accelerate the shift to green power in order to create a better future.

One way Google has begun the transition is by piloting new green energy technology on many of its campuses. One of the first pilot programs began in 2007, when the company installed what was then the largest corporate solar installation – generating 1.7 MW – at its Mountain View campus in California. Today that installation has been expanded to 1.9 MW and generates about 30% of the energy needed to power the building it sits on.

Google’s Wind Power Investments

Along with solar power, Google has recently begun using wind power as well. On June 6, 2013 it signed a contract with O2, an energy company and wind farm developer in Sweden, to purchase 100% of the power output from a new wind farm being built in northern Sweden for the next 10 years. The 24-turbine project, which is expected to be fully operational by 2015, will provide 72 MW of power to Google’s Hamina, Finland data center, furthering the company’s effort to use only carbon-free, renewable energy sources.

Because of Europe’s integrated electricity market and Scandinavia’s shared electricity market, the shift to wind power has been relatively simple. Google has been able to buy 100% of the wind farm’s electricity output with the guarantee that the energy it purchases will be used to power its data center alone. The power will be traded through Nord Pool, the leading power market in Europe.

Shortly after signing the contract with O2, Google made another wind farm investment, this time domestic, in Amarillo, Texas. On September 17, 2013 the company announced its purchase of 240 MW of wind power from the Happy Hereford wind farm, which will be fully operating by the end of 2014. This is Google’s largest such purchase to date, and it will supply the Mayes County grid in Oklahoma, where another of the company’s data centers is located. Unlike with the Swedish investment, Google is unable to send the power directly from the wind farm to its data center because of local energy regulations, so it is taking a different course of action. By selling the purchased energy into the wholesale energy market, Google allows it to be used by the surrounding area, thus decreasing its carbon footprint.

Other Investments

Google has taken this route before due to domestic power regulations, and while the Texas contract is its first wind power purchase in the United States, the company has invested equity in several other wind farms throughout the state. One of these is the Spinning Spur Wind Project, a 161 MW wind farm consisting of 70 2.3-MW wind turbines. The farm spans 28,426 acres and is located about 30 miles west of the Happy Hereford project. When the project is finished it will provide power for around 60,000 homes in the Oldham County area.

Overall, Google has invested over $1 billion in 2 GW of carbon-free, clean power projects and pilot programs. Some of its other investments include a solar power project in South Africa; photovoltaic projects in both California and Germany; and SolarCity, a company putting solar panels on thousands of home rooftops. By investing in these projects and purchasing power both directly and indirectly, Google furthers its goal of relying entirely on renewable, carbon-free energy while helping others do the same, lessening everyone’s carbon footprint.


OpenPower Consortium is IBM’s Answer to Declining Server Market

A group of leading high-tech companies, led by IBM, has formed a new alliance to further the development of advanced server technologies that will power a new generation of faster, greener and more efficient data centers.

The OpenPower Consortium, which will initially include Google, NVIDIA, Mellanox and Tyan, will follow the ARM model of allowing the open development of IBM’s Power chip technology by other companies, from server manufacturers to companies whose businesses run on complex server data centers. Third parties can now use the IBM Power technology as the basis for their own server systems. In addition, IBM will continue to design and manufacture its own servers, unlike competitor ARM, which simply licenses its technology to third parties and does not engage in direct manufacturing itself.

Today’s Server Market: A Picture of Decline

Strong competition and market changes are not foreign concepts to the high-tech industry. The constant race to improve technology, make things faster, more powerful or better in some other way is the way of the world for technology companies. Market conditions, economic stimuli and competitors’ actions all have direct influence on trends related to specific products.

Today’s highly competitive global server market has been in a state of decline for some time. The two longtime market leaders, IBM and Hewlett-Packard, have both experienced losses in their shares of the server market in 2013. While other companies, most notably Dell and Cisco, have seen their server sales increase, the overall status of server sales worldwide is down. There are many factors affecting this trend.

• Increased reliance on the cloud—While it is true that cloud-based data centers rely on servers to power them, cloud applications are more efficient, which results in less load and demand on servers. More and more companies are looking to the cloud as a key component for their data storage and application power needs.

• Data center consolidation—A challenged global economy pushed companies to find ways to reduce IT costs and assets. More efficient data center operations have led to a need for fewer servers as companies were able to obtain the level of server performance needed with fewer server products.

• Server virtualization—Through the use of software, server virtualization essentially allows one server to provide the functionality of many. This makes it a key component of any data center consolidation and has also directly contributed to the declining server market.

• Weak PC market—PC sales have been at risk for some time and an April 2013 report from IDC revealed the worst decline in PC sales ever. A slow economy, less than positive reception of Windows 8, continued growth in sales of smartphones and tablets and a growing adoption of cloud computing have combined to put the PC market under heavy siege.

It could be said that server manufacturers themselves helped to create a lessened need for their server products by developing those products and technologies associated with cloud computing, data center consolidation and server virtualization. That is not to say that IBM and others should not have focused on such efforts but simply to highlight those factors as clear and logical reasons for the changing market.

How OpenPower Will Help IBM

In order to remain viable in a new server market, reshaping that market will be required. Ultimately, IBM’s goal with the announcement of the OpenPower Consortium is to encourage the development of more energy efficient and powerful cloud computing, led by the open source model. By so doing, IBM is directly responding to the factors that have so greatly influenced the current decline in server sales. Facilitating industry-wide adoption of a focus on the creation of next-generation processing technology will enable companies to develop and implement more energy efficient servers and data centers that deliver more power and yield faster, more streamlined processing. As this happens, a new market will emerge based around these products and technologies.

By allowing use of its Power chip technology by other companies, IBM hopes to breathe new life into its flagging server sales. Revenue from the licenses will be used to further IBM’s own development of the Power technology. Companies leveraging the Power technology can redesign it and customize key data center functionalities to custom fit their specific needs and applications, making the options for market share growth real and virtually unlimited.

Leveraging the Power of ARM Processing

ARM, or Advanced RISC Machine, processing concepts are at the heart of IBM’s OpenPower Consortium plan. The core concept of ARM processing is to implement simplified instructions that yield greater efficiency and, therefore, greater computing power. Additionally, systems built on an ARM platform operate using fewer transistors to deliver the same or increased power.

The reduced number of components and instructions required together contribute to lower costs. Similarly, because end products could have fewer transistors or other components, they generate less heat and use less power, reducing energy costs. Lower costs, increased energy efficiency and smaller form factors make ARM ideal for products requiring portability such as smartphones and tablets. It is also highly used in laptops, digital TVs and other such products.

The Market Will Watch

Customers, competitors and analysts alike will no doubt be carefully watching the path of IBM’s new strategy. What the initial consortium members do with their use of the Power chip, as well as what advances IBM itself makes with the chip, will give clear signs as to the success of this chosen direction.

Whether highly successful or not, it is clear that IBM needed to take strong action to combat the struggling server market conditions. Their choice to essentially open their intellectual property to others has the potential to be a true game changer in the world of today’s server markets. It hits head-on the need for a more holistic approach to server technology and usage by more closely marrying the development of the server chip with the functionalities of data center consolidation, virtualization and cloud processing.

These components respond directly to the heart of today’s business marketplace, which essentially continues to demand more with less. Less energy used, less money spent on energy costs, more processing power, more efficient operations—such are the mantras propelling today’s successful companies.

 


Swedish Data Center Saves Money by Using Seawater for Cooling

In modern society, people are very dependent on technology. With all of the advancements in modern electronics and technology, data centers are essential. Many servers are required to store data and allow the expansion of both information and technology to continue. However, these data centers generate a lot of heat, which can damage hardware and cause other problems without cooling. A cutting-edge method of cooling data centers with cold seawater has emerged, and companies are taking notice. One such company in Sweden shelled out about a million dollars on this system, which paid for itself after roughly a year. When it comes to cooling a data center, a million isn’t much, and the fact that it pays for itself so soon is sure to contribute to the expansion of this cooling method.

Because this Swedish seawater is cold to begin with, nature has already helped out. With intake water at less than 43 degrees Fahrenheit, it is easy to see why companies would want to implement this system. After the water has made its way through the system, it reaches a temperature of roughly 75 degrees Fahrenheit, and the heat it carries away is then used to warm nearby homes and businesses. For someone who doesn’t appreciate how much heat a data center generates on a daily basis, that temperature rise shows just how hot it can get, and how much damage and havoc the heat would cause in a data center without a cooling system. This method takes advantage of everything it can to make the cooling system as effective as possible.
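For a rough sense of why that 43 °F to 75 °F rise matters, here is a back-of-the-envelope sketch of the heat a given seawater flow can carry away; the flow rate and the seawater heat capacity are assumptions for illustration, not figures from the Swedish facility.

```python
# Assumed values for illustration only.
flow_kg_per_s = 100.0               # hypothetical seawater flow rate, kg per second
specific_heat_kj = 4.0              # approx. heat capacity of seawater, kJ/(kg*K)
delta_t_f = 75.0 - 43.0             # temperature rise quoted in the article, in Fahrenheit
delta_t_k = delta_t_f * 5.0 / 9.0   # convert that rise to kelvin

heat_removed_kw = flow_kg_per_s * specific_heat_kj * delta_t_k
print(f"About {heat_removed_kw:.0f} kW of heat removed at this flow rate")
```

At the assumed 100 kg/s flow, that works out to roughly 7 MW of heat rejection, which is the scale of load a large data hall can produce.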

Primary benefits of this cooling system include:

• Efficiently cools data centers so hardware doesn’t overheat

• Saves companies a significant amount of money

• The cost to build this system is low

Although there are a lot of reasons companies would be smart to build a cooling system like this, it isn’t perfect. One of the primary drawbacks with this technology is the impact it has on the environment. While these cooling systems take advantage of the cold temperatures that nature provides, they also pose a risk to sea life.

Threats to Jellyfish

Although using seawater to cool data centers has a number of excellent benefits, it isn’t flawless. When the pumps operate and suck up seawater, it is believed that jellyfish sometimes get sucked up as well. Because of the threat posed to jellyfish and other environmental concerns, the government sometimes requires the cooling system to cease operations. While this can be an inconvenience, companies will continue to implement this technology and take advantage of the benefits that come with it. Technological innovation usually brings both pros and cons, and this cooling system is no different. As long as the perks significantly outweigh the downsides, companies will continue to find this system worthwhile.

Pumps Suck Away Sea Life, Harm Environment

Not only do these pumps pose a significant threat to jellyfish, but other types of sea life as well. Unfortunately, when they operate they sometimes suck up a variety of nearby sea life. When you think about these massive pumps that suck seawater in to cool off a data center, it isn’t difficult to picture how sea life can get pulled in also. Obviously, companies can’t fully control what they suck up along with the seawater. In the future, it is very likely these cooling systems will be found in other locations, such as frigid rivers, so it is very important to keep in mind the current drawbacks to this method. Although these cooling systems are required to shut down at certain times, the government doesn’t always step in and stop operations. Because of this, it is highly likely these pumps will spread to other parts of the globe.

This System is Inexpensive to Build

So, how much does this system cost to implement, anyway? One company in Sweden invested roughly a million dollars on the cooling system. Although that might sound like a lot of money to most people, it is relatively modest for a large data center. With the global economy experiencing complications in recent years, many companies are looking for ways to save money. While some have to lay off employees, others can find savings by implementing more cost-effective technology, such as this cooling system. Because this system paid for itself after about a year of operation, many companies are likely to make the decision to build one of these cooling systems for themselves. Although they will take the environmental risks into consideration, it is not likely to prevent them from saving money and cooling their data center conveniently. Also, even though the government might step in from time to time and tell them to shut the system down, the frequency of such setbacks is not often enough to keep them from wanting it. Companies would be smart to build systems that are more affordable, especially if the costs associated with implementation can be recovered in roughly a year.

The Cooling Will Continue

After taking a look at the benefits of this cooling system, you can understand why this technology is likely to become more widespread in the coming years. Companies will always want to find savings and are sure to be attracted to the fact that this system pays for itself so soon. Plus, the relatively low cost to build this system will certainly draw in more companies. This system is far from perfect, with the threat that sea life faces as a result of the pump sucking water. Although the system gets its cold water, it also sucks up jellyfish. Government regulation occasionally requires companies to shut down their cooling systems, but not often enough to take away from how lucrative this system is. It is important to understand this cooling system, since it will likely be found in different areas as it increases in popularity in the future. Because of the increasing demand for energy and the fact that companies are always looking for ways to reduce their energy costs, it is easy to see why this cooling system is likely to become more prevalent.

 


The Best Locations for Data Centers

Businesses often use data centers to back up and protect their vital information and data as well as the information and data of their customers and clients. Since using a data center can be a huge expense, businesses have to do their research about the best locations for data centers in order to make sure they’re getting their money’s worth and that all of their data and information is being kept in a safe place.

What to Look For When Considering a Location

No matter what location you’re looking at for a data center, there are certain factors that you’ll want to check out with the center before making a final decision. Since more power is being used now than ever before, the location you choose for your data center should have steady access to power. Check to see if there have been any blackouts or brownouts in the immediate area. Should the data center ever lose power, it could cost you money as well as business.

Something else to look for is a location with a cool climate. Data centers generate a lot of heat, and unless that heat is kept in check, the equipment is in danger of overheating and failing. Another reason to find a data center in a cool climate is that you won’t have to pay as much for cooling. So not only will you be helping to protect your data, you’ll also be saving yourself some money.

Pay attention to the geography of the data center’s location. Is it located on or near a fault line, below sea level, on a flood plain or in the path of air traffic? If so, those might be places that you’ll want to avoid to keep from putting your data and information at unnecessary risk.

It should go without saying, but you’ll want to make sure that the data center has excellent security to keep your information and data, as well as the personal data of your clients and customers, as safe as possible. Recent compliance regulations require you to store your data within the nation or region where you collected it. Check to make sure that you’re following all regulations for your protection and the protection of your customers.
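One informal way to compare candidate sites against these criteria is a simple weighted score; the weights, factor names, and 1–10 ratings below are illustrative assumptions rather than a standard methodology.

```python
# Assumed weights for the factors discussed above (higher = more important).
WEIGHTS = {"power_reliability": 0.35, "climate": 0.25, "disaster_risk": 0.25, "compliance": 0.15}

# Hypothetical 1-10 ratings for two candidate sites.
sites = {
    "Site A": {"power_reliability": 9, "climate": 8, "disaster_risk": 7, "compliance": 9},
    "Site B": {"power_reliability": 6, "climate": 5, "disaster_risk": 9, "compliance": 8},
}

def score(ratings: dict) -> float:
    """Weighted sum of the criteria ratings for one site."""
    return sum(WEIGHTS[factor] * value for factor, value in ratings.items())

for name, ratings in sites.items():
    print(name, round(score(ratings), 2))
```

A scorecard like this won’t make the decision for you, but it does force you to state which factors matter most before you start comparing cities.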

Locations with the Best Annual Operating Costs

Certain locations have lower annual operating costs than others. One of the least expensive locations is Rolla, Missouri, with a total annual labor cost of $6,235,281 and a total operating cost of $11,988,699. Another option with a low annual operating cost is Winston-Salem, North Carolina, which has a total annual labor cost of roughly $5,989,000 and a total operating cost of approximately $11,908,760. Bloomington, Indiana has a yearly labor cost of $6,094,411 and a total operating cost of $11,834,591.

Other recommended locations for data centers are New York State, Ontario, Eastern Washington State, Colorado and the Netherlands. While New York City has high real estate costs that aren’t very tempting for anyone looking for data center locations, other parts of New York have perfect sites. The state also has tax exemptions for data centers in addition to a dependable power infrastructure. The Province of Ontario is located in Canada, which has become one of the best places on the globe for business. Ontario has a solid infrastructure and an international airport as well. Other advantages include low risk of natural disasters and little government interference with corporate data. Just make sure that you won’t be breaking any privacy regulations by setting up your data center there.

Locations with the Worst Annual Operating Costs

Locations that are the most expensive for data centers include Oakland, CA, which has an annual labor cost of approximately $7,469,700 and a yearly operating cost of $18,879,982. Boston is another expensive location with a total annual labor cost of $7,287,955 and a yearly operating cost of $19,079,992 while Newark, New Jersey has a yearly operating cost of $19,245,362. One of the most expensive locations is New York City, which has a yearly labor cost of roughly $7,533,300 and a yearly operating cost of about $28,067,100.

Cooler Climates Equal Cooler Costs

It’s estimated that data centers consume around 30 billion watts of electricity, and the Environmental Protection Agency predicts that data center energy consumption will increase by 12% each year. Choosing a location near a body of water with cool temperatures and chilly waters will go a long way toward naturally cooling and protecting the data center. In the US, Michigan offers one of the most favorable year-round climates, even in the summer, while data centers located in the southern or southwestern regions tend to rely most heavily on powered cooling. A Michigan data center earned the US Environmental Protection Agency’s Energy Star certification, meaning it performed in the top 25% of similar facilities in America for energy efficiency and adhered to the energy-efficiency performance standards set by the EPA.

Severity of Natural Disasters

Should a natural disaster strike in the area your data center is located in, there’s a chance that you could lose all of the data and information stored there. The geographical area you choose for your data center should be in a safe region, such as the Midwest, or anywhere else where there’s a low occurrence of floods, tornadoes, earthquakes and hurricanes. Michigan is another low risk area when it comes to natural disasters.

Other locations that have occurrences of natural disasters include the Great Lakes region, Texas and California.

 

Once you’ve found the perfect location for your data center, you’ll want to make sure that the workforce is competent, experienced and possibly even licensed depending on the types of programs that they’ll be using. As you’re researching possible locations for your data center, do some research on the knowledge and talent of the region’s workforce. Having the perfect location for a data center won’t do you much good if the employees don’t have a clue about what they’re doing or how to do it.

Data centers are all about location. Even if you have to spend more money than you’d like to find the perfect spot for your data center, it will undoubtedly be worth it in the long run.

 


Understanding Data Centers and Data Center Tier Certifications

Data centers have been around for ages. In the early days, they were huge computer rooms that were very complex to maintain and operate. They sometimes required a special environment to operate properly and a lot of cords and components to ensure that the system worked correctly. These systems usually required a lot of power and had to be constantly cooled to ensure that they would not overheat. They were also extremely costly. Although those early data centers were used for various purposes, their main objective was to support military efforts. Much has changed since then. Today’s data centers are more complex, use better technology, and are commonly classified by tier certification. Learn more about these data centers and the tier certification process that goes along with them.

 

Today’s Data Centers

 

The data centers used today are home to computer systems. They also hold various components that complement these computers, including telecommunications and storage systems. These centers have backup or redundant power supplies and redundant data communications connections. In addition, they use environmental controls, like fire suppression and air conditioning, to protect the data in the center. Many of these centers are extremely large and require as much electricity as a small town. For this reason, they require expertise to operate properly. This is one reason for data center tier certification.

 

What Is Tier Certification?

 

Data center tier certification began a few decades ago in an effort to help data centers across the globe. The certification process helps to create standards that the whole world uses. It also offers ways to determine how these centers are performing and can help them measure whether or not they are receiving a return on their investment. Facilities can receive a “Tier Certification of Design Documents” or a “Tier Certification of Constructed Facility”. This certification covers only the physical topology of a data center’s infrastructure, meaning what directly affects the operation of the computer rooms. Certifications are given at four levels, or tiers:

 

• Tier 1 is for the basic infrastructure of the site. This is a non-redundant certification.

• Tier 2 is for redundant components of the site infrastructure.

• Tier 3 is for site infrastructure that is concurrently maintainable.

• Tier 4 is for site infrastructure that is fault tolerant.

 

Why Certification?

 

As in many other fields, including law, accounting, real estate, and many trades, certification matters. In data centers, certification establishes standards across the world that hundreds of different companies and organizations rely on for their operations. The certification standards affect the infrastructures of various networks, including those of large corporations, governments, and other organizations. Through certification, problems with infrastructure are identified, quantified, and then improved. This promotes a better long-term exchange of information.

 

About Tier 1

 

The first tier in data center certification is the most basic: essentially a server room. In this type of facility, a single path connects the power and cooling to the equipment, and none of the components are redundant. Therefore, any power outage, whether planned or unplanned, will have a negative effect on operations. Typically, a Tier 1 data center has 99.67% availability, and some downtime is expected each year, so this tier is suited to applications that can tolerate occasional interruptions.

 

About Tier 2

 

The second tier for data center certification has some redundant components; however, it still delivers cooling and power through a single path. With this type of system, certain components can be taken offline for planned service without disrupting the data processing equipment. On the other hand, an unplanned power outage or a disruption to the service path will cause problems for the data processing equipment. Generally, a Tier 2 data center has 99.75% availability, making it a little more reliable than Tier 1, and this type of data center is sufficient for many companies and organizations.

 

About Tier 3

 

The infrastructure for the third tier allows facilities to be available at nearly all times, because it provides more redundancy and reliability. The center uses redundant components and relies on several separate cooling and power distribution paths, all of which serve the data processing equipment. Interestingly, only one path is active at a time. As a result, regular maintenance and some unforeseen power outages will not affect the equipment in the center, and most Tier 3 facilities operate at 99.98% availability.

 

About Tier 4

 

Very few organizations need the reliability of a Tier 4 infrastructure, so these standards are usually reserved for organizations for which a lack of availability would have a substantial impact. For example, many financial institutions operate at a Tier 4 level. This level provides multiple layers of redundant components along with several independent cooling and power distribution paths, all of which are active and support the equipment in the processing center. Therefore, an equipment failure or power outage (regardless of size or type) will not affect the equipment. Tier 4 centers are the most available and redundant in the world, proven to be 99.99% available – nearly perfect.
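To put those availability percentages in perspective, here is a small sketch that converts each figure quoted above into the downtime it allows over an 8,760-hour year.

```python
# Availability figures quoted above for each tier.
tiers = {"Tier 1": 99.67, "Tier 2": 99.75, "Tier 3": 99.98, "Tier 4": 99.99}

HOURS_PER_YEAR = 365 * 24   # 8,760 hours

for tier, availability in tiers.items():
    downtime_hours = (1 - availability / 100) * HOURS_PER_YEAR
    print(f"{tier}: about {downtime_hours:.1f} hours of downtime per year")
```

Run the numbers and the gap between tiers becomes obvious: roughly 29 hours a year at Tier 1 versus under an hour at Tier 4.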

 

What Is Required Of Data Centers?

 

Many businesses and organizations rely on data centers daily, so it is not surprising that certain requirements are imposed on these centers to protect those organizations from risk. The Telecommunications Industry Association has set standards covering factors such as room size and data center topology. There are also environmental requirements for these centers. The hope is to modernize many facilities so they are more energy efficient, although this requires newer equipment. Furthermore, it is vital that operators standardize their processes so that their systems can be automated. Finally, it is especially important that these centers are secure, to protect the data they house.

 

Although data centers and data center tier certification may sound confusing, tier certification is really just a standardized way of categorizing the reliability levels of data centers across the globe.

 


What is data virtualization?

The amount of data available in the world today is staggering, estimated by a 2011 Digital Universe Study from IDC and EMC to be around 1.8 zettabytes (1.8 trillion gigabytes), and projected to double every year. In addition, the study highlights the fact that the costs to create, capture, store, and manage data are about one-sixth of what they were in 2005, which means companies and enterprises large and small will continue to capture and store more and more data at lower costs.

 

So how much data is out there? According to data gathered from IBM, and compiled on the social media blog ViralHeat in October 2012, there are nearly 3 million emails sent every second around the world, 20 hours of video uploaded to YouTube every minute, and 50 million tweets sent every day. In addition, Google processes 24 petabytes (1 petabyte is equal to 1 quadrillion bytes, or 1 × 10^15 bytes) of data every day.

 

With so much data around, it can be difficult to access the information you need, then harness it in a way that can benefit your organization. Data virtualization is the process of bringing together information from several different systems—including databases, applications, files, websites, and vendors—into a universal format that can be accessed from anywhere in the world, without the need to know where the original file is located or how it is formatted. Effective data virtualization tools transform several disparate systems into one cohesive and usable format, giving you the ability to access, view, and transform data that would have otherwise been impossible to aggregate.

 

Data virtualization systems use a virtual layer, also called an abstraction layer, between the original source of the data and the consumer, which eliminates the need to physically consolidate and store the information on your own servers. In addition, it allows on-demand access to the most up-to-date information from the data sources (including transaction systems, big data, data warehouses, and more) without the need to go through consolidation or batch processes within each data repository.
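To make the idea of a virtual (abstraction) layer concrete, here is a minimal sketch of a federation layer that answers one query by consulting several sources in place, without copying their data; the source classes and the query interface are invented for illustration and do not correspond to any real data virtualization product.

```python
class CsvSource:
    """Stands in for a flat-file source; the data stays where it lives."""
    def __init__(self, rows):
        self.rows = rows
    def query(self, **filters):
        return [r for r in self.rows if all(r.get(k) == v for k, v in filters.items())]

class ApiSource:
    """Stands in for an external API or database; same interface, different backend."""
    def __init__(self, records):
        self.records = records
    def query(self, **filters):
        return [r for r in self.records if all(r.get(k) == v for k, v in filters.items())]

class VirtualLayer:
    """The abstraction layer: one query fans out to every registered source."""
    def __init__(self, *sources):
        self.sources = sources
    def query(self, **filters):
        results = []
        for source in self.sources:
            results.extend(source.query(**filters))
        return results

layer = VirtualLayer(
    CsvSource([{"customer": "Acme", "region": "EU"}]),
    ApiSource([{"customer": "Acme", "region": "EU", "orders": 12}]),
)
print(layer.query(customer="Acme"))   # one unified view over both sources
```

The consumer asks one question in one format and never needs to know where each record physically lives, which is the essence of the abstraction layer described above.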

 

Regulating Data with Virtualization

 

Data is a wonderful tool for businesses, but with the volume of digital information that exists today, companies can quickly become overwhelmed if they do not have a way to manage that information. Many companies have multiple data repositories where they collect and store information internally, including individual files and computers, servers, and databases, as well as access to external information from data warehouses, transaction systems, and more. For large corporations, the information on their internal servers and computers alone could equate to millions of gigabytes of data.

 

In order to use this information effectively, there must be a way to aggregate it all into one system that is useful and accessible to everyone. Prior to data virtualization, you had to access the direct source of the data, which presents some challenges. When accessing the data remotely, there could be downtime waiting to download the information you need, or your data could become inconsistent when you try to integrate it into one system from several disparate sources. In addition, there are risks involved with allowing several people to access and manipulate the original source of data, opening the door to the possibility that someone could corrupt the original files. Since virtualization provides a map to the data through the virtual (abstraction) layer, downtime is virtually non-existent, you get access to the most up-to-date information, and you reduce or eliminate the risk of ruining the original files.

 

The Costs of Data Virtualization

 

In order to have an effective data virtualization system, companies need the right middleware platforms to provide support and functionality while reliably providing instant access to all the data available. These platforms include three key components:

• An integrated environment that grants access to all the key users, and outlines security controls, data standards, quality requirements, and data validation.

• The data virtualization server, where users input their queries and the system aggregates all the information into a format that is easy for the user to view and manipulate. This requires the ability to collect and transform the information from several different systems into a single format for consumption. These servers must also include validation and authentication to ensure data security.

• The ability to manage the servers and keep them running reliably all the time. One of the keys to quality data virtualization systems is access to high quality information in real time, which means there must be tools in place to support integration, security, and access to the system, as well as monitor the system logs to identify usage levels and key indicators to improve access.

 

While the costs to set up this type of system can be high initially, the return on investment a company can achieve through strategic use of the data gathered can more than outweigh the initial costs.

 

Case Studies in Data Virtualization

 

There are hundreds of examples of companies today, from large corporations to small- and medium-sized businesses, that are using data virtualization to improve the way they collect, maintain, and utilize information stored in databases and systems throughout the world.

 

For example, Chevron was recently recognized for implementing data virtualization in a project; by adding the virtual layer to several systems that had to be aggregated, project managers were able to cut the total time to migrate systems almost in half, and lower the risk of losing critical data.

 

Companies like Franklin Templeton, which rely on data to deliver results to investors, use data virtualization to manage databases more efficiently, eliminate data duplication within the system, and shorten the amount of time it takes to bring new products to the market, increasing their competitive edge.

 

For large corporations that aggregate data from several different data marts and high-volume warehouses, the ability to consolidate that information into a usable format that drives sales and customer retention strategies is a critical competitive advantage. Companies like AT&T are using data virtualization to consolidate hundreds of data sources into one real-time system that informs everything from R&D to marketing and sales.

 

Whether you are a small- or medium-sized business that is struggling with time-consuming routine IT tasks, such as manually managing several systems and databases, or you are a large corporation trying to access and aggregate billions of pieces of information every day, data virtualization can help you view and manipulate information that will give you the competitive edge you need. Every company has data, but without the ability to safely and reliably access and organize the key pieces that you need from all your disparate systems, you will never realize all the benefits that information can offer.


Data Encryption Involving Authentication, Authorization, and Identity Management

 

The Importance of Secure Data Protection For Individuals and Businesses

 

Protecting data is becoming increasingly important in our highly internet-savvy world. With millions of people using the internet daily, and an unfortunate number of them using it for less than honest purposes, securing your website data is a huge priority for consumers and business owners alike.

 

This is especially true if you have a business or organization that keeps a database of sensitive client information such as telephone numbers, social security numbers, credit card numbers, and home addresses. It’s important to use the most up-to-date data security and encryption processes to minimize the chance of anything being intercepted as it is transferred over the internet.

 

Protecting Your Data With Encryption

 

There are a number of ways to protect online information, with data encryption being the most common way to protect information transferred between servers and clients, or held in data storage centers. A well-trained data security team will be able to advise you on the best way to secure your information and keep it safe. Here are several important ways to protect your company data, user information and financial accounts.

 

Use of Encryption

 

Encryption transforms data, making it unreadable to anyone without the decryption key. By encrypting data as it is exchanged between web browsers and servers, personal information such as credit card numbers, social security numbers, and addresses can be sent securely over the internet with much less risk of being intercepted during the process.

 

Two types of protocols used during the encryption are:

● Secure Shell (SSH) Encryption Protocol – This protocol encrypts all data exchanged between a client and a server during a remote shell session.

● Secure Sockets Layer (SSL) Encryption Protocol – This involves encrypting all data in the transaction between the web browser and the web server before any data is transferred, wrapping the data in a protective layer as it crosses online connections (see the sketch after this list).
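As a small illustration of transport-layer encryption in practice, here is a sketch that opens an SSL/TLS-protected connection using Python's standard ssl module; the host name example.com and port 443 are placeholders for whichever server you actually need to reach.

```python
import socket
import ssl

context = ssl.create_default_context()        # verifies the server's certificate chain
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        # From here on, every byte sent or received is encrypted in transit.
        print("Negotiated protocol:", tls_sock.version())
        print("Server certificate subject:", tls_sock.getpeercert().get("subject"))
```

The application code on either side still reads and writes ordinary data; the protocol layer handles the encryption and the certificate check transparently.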

 

Types of Data Encryption Used To Protect Your Information

 

Authentication is the process used to prove that a computer user is who they say they are. It identifies who the system (or person) is, and then verifies that they are “authentic”. Servers use authentication to find out exactly who is accessing their website or online information, and clients use it to be sure the server is the system it claims to be. The process of authentication generally involves a username and password, or it can be accomplished through voice recognition, fingerprints, employee ID cards, or even something as sophisticated as retina scans.
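As a hedged sketch of the username-and-password form of authentication described above, the example below derives and verifies a salted password hash with Python's standard library; the iteration count and the sample password are arbitrary choices for illustration, not a recommendation from this article.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a storable hash with PBKDF2-HMAC-SHA256; returns (salt, digest)."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the hash and compare in constant time to avoid timing leaks."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

The important design point is that the server never stores the password itself, only the salt and the derived hash, so a stolen database does not directly reveal anyone's credentials.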

 

Web servers issue authentication certificates to clients as well, which are proof that the system truly belongs to the entity it is claiming. These certificates are often processed through third party authentication, such as Thawte or Verisign. You can check which authentication is used by a company by looking on their website for a seal or link to the third party provider they use.

 

Authorization is usually coupled with the authentication process, determining whether or not the client has permission to access the resource or file they are requesting. By using authentication, the server identifies who you are, then checks a list of authorized users to see if you are allowed to visit the website, open the file or use the resource you are attempting to access. This may involve a password, or it may not.

 

Authorization usually only grants or revokes access based on your identity as you log in to the file or website. Most internet web pages are open to the public, and do not require authentication or authorization to access. Private sites, company restricted information, and other private data is generally encrypted with authentication and authorization tools.
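A minimal sketch of the grant-or-deny decision described above, using a role-to-permission table; the role names and actions are hypothetical and would map to whatever resources your organization actually protects.

```python
# Hypothetical role-to-permission mapping; the names are illustrative only.
ROLE_PERMISSIONS = {
    "admin":    {"read", "write", "delete"},
    "engineer": {"read", "write"},
    "auditor":  {"read"},
}

def is_authorized(role: str, action: str) -> bool:
    """Check whether an authenticated user's role permits the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("auditor", "read"))    # True
print(is_authorized("auditor", "delete"))  # False
```

Authentication establishes who the user is; a check like this then decides what that identity is allowed to do.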

 

Identity Management describes the process of managing the authentication and authorization of people within an organization. An identity management system keeps track of privileges across an entire entity, increasing security and productivity for a business. Identity management can be accomplished with active directories, identity providers, access control systems, digital identity managers, and password authentication. By keeping track of how users receive an identity, protecting that identity, and granting appropriate access, an identity management system saves money, cuts down on repetitive tasks, and reduces system downtime.

 

A Backup Plan in Case Data Is Breached: How to Assess the Situation Without a Big Loss

 

What do you do if, despite your best efforts, sensitive company or client data is breached? It’s important to have an emergency plan with an outline of the proper steps to take in this unfortunate situation. In order to act appropriately, be aware of government regulations and rules for how to handle this kind of situation.

 

According to the Better Business Bureau, some important steps to take to prepare for and react to a data breach include:

1. Create a “Data Breach Notification Policy” to let your consumers know how you will handle the situation if data compromise has occurred.

2. Train your employees to identify possible data breaches and to report them.

3. When a data breach has occurred, immediately gather the facts so you know what was accessed, how it was accessed, and how you can prevent more data from being compromised.

4. Notify any financial institutions involved. For instance, if bank account numbers were accessed, notify the relevant banks immediately so they can watch accounts for suspicious activity. If credit card numbers were affected, credit companies can change card numbers and make old numbers ineffective. This will minimize damage.

5. Seek outside counsel from a lawyer, a risk consulting company, or a relevant government agency. They can help you identify the laws involved and whether you need to alert clients, consumers, or the government of the incident.

 

The Importance of Taking Precautions to Secure Data with Credentials

 

The importance of securing your data with authentication protocols and credentials cannot be overstated. Making sure that sensitive data is viewable and accessible only by those with the proper credentials is important for the management of any business. Find a data security partner who shares your vision for the protection of company documents, user identification information, and other private information. Take every precaution necessary to make sure your customer, employee, and business data is protected against hackers, thieves, and anyone wishing to do harm to your business, clients, and employees.

 
