
Google’s Going Green

Non-renewable Resources

Non-renewable resources such as petroleum, uranium, coal, and natural gas provide the energy used to power our cars, houses, offices, and factories. Without energy we wouldn’t be able to enjoy the luxuries we are used to in our everyday lives. Our factories would shut down, our cars wouldn’t work, and our houses would remain unlit. The problem with non-renewable resources is that they can run out, and if they do we will have to find other ways to get power.

As of 2010 the world’s coal reserves were estimated to last until May 19, 2140, and petroleum reserves were estimated to hold out until October 22, 2047. While these dates may seem far off, they are approaching quickly. If the petroleum reserves run out in 34 years, as predicted, many of us will feel the effects. Not only will we lack gas for cars, trucks, and planes, but commodities such as heating oil, paint, photographic film, certain medicines, laundry detergent, and even plastic will cease to exist.

While there has been a push to preserve petroleum by converting cars to run on natural gas, this isn’t a perfect solution, as the natural gas reserve is only expected to last until September 12, 2068.  Although coal energy will last us a little bit longer, in less than 150 years it is predicted that this resource will run out as well, at which point we will be forced to make a change in our energy use, unless we make that change now.

Green Power

Many big businesses are shifting to renewable resources such as water, solar, and wind energy in an attempt to preserve these quickly depleting resources. Not only are these alternative power sources readily accessible and sustainable, they also have less environmental impact and can be produced domestically, lessening our dependence on foreign energy sources. Google is one company making the transition to “green power.”

Green power, according to the EPA, is any form of energy obtained from an indefinitely available resource whose generation has no negative environmental impact. This includes wind power; wave, tidal, and hydropower; biomass power; landfill gas; geothermal power; and solar power. According to Google Green, Google’s goal is to power the company with 100% renewable energy, both for environmental reasons and for the business opportunity in helping accelerate the shift to green power and create a better future.

One way Google has begun the transition is by piloting new green energy technology on many of its campuses. One of the first pilot programs began in 2007, when the company installed what was then the largest corporate solar installation – generating 1.7 MW – at its Mountain View campus in California. That installation has since been expanded to 1.9 MW and generates 30% of the energy needed to power the building it sits on.

Google’s Wind Power Investments

Along with solar power, Google has recently begun using wind power as well. On June 6, 2013 the company signed a contract with O2, an energy company and wind farm developer in Sweden, to purchase 100% of the output from a new wind farm being built in northern Sweden for the next 10 years. The 24-turbine project, expected to be fully operational by 2015, will provide 72 MW of power to Google’s Hamina, Finland data center, furthering the company’s effort to use only carbon-free, renewable energy sources.

Because of Europe’s integrated electricity market and Scandinavia’s shared electricity market, the shift to wind power has been relatively simple. Google has been able to buy 100% of the wind farm’s electricity output with the guarantee that the energy purchased will be used to power its data center alone. The electricity will be traded through Nord Pool, the leading power market in Europe.

Shortly after signing the contract with O2, Google made another wind farm investment, this time domestic, in Amarillo, Texas. On September 17, 2013 the company announced its purchase of 240 MW of wind power from the Happy Hereford wind farm, which will be fully operational by the end of 2014. This is Google’s largest purchase to date, and the power will feed the Mayes County grid in Oklahoma, where another of the company’s data centers is located. Unlike the Swedish arrangement, Google cannot send the power directly from the wind farm to its data center because of local energy regulations, so it is taking a different course of action: by selling the purchased energy into the wholesale energy market, the company puts it to use alongside the surrounding area, decreasing its carbon footprint.

Other Investments

Google has taken this route before due to domestic power regulations, and while the Texas contract is its first wind power purchase in the United States, the company has invested equity in several other wind farms in the state. One of these is the Spinning Spur Wind Project, a 161 MW wind farm consisting of 70 turbines rated at 2.3 MW each. The farm spans 28,426 acres about 30 miles west of the Happy Hereford project. When the project is finished it will provide power for around 60,000 homes in the Oldham County area.

Overall, Google has invested over $1 billion in 2 GW of carbon-free, clean power projects and pilot programs. Other investments include a solar power project in South Africa; photovoltaic projects in California and Germany; and SolarCity, which provides solar panels for thousands of home rooftops. By investing in these projects and purchasing power both directly and indirectly, Google furthers its goal of relying entirely on renewable, carbon-free energy while helping others do the same, lessening everyone’s carbon footprint.

Posted in Technology Industry

OpenPower Consortium is IBM’s Answer to Declining Server Market

A group of leading high-tech companies, led by IBM, has formed a new alliance to further the development of advanced server technologies that will power a new generation of faster, greener and more efficient data centers.

The OpenPower Consortium, which initially includes Google, NVIDIA, Mellanox and Tyan, will follow the ARM model of allowing open development of IBM’s Power chip technology by other companies, from server manufacturers to businesses that run complex server data centers. Third parties can now use IBM’s Power technology as the basis for their own server systems. In addition, IBM will continue to design and manufacture its own servers, unlike competitor ARM, which licenses its technology to third parties but does not engage in manufacturing itself.

Today’s Server Market: A Picture of Decline

Strong competition and market changes are not foreign concepts to the high-tech industry. The constant race to improve technology, to make things faster, more powerful or otherwise better, is a fact of life for technology companies. Market conditions, economic stimuli and competitors’ actions all directly influence trends for specific products.

Today’s highly competitive global server market has been in a state of decline for some time. The two longtime market leaders, IBM and Hewlett-Packard, have both experienced losses in their shares of the server market in 2013. While other companies, most notably Dell and Cisco, have seen their server sales increase, the overall status of server sales worldwide is down. There are many factors affecting this trend.

• Increased reliance on the cloud—While it is true that cloud-based data centers rely on servers to power them, cloud applications are more efficient, which results in less load and demand on servers. More and more companies are looking to the cloud as a key component for their data storage and application power needs.

• Data center consolidation—A challenged global economy pushed companies to find ways to reduce IT costs and assets. More efficient data center operations have led to a need for fewer servers as companies were able to obtain the level of server performance needed with fewer server products.

• Server virtualization—Through the use of software, server virtualization essentially allows one server to provide the functionality of many. This makes it a key component of any data center consolidation and has also directly contributed to the declining server market.

• Weak PC market—PC sales have been at risk for some time and an April 2013 report from IDC revealed the worst decline in PC sales ever. A slow economy, less than positive reception of Windows 8, continued growth in sales of smartphones and tablets and a growing adoption of cloud computing have combined to put the PC market under heavy siege.

It could be said that server manufacturers themselves helped to create a lessened need for their server products by developing those products and technologies associated with cloud computing, data center consolidation and server virtualization. That is not to say that IBM and others should not have focused on such efforts but simply to highlight those factors as clear and logical reasons for the changing market.

How OpenPower Will Help IBM

In order to remain viable in a new server market, reshaping that market will be required. Ultimately, IBM’s goal with the announcement of the OpenPower Consortium is to encourage the development of more energy efficient and powerful cloud computing, led by the open source model. By so doing, IBM is directly responding to the factors that have so greatly influenced the current decline in server sales. Facilitating industry-wide work on next-generation processing technology will enable companies to develop and implement more energy efficient servers and data centers that deliver more power and yield faster, more streamlined processing. As this happens, a new market will emerge around these products and technologies.

By allowing use of its Power chip technology by other companies, IBM hopes to breathe new life into its flagging server sales. Revenue from the licenses will be used to further IBM’s own development of the Power technology. Companies leveraging the Power technology can redesign it and customize key data center functionalities to custom fit their specific needs and applications, making the options for market share growth real and virtually unlimited.

Leveraging the Power of ARM Processing

ARM, or Advanced RISC Machine, processing concepts are at the heart of IBM’s OpenPower Consortium plan. The core concept of ARM processing is to implement simplified instructions that yield greater efficiency and, therefore, greater computing power. Additionally, systems built on an ARM platform operate using fewer transistors to deliver the same or increased power.

The reduced number of components and instructions required together contribute to lower costs. Similarly, because end products could have fewer transistors or other components, they generate less heat and use less power, reducing energy costs. Lower costs, increased energy efficiency and smaller form factors make ARM ideal for products requiring portability such as smartphones and tablets. It is also highly used in laptops, digital TVs and other such products.

The Market Will Watch

Customers, competitors and analysts alike will no doubt be carefully watching the path of IBM’s new strategy. What the initial consortium members do with the Power chip, as well as what advances IBM itself makes with it, will give clear signs as to the success of this chosen direction.

Whether highly successful or not, it is clear that IBM needed to take strong action to combat the struggling server market conditions. Their choice to essentially open their intellectual property to others has the potential to be a true game changer in the world of today’s server markets. It hits head-on the need for a more holistic approach to server technology and usage by more closely marrying the development of the server chip with the functionalities of data center consolidation, virtualization and cloud processing.

These components respond directly to the heart of today’s business marketplace, which continues to demand more with less: less energy used, less money spent on energy costs, more processing power, more efficient operations. Such are the mantras propelling today’s successful companies.

 

Posted in Technology Industry

Swedish Data Center Saves Money by Using Seawater for Cooling

In modern society, people are very dependent on technology. With all of the advancements in modern electronics, data centers are essential: many servers are required to store data and allow both information and technology to keep expanding. However, these data centers generate a lot of heat, which can damage hardware and cause other problems without cooling. A cutting-edge cooling method has emerged that uses cold seawater, and companies are taking notice. One such company in Sweden spent about a million dollars on the system, which paid for itself after roughly a year. When it comes to cooling a data center, a million dollars isn’t much, and the fact that it pays for itself so soon is sure to contribute to the spread of this approach.

Because Swedish seawater is cold to begin with, nature has already done part of the work. With intake water below 43 degrees Fahrenheit, it is easy to see why companies would want to implement this system. After the water has made its way through the system, it reaches a temperature of roughly 75 degrees Fahrenheit, and the additional heat generated is then used to warm nearby homes and businesses. That temperature rise gives some sense of just how much heat a data center produces on a daily basis, and of the damage and havoc that heat would cause in a facility without a cooling system. This method takes advantage of everything it can to make cooling as effective as possible.

Primary benefits of this cooling system include:

• Efficiently cools data centers so hardware doesn’t overheat

• Saves companies a significant amount of money

• The cost to build this system is low

Although there are a lot of reasons companies would be smart to build a cooling system like this, it isn’t perfect. One of the primary drawbacks with this technology is the impact it has on the environment. While these cooling systems take advantage of the cold temperatures that nature provides, they also pose a risk to sea life.

Threats to Jellyfish

Although using seawater to cool data centers has a number of excellent benefits, it isn’t flawless. When the pumps operate and draw in seawater, it is believed that jellyfish sometimes get sucked up as well. Because of the threat posed to jellyfish and other environmental concerns, the government sometimes requires the cooling system to cease operations. While this complicates operations, companies will continue to implement the technology and take advantage of its benefits. Technological innovation usually comes with both pros and cons, and this cooling system is no different. As long as the perks significantly outweigh the downsides, companies will continue to find the system worthwhile.

Pumps Suck Away Sea Life, Harm Environment

These pumps pose a significant threat not only to jellyfish but to other sea life as well. When they operate, they sometimes draw in a variety of nearby marine animals; with pumps this large pulling seawater in to cool a data center, it isn’t difficult to picture how that happens, and companies can’t fully control what comes in with the water. In the future these cooling systems will very likely appear in other locations, such as frigid rivers, so it is important to keep the current drawbacks in mind. Although the systems are required to shut down at certain times, the government doesn’t always step in and stop operations, so the approach is likely to spread to other parts of the globe.

This System is Inexpensive to Build

So, how much does this system cost to implement? One company in Sweden invested roughly a million dollars in its cooling system. Although that might sound like a lot of money, it is relatively modest for a large data center. With the global economy struggling in recent years, many companies are looking for ways to save money; while some lay off employees, others find savings by implementing more cost-effective technology such as this cooling system. Because the system paid for itself after about a year of operation, many companies are likely to build one of their own. They will take the environmental risks into consideration, but those are unlikely to outweigh the savings and the convenience of the cooling, and occasional government-ordered shutdowns are not frequent enough to be a deterrent. Companies would be smart to build systems this affordable, especially when the cost of implementation can be recovered in roughly a year.
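As a rough sketch of the economics described above, the payback period is simply the construction cost divided by the yearly savings. The figures below are assumptions based loosely on the article’s “about a million dollars, repaid in roughly a year,” not reported numbers.

```python
# Hypothetical payback estimate; both figures are assumptions, not reported data.
capex = 1_000_000           # approximate construction cost of the seawater system (USD)
annual_savings = 950_000    # assumed yearly savings versus conventional chillers (USD)

payback_years = capex / annual_savings
print(f"Estimated payback period: {payback_years:.1f} years")  # roughly 1.1 years
```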

The Cooling Will Continue

After looking at the benefits of this cooling system, you can understand why the technology is likely to become more widespread in the coming years. Companies will always want savings and are sure to be attracted to a system that pays for itself so quickly, and the relatively low construction cost will draw in even more of them. The system is far from perfect: along with cold water, the pumps suck up jellyfish and other sea life, and government regulation occasionally requires companies to shut their systems down, though not often enough to undercut the savings. It is worth understanding this cooling system now, since it will likely appear in more places as it grows in popularity. Given the increasing demand for energy and companies’ constant search for ways to reduce energy costs, it is easy to see why this cooling method is likely to become more prevalent.

 

Posted in New Products

The Best Locations for Data Centers

Businesses often use data centers to back up and protect their vital information and data as well as that of their customers and clients. Since using a data center can be a huge expense, businesses have to research the best locations for data centers in order to make sure they’re getting their money’s worth and that all of their data and information is being kept in a safe place.

What to Look For When Considering a Location

No matter what location you’re looking at for a data center, there are certain factors that you’ll want to check out with the center before making a final decision. Since more power is being used now than ever before, the location you choose for your data center should have steady access to power. Check to see if there have been any blackouts or brownouts in the immediate area. Should the data center ever lose power, it could cost you money as well as business.

Something else to look for is a location with a cool climate. Data centers generate a lot of heat, and unless that heat is kept in check, the equipment is in danger of overheating and failing. Another reason to find a data center in a cool climate is that you won’t have to pay as much for cooling. So not only will you be helping to protect your data, you’ll also be saving yourself some money.

Pay attention to the geography of the data center’s location. Is it located on or near a fault line, below sea level, on a flood plain or in the path of air traffic? If so, those might be places that you’ll want to avoid to keep from putting your data and information at unnecessary risk.

It should go without saying, but you’ll want to make sure that the data center has excellent security to keep your information and data, as well as the personal data of your clients and customers, as safe as possible. Recent compliance regulations require that you store data within the nation or region where it was collected. Check to make sure that you’re following all applicable regulations for your protection and the protection of your customers.

Locations with the Best Annual Operating Costs

Certain locations have lower annual operating costs than others. One of the least expensive is Rolla, Missouri, with a total annual labor cost of $6,235,281 and a total operating cost of $11,988,699. Another low-cost option is Winston-Salem, North Carolina, which has a total annual labor cost of roughly $5,989,000 and a total operating cost of approximately $11,908,760. Bloomington, Indiana has a yearly labor cost of $6,094,411 and a total operating cost of $11,834,591.

Other recommended locations for data centers are New York State, Ontario, eastern Washington State, Colorado and the Netherlands. While New York City has high real estate costs that aren’t very tempting for anyone looking for a data center location, other parts of the state have excellent sites, and the state offers tax exemptions for data centers in addition to a dependable power infrastructure. The Province of Ontario, in Canada, has become one of the best places on the globe for business: it has a solid infrastructure and an international airport, along with a low risk of natural disasters and little government interference with corporate data. Just make sure that you won’t be breaking any privacy regulations by setting up your data center there.

Locations with the Worst Annual Operating Costs

Locations that are the most expensive for data centers include Oakland, CA, which has an annual labor cost of approximately $7,469,700 and a yearly operating cost of $18,879,982. Boston is another expensive location with a total annual labor cost of $7,287,955 and a yearly operating cost of $19,079,992 while Newark, New Jersey has a yearly operating cost of $19,245,362. One of the most expensive locations is New York City, which has a yearly labor cost of roughly $7,533,300 and a yearly operating cost of about $28,067,100.

Cooler Climates Equal Cooler Costs

Data centers are estimated to consume 30 billion watts of electricity, and the Environmental Protection Agency predicts that data center energy consumption will increase by 12% each year. Choosing a location near a body of water with cool temperatures and chilly waters will go a long way toward naturally cooling and protecting the data center. In the US, Michigan offers one of the most favorable year-round climates, even in the summer, while data centers in the southern and southwestern regions tend to rely most heavily on powered cooling. One Michigan data center earned the US Environmental Protection Agency’s Energy Star certification, meaning it performed in the top 25% of similar facilities in America for energy efficiency and adhered to the energy-efficiency performance requirements set by the EPA.

Severity of Natural Disasters

Should a natural disaster strike in the area your data center is located in, there’s a chance that you could lose all of the data and information stored there. The geographical area you choose for your data center should be in a safe region, such as the Midwest, or anywhere else where there’s a low occurrence of floods, tornadoes, earthquakes and hurricanes. Michigan is another low risk area when it comes to natural disasters.

Other locations that have occurrences of natural disasters include the Great Lakes region, Texas and California.

 

Once you’ve found the perfect location for your data center, you’ll want to make sure that the workforce is competent, experienced and possibly even licensed depending on the types of programs that they’ll be using. As you’re researching possible locations for your data center, do some research on the knowledge and talent of the region’s workforce. Having the perfect location for a data center won’t do you much good if the employees don’t have a clue about what they’re doing or how to do it.

Data centers are all about location. Even if you have to spend more money than you’d like to find the perfect spot for your data center, it will undoubtedly be worth it in the long run.

 

Posted in Data Center Construction

Understanding Data Centers and Data Center Tier Certifications

Data centers have been around for ages. In the early days, they were essentially huge computer rooms that were very complex to maintain and operate. They sometimes required a special operating environment and a great many cables and components to ensure that the system worked correctly. These systems usually required a lot of power and had to be constantly cooled so they would not overheat, and they were extremely costly. Although early data centers were used for various purposes, their main objective was to support military efforts. Much has changed since those early stages. Today’s data centers are more complex, use better technology, and are measured against tier certifications. Learn more about these data centers and the tier certification process that goes along with them.

 

Today’s Data Centers

 

The data centers used today house computer systems along with the components that complement them, including telecommunications and storage systems. These centers have backup or redundant power supplies and redundant data communications connections, as well as environmental controls, like fire suppression and air conditioning, to protect the data in the center. Many of these centers are extremely large and require as much electricity as a small town. For this reason, operating them properly requires expertise, which is one reason for data center tier certification.

 

What Is Tier Certification?

 

Data center tier certification began a few decades ago in an effort to help data centers across the globe. The certification process establishes standards that are used worldwide and offers ways to determine how these centers are performing, helping operators measure whether they are receiving a return on their investment. Facilities can receive a “Tier Certification of Design Documents” or a “Tier Certification of Constructed Facility”. The certification covers only the physical topology of a data center’s infrastructure, that is, what directly affects the operation of the computer rooms. Certifications are given at four levels, or tiers:

 

• Tier 1 is for the basic infrastructure of the site. This is a non-redundant certification.

• Tier 2 is for redundant components of the site infrastructure.

• Tier 3 is for site infrastructure that is concurrently maintainable.

• Tier 4 is for site infrastructure that is fault tolerant.

 

Why Certification?

 

As in many industries, including law, accounting, real estate, and many trades, certification matters in data center operation. Certification establishes standards across the world that hundreds of companies and organizations rely on for their operations. These standards affect the infrastructures of various networks, including those of large corporations, governments, and other organizations. Through certification, problems with infrastructure are identified, quantified, and then improved. This promotes a better long-term exchange of information.

 

About Tier 1

 

The first tier in data center certification is the most basic: essentially a server room. In this type of facility, a single path delivers power and cooling to the equipment, and none of the components are redundant. Therefore, any power outage, planned or unplanned, will have a negative effect on operations. A Tier 1 data center typically offers 99.67% availability. Occasional downtime is expected each year and must be factored into planning for the applications running in the server room.

 

About Tier 2

 

The second tier has some redundant components; however, it still delivers cooling and power through a single path. With this type of system, certain components can be taken offline for planned service without disrupting the data processing equipment. On the other hand, an unplanned power outage or a disruption to the distribution path will still cause problems for the data processing equipment. A Tier 2 data center generally offers 99.75% availability, making it somewhat more reliable than Tier 1, and it is sufficient for many companies and organizations.

 

About Tier 3

 

The infrastructure for the third tier allows facilities to be available at nearly all times because it provides more redundancy and reliability. The center uses redundant components and relies on several separate cooling and power distribution paths serving the data processing equipment. Interestingly, only one path is active at a time. For this reason, regular maintenance and some unforeseen power outages will not affect the equipment in the center, and most Tier 3 facilities operate at 99.98% availability.

 

About Tier 4

 

Very few organizations need the reliability of Tier 4 infrastructure, so these standards are usually reserved for those for whom a lack of availability would have a substantial impact; many financial institutions, for example, operate at the Tier 4 level. This level provides multiple layers of redundant components along with several independent cooling and power distribution paths, all of which are active and support the equipment in the processing center. Therefore, an equipment failure or power outage, regardless of size or type, will not affect the equipment. Tier 4 centers are the most available and redundant in the world, with proven availability of 99.99% – nearly perfect.
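To put the availability percentages quoted above in concrete terms, the short sketch below converts each tier’s availability into approximate hours of downtime per year (using a 365-day year). The figures are the ones cited in this article rather than the official Uptime Institute numbers.

```python
# Approximate annual downtime implied by each tier's quoted availability.
HOURS_PER_YEAR = 24 * 365

availability = {"Tier 1": 99.67, "Tier 2": 99.75, "Tier 3": 99.98, "Tier 4": 99.99}

for tier, pct in availability.items():
    downtime = HOURS_PER_YEAR * (1 - pct / 100)
    print(f"{tier}: {pct}% available, roughly {downtime:.1f} hours of downtime per year")
# Prints ~28.9 h for Tier 1, ~21.9 h for Tier 2, ~1.8 h for Tier 3, ~0.9 h for Tier 4.
```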

 

What Is Required Of Data Centers?

 

Many businesses and organizations rely on data centers daily, so it is not surprising that certain requirements are imposed on these centers to protect those organizations from risk. The Telecommunications Industry Association has defined standards covering everything from room sizes to the topology of data centers, and there are environmental requirements as well. The hope is to modernize many facilities so they are more energy efficient, which requires newer equipment. It is also vital that infrastructures standardize their processes so that their systems can be automated. Finally, it is especially important that these centers are secure enough to protect the data they house.

 

Although data center tier certification may sound confusing, the tiers are really just a way of categorizing the capabilities of data centers across the globe.

 

Posted in Data Center Design

What is data virtualization?

The amount of data available in the world today is staggering: a 2011 Digital Universe Study from IDC and EMC estimated it at around 1.8 zettabytes (1.8 trillion gigabytes) and projected that it will double every year. The study also highlights that the costs to create, capture, store, and manage data are about one-sixth of what they were in 2005, which means companies and enterprises large and small will continue to capture and store more and more data at lower cost.

 

So how much data is out there? According to data gathered by IBM, and compiled on the social media blog ViralHeat in October 2012, there are nearly 3 million emails sent every second around the world, 20 hours of video uploaded to YouTube every minute, and 50 million tweets sent every day. In addition, Google processes 24 petabytes (1 petabyte is 1 quadrillion bytes, or 1 x 10^15 bytes) of data every day.

 

With so much data around, it can be difficult to access the information you need, then harness it in a way that can benefit your organization. Data virtualization is the process of bringing together information from several different systems—including databases, applications, files, websites, and vendors—into a universal format that can be accessed from anywhere in the world, without the need to know where the original file is located or how it is formatted. Effective data virtualization tools transform several disparate systems into one cohesive and usable format, giving you the ability to access, view, and transform data that would have otherwise been impossible to aggregate.

 

Data virtualization systems use a virtual layer, also called an abstraction layer, between the original source of the data and the consumer, which eliminates the need to physically consolidate and store the information on your own servers. It also allows on-demand access to the most up-to-date information from the data sources (including transaction systems, big data, data warehouses, and more) without running consolidation or batch processes within each data repository.
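As a simplified illustration of that abstraction layer, the sketch below puts two hypothetical sources (a CSV export and a SQLite database; the file names, classes, and column names are invented for this example) behind a single query interface, so the consumer never needs to know where or how each record is actually stored.

```python
# Minimal sketch of a virtual (abstraction) layer over two disparate sources.
# The source classes, file names, and column names are hypothetical.
import csv
import sqlite3
from typing import Iterable, Iterator, Optional


class CsvCustomerSource:
    """Adapter that exposes a CSV export as a stream of uniform records."""
    def __init__(self, path: str) -> None:
        self.path = path

    def records(self) -> Iterator[dict]:
        with open(self.path, newline="") as f:
            yield from csv.DictReader(f)


class SqliteCustomerSource:
    """Adapter that exposes a SQLite table as the same uniform records."""
    def __init__(self, path: str) -> None:
        self.conn = sqlite3.connect(path)

    def records(self) -> Iterator[dict]:
        cursor = self.conn.execute("SELECT id, name, country FROM customers")
        columns = [col[0] for col in cursor.description]
        for row in cursor:
            yield dict(zip(columns, row))


class VirtualLayer:
    """The consumer queries this one interface; the adapters hide the sources."""
    def __init__(self, *sources) -> None:
        self.sources = sources

    def customers(self, country: Optional[str] = None) -> Iterable[dict]:
        for source in self.sources:
            for record in source.records():
                if country is None or record.get("country") == country:
                    yield record


# Usage: one query spans both systems without copying the data anywhere.
layer = VirtualLayer(CsvCustomerSource("crm_export.csv"),
                     SqliteCustomerSource("billing.db"))
for customer in layer.customers(country="US"):
    print(customer)
```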

 

Regulating Data with Virtualization

 

Data is a wonderful tool for businesses, but with the volume of digital information that exists today, companies can quickly become overwhelmed if they do not have a way to manage that information. Many companies have multiple data repositories where they collect and store information internally, including individual files and computers, servers, and databases, as well as access to external information from data warehouses, transaction systems, and more. For large corporations, the information on their internal servers and computers alone could equate to millions of gigabytes of data.

 

In order to use this information effectively, there must be a way to aggregate it all into one system that is useful and accessible to everyone. Prior to data virtualization, you had to access the direct source of the data, which presents some challenges. When you access data remotely, there can be downtime while you wait to download what you need, and inconsistencies can creep in when you try to integrate data from several disparate sources into one system. There are also risks in allowing several people to access and manipulate the original source of data, opening the door to the possibility that someone could corrupt the original files. Since virtualization provides a map to the data through the virtual (abstraction) layer, downtime is virtually non-existent, you get access to the most up-to-date information, and you reduce or eliminate the risk of ruining the original files.

 

The Costs of Data Virtualization

 

In order to have an effective data virtualization system, companies need the right middleware platforms to provide support and functionality while reliably providing instant access to all the data available. These platforms include three key components:

• An integrated environment that grants access to all the key users, and outlines security controls, data standards, quality requirements, and data validation.

• The data virtualization server, where users input their queries and the system aggregates all the information into a format that is easy for the user to view and manipulate. This requires the ability to collect and transform the information from several different systems into a single format for consumption. These servers must also include validation and authentication to ensure data security.

• The ability to manage the servers and keep them running reliably all the time. One of the keys to quality data virtualization systems is access to high quality information in real time, which means there must be tools in place to support integration, security, and access to the system, as well as monitor the system logs to identify usage levels and key indicators to improve access.

 

While the costs to set up this type of system can be high initially, the return on investment a company can achieve through strategic use of the data gathered can more than outweigh the initial costs.

 

Case Studies in Data Virtualization

 

There are hundreds of examples of companies today, from large corporations to small- and medium-sized businesses, that are using data virtualization to improve the way they collect, maintain, and utilize information stored in databases and systems throughout the world.

 

For example, Chevron was recently recognized for implementing data virtualization in a project; by adding the virtual layer to several systems that had to be aggregated, project managers were able to cut the total time to migrate systems almost in half, and lower the risk of losing critical data.

 

Companies like Franklin Templeton, which rely on data to deliver results to investors, use data virtualization to manage databases more efficiently, eliminate data duplication within the system, and shorten the amount of time it takes to bring new products to the market, increasing their competitive edge.

 

For large corporations that aggregate data from several different data marts and high-volume warehouses, the ability to consolidate that information into a usable format that drives sales and customer retention strategies is a critical competitive advantage. Companies like AT&T are using data virtualization to consolidate hundreds of data sources into one real-time system that informs everything from R&D to marketing and sales.

 

Whether you are a small- or medium-sized business that is struggling with time-consuming routine IT tasks, such as manually managing several systems and databases, or you are a large corporation trying to access and aggregate billions of pieces of information every day, data virtualization can help you view and manipulate information that will give you the competitive edge you need. Every company has data, but without the ability to safely and reliably access and organize the key pieces that you need from all your disparate systems, you will never realize all the benefits that information can offer.

Posted in Facility Maintenance

Data encryption involving authentication, authorization, and identity management

 

The Importance of Secure Data Protection For Individuals and Businesses

 

Protecting data is becoming increasingly important in our highly internet-savvy world. With millions of people using the internet daily, and an unfortunate number of them using it for less than honest purposes, securing your website data is a huge priority for consumers and business owners alike.

 

This is especially true if you have a business or organization that keeps a database of sensitive client information such as telephone numbers, social security numbers, credit card numbers, and home addresses. It’s important to use the most up-to-date data security and encryption processes to minimize the chance of anything being intercepted as it is transferred over the internet.

 

Protecting Your Data With Encryption

 

There are a number of ways to protect online information, with data encryption being the most commonly used to protect information transferred between servers and clients, or held in data storage centers. A well-trained data security team will be able to advise you on the best way to secure your information and keep it safe. Here are several important ways to protect your company data, user information and financial accounts.

 

Use of Encryption

 

Encryption transforms data, making it unreadable to anyone without the decryption key. By encrypting data as it is exchanged between web browsers and servers, personal information such as credit card numbers, social security numbers, and addresses can be sent securely over the internet with much less risk of being intercepted during the process.
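A minimal sketch of that idea in Python, using the widely available third-party cryptography package (our assumption; the article does not name any particular tool): data encrypted with a key is unreadable until the same key decrypts it. The payload is purely illustrative.

```python
# Symmetric encryption sketch using the third-party "cryptography" package
# (pip install cryptography); the payload below is purely illustrative.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the decryption key must be kept secret
cipher = Fernet(key)

token = cipher.encrypt(b"card=4111111111111111;exp=09/27")
print(token)                       # unreadable ciphertext, safe to transmit or store

plaintext = cipher.decrypt(token)  # only a holder of the key can recover this
print(plaintext)
```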

 

Two types of protocols used during the encryption are:

● Secure Shell (SSH) Encryption Protocol – This protocol encrypts all data exchanged between the client and the server while they communicate at the shell.

● Secure Sockets Layer (SSL) Encryption Protocol – This protocol encrypts all data in the transaction between the web browser and the web server before any data is transferred, wrapping the data in a protective layer as it travels across online connections. (A minimal client-side sketch follows this list.)
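The sketch below uses Python’s standard ssl module to show how a client opens an SSL/TLS-protected connection and checks the server’s certificate before any application data is exchanged; the host name is just an example.

```python
# SSL/TLS client sketch using only the Python standard library.
import socket
import ssl

hostname = "example.com"                      # illustrative host
context = ssl.create_default_context()        # verifies the server certificate

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())                  # negotiated protocol, e.g. TLSv1.3
        print(tls.getpeercert()["subject"])   # identity asserted by the certificate
```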

 

Types of Data Encryption Used To Protect Your Information

 

Authentication is the process used to prove that a computer user is who they say they are. It identifies who the system (or person) is, and then verifies that they are “authentic”. This data encryption tool is used by servers to find out who exactly is accessing their website or online information. It’s also used by clients who need to be sure the server is the system it is claiming to be. The process of authentication generally involves the use of a username and password, or it can be accomplished through voice recognition, fingerprints, employee ID cards, or even something as complicated as retina scans.
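Password-based authentication is the most common case. The sketch below, using only Python’s standard library, stores a salted hash rather than the password itself and then verifies a login attempt against it; the password strings are placeholders.

```python
# Salted password hashing and verification with the standard library only.
import hashlib
import hmac
import os
from typing import Optional


def hash_password(password: str, salt: Optional[bytes] = None) -> tuple:
    salt = salt or os.urandom(16)                        # unique salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest                                  # store both, never the password


def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, stored_digest)  # constant-time comparison


# Placeholder enrollment and login attempt.
salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```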

 

Web servers issue authentication certificates to clients as well, which are proof that the system truly belongs to the entity it is claiming. These certificates are often processed through third party authentication, such as Thawte or Verisign. You can check which authentication is used by a company by looking on their website for a seal or link to the third party provider they use.

 

Authorization is usually coupled with the authentication process, determining whether or not the client has permission to access the resource or file they are requesting. By using authentication, the server identifies who you are, then checks a list of authorized users to see if you are allowed to visit the website, open the file or use the resource you are attempting to access. This may involve a password, or it may not.

 

Authorization usually only grants or revokes access based on your identity as you log in to the file or website. Most internet web pages are open to the public, and do not require authentication or authorization to access. Private sites, company restricted information, and other private data is generally encrypted with authentication and authorization tools.
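Once authentication has established who the user is, authorization is essentially a lookup of what that identity may access. The sketch below uses a hypothetical in-memory permission table to make the distinction concrete.

```python
# Authorization sketch: the identity is already authenticated; we only decide
# whether it may access the requested resource. The table is hypothetical.
PERMISSIONS = {
    "alice": {"hr/payroll.xlsx", "dashboard"},
    "bob": {"dashboard"},
}


def is_authorized(user: str, resource: str) -> bool:
    return resource in PERMISSIONS.get(user, set())


print(is_authorized("alice", "hr/payroll.xlsx"))  # True
print(is_authorized("bob", "hr/payroll.xlsx"))    # False: never granted or revoked
```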

 

Identity Management describes the process of managing the authentication and authorization of people within an organization. An identity management system keeps track of privileges across an entire entity, increasing security and productivity for a business. Identity management can be accomplished with active directories, identity providers, access control systems, digital identity managers, and password authentication. By keeping track of how users receive an identity, protecting that identity, and granting appropriate access, the identity management system saves money, repetitive tasks, and downtime of the system.

 

A Backup Plan In Case Data Is Breached: Assessing the Situation Without a Big Loss

 

What do you do if, despite your best efforts, sensitive company or client data is breached? It’s important to have an emergency plan with an outline of the proper steps to take in this unfortunate situation. In order to act appropriately, be aware of government regulations and rules for how to handle this kind of situation.

 

According to the Better Business Bureau, some important steps to take to prepare for and react to a data breach include:

1. Create a “Data Breach Notification Policy” to let your consumers know how you will handle the situation if data compromise has occurred.

2. Train your employees to identify possible breaches in data and how to report it.

3. When a data breach has occurred, immediately gather the facts so you know what was accessed, how it was accessed, and how you can prevent more data from being compromised.

4. Notify any financial institutions involved. For instance, if bank account numbers were accessed, notify the relevant banks immediately so they can watch accounts for suspicious activity. If credit card numbers were affected, credit companies can change card numbers and make old numbers ineffective. This will minimize damage.

5. Seek outside counsel from a lawyer, a risk consulting company, or a relevant government agency. They can help you identify the laws involved and whether you need to alert clients, consumers, or the government of the incident.

 

The Importance of Taking Precautions to Secure Data with Credentials

 

The importance of securing your data with authentication protocols and credentials cannot be overstated. Making sure that secure data is viewable and accessible by only those with the proper credentials is important for the management of any business. Find a data security partner who shares your vision for the protection of company documents, user identification information, and other private information. Take every precaution necessary to make sure your customers, employees, and your business data is protected against hackers, thieves, and those wishing to do harm to your business, clients, and employees.

 

Posted in Datacenter Design

The Benefits And Risks Of Cloud Computing

Cloud computing has become increasingly popular over the past several years. It enables hosted services to be delivered over the internet, rather than information being physically stored on local computers. The data center is a critical part of service delivery. Cloud computing has both benefits and risks.

 

Service Models

 

There are three cloud service models: IaaS, PaaS, and SaaS. IaaS (Infrastructure as a Service) supplies computing resources such as network capacity, storage and servers. PaaS (Platform as a Service) gives users access to software and services so they can create their own software applications. SaaS (Software as a Service) connects users to the cloud provider’s own software applications.

 

Public or Private

 

There are several different types of clouds. A public cloud sells service to anyone on the internet, and a number of well-known, established companies provide public cloud computing services. A private cloud supplies hosted services to a defined group of people through a proprietary network. Another variation is a hybrid cloud, which combines two or more public or private clouds that are separate entities but intertwined by technology. A community cloud is designed to be used by multiple organizations in support of a defined community; community clouds can be maintained by a third party and located on or off the premises. No matter the type of cloud, the goal is to supply access to straightforward, scalable computing resources and IT services.

 

Low capital expenses

 

The computing and capacity needs of organizations are constantly in a state of flux. Cloud computing can be a very cost effective way to access additional computer resources from a state of the art data center. It provides flexibility that many businesses desire. There is no upfront investment in equipment. With cloud service, the needed equipment is already available and ready to use. Businesses can rent the latest equipment, such as servers, when required, rather than having to purchase or upgrade their own servers.

 

It’s elastic

 

Another benefit of cloud computing is that it is elastic. A user has access to as much or as little service that they may need at any point in time. It is exactly like turning on a faucet when you want a drink and turning it off when finished. The ease of scalability of cloud computing eliminates concerns a customer may have about things like having to provision their own additional capacity on short notice. A cloud provider handles these issues for their customers.

 

Provider managed

 

Cloud services can be fully managed by the provider. Many businesses find this attractive because it can help lower IT operating costs: a business using cloud computing does not need expensive in-house staff to install, maintain and monitor equipment and applications.

 

Transparency

 

One of the negatives of cloud computing is that services are not typically delivered with a lot of transparency. It can be difficult to know what exactly is being done and how it is executed. Cloud customers are not able to monitor availability, or do anything to resolve a service interruption or an outage. Loss of data is another concern. Even if a cloud provider states they have redundant back-up capabilities, you will not know for sure unless there is a problem. It is always a good idea for a company to back-up its own data for extra protection.

 

Data ownership

 

Cloud customers frequently don’t own their own data. There are a number of public cloud providers whose contracts contain language clearly stating any data stored is owned by the provider, not the customer. Cloud vendors believe having ownership of the data provides them more legal cover if something were to go wrong. If a vendor has access to customer data, they are also able to search and mine the data to develop more potential revenue streams.

 

Shared access

 

A potential risk of cloud computing is that it involves sharing access with other users. One of the reasons the price for cloud services can be so attractive is that vendors realize economies of scale to drive down costs: multiple, typically unrelated customers share the same computing resources, including memory, storage and CPUs.

 

Advantages of building a private cloud

 

It may make the most sense for a company to build its own cloud rather than rely on other options. If a company prefers to manage all its computer resources and IT operations in house, then a private cloud is a good choice. This solution can also give a company complete control over its technology. A private cloud can be an optimal way to solve an organization’s business and technology challenges. It can provide IT as a service. The results can include lower costs, increased efficiency and business innovation. A private cloud can be used for employee collaboration from any location on any device. It will also help ensure the best possible network and application security.

 

Things to consider when building a cloud

 

There are many things to consider when building a private cloud. They include infrastructure, types of applications that will be run, access methods, traffic patterns and security. When determining infrastructure, it is important to have an experienced data center partner involved in the process. This will help to ensure the quality of service. Mobile device based access has become increasingly important. Virtual desktops are another important access component. Many companies need to have traffic endpoints that are location independent. Security is always a crucial piece of cloud design. The appropriate users and devices must be able to access the needed information at the right time, while also securing the system from any attacks or breaches in security. Different types of data and traffic possess disparate levels of importance. They all must be supported across the network. There is no one size fits all private cloud model. Every organization has different priorities and needs that should be taken into account and reflected in the cloud architecture.

 

Cloud computing is directly influencing the future of technology. There are several service and delivery models for cloud computing. Each one has its own pros and cons. Building a private cloud may ultimately be the best option for an organization that wants to have complete control and manage everything in house.

 

 

 

Posted in New Products

What The 2013 Data Center Census Means For The Industry


The 2013 data center census offers a glimpse into the future of the data center industry. Although no census can completely predict the future, it gives some telling indications about where data storage is headed. The census works to answer questions about where data centers will spend their money, what new technology will be developed, how the cloud will affect the need for data centers, and many other questions that matter to anyone working within the industry. While it cannot foresee every disaster that may affect the industry, the census gives useful insight into how data centers will change during 2013.

 

The Implications Of The Cloud

The cloud has become a popular way for companies to store important information directly to the web, rather than being forced to hire a data center to handle the storage of all important information. Concerns within the industry were that this new technology would make data centers obsolete and old fashioned, and that companies would turn to the cloud as the more effective way of handling information. As the data center industry worked to evolve with this new technology, many in previous years felt that a significant amount of their time and money would be spent developing new technologies that conform to the idea of the cloud. In reality, very few data centers actually invested money in this idea in 2012, choosing instead to focus assets on everyday items like cooling, cabling, and power supplies.

According to the data center census, 2013 may just change all that. From the information within the census, it can be assumed that architecture for cloud infrastructure will be a main focus of data centers throughout the world. It’s predicted that some countries will see close to a 138% increase in uptake in regards to cloud architecture. Those countries that have a lower percentage of uptake are also those with a higher uptake over the previous years, leading one to believe that they have already implemented new technology. No matter what country is being examined, it’s clear that the cloud and the changes necessary because of it will be an important part of every data center’s future plan.

 

Data Center Infrastructure Management

DCIM is a fairly new idea that works to merge the fields of network management and physical infrastructure management within the data center, creating new systems that make the center more energy efficient.

DCIM has been considered the solution to the problem of energy efficiency within the data center. Implementing new data center infrastructure management should ideally make each data center more efficient and more cost effective. DCIM did not perform as well as expected in the previous year, but it appears that 2013 may change that. Russia and the countries of Latin America predicted a high uptake of DCIM during 2013, and many markets that had the lowest uptake in 2012 also expect higher uptake in 2013.

The most important pieces of information to take from the census are that the cloud will be an important aspect of every data center going forward, that data centers will spend a large amount of money working to make the centers cloud-friendly, and that DCIM will be implemented at a higher rate than before in order to make data centers more energy efficient.

The focus on energy efficiency and developing ways to make data centers more efficient may have come from the fact that the 2012 census brought staggering numbers of the total energy consumption by data centers used throughout the world. Creating data centers that are more energy efficient helps to save money and is easier on the environment, while creating outputs that are measurable and can be examined in every aspect of the center’s energy usage.

 

Census Details

The census gathers information from over 10,000 participants around the world on important and relevant topics within the industry. It also supports the industry's philanthropic efforts, with five dollars donated to Engineers Without Borders for every survey completed. The previous year's census helped organizers amass a fund of $40,000, which was then donated to UNICEF to help children throughout the world.

 

Practical Applications For The Census

For anyone working in the data center industry, there are practical applications to be drawn from the information the census provides. With such a high number of participants, the results can reasonably be taken as an accurate picture of what the next year will look like for the industry.

Data center professionals can see from the census that the competition is focused on new technology and on creating an infrastructure that is compatible with the cloud, allowing customers to take advantage of this valuable new tool. Data centers hoping to stay relevant will be rewarded for moving resources toward cloud-ready infrastructure.

Data center professionals can also expect energy efficiency to be a major topic in the industry this year. When a center runs more efficiently and costs fall, the savings can either be passed on to the customer or used to improve the service the customer receives. Management should recognize that the competition is focused on lowering energy costs, both to improve how consumers view their centers and to free up money for more valuable developments and tools. Ignoring the need for a data center that consumes less energy can make a facility look outdated and inefficient to the average consumer.

 

The Forecast For The Future

Each year, people within the data center industry can focus their efforts on the updates and changes that make their centers more functional and more successful. To determine where money should be spent to accomplish these goals, it's important to pay close attention to the census information released each year. It gives each data center important clues as to where the industry is headed, and how quickly they need to get there.


Setting Up A Disaster Plan For A Data Center

Few people are prepared for the chaos that can follow any type of disaster. Floods, fires, tornadoes, hurricanes, and even heavy rainstorms can damage structures and belongings beyond repair. Most of the time there is little or no warning that a disaster will occur, and without a disaster recovery plan in place, minimizing the damage becomes difficult. This precaution is especially important for a data center, where large amounts of expensive equipment and irreplaceable information may be stored. Creating a basic disaster plan for your data center is a straightforward process if you know where to start.

 

Assess The Risks

What types of risks does your data center face on a daily basis? A center in the middle of Arizona isn't likely to deal with a hurricane, but a fire or monsoon is a real possibility. California data centers may not see heavy snowfall, but they must be prepared for floods and earthquakes. Before you can prepare for any disaster, you must determine which disasters your data center actually faces.

Along with natural disasters, there are man-made disasters that can happen with little warning. Fires may result from an electrical short, equipment may be damaged by theft or burglary, and any number of other man-made incidents may occur. Data centers in all parts of the world should be prepared for these untimely events.

 

Within an operational risk assessment, examine the following information (a simple checklist sketch follows the list):

• The location of the building

• Access routes to the building

• Proximity in relation to highways, airports, and rail lines

• Proximity to storage tanks for fuel

• How power to the data center is generated

• Details of the security system

• Any other critical systems that may shut down in the event of a disaster
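One rough way to keep track of these items is a checklist structure like the hypothetical Python sketch below; the item names mirror the list above, while the function and the example findings are purely illustrative.

# Hypothetical checklist for the operational risk assessment items above.

RISK_ASSESSMENT_ITEMS = [
    "Location of the building",
    "Access routes to the building",
    "Proximity to highways, airports, and rail lines",
    "Proximity to fuel storage tanks",
    "How power to the data center is generated",
    "Details of the security system",
    "Other critical systems that may shut down in a disaster",
]

def assessment_status(findings: dict) -> None:
    """Print which checklist items have documented findings and which remain open."""
    for item in RISK_ASSESSMENT_ITEMS:
        note = findings.get(item)
        box = "x" if note else " "
        print(f"[{box}] {item}" + (f" -- {note}" if note else ""))

# Illustrative, partially completed assessment.
assessment_status({
    "Location of the building": "Flood-plain map reviewed; site sits above the 100-year level",
    "Details of the security system": "Badge access and CCTV coverage documented",
})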

 

Assessing the risks is the first step in creating a contingency plan that protects the building, the information, the equipment, and the employees when the unthinkable happens.

 

During the risk assessment, do the following:

• Include all IT groups to guarantee that all departments have their needs met in the event of an emergency.

• Obtain a list of all data center assets, resources, suppliers and stakeholders.

• Create a file of all important documents regarding the infrastructure, such as floor plans, network diagrams, etc.

• Obtain a copy of any previous disaster plans used for the particular data center.

 

Once all relevant information has been gathered regarding the data center, the design process can begin.

 

Preliminary Steps For Disaster Planning

The first step in creating a disaster plan for a data center is to consult with all management within the center to identify the threats that pose the greatest risk. These might include human error, system failure, a security breach, fire, and many other things depending on the individual center.

The second step is to determine, with the help of other management professionals, where the most vulnerable areas of the data center are located.

Next, study the history of any malfunctions the data center has faced and how each disaster was handled.

It's also important to determine exactly how long the data center can run without power before the situation becomes critical (a back-of-the-envelope runtime sketch follows these steps).

Next, review the current procedures for handling an interruption to the data center's power supply, and confirm when those procedures were last tested by the appropriate individuals.

Identify the building's emergency teams and review their emergency training to determine whether additional training or updates are needed.

Finally, identify the emergency response capabilities of each of the center's vendors.
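The runtime question noted above often comes down to comparing the critical load against the on-site battery and generator capacity. A back-of-the-envelope sketch, using made-up figures and deliberately treating the result as an upper bound:

def battery_runtime_minutes(usable_battery_kwh: float, critical_load_kw: float) -> float:
    """Rough UPS bridge-time estimate: usable stored energy divided by the load.
    Real runtime is shorter at high discharge rates, so treat this as an upper bound."""
    return (usable_battery_kwh / critical_load_kw) * 60

# Illustrative numbers only.
runtime = battery_runtime_minutes(usable_battery_kwh=50, critical_load_kw=200)
generator_start_s = 90  # assumed time for standby generators to pick up the load
print(f"Estimated battery bridge: {runtime:.0f} min; generator start target: {generator_start_s} s")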

 

Developing A Data Center Disaster Recovery Plan

When compiling risk-assessment information, no stone should be left unturned. The more complete the information, the more accurate and successful the disaster recovery plan will be; a plan built on inaccurate or incomplete information will be largely ineffective.

The next part of a disaster recovery plan is a gap analysis report that identifies the differences between the current emergency plan and what the new plan needs to be. During this process, every change should be clearly identified and listed so that potential problems can be addressed efficiently. Include the total investment required to make the changes, along with recommendations from the appropriate professionals on how to implement each one. Once the report is complete, have each member of management read it and choose which recommended actions will be put into place. Every manager should have input into which changes are made, and reaching agreement may require more time at the drawing board.
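At its simplest, a gap analysis of this kind compares the controls in the current plan with those the new plan calls for and totals the estimated cost of whatever is missing. A hypothetical sketch, with invented control names and cost figures:

# Hypothetical gap analysis: current controls vs. the proposed plan.

current_plan = {"nightly backups", "UPS maintenance contract", "fire suppression"}

proposed_plan = {                  # control name -> estimated cost to add (USD)
    "nightly backups": 0,
    "UPS maintenance contract": 0,
    "fire suppression": 0,
    "off-site replication": 120_000,
    "generator fuel contract": 15_000,
    "quarterly failover test": 8_000,
}

gaps = {name: cost for name, cost in proposed_plan.items() if name not in current_plan}

print("Gaps to close:")
for name, cost in sorted(gaps.items()):
    print(f"  - {name} (est. ${cost:,})")
print(f"Total estimated investment: ${sum(gaps.values()):,}")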

Once the recommendations are in place and each member of management agrees that their department's needs are met, it's time to implement the changes for your critical assets. Hardware and software, networks, and data storage should all be addressed in this step to ensure that equipment is protected and that information can be recovered in the event of a disaster. Once the changes are implemented, run tests to determine whether the recovery assets and plans are functioning properly.
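Those tests are easier to track if each drill is timed against an agreed recovery target. A minimal sketch, where restore_fn stands in for whatever procedure actually restores a system from backup:

import time

def timed_recovery_test(name: str, restore_fn, target_minutes: float) -> bool:
    """Run a restore routine, time it, and report whether it met the target."""
    start = time.monotonic()
    restore_fn()
    elapsed_min = (time.monotonic() - start) / 60
    passed = elapsed_min <= target_minutes
    print(f"{name}: {elapsed_min:.1f} min (target {target_minutes} min) -> "
          f"{'PASS' if passed else 'FAIL'}")
    return passed

# Illustrative stand-in for a real restore procedure.
timed_recovery_test("customer-db restore drill", lambda: time.sleep(1), target_minutes=30.0)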

If the updates prove functional and successful at recovering equipment and data, it's time to update all disaster recovery documentation in company handbooks and policy manuals. Because technology is constantly changing and the needs of data centers are always evolving, the disaster plan should be updated regularly. Doing this well requires keeping an accurate record of previous procedures and how well they worked.

 

The Next Disaster Recovery Plan Update

Once a new recovery plan is in place, there is no time to relax. Changes in the plan should constantly be on the minds of management personnel, and the next update to the system and process should be scheduled before the committee adjourns.

When designing a disaster recovery plan, keep the documentation as simple as possible: it is easier to stay organized, and small but important details are less likely to be overlooked. Updating a plan doesn't require a complete overhaul of the system; instead, make steady, incremental changes to keep the equipment and information housed in the data center protected.

 
