Wearable Technology Fueled By The Cloud

Technology continues to integrate itself deeply into our everyday lives.  Wearable technology is quickly becoming commonplace, something you see on everyone.  Business Insider reports on just how rapidly wearable technology is growing: “In just a few years, there could be more people using wearable tech devices than there are in the US and Canada. In a note to clients on Monday — alongside initiation of Fitbit coverage — Piper Jaffray’s Erinn Murphy and Christof Fischer stated that ‘wearable technology will be the next generation of devices to transform how individuals consume and use information.’ Murphy and Fischer estimate the wearable tech category will grow from 21 million units in 2014 to 150 million units in 2019, a 48% compound annual growth rate (CAGR). This growth will largely be fueled by wrist wearables like smart watches and fitness bands.”  Typically, when you think of the cloud you probably think of data centers managing vast amounts of information, storing and analyzing data to improve business practices and providing important efficiency gains for data storage.  While all of that is accurate, those functions do not directly touch the consumer in a tangible way.  But as technology continues to evolve and more and more wearable technology enters the marketplace, the cloud is quickly integrating itself into the consumer’s daily life.
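As a quick sanity check on the figures quoted above, the 48% CAGR can be reproduced from the Piper Jaffray unit estimates using the standard compound-annual-growth-rate formula; the snippet below is just that arithmetic, nothing more:

```python
# Sanity-check the quoted 48% CAGR for wearable shipments.
# CAGR = (end_value / start_value) ** (1 / years) - 1

start_units = 21_000_000   # estimated wearable units shipped in 2014
end_units = 150_000_000    # projected units shipped in 2019
years = 2019 - 2014

cagr = (end_units / start_units) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 48%, matching the estimate
```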

When one looks at something like Google Glass, the Apple Watch, or the Fitbit, the cloud is probably not the first thing that comes to mind.  But the cloud is an integral part of wearable technology.  Why?  Because people are not simply interested in cool technology; they want that technology to actually improve their lives, make things simpler and provide a service.  This cannot be accomplished without the cloud, because the cloud collects the data from these wearable devices, stores it, analyzes it and then uses it for the benefit of the wearer.  This is what makes wearable technology so exciting and desirable to consumers.  Wearable technology, in essence, performs a service because it collects data and then stores, analyzes or uses it in a beneficial and helpful way.  These technologies are not only for entertainment and personal use but for the workplace as well.  Because wearable technology can also improve careers and facilitate better work, it is even more valuable.  The better the cloud service and the more accurate the data collection and analysis, the better the overall product will be, and thus the more successful and marketable it will be.  Therefore, it is critical that quality cloud services work closely with wearable technology as the world becomes more and more dependent on the services and features wearable technology provides.

Posted in Technology Industry | Tagged | Comments Off

Does Your Data Center Have a Disaster Recovery Plan In Place?

Few things in recent history point to the need for a comprehensive disaster recovery plan for data centers more clearly than Hurricane Sandy.  When disaster struck, many data centers were unprepared and ill-equipped, which led to significant downtime that ultimately cost millions of dollars.  Data centers can only operate for so long under a false sense of security and protection before a disaster strikes and sudden panic sets in.  A detailed disaster recovery plan must be in place, complete with multiple contingencies, before a disaster ever strikes so that, should it happen, immediate action can be taken.

To begin formulating a disaster recovery plan you must first identify all of your critical systems.  Once you have identified them, you can properly determine how best to protect them in the event of a disaster.  To prepare properly, a detailed inventory of infrastructure, along with a comprehensive understanding of it, must be routinely kept.  When DCIM is lagging and knowledge of infrastructure is lacking or out of date, a disaster will become a major problem for a data center.  Along with detailed knowledge, a thorough backup must be in place as well.  Redundancy in a data center protects against lost data not only on a day-to-day basis but in the event of a disaster as well.  Additionally, by taking advantage of the cloud, data centers can protect information virtually, keeping it safe from disaster and making it a useful tool for data recovery.  Ready, a national public service campaign that is “designed to educate and empower Americans to prepare for and respond to emergencies including natural and man-made disasters,” describes the critical elements of a disaster recovery plan for any data center: “Information technology systems require hardware, software, data and connectivity. Without one component of the ‘system,’ the system may not run. Therefore, recovery strategies should be developed to anticipate the loss of one or more of the following system components:

  • Computer room environment (secure computer room with climate control, conditioned and backup power supply, etc.)
  • Hardware (networks, servers, desktop and laptop computers, wireless devices and peripherals)
  • Connectivity to a service provider (fiber, cable, wireless, etc.)
  • Software applications (electronic data interchange, electronic mail, enterprise resource management, office productivity, etc.)
  • Data and restoration”

Once you have identified critical system components and how to best protect them, a comprehensive DR (disaster recovery) Plan should be formally written up and kept safe.  All pertinent personnel should also be trained and prepared for how to act should a disaster occur.  By doing so, you will be able to best protect your data center and clients if a disaster occurs and maximize uptime.
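The component list above lends itself to a simple inventory check; here is a minimal sketch that maps each Ready.gov component to a documented recovery strategy and flags the gaps. The strategies shown are invented examples, and a real DR inventory would live in the DCIM tooling rather than a hard-coded dictionary:

```python
# Hypothetical sketch: track the Ready.gov system components and flag any
# that lack a documented recovery strategy. All strategy text is invented
# for illustration.

recovery_strategies = {
    "computer_room_environment": "failover to colocation site B",
    "hardware": "spare servers under 4-hour vendor SLA",
    "connectivity": "secondary fiber provider",
    "software_applications": None,   # gap: no strategy documented yet
    "data_and_restoration": "nightly offsite backup + cloud replication",
}

gaps = [name for name, plan in recovery_strategies.items() if plan is None]
print(f"Components missing a recovery strategy: {gaps}")
```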

Posted in Back-up Power Industry, Computer Room Design, data center equipment, Data Center Infrastructure Management, DCIM, Uninterruptible Power Supply, UPS Maintenance | Tagged , , | Comments Off

Stop Manually Collecting Data For Your Data Center!

It is critical that any data center have a strong DCIM strategy in place.  Even in relatively small data centers, the amount of infrastructure that must be tracked, managed and maintained is enough to overwhelm any data center manager.  There are a variety of ways to track and maintain infrastructure, but traditionally it has been done manually.  Is this method truly the most effective way to collect data in a data center?  And, if not, what is the solution?

One way that many data centers have traditionally collected data is to give each piece of infrastructure a barcode.  The barcode can then be scanned and inventoried to keep track of what is where.  The problem with this method is that it does not provide much data, and the data it does provide is often outdated.  Additionally, it offers little insight into how each piece of equipment within an infrastructure is using energy.  Without that information, how can a data center manager accurately determine where energy is being used appropriately, where it is being underused and where it is being overused?  If an analysis is only completed every six months or once per year, that is a lot of time wasted when improvements could have been made, efficiency improved and money saved.  A more intelligent DCIM management tool can provide up-to-date information about equipment and energy consumption, offering real, actionable data that data center managers can use.  Alerts can be arranged based on predetermined thresholds, graphs and charts can be generated from gathered data, and more accurate action can take place.  It is also a far more efficient means of collecting data in a data center than manual data collection, saving time and many headaches for data center managers.  One of the last and most important reasons to implement an automated data collection method is that it can alert you to potential capacity problems far sooner than manual methods.  Rather than capacity problems sneaking up on data center managers, leading to major problems and the possible expense and frustration of having to relocate, automated DCIM will show data center managers where capacity stands long before outgrowth ever becomes a problem, so that any necessary condensing, rearranging or cloud storage can take place to save room.
Manual data collection may have been the way things were done in the past, but it is not the way of the future, and data centers that make the switch to automated DCIM now will enjoy the variety of benefits it has to offer.
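The threshold-based alerting described above can be sketched in a few lines. The rack names, power readings and the 8 kW threshold below are all invented for illustration; a real DCIM tool would pull readings from sensors rather than a hard-coded dictionary:

```python
# Minimal sketch of threshold-based DCIM alerting, assuming the tool
# exposes per-rack power readings in kilowatts. All figures are invented.

POWER_ALERT_KW = 8.0   # predetermined threshold: alert above 8 kW per rack

readings_kw = {"rack-01": 5.2, "rack-02": 8.7, "rack-03": 7.9}

# Collect every rack whose current draw exceeds the threshold.
alerts = {rack: kw for rack, kw in readings_kw.items() if kw > POWER_ALERT_KW}

for rack, kw in alerts.items():
    print(f"ALERT: {rack} drawing {kw} kW (threshold {POWER_ALERT_KW} kW)")
```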

Posted in Computer Room Design, computer room maintenance, Data Center Infrastructure Management, DCIM | Tagged , | Comments Off

Will More Data Centers Switch to Racks-On-Chips?

In today’s technology world everything is getting smaller and smaller.  And, with the cloud, some things are simply disappearing!  Many data centers struggle to work within their capacity but, as expansion occurs, infrastructure changes and more racks are needed, they often find themselves outgrowing their space.  This leaves them in the frustrating position of having to relocate, which can be tricky and often leads to downtime.  Racks-on-chips have become a hot topic of conversation because they can help slow down overgrowth in a data center.  A rack-on-chip is essentially a condensed version of a server rack that maintains processing power and capacity but comes in a much smaller package: a chip.

The critical component that makes racks-on-chips possible is increased chip capacity, so that chips can store as much as server racks can.  Racks-on-chips can be achieved, but data centers must be retrofitted to make them possible because racks-on-chips have to be networked together with optical circuit switching and electronic packet switching.  Forward-thinking new data center builds will plan ahead and have these systems in place so that racks-on-chips will be possible.  By switching to racks-on-chips, the number of server racks used in a data center can be dramatically reduced, and one of the biggest benefits of this, aside from freeing up physical space, is that it also significantly reduces the amount of energy needed for heating and cooling.  This means that a data center using racks-on-chips will be far more energy efficient, which is not only greener but will save a significant amount of money.  While racks-on-chips are a potentially ideal solution for data centers’ space and energy issues, Motherboard explains that the technology is still in its early stages and much still needs to happen for the benefits to be truly realized: “But the meat and potatoes of Yeshaiahu Fainman and George Porter’s server-rack-on-a-chip vision is really about taking the existing framework for a server rack and recreating it at the nano-level. They say that miniaturizing all server components so that several servers can fit onto a computer chip would increase processing speed. Making circuit systems to support all these mini-components using advanced lithography is already feasible, but scientists have yet to realize nano-transceivers and circuit-switchers—the key components that transmit data. And while silicon chips are increasing being used to transmit data-carrying light waves in fiber optic networks, efficiently generating light on a silicon chip is still early in its development. The researchers offer some solutions, like including light generating nanolasers in the chip design.”


Posted in Computer Room Design, computer room maintenance, data center equipment, Data Center Infrastructure Management, DCIM | Tagged | Comments Off

Data Center Capacity Management Reporting and Analysis

DCIM, or data center infrastructure management, is incredibly important for any data center.  If infrastructure is not managed properly in a data center, many problems can arise.  Proper management is challenging for even the most experienced data center managers because technology is constantly evolving, power density is constantly added to racks and cooling needs shift as well.  One of the best ways to improve data center infrastructure management is to utilize software to collect information rather than collecting it manually.  DCIM software provides a central location to aggregate and analyze collected data so that performance and uptime can continue to be improved.

To properly manage capacity, data about power usage, rack space, cooling and more must be collected.  As this data is collected, data center managers will be able to see how much rack space is being used so that they can determine whether or not to add more density to a rack.  With this data, managers will know whether rack density is well within maximum capacity or too close to exceeding it.  If a rack is being underutilized, too much energy may be spent cooling it, and it is likely better to increase density or condense server racks to maximize space and efficiency.  This is important because proper management of rack density allows for improved energy efficiency and a reduction of downtime.  But, without sophisticated DCIM software, a data center manager may not even realize that rack density is being under- or over-utilized.  Often, what data center managers experience is that, over the life of a data center, systems are added and equipment is added or adjusted, and suddenly there is a general lack of knowledge about power density.  DCIM software helps prevent this by ensuring that data center managers, and support staff, all have the most current information about what is happening in a data center.  DCIM software will analyze not only rack utilization but power management as well.  As data is collected, data center managers can see how and where power is being used.  This will help pinpoint areas in need of improvement and allow managers to make changes as quickly as possible.  DCIM software can even be set up with automatic notifications so that the most current information is being provided at all times.  With such current information it is easy to micromanage or become overly concerned with fine details, but it is better to look at trends over time.  By analyzing charts and graphs to see trends, you will be able to tell whether something was a fluke or a consistent pattern.
Once consistent patterns are known, it is easier and more effective to make permanent changes and improvements to a data center so that efficiency and data management can be optimized.
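The fluke-versus-pattern distinction above can be illustrated with a toy example: compare individual readings against the average over a window rather than reacting to a single spike. The daily utilization figures are invented; a real analysis would use the DCIM software's historical data:

```python
# Sketch: distinguish a one-off spike from a consistent pattern by looking
# at a window of readings instead of a single data point. The sample
# utilization figures (percent of rack capacity) are invented.

daily_utilization = [62, 61, 64, 63, 95, 62, 63]  # day 5 is a spike

average = sum(daily_utilization) / len(daily_utilization)
consistently_high = all(u > 80 for u in daily_utilization)

print(f"Average utilization: {average:.1f}%")
print("Consistent pattern" if consistently_high else "Likely a fluke")
```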

Posted in Back-up Power Industry, Data Center Design, data center equipment, Data Center Infrastructure Management, DCIM | Tagged , , , | Comments Off

The Strain of High Density Devices Without Proper Cooling Methods

Many data centers operate with high density server racks for a variety of reasons.  Perhaps they want to maximize their space by increasing the density of their racks so that they do not outgrow their data center walls.  Perhaps it is a strategy to concentrate power, and therefore heat, in specific areas as an attempt at improved cooling efficiency.  No matter the reason, many data centers run extremely high power densities, sometimes without even realizing it, and that can make a data center incredibly challenging to cool appropriately.  This dilemma is often the source of outages and downtime, two things no data center wants to experience.  No doubt about it, the pressure is on data center managers to run high density data centers that are also highly efficient and able to sustain their power needs.  No small or simple task by any means.

So, what is a data center manager to do?  How can you achieve high density, high efficiency and high efficacy all at once?  Is it even possible?  Yes, but data center managers must make it their priority to have a full working knowledge of their data center, power density capacities, best practices when it comes to cooling and more and then devise a strategy with contingencies so that if the unexpected arises there is a plan in place.

First and foremost, a data center manager must know a data center’s maximum rack power density capacities, because without this knowledge a data center will always be playing catch-up.  Catch-up leads to outages and downtime, so it is simply unacceptable.  Once a data center manager knows the maximum power density capacity, they can work to ensure it is not exceeded.  This alone will prevent a significant number of problems.  Additionally, by understanding maximum capacity a data center manager can make appropriate cooling arrangements.  They will also be able to accurately determine what backup power and redundancies are needed to mitigate downtime.  With a full working knowledge of data center power densities and cooling needs, alternative cooling methods can be explored to determine whether or not they would be more efficient or effective.  If power density maximums are being pushed or exceeded, additional server racks or space may be needed, because pushing or exceeding the limits may temporarily save space but will create major problems going forward.  Outages, downtime, damage and more could be experienced if power density maximums are exceeded, so the temporary solution was never really a solution at all.  Once a strategy and plan is in place, the entire data center team and all other appropriate personnel should be well informed and trained, because without communication problems will continue to arise.  High density racks are a good strategy for data centers that want to maximize space and efficiency, but for the strategy to be effective, a full working knowledge of the data center’s abilities and capacities must be in place and appropriate cooling methods must be executed.
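The capacity check described above amounts to comparing each rack's measured draw against its rated maximum, ideally with a safety margin so problems surface before the limit is hit. A minimal sketch, with all rack names, readings and limits hypothetical:

```python
# Sketch of a power density headroom check: flag racks drawing close to
# their rated maximum. All figures are invented for illustration.

RACK_MAX_KW = 10.0     # rated maximum power density per rack
SAFETY_MARGIN = 0.9    # treat 90% of rated max as the working limit

racks_kw = {"row-a-01": 6.5, "row-a-02": 9.4, "row-b-01": 8.0}

over_limit = [rack for rack, kw in racks_kw.items()
              if kw > RACK_MAX_KW * SAFETY_MARGIN]

print(f"Racks pushing their density limit: {over_limit}")
```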

Posted in Back-up Power Industry, computer room construction, Computer Room Design, data center cooling, Data Center Design, data center equipment, Data Center Infrastructure Management, DCIM, Power Distribution Unit | Tagged , , | Comments Off

Should Data Centers Attempt to Achieve a Single Pane of Glass for DCIM?

Data centers run a lot of software, applications, and more in order to perform all of the necessary tasks a data center completes.  For a data center to run properly there is not just one program to get everything done, there are many.  And, over time, more and more are added as technology and data center priorities change.  Data center infrastructure management, or DCIM, is a top priority for data center managers because without access to and analysis of the most current data regarding data center infrastructure, programs and applications, devastating problems can and will arise.  A data center manager who is not well informed may not know what rack density maximums are and may not realize they are about to be exceeded until it is too late.  What happens when it is too late?  Power outages and downtime.  Downtime can be devastating for any size business, so it must be avoided at all costs.  Due to the urgency and importance of effective DCIM, many data centers and managers are on a quest to get their DCIM on a “single pane of glass.”  If they can see everything they need to see on a single pane of glass, they do not need to change applications or go rooting around for various data; it is all in one easy-to-access place.  But can it really be achieved?  Can everything a data center manager needs to know be located on a single pane, or is this a fruitless quest?

A single pane of glass for DCIM can be achieved but, most likely, application consolidation will need to take place.  Consolidation is a cost-effective and efficient way to achieve a single pane of glass.  There are service providers who can consolidate applications and achieve a unified framework, and thus a unified view of applications.  A single pane of glass may be able to fit everything that a data center manager needs to monitor operations, but can they maintain and optimize their data center from a single pane?  If not, how effective is it really?  By hiring a service provider experienced in consolidation and optimization, you can work closely with them for a smooth transition that does not lead to more headaches, confusion and miscommunication.  If the approach is executed effectively, all of the software tools can utilize data to offer the most current information about the functioning of a data center, but in attempting the transition some data centers discover that compiling so much data into a single database for analysis often leads to errors.  So while a single pane of glass is possible, it may be difficult to achieve; with the help of an experienced professional, time and patience, your data center manager can get the view they hope for and improve DCIM.

Posted in Data Center Infrastructure Management, data center maintenance, DCIM | Tagged , | Comments Off

Environmental Impact of Data Centers

Data centers perform an invaluable job, but that comes at a cost.  Data centers use a significant amount of energy and, while many are making efforts to be more energy efficient, the fact of the matter is that there are still many data centers that consume a dramatic amount of energy each year.  This, of course, is not only costly but unfriendly to the environment.  The digital world has some very real and tangible energy side effects.  While it may not all be felt immediately, there is an environmental impact occurring as a result of data centers.  Time discussed just how much energy is being used by data centers and what to expect going forward: “IT-related services now account for 2% of all global carbon emissions, according to a new Greenpeace report. That’s roughly the same as the aviation sector, meaning all those Netflix movies the world is streaming and the Instagram photos they’re posting are the energy equivalent of a fleet of 747s rumbling for takeoff. Unless something is done to green the cloud, we can expect those emissions to grow rapidly—the number of people online is expected to grow by 60% over the next five years, pushed in part by the efforts of companies like Facebook to expand Internet access by any means necessary. The amount of data we’ll be using will almost certainly increase too. Analysts project that data use will triple between 2012 and 2017 to an astounding 121 exabytes, or about 121 billion gigabytes. ‘If you aggregated the electricity use by data centers and the networks that connect to our devices, it would rank sixth among all countries,’ says Gary Cook, Greenpeace’s international IT analyst and the lead author on its report. ‘It’s not necessarily bad, but it’s significant, and it will grow.’”

The fact of the matter is that data centers consume energy, a lot of energy, and that will not change.  But there are ways to reduce energy consumption and find cleaner ways to use energy that leave less of an environmental footprint.  Most data centers use a lot of energy, but they also waste a lot of energy, and that is simply unacceptable.  Energy efficiency must be made a priority in a data center and a plan of action must be implemented immediately, on a full scale.  Consolidation must take place: fewer servers mean less space and less to cool.  And with a high density rack, cooling can be focused so that cooling efforts are not wasted.  Additionally, containment, such as hot aisles and cold aisles, is one way to help improve energy usage.  Another, greener option in terms of cooling is to house your data center in a cooler climate that can take advantage of natural cooling and reduce cooling needs within the data center.  Additionally, as more and more moves to the cloud, physical infrastructure can be reduced in data centers, which helps reduce cooling needs.  Data centers need to examine their energy usage and look not only for ways to reduce it but for green options that are more environmentally friendly, so that a sustainable data center future can be achieved.
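One standard industry metric for quantifying the waste described above (not named in the post, but widely used) is Power Usage Effectiveness, or PUE: total facility energy divided by the energy delivered to IT equipment. A PUE of 1.0 would mean zero overhead; the sample figures below are purely illustrative:

```python
# Illustrative PUE calculation. PUE = total facility energy / IT energy.
# The annual kWh figures below are invented for the example.

total_facility_kwh = 1_800_000   # IT load plus cooling, lighting, losses
it_equipment_kwh = 1_000_000     # energy delivered to IT equipment alone

pue = total_facility_kwh / it_equipment_kwh
print(f"PUE: {pue:.2f}")   # 1.80 here: 0.8 kWh of overhead per IT kWh
```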

Posted in Data Center Design | Tagged , , , | Comments Off

What To Consider During a Data Center Migration

Data center migration.  Those three words could probably send any data center manager running for the hills.  Alas, when a data center has outgrown its current facilities or has to be moved for any other reason, data center migration is a necessity.  A data center migration is a significant undertaking fraught with potential risks and hidden problems.  So, how does anyone ever accomplish it?  Through proper research, thorough planning that factors in contingencies, careful instruction to ensure that everyone is on the same page, and a lot of patience, a successful data center migration can be achieved.  When anticipating a data center migration there is no avoiding it: you simply have to face the fact that there will be surprises along the way.

For any data center, the primary concern with a migration is downtime.  Even seconds of downtime can be incredibly costly, so reducing or completely eliminating downtime is the name of the game.  While certain things can be moved during off-peak hours, some work may still occur during peak business hours.  For this reason it is very important to keep everyone well informed and on the same page at all times.  End users, support teams and anyone impacted by the migration should be informed of the timetable, schedule and anticipated plans regarding the migration.  Next, it is important to research what existing infrastructure there is, what legacy systems are in place, and what will be making the migration.  Once you have a thorough idea of what will be making the migration, it is important to anticipate what additional infrastructure and equipment will be needed in the new location so that a layout can be planned.  Plan a layout based on best practices in the new data center so that you can accommodate growth as needed.  Scalability is important for the lifespan of any data center, so while you are making the move it is an ideal time to ensure the new data center will be scalable.  When anticipating scalability you must see beyond current needs and standards and look forward to future rack density expectations.  With all of these things considered, you may also want to consider what option is best for efficient cooling.  Cooling a data center is often one of its biggest expenses, so when determining layout you may want to think about implementing hot aisles/cold aisles and whether other layouts might serve your data center better.  The final part of any data center migration is the migration itself.
For most migrations it is a good idea to enlist an experienced project manager with knowledge and expertise in data center migrations to ensure everything goes smoothly and that, once the migration has occurred, everything runs as it should.

Posted in Computer Room Design, Data Center Build, Data Center Construction, Data Center Design, Datacenter Design | Tagged , , , | Comments Off

How to Build An Immortal & Future-Proof Data Center

Designing and constructing a data center is no small undertaking.  Heating and cooling, energy efficiency, infrastructure management and scalability must all be considered.  A data center must be designed to meet current needs while being able to scale up or down in the future as needed.  The tricky part is that if you build a data center far bigger than current needs, it may not be energy efficient.  But if you build a data center that is just barely big enough, you may quickly outgrow your space as you expand and find yourself looking to move, which is costly and difficult.  So, how does one build the elusive immortal data center?  Do not fear, it can be achieved.

First things first: you have to determine the goals of your data center when designing an immortal data center.  One of the most important things to consider is scalability.  Without scalability, most data centers will outgrow themselves with time.  A data center must have dense server racks; do not run racks at only 50% capacity.  Increase the density of racks to maximize space.  But you cannot increase rack density without sufficient cooling options.  Some data centers prefer to employ hot aisles and cold aisles to heat and cool a data center more efficiently.  Creating hot aisles and cold aisles, or specific zones for certain infrastructure, can drastically improve energy efficiency and reduce energy expenses.  If hot aisles or cold aisles are not ideal for your data center, consider adding a high density room where cooling energy can be focused on the most demanding area and the rest of the data center can be kept at a warmer temperature.  This will help keep costs down while still efficiently cooling high density server racks.  Additionally, when designing an immortal, future-proof data center, it is wise to look at green and more efficient methods of cooling.  By implementing green methods you can save significantly over time while still effectively cooling a data center.  The last consideration for all data centers is the cloud.  The cloud is really the way of the future, and many data centers are moving more and more to the cloud to reduce physical infrastructure.  This helps free up additional room in a data center and reduce heating and cooling needs.  The cloud is infinitely scalable, so it offers immense potential for the future and is a significant tool to help any data center remain immortal.

Posted in Computer Room Design, Data Center Build, Data Center Construction, Data Center Design | Tagged , , , , | Comments Off