Tape drives may have been a backup staple for decades, but they are quickly losing ground to disk storage. Still, tape has evolved over time and can play a valuable role in certain instances. Tape drives offer a number of benefits, not least of which is that tape is less expensive than disk backup. Because both tape and disk are used for backup, it seems only logical to choose the cheaper method, right? Well, it depends. To be fair, we should fully examine all of tape’s benefits and drawbacks. In addition to being the cheaper option, tape drives provide very high-capacity storage. Tape systems are also more energy efficient than disk options because disk drives remain on and running at all times, even when the stored data is rarely accessed. With tape, once the data is written, the cartridge can be powered off and stored elsewhere. This means that tape not only consumes less energy and requires less cooling but is also less prone to failure.
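To make the cost trade-off above concrete, here is a minimal sketch comparing yearly tape and disk backup costs. All figures (media price per TB, idle power draw, electricity rate, five-year amortization) are hypothetical placeholders, not vendor data; substitute your own pricing and measured power numbers.

```python
# Illustrative comparison of tape vs. disk backup costs.
# All figures below are hypothetical example values.

def annual_backup_cost(capacity_tb, cost_per_tb, watts_when_powered,
                       hours_powered_per_year, price_per_kwh=0.12):
    """Rough yearly cost: media cost amortized over 5 years, plus energy."""
    media_cost = capacity_tb * cost_per_tb / 5  # 5-year amortization
    energy_cost = watts_when_powered / 1000 * hours_powered_per_year * price_per_kwh
    return media_cost + energy_cost

# Tape: cheap media, powered off except during backup/restore windows.
tape = annual_backup_cost(100, cost_per_tb=10, watts_when_powered=0,
                          hours_powered_per_year=100)

# Disk: pricier media, spinning 24/7 (8,760 hours per year).
disk = annual_backup_cost(100, cost_per_tb=30, watts_when_powered=600,
                          hours_powered_per_year=8760)

print(f"tape: ${tape:.2f}/yr, disk: ${disk:.2f}/yr")
```

Even with made-up numbers, the shape of the result matches the article's point: the disk estimate is dominated by its always-on energy cost, which tape avoids entirely.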
While it may sound like tape is clearly winning over disk, tape drives certainly have their drawbacks. If you need to retrieve data from a tape drive, be prepared for an arduous endeavor. The newest tape drives have improved data recovery, but older ones remain difficult, and even the best tape drives are simply slower at retrieving data than disk drives. When time is money and uptime needs to be maximized, every second of recovery counts. Additionally, you cannot simply set a tape on a shelf and forget about it. Tapes must be kept in a safe, dust-free environment or they may run into issues of their own.
So, what advantages do disk drives provide over tape, and are they better? The debate rages on, but to properly compare the two we must look at the advantages of disk. One of the biggest is rapid recovery: disk drives are often kept in house and always online, so retrieving data is far faster than with tape. But because disk drives stay in house and keep running, they take up more space and use more energy, which means greater overhead expense for businesses. Disk drives also cost more up front and more to maintain, which can be a tricky thing to negotiate with your CFO when discussing the IT budget. Additionally, disks are prone to their own problems, including mechanical failure and accidental overwriting or reformatting of data, which can cause a lot of headaches. How you store your data, what data you are storing, and what your specific recovery needs are will largely determine which option is best for you. Neither option is the clear winner, the debate will rage on, and the choice between disk and tape will remain a case-by-case decision.
Loss of Power
For any data center, maintaining uptime in a world fraught with potential hazards is the highest priority. Today, most data centers try to take advantage of every possible option to ensure uptime is maximized in the event of a problem. A reliable UPS (uninterruptible power supply), along with adequate redundancy, can significantly improve uptime. But in a world where even a few seconds of downtime can be extremely costly and problematic, redundancy and a good UPS may still not be enough. The problem lies in the rate at which technology evolves. In most other industries, a backup plan or emergency supply can be determined once, maintained, and executed should it ever be needed. In the technology industry, what would have worked six months ago for your data center may now be outdated or insufficient.
A data center uses power 24/7, which means its UPS system is also tasked with maintaining power 24/7. Reducing power usage and improving efficiency, even by a small amount, can significantly lighten the load on your UPS system. Any data center can benefit from improved energy efficiency for more than one reason, but relieving the UPS is an important one. Data centers have a life cycle and change rapidly, so it is important to constantly monitor and audit energy usage in order to determine the best UPS system for proper redundancy and power supply. As more and more power is needed, UPS units are often added to extend power supply protection. The thing is, UPS capacity is often not fully utilized, so there is significant energy waste. If your UPS units are becoming outdated and you are constantly adding more backup power supplies, it may be time to consolidate and upgrade them to improve data center energy efficiency and provide better backup power protection. It is always better to consolidate or upgrade your data center’s UPS before it fails rather than after, because loss of data and uptime can be avoided. Consolidation or upgrades also often bring a significant increase in efficiency, which means the investment will more than pay for itself very quickly. While a full overhaul of your UPS may not be necessary, aging equipment will gradually begin to fail, so the sooner you consolidate and upgrade, the better. Don’t overtax your UPS and cross your fingers; make the changes before problems arise so that uptime, ultimately the most important thing to clients, can be maximized.
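The underutilization problem described above is easy to quantify. The sketch below totals load against capacity across a small hypothetical UPS fleet; the unit names, kW figures, and the ~40% rule-of-thumb threshold are illustrative assumptions, not a standard.

```python
# Sketch: estimating UPS fleet utilization to spot consolidation candidates.
# Capacities and loads (in kW) are hypothetical example values.

ups_units = {
    "UPS-A": {"capacity_kw": 80, "load_kw": 22},
    "UPS-B": {"capacity_kw": 80, "load_kw": 18},
    "UPS-C": {"capacity_kw": 60, "load_kw": 12},
}

total_capacity = sum(u["capacity_kw"] for u in ups_units.values())
total_load = sum(u["load_kw"] for u in ups_units.values())
utilization = total_load / total_capacity

print(f"fleet utilization: {utilization:.0%}")

# Assumed rule of thumb: sustained utilization well below ~40% suggests
# the fleet could be consolidated onto fewer, right-sized units
# (while still leaving headroom for redundancy).
if utilization < 0.40:
    print("Low utilization -- consider consolidating UPS units.")
```

Here three units carrying 52 kW against 220 kW of capacity run at roughly 24% utilization, exactly the kind of waste the article says consolidation can eliminate.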
The turn of the seasons brings about many changes. We adjust the thermostats in our homes, and similarly we must control the environment in a data center. But it is not just about accommodating the varying temperature and adjusting the thermostat. In a data center, humidity must also be controlled to keep equipment functioning properly. While temperature and cooling are often the focus, humidity control cannot be overlooked. There is a range of acceptable humidity within which most data centers should operate, and straying outside that range invites problems.
Data centers must constantly monitor the humidity level in their rooms. When a data center is too humid, condensation will inevitably appear, and condensation on the many electrical components in a data center can short them out. Shorted equipment is a big problem for any data center. Conversely, when humidity is too low, ESD, or electrostatic discharge, can occur. And yes, it is as frightening as it sounds. ESD delivers a static electricity shock, and in a data center that shock can shut electrical equipment down completely or even damage it outright. All of these scenarios lead to downtime, and as any data center manager knows, downtime costs money and leads to a lot of frustration, even if it only lasts a moment. For this reason it is critical that data centers implement a humidification system. Once a humidification system is in place, it must be maintained on a regular basis to ensure it works year round. Like any other equipment in the data center, humidification systems can gradually wear down over time or encounter seasonal glitches right when they are needed most. An annual, or better yet semi-annual, check of the humidification system will help ensure it is working when it matters most so that mission-critical equipment stays protected. Humidity tends to decrease during the colder months and increase during the warmer months, so adjustments for those seasonal changes can be made in advance. Around 40% relative humidity is the recommended level for a data center, but the acceptable range is much larger, between 20% and 80%. Most experts say that staying close to 40% is truly ideal: equipment will be better protected and the heating and cooling system will not need to work as hard to maintain appropriate temperatures.
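The thresholds above translate directly into a simple monitoring check. This sketch classifies a relative-humidity reading using the article's numbers (~40% ideal, 20-80% acceptable); the ±10-point "near ideal" band and the message wording are assumptions for illustration.

```python
# Minimal humidity-check sketch using the ranges discussed above:
# ~40% relative humidity as the ideal, 20-80% as the acceptable band.

IDEAL_RH = 40.0
ACCEPTABLE_RANGE = (20.0, 80.0)

def humidity_status(rh_percent):
    """Classify a relative-humidity reading for alerting."""
    low, high = ACCEPTABLE_RANGE
    if rh_percent < low:
        return "ALERT: too dry -- electrostatic discharge risk"
    if rh_percent > high:
        return "ALERT: too humid -- condensation risk"
    if abs(rh_percent - IDEAL_RH) <= 10:  # assumed comfort band
        return "OK: near ideal"
    return "WARNING: within range but drifting from ideal"

for reading in (15, 38, 65, 85):
    print(reading, "->", humidity_status(reading))
```

A reading of 15% flags the ESD risk, 85% flags the condensation risk, and mid-range readings pass, mirroring the two failure modes described in the paragraph.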
It is the classic dilemma for data centers: move or renovate? When technology moves a mile a minute and everything from applications to infrastructure is constantly evolving, a data center must evolve along with it or find itself outdated, inefficient, or in need of a move. But how do you know when it is time to renovate? It is important for data center managers to keep a good grasp on energy efficiency and infrastructure with a well-functioning DCIM (data center infrastructure management) plan. That way, data center managers stay ahead of the game and can anticipate major problems before they arise. Moving a data center is a major undertaking, so watch for the signs below and consider renovating your data center rather than moving.
Three Signs It Is Time to Renovate Your Data Center
- Technology Lag
- Data centers are nothing if not technologically driven, so if the technology in a data center is behind the times it is bound to create a variety of problems. From application and infrastructure compatibility issues to all-around inefficiency, a technology lag is a sure sign that it is time to renovate a data center.
- No Space
- One of the biggest problems any data center can run into is a lack of space, and unfortunately it happens all too often. You add a rack here or there, some more infrastructure, and all of a sudden you realize that you have run out of space or soon will. You are faced with the classic data center dilemma: renovate or move? Moving is incredibly costly and tricky, especially while trying to maintain uptime. So, rather than move, many data centers should consider renovating so they can continue to use their existing location while increasing their usable space.
- Energy Bills Are Skyrocketing and So Is the Temperature
- If you feel like you are walking on the surface of the sun as you move around your data center, that is a big red flag. And if this is happening, you are probably noticing that energy consumption, and thus energy bills, are increasing. If you have not implemented proper heating and cooling techniques to make your data center as energy efficient as possible, it is time to stop what you are doing and do so. Data centers have a lot of server racks, and server racks generate a lot of heat. Without proper cooling solutions and energy management a data center will not run properly, so the sooner you renovate to improve heating and cooling, the better.
Data center energy efficiency is at the forefront of hot topics for data centers. And, for good reason. Data centers use a truly astonishing amount of energy each year. The Natural Resources Defense Council (NRDC) noted just how much energy data centers use, “In 2013, U.S. data centers consumed an estimated 91 billion kilowatt-hours of electricity, equivalent to the annual output of 34 large (500-megawatt) coal-fired power plants (and, the NRDC notes, the equivalent of enough electricity to power all the households in New York City twice over). Data center electricity consumption is projected to increase to roughly 140 billion kilowatt-hours annually by 2020, the equivalent annual output of 50 power plants, costing American businesses $13 billion annually in electricity bills and emitting nearly 100 million metric tons of carbon pollution per year.” Improving energy efficiency in a data center is incredibly important but can seem like a daunting task for many data center managers who often are not even certain where to begin, if they have the budget to make changes, or if the changes they make will truly make an impact.
One of the first and most practical ways to begin improving energy efficiency is to take a real look at energy usage. What is using the most energy, should it be using that much, and can anything be eliminated? Often, energy is wasted on ghost infrastructure or outdated, energy-draining equipment. But as many data center managers know, it can be difficult to keep track of all the infrastructure in a data center or to know what is using the most energy or using it inefficiently. That is why a good DCIM plan is important: it lets data center managers work with current information rather than outdated information and make well-informed decisions going forward. Once you have sufficiently audited data center energy usage and can make well-informed decisions for improvement, move on to the next step: make immediate changes to improve energy efficiency while also planning long-term improvements. Long-term improvements are incredibly important, and it is wise to look at how to remain sustainable in the future, but while making decisions for the future you can make immediate changes such as hot aisle/cold aisle arrangements or other containment options that may help improve energy efficiency. While other improvements are more difficult or costly to implement, containment arrangements can be executed relatively quickly and will make a big impact. While making immediate changes, it is important to get budgetary approval for bigger ones and, once approved, begin moving forward with changes that will keep your data center efficient in the future. This most likely means upgrading equipment to the most current, energy-efficient options. All equipment has a lifespan, and once it gets a bit old it will likely become energy inefficient.
If you have budgetary approval for heating and cooling improvements as well, improving cooling capability is a great choice because cooling is typically one of the biggest expenses in a data center. Lastly, explore green options like making fuller use of cloud storage or cooling with outside air so that you can be as energy efficient as possible now and in the future.
Data center energy consumption is a major topic of conversation. Data centers are among the largest consumers of energy in the country and in the world; collectively their consumption rivals the output of dozens of power plants, so it must be managed carefully and optimized whenever possible. But the CFO may be difficult to get on board when it comes to energy efficiency measures. The financial investment needed to improve energy efficiency can often seem astronomical. While it may be a significant budgetary expense, it will be worth it, and the task of the data center manager and IT head is to convince the CFO that it is necessary, and why. Energy consumption always seems to outpace efficient energy management, a never-ending headache for data center managers, because optimization is what allows a data center to be more eco-friendly and save money. While data centers of the past may have been energy guzzlers, today’s modern data center must be optimized for ideal power usage, and there needs to be room in the budget to make that happen.
To properly manage data center energy consumption, a thorough, evolving, and future-proof DCIM strategy must be in place. The infrastructure of a data center is, after all, a big portion of its energy use. PUE (power usage effectiveness, the ratio of total facility power to IT equipment power) should be tracked and noted because it is an easy way to show a CFO how efficiently, or inefficiently, your data center is using power. Increasing rack density and growing infrastructure tend to increase cooling needs, and suddenly energy is not being managed properly and is being wasted. Rack density increases because of the growing needs of the data center and the amount of data that must be stored. If your equipment is out of date, it is probably consuming far more energy than you realize. While updating equipment is a big financial endeavor on the front end, it pays for itself on the back end because the data center runs more efficiently and uses less energy. The next focus for data center managers when optimizing power usage is the data center’s physical setup. Is it arranged to make cooling as efficient as possible? If your data center is housed in an older building that has not been properly renovated, it probably is not. Consider efficient cooling methods that reduce the energy spent cooling an inefficient environment. The hot aisle/cold aisle technique is one way to improve cooling efficiency; other containment options, such as ceiling-ducted air containment and cold rooms, are also being used to keep data centers cool while maintaining energy efficiency. The CFO may not have much to do with the day-to-day operations of a data center, but a renovation or move cannot take place without budgetary approval from the CFO, so DCIM analysis and management must be done often so that a detailed picture of data center energy use can be presented and efficient change can begin.
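Since PUE is the headline number to show a CFO, here is a minimal sketch of the calculation: total facility power divided by IT equipment power, where 1.0 would mean every watt goes to IT load. The kW figures are invented for illustration.

```python
# Sketch: computing PUE (power usage effectiveness).
# PUE = total facility power / IT equipment power; 1.0 is the ideal.
# The kW figures below are hypothetical example values.

def pue(total_facility_kw, it_equipment_kw):
    """Return the power usage effectiveness ratio."""
    return total_facility_kw / it_equipment_kw

# Before efficiency work: half the facility's power goes to overhead
# (cooling, lighting, UPS losses) rather than IT equipment.
before = pue(total_facility_kw=1800, it_equipment_kw=900)

# After (hypothetical) cooling and containment upgrades shave overhead.
after = pue(total_facility_kw=1350, it_equipment_kw=900)

print(f"PUE before: {before:.2f}, after: {after:.2f}")
```

A drop from 2.00 to 1.50 in this example means 450 kW of overhead eliminated at constant IT load, the kind of before/after figure that makes the budget case concrete.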
Technology continues to deeply integrate itself into our everyday lives. Wearable technology is quickly becoming commonplace, something you see on everyone. Business Insider reports on just how rapidly wearable technology is growing, “In just a few years, there could be more people using wearable tech devices than there are in the US and Canada. In a note to clients on Monday — alongside initiation of Fitbit coverage — Piper Jaffray’s Erinn Murphy and Christof Fischer stated that “wearable technology will be the next generation of devices to transform how individuals consume and use information.” Murphy and Fischer estimate the wearable tech category will grow from 21 million units in 2014 to 150 million units in 2019, a 48% compound annual growth rate (CAGR). This growth will largely be fueled by wrist wearables like smart watches and fitness bands.” Typically, when you think of the cloud you probably think of data centers managing vast amounts of information, storing and analyzing data to better improve business practices and providing an important efficiency improvement for data storage. While all of these things are accurate, those kinds of things do not really directly touch the consumer in a tangible way. But, as technology continues to evolve and more and more wearable technology enters the marketplace, the cloud is quickly integrating itself into the consumer’s daily life.
When one looks at something like Google Glass, the Apple Watch, or the Fitbit, they probably do not immediately think of the cloud. But the cloud is an integral part of wearable technology. Why? Because people are not simply interested in cool technology; they want that technology to actually improve their lives, make things simpler, and provide a service. This cannot be accomplished without the cloud, because the cloud collects the data from these wearable devices, stores it, analyzes it, and then uses it for the benefit of the wearer. This is what makes wearable technology so exciting and desirable to consumers. Wearable technology, in essence, performs a service: it collects data and then stores, analyzes, or uses it in a beneficial and helpful way. These technologies are not only for entertainment and personal use but for the workplace as well, and the fact that wearables can improve careers and facilitate better work makes them even more valuable. The better the cloud service and the more accurate the data collection and analysis, the better the overall product will be, and thus the more successful and marketable. Therefore, it is critical that quality cloud services work closely with wearable technology as the world becomes more and more dependent on the services and features wearables provide.
Few things in recent history point to the need for a comprehensive data center disaster recovery plan more clearly than Hurricane Sandy. When disaster struck, many data centers were unprepared and ill-equipped, which led to significant downtime that ultimately cost millions of dollars. Data centers can only operate for so long under a false sense of security before a disaster strikes and sudden panic sets in. A detailed disaster recovery plan, complete with multiple contingencies, must be in place before a disaster ever strikes so that, should it happen, immediate action can be taken.
To begin formulating a disaster recovery plan, you must first identify all of your critical systems. Once you have identified them, you can properly determine how best to protect them in the event of a disaster. To prepare properly, a detailed inventory of infrastructure, along with a comprehensive understanding of it, must be routinely kept. When DCIM is lagging and knowledge of infrastructure is lacking or out of date, a disaster will become a major problem for a data center. Along with detailed knowledge, a thorough backup must be in place as well. Redundancy in a data center protects against lost data not only day to day but in the event of a disaster. Additionally, by taking advantage of the cloud, data centers can virtually protect information, keeping it safe from disaster and providing a useful tool for data recovery. Ready, a national public service campaign that is “designed to educate and empower Americans to prepare for and respond to emergencies including natural and man-made disasters,” describes the critical elements of a disaster recovery plan for any data center: “Information technology systems require hardware, software, data and connectivity. Without one component of the ‘system,’ the system may not run. Therefore, recovery strategies should be developed to anticipate the loss of one or more of the following system components:
- Computer room environment (secure computer room with climate control, conditioned and backup power supply, etc.)
- Hardware (networks, servers, desktop and laptop computers, wireless devices and peripherals)
- Connectivity to a service provider (fiber, cable, wireless, etc.)
- Software applications (electronic data interchange, electronic mail, enterprise resource management, office productivity, etc.)
- Data and restoration
Once you have identified critical system components and how to best protect them, a comprehensive DR (disaster recovery) Plan should be formally written up and kept safe. All pertinent personnel should also be trained and prepared for how to act should a disaster occur. By doing so, you will be able to best protect your data center and clients if a disaster occurs and maximize uptime.
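One way to keep such a DR plan auditable is to track it as structured data, with each of the component categories from the checklist above mapped to a recovery strategy. The sketch below is a hypothetical minimal format, not a prescribed standard; the strategies listed are invented examples.

```python
# Sketch: a minimal critical-systems inventory for a DR plan, mapping
# each component category from the checklist to a recovery strategy.
# All strategies are hypothetical examples.

inventory = [
    {"component": "computer room environment",
     "strategy": "backup generator + secondary cooling loop"},
    {"component": "hardware",
     "strategy": "spare servers + vendor rapid-replacement contract"},
    {"component": "connectivity",
     "strategy": "dual service providers on separate physical paths"},
    {"component": "software applications",
     "strategy": "standby instances restored from versioned images"},
    {"component": "data and restoration",
     "strategy": "nightly offsite backup + cloud replica"},
]

# Audit check: every category should have at least one strategy on file.
missing = [e["component"] for e in inventory if not e["strategy"]]
print(f"components covered: {len(inventory)}, missing strategies: {missing}")
```

An empty `missing` list is the goal; any category that appears there is a gap in the plan that should be closed before, not after, a disaster.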
It is critical that any data center have a strong DCIM strategy in place. Even in relatively small data centers the amount of infrastructure that must be tracked, managed, maintained and more is enough to overwhelm any data center manager. There are a variety of ways to track and maintain infrastructure but traditionally it has been done manually. But, is this method truly the most effective way to collect data in a data center? And, if not, what is the solution?
One way many data centers have traditionally collected data is to give each piece of infrastructure a barcode. The barcode can then be scanned and inventoried to keep track of what is where. The problem with this method is that it does not provide much data, and the data it does provide is often outdated. It also offers little insight into how each piece of equipment within the infrastructure is using energy. Without that information, how can a data center manager accurately determine where energy is being used appropriately, where it is being underused, and where it is being overused? If an analysis is only completed every six months or once per year, that is a lot of time wasted when improvements could have been made, efficiency improved, and money saved. A more intelligent DCIM tool provides up-to-date information about equipment and energy consumption, offering real, actionable data that data center managers can use. Alerts can be configured based on predetermined thresholds, graphs and charts can be generated from gathered data, and more accurate action can take place. It is also a far more efficient means of collecting data than manual collection, saving time and many headaches for data center managers: an improvement any way you look at it. One of the last and most important reasons to implement automated data collection is that it can alert you to potential capacity problems far sooner than manual methods can. Rather than capacity problems sneaking up on managers, leading to major problems and the possible expense and frustration of having to relocate, automated DCIM shows where capacity stands long before outgrowth becomes a problem, so that any necessary consolidating, rearranging, or cloud storage can take place to save room.
Manual data collection may have been the way things were done in the past, but it is not the way of the future, and data centers that switch now to automated DCIM will enjoy the variety of benefits it has to offer.
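The threshold-based alerting described above can be sketched in a few lines: sensor readings are compared against predetermined limits instead of waiting for a periodic manual audit. The metric names, limits, and sample readings below are assumptions for illustration, not values from any particular DCIM product.

```python
# Sketch of automated DCIM-style alerting: readings are checked against
# predetermined thresholds continuously, replacing manual barcode audits.
# Metric names, limits, and readings are hypothetical examples.

THRESHOLDS = {
    "rack_power_kw": 8.0,       # alert above this draw per rack
    "inlet_temp_c": 27.0,       # upper inlet-temperature limit
    "capacity_used_pct": 85.0,  # early warning before space runs out
}

def check_readings(readings):
    """Return an alert message for every reading over its threshold."""
    alerts = []
    for metric, value in readings.items():
        limit = THRESHOLDS.get(metric)
        if limit is not None and value > limit:
            alerts.append(f"{metric}: {value} exceeds limit {limit}")
    return alerts

sample = {"rack_power_kw": 9.2, "inlet_temp_c": 24.5, "capacity_used_pct": 91.0}
for alert in check_readings(sample):
    print("ALERT --", alert)
```

Note the capacity threshold in particular: flagging usage at 85% gives managers the early warning the article describes, well before outgrowth forces a relocation.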
In today’s technology world everything is getting smaller and smaller. And, with the cloud, some things are simply disappearing! Many data centers struggle to stay within their capacity, but as expansion occurs, infrastructure changes, and more racks are needed, they often find themselves outgrowing their space. This leaves them in the frustrating position of having to relocate, which can be tricky and often leads to downtime. Racks-on-chips have become a hot topic of conversation because they could help slow this outgrowth. A rack-on-chip is essentially a server rack condensed, while maintaining processing power and capacity, into a much smaller package: a chip.
The critical component that makes racks-on-chips possible is increased chip capacity: chips must be able to store as much as server racks can. Racks-on-chips can be achieved, but data centers must be retrofitted to make them possible because racks-on-chips have to be networked together with optical circuit switching and electronic packet switching. Forward-thinking new data center builds will plan ahead and have these systems in place so that racks-on-chips will be possible. By switching to racks-on-chips, the number of server racks used in a data center can be dramatically reduced, and one of the biggest benefits, aside from freeing up physical space, is that it also significantly reduces the amount of energy needed for heating and cooling. This means a data center using racks-on-chips will be far more energy efficient, which is not only greener but will save a significant amount of money. While racks-on-chips are a potentially ideal solution to data centers’ space and energy issues, Motherboard explains that the technology is still in its early stages and much still needs to happen for the benefits to be truly realized, “But the meat and potatoes of Yeshaiahu Fainman and George Porter’s server-rack-on-a-chip vision is really about taking the existing framework for a server rack and recreating it at the nano-level. They say that miniaturizing all server components so that several servers can fit onto a computer chip would increase processing speed. Making circuit systems to support all these mini-components using advanced lithography is already feasible, but scientists have yet to realize nano-transceivers and circuit-switchers—the key components that transmit data. And while silicon chips are increasingly being used to transmit data-carrying light waves in fiber optic networks, efficiently generating light on a silicon chip is still early in its development.
The researchers offer some solutions, like including light generating nanolasers in the chip design.”