Should Data Centers Use Ceiling-Ducted Air Containment?

Data centers, with all the power they consume, generate a lot of heat, and cooling is one of the biggest expenses any data center faces.  For this reason, data centers are always developing new strategies to keep their facilities cool more effectively and efficiently while still performing their essential roles.  Each approach to cooling offers unique advantages, but every scenario is different, and what is most efficient for one data center may not be most efficient for another.

Hot air and cold air containment are two options for improving cooling efficiency in data centers.  Cold air containment strategies use a hot aisle/cold aisle configuration but enclose the cold aisles so that hot and cold air do not mix, which increases the effectiveness of cooling.  Yet even with hot aisle/cold aisle configurations, there are still additional ways to maximize efficiency.  Existing data centers can take advantage of the ceiling plenum to improve cool airflow: a dropped ceiling plenum can serve as a hot air return.  APC-Schneider Electric conducted studies of ducted systems and advised the following, “We used Schneider Electric’s EcoStream CFD software to study the airflow pattern and pressure in the ceiling plenum for passive ducted systems, and came to the following conclusions:

1. Cooling performance is strongly linked to the ceiling plenum pressure.

2. Ducted HACS can adequately cool much higher densities than individually-ducted racks at a given ceiling vacuum pressure.

3. Average cooling performance is a fairly weak function of plenum depth and rack density, with deeper plenums and lower density yielding somewhat better performance.

4. Cooling performance is moderately affected by drop-ceiling leakiness with tighter being generally better though typical plenums perform much closer to well-sealed than leaky.

5. Cooling performance is a strong function of cooling-to-IT airflow ratio with higher values being better.

By utilizing a ducted air containment system, a data center can improve both energy efficiency and reliability, two of the most important goals any data center can pursue.  APC-Schneider Electric concluded the following about ceiling-ducted air containment systems in White Paper 182, “Ducted air containment can simultaneously improve the energy efficiency and reliability of data centers. Since all ducted equipment, ducted cooling units, and the ceiling plenum function as a single entity, the use of CFD modeling is recommended for new deployments particularly when design constraints are close to the best-practice limits established here. In any case, deployment advice centers on ensuring an adequate and fairly uniform vacuum pressure in the ceiling plenum. This, in turn, can be achieved by providing sufficient ducted cooling airflow, creating a relatively “tight” ceiling system, employing deeper ceiling plenums, and sealing unnecessary leakage paths in racks and containment structures.”  These systems appear to work best in existing data centers, and because each data center is unique, the decision of whether or not to employ a ducted air containment system has to be made on a case-by-case basis.
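Conclusion 5 above turns on the cooling-to-IT airflow ratio. As a rough illustration of how that ratio is checked, here is a sketch using the standard rule of thumb CFM ≈ 3160 × kW / ΔT(°F); the load and supply figures are invented for the example and are not from White Paper 182.

```python
# Hypothetical sketch: cooling-to-IT airflow ratio for a ducted containment row.
# All load and supply figures below are illustrative examples.

def airflow_cfm(kw_load, delta_t_f=20):
    """Approximate airflow (CFM) a load moves, via CFM ~ 3160 * kW / delta-T(F)."""
    return 3160 * kw_load / delta_t_f

it_load_kw = 80                        # e.g., one contained row of ten 8 kW racks
it_airflow = airflow_cfm(it_load_kw)   # airflow the IT equipment pulls through
cooling_airflow = 14000                # airflow the ducted cooling units supply

ratio = cooling_airflow / it_airflow
print(f"IT airflow:       {it_airflow:.0f} CFM")
print(f"Cooling/IT ratio: {ratio:.2f}")
if ratio < 1.0:
    print("Under-supplied: ceiling plenum vacuum will likely be inadequate")
```

Per the conclusions quoted above, higher ratios generally improve cooling performance; a ratio below 1.0 means the plenum cannot maintain adequate vacuum pressure.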

Posted in data center cooling | Comments Off

IT Departments & Facilities Really Can Work Together

When it comes to energy usage, facilities and their IT departments are often at odds.  In most businesses today, energy efficiency is the name of the game, and many facilities have undertaken major initiatives to reduce energy usage through means such as more efficient HVAC and lighting.  IT departments tend to be the largest consumers of energy in any facility, but because of the nature of the work an IT department does, it is not always as simple to implement energy efficiency measures there as it would be in another department.  While you can control lighting and environment in a facility or data center, there is quite a bit more to be done if you want to truly reduce energy consumption in an IT department while still functioning at full capacity.

For facilities and IT departments to better cooperate toward improved energy consumption, certain measures must be taken.  An IT department or data center has to improve energy consumption while still performing its primary task – reliable IT operations and maximized uptime.  The first way to achieve this is a hot aisle/cold aisle rack arrangement, which manages airflow and minimizes cooling costs through an ideal configuration of racks.  An additional consideration is a modular, scalable UPS (uninterruptible power supply) system that can grow according to specific needs.  With a modular UPS system you can easily and cost-effectively increase your UPS power as needed over time, which results in improved efficiency.  There are also interesting and unique ways to make better use of outside air and thermal storage for an energy-efficient form of cooling.
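The modular UPS idea above can be made concrete with a small sizing sketch: as the IT load grows, you add power modules rather than replacing the whole UPS. The module size, loads, and N+1 redundancy policy below are example assumptions, not vendor specifications.

```python
import math

# Illustrative sketch of scalable UPS planning: how many plug-in power
# modules are needed as IT load grows, keeping one spare for N+1 redundancy.
# MODULE_KW and the load figures are hypothetical example numbers.

MODULE_KW = 25  # capacity of one UPS power module

def modules_needed(load_kw, spares=1):
    """Modules required to carry the load, plus `spares` redundant modules."""
    return math.ceil(load_kw / MODULE_KW) + spares

for load in (40, 90, 160):  # load growth over time
    print(f"{load:>4} kW IT load -> {modules_needed(load)} modules (N+1)")
```

The appeal is that the 40 kW day-one deployment buys only 3 modules instead of the 8 eventually needed, so the UPS runs closer to its efficient operating point throughout.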

When IT departments and data centers implement these best practices, data center energy consumption drops drastically.  In addition to these measures, facilities can also replace outdated, inefficient equipment with newer technologies capable of not only better data management but improved energy efficiency.  The facility’s DCIM (data center infrastructure management) platform blends information technology with facilities management and can monitor and assess a data center’s critical systems and energy usage and help guide future improvements.  All is not lost – IT departments and facilities do not have to be at odds.  Through collaboration and implementation of best practices, IT departments and data centers can reduce energy usage while still performing their primary functions effectively.
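One of the simplest numbers a DCIM platform tracks for this facilities/IT collaboration is PUE (power usage effectiveness): total facility power divided by IT power, with 1.0 as the ideal. A minimal sketch, with invented meter readings:

```python
# PUE = total facility power / IT equipment power.
# The kW readings below are made-up illustrations, not real measurements.

def pue(total_facility_kw, it_kw):
    """Power usage effectiveness; closer to 1.0 means less overhead."""
    if it_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_kw

print(pue(900, 500))  # most of the non-IT 400 kW is cooling and lighting
print(pue(600, 500))  # a far more efficient facility serving the same IT load
```

Tracking this ratio over time is one concrete way a facilities team and an IT department can measure their joint progress.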

Posted in Data Center Infrastructure Management | Comments Off

10 Surprises a Data Center Manager Might Encounter

For a data center or IT department manager, things are not always straightforward or uncomplicated.  In fact, over time, a manager is likely to encounter a surprise or two.  As time moves along, technology advances, density loads increase and the specific needs of data centers and IT departments evolve.  To minimize downtime and maximize the effectiveness and efficiency of a data center, it is best to anticipate potential surprises and try to mitigate them or prepare potential solutions should a problem arise.  Emerson Network Power released a list of 10 common surprises data center and IT managers encounter to help data centers and IT departments better anticipate and prepare.

Emerson Network Power’s 10 Common Data Center Surprises:

1. Those high-density predictions finally are coming true

  • Rack density is increasing and many data centers and IT departments are scrambling to find ways to handle such a high density.  With densities hovering around 7 kW per rack and expectations for them to grow each year (some are predicting 50 kW by 2025!), it is important to start preparing for higher density racks now rather than waiting until it is a real problem.

2. Data center managers will replace servers three times before they replace UPS or cooling systems

  • Bottom line – data center and IT department managers need to be prepared to scale to support future server needs.

3. Downtime is expensive

  • Downtime is not only frustrating but very expensive, which means that when it happens, no one is happy.  Gartner notes the high cost: “Based on industry surveys, the number we typically cite is $5,600 p/minute, which extrapolates to well over $300K p/hour.”

4. Water and the data center do not mix – but we keep trying

  • Water + expensive IT equipment = bad.  Data centers or IT departments that experience water damage or flooding damage experience major problems so every effort must be taken to prevent this from happening.

5. New servers use more power than old servers

  • New technology may be smaller, which means it takes up less space in a data center or IT department, but it has been shown to consume much more energy.  IT departments and data centers must prepare to support such high energy demands.

6. Monitoring is a mess

  • Managing different infrastructures is time consuming and confusing.  A good DCIM is a must.

7. The IT guy is in charge of the building’s HVAC system

  • There can be a communication gap between IT and Facilities.  This is not exactly news.

8. That patchwork data center needs to be a quilt

  • Mix and match data center components are out – full integration for maximum capability and efficiency is in.

9. Data center on demand is a reality

  • It is no longer necessary to have complicated designs for a data center that take many long hours to bring to reality.  A data center can be scaled to the immediate needs and through many plug-and-play options a data center can be created in an instant, wherever there is available space, based on immediate needs.

10. IT loads vary – a lot

  • Have you heard of a website hosting a sale that draws so much traffic that it experiences a crash?  IT loads can vary significantly depending on the type of business, the time of day or the season.  A data center or IT department needs to prepare for drastic changes in IT loads so that there is capacity to handle such changes and prevent expensive and frustrating downtime.
Posted in Data Center Infrastructure Management | Comments Off

100% Network Uptime Is An Expectation of Smartphone Users

If you have ever needed to use your smartphone and not been able to access your carrier’s network, you know how incredibly frustrating it is.  And if you are lucky enough never to have experienced this, you have likely come to expect your carrier’s network to provide 100% uptime.  Network uptime is the ultimate goal of any data center; without it, businesses lose money and clients become incredibly frustrated.  We live in a world of instant gratification and knowledge constantly at our fingertips.  When something stands in our way or slows us down, we no longer see it as merely frustrating but as completely unacceptable.  And, as research shows, network uptime is not a simple luxury, it is an expectation.
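"100% uptime" is worth putting in numbers. The sketch below shows how much downtime per year each common availability level actually permits, which is why even "three nines" falls short of what the subscribers surveyed below say they expect:

```python
# Downtime budget implied by each availability level.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for pct in (99.0, 99.9, 99.99, 99.999):
    downtime_min = (1 - pct / 100) * MINUTES_PER_YEAR
    print(f"{pct:>7}% uptime -> {downtime_min:9.1f} minutes of downtime per year")
```

At 99% availability a carrier could be dark for more than three and a half days a year; even 99.999% still allows about five minutes.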

A study recently conducted by Vasona Networks demonstrated that customers expect 100% uptime: “Sixty-four percent of consumers responding to the survey cited “good performance all the time” as a reasonable expectation from their mobile data service provider. Just 36 percent of subscribers still think it is reasonable for there to be “hiccups in performance,” “unavailability for extended periods” or “unavailability in certain places.” When asked to identify the principal cause of problems during use of an app, mobile service providers are the most commonly cited party for blame, with 40 percent selecting them. Thirty-nine percent blame the app maker and the remainder of consumers suspect their device or device operating system to be the cause. Data service quality is crucial for subscribers, with 29 percent citing “mobile Internet performance” as most important when choosing a provider… “Mobile Internet performance is becoming increasingly important for consumers and this survey indicates just how high a bar subscribers are setting for their service providers,” says John Reister, vice president of marketing and product management for Vasona Networks. “Our findings indicate that it is no longer sufficient for mobile operators to offer a good experience most of the time across most of their network. Today, if every cell isn’t delivering great performance, subscribers are being let down.”  Customers today do not simply expect network availability during business hours, or even waking hours; service must exist 24/7, 365 days a year.

Not only will network downtime frustrate customers and possibly drive them to another provider, but it also comes with a significant cost to companies.  Gartner, an information technology research and advisory company, notes the high cost of network downtime: “Based on industry surveys, the number we typically cite is $5,600 p/minute, which extrapolates to well over $300K p/hour (see Ensure Cost Balances Out With Risk in High-Availability Data Centers by Dave Cappuccio). However, this is just an average and there is a large degree of variance, based on the characteristics of your business and environment (i.e., your vertical, risk tolerance etc).”  That is no small price to pay.  Every network provider needs to focus on network uptime to keep customers satisfied, retain them and prevent financial loss.

 

Posted in Customers | Comments Off

Anticipating Future Rack Density and How to Prepare for Changes

Much like a doomsday prepper stocking up on water and canned goods for the arrival of the end times, data center and IT professionals have been discussing and preparing for the increase in rack density coming in the years ahead.  For years now, IT professionals have been predicting a massive increase in rack density and…so far…they have been wrong.  Many server racks currently operate at around 3-5 kW per rack, and some people predict that within a matter of years rack densities will reach 50 kW.  But as inflated predictions continue to plague the IT world, more and more businesses and data centers find themselves with bloated, overbuilt data centers that end up being far more than what is actually needed – a complete waste.  A data center with a massive amount of unused capacity takes up space and consumes a completely unnecessary amount of energy.  While it is important to build a data center that is scalable and future-proof, completely overdoing capacity is not the right way to go.

So, can you really predict future rack density?  Is it even worth trying?  When creating a future-proof data center, you do have to at least consider potential future density needs.  If rack density is going to increase dramatically, data centers must be prepared to power and cool such a high density.  The lower the rack density, the less energy and money it takes to power and cool, so considering a tripling (or more) of rack density can seem quite daunting.  Some recent surveys and reports from industry experts predict that by 2025 rack density could be around, or even exceed, 52 kW.  2025 is still 10 years away, but that is not so far off that we should not start preparing now.
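A quick back-of-the-envelope check puts that forecast in perspective: going from today's roughly 5 kW racks to 52 kW in 10 years implies a compound annual growth rate of about 26%, far steeper than densities have actually grown so far.

```python
# Implied compound annual growth rate (CAGR) behind the 52 kW-by-2025 forecast,
# starting from a typical ~5 kW rack today.
start_kw, end_kw, years = 5, 52, 10

cagr = (end_kw / start_kw) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")

# Sanity check: apply that rate year by year.
density = start_kw
for _ in range(years):
    density *= 1 + cagr
print(f"After {years} years: {density:.1f} kW per rack")
```

Seeing the growth rate spelled out is a useful reality check before overbuilding capacity against a prediction.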

While there will certainly be some data centers operating at 25 kW or 50 kW by the year 2025, it is unlikely that this will be the average or even the trend.  The primary reason is technology: new technology that helps lower rack density is likely to emerge in that time.  Even a change to a rack density of 25 kW would mean a significant change to the physical environment in a data center.  While rack density will almost certainly increase in the next decade, so will utilization rates, which means more efficient performance.  Additionally, in the next decade major changes are expected in the way data centers are powered.  When it comes to the future of data center power, solar is the name of the game; nuclear, gas and wind power will also be utilized to create more energy-efficient data centers.  All of these things will affect how data centers anticipate and manage an increase in rack density.  If one thing is clear, it is that scalability is critical to preparing for and managing increased rack density in the future.

Posted in data center equipment | Comments Off

Human Error the Cause of Data Center Downtime – Practical Tips for Minimizing

Ask any IT or data center manager what the most frustrating aspect of running a data center is and they will most likely tell you it is downtime.  Downtime is an outage of computer service, network connectivity or online service for a period of time – as short as a few seconds, with no upper limit.  While a few seconds of downtime may sound like no big deal to most people, data center managers and those who have worked in IT departments know differently.  Downtime is a big deal.  Downtime is frustrating for businesses, for customers, for IT departments and for data centers.  And, beyond frustration, downtime is costly.  A recent report from Gartner noted the high cost of downtime, “Based on industry surveys, the number we typically cite is $5,600 p/minute, which extrapolates to well over $300K p/hour.”  To prevent downtime, data center and IT department managers must try to understand its root cause.  The trouble is, downtime can have a variety of causes.  Interestingly, though, the most common cause is actually human error.  The good news is that human error, at least in many cases, is preventable.
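The Gartner figure quoted above is easy to verify and extend. A trivial sketch of the arithmetic:

```python
# Sanity check of the quoted Gartner estimate: $5,600 per minute of downtime,
# extrapolated to an hour and to a short outage.
COST_PER_MINUTE = 5600

cost_per_hour = COST_PER_MINUTE * 60
print(f"${cost_per_hour:,} per hour")  # indeed "well over $300K p/hour"

outage_minutes = 15
print(f"A 15-minute outage: ${COST_PER_MINUTE * outage_minutes:,}")
```

Even a brief human-error slip, at this average rate, carries a five-figure price tag, which is what makes the preventable causes below worth the effort.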

When talking about human error, you might wonder just how much downtime it actually accounts for.  Data Center Knowledge notes just how much human error can influence downtime in data centers, “Data center downtime is often the result of equipment failure, or a chain reaction of unexpected events. But one of the leading causes of data center downtime is human error, as ComputerWorld reminds us in Stupid Data Center Tricks, which relays anecdotes of data center mishaps. The story notes a study by The Uptime Institute, which estimates that human error causes roughly 70 percent of the problems that plague data centers today. How can this problem be mitigated? “There is no doubt that human errors in the data center causes a great deal of downtime and some of these can be avoided by adhering to some simple steps,” said Ahmad Moshiri, director of power technical support for Emerson Network Power’s Liebert Services business.”  70 percent is a staggering statistic.  A lot of human error is avoidable, and Data Center Knowledge offers valuable, practical tips that any data center can implement.

1. Shielding Emergency OFF Buttons

2. Documented Method of Procedure

3. Correct Component Labeling

4. Consistent Operating Practices

5. Ongoing Personnel Training

6. Secure Access Policies

7. Enforcing Food/Drinks Policies

8. Avoiding Contaminants

To truly minimize downtime caused by human error, a lot of discipline is necessary.  Practical steps are great in theory, but they must be implemented with dedication and procedure to truly work.  With patience, persistence and time, downtime resulting from human error can be greatly diminished.
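Tip 2 above, a documented Method of Procedure, lends itself to simple tooling. As a hypothetical sketch (the checklist items below are invented examples, not an industry standard), a pre-work gate can refuse to let a maintenance task start until every MOP step is signed off:

```python
# Hypothetical pre-maintenance gate enforcing a documented Method of Procedure.
# The checklist contents are illustrative examples only.

MOP_CHECKLIST = [
    "change window approved",
    "affected equipment labeled and verified",
    "rollback procedure documented",
    "second technician reviewed the procedure",
]

def cleared_to_proceed(signed_off):
    """Return True only if every MOP item has been signed off."""
    missing = [item for item in MOP_CHECKLIST if item not in signed_off]
    if missing:
        print("BLOCKED - missing sign-off:", ", ".join(missing))
        return False
    return True

cleared_to_proceed({"change window approved", "rollback procedure documented"})
```

Turning the procedure into an enforced gate, rather than a document nobody opens, is one way to supply the discipline the tips require.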

Posted in data center maintenance | Comments Off

Tips for Maneuvering Your Way Through the Clouds

If you’re thinking of implementing cloud-based services with your regular data center services, you’ll want to make sure you know what you’re getting into so that you don’t get lost in the clouds. At Titan Power we realize how popular and powerful cloud computing is becoming, but we also know how confusing it can be for data center managers who don’t know what they’re getting into or how to use cloud-based services to their fullest potential.

 

Know What You’re Buying

 

As you’re considering cloud-based services, be sure that they’ll be able to easily integrate with your data center’s existing infrastructure. Before agreeing to a uniform service, look over everything that’s included with the service and be clear on exactly how the service will be implemented with your current technology infrastructure. Ask questions and request an explanation for anything that isn’t clear.

 

Have a Solid Idea of Your Commitment

 

You’ll find that a majority of cloud-based services are bought on either a subscription or a pay-per-use basis. There will also more than likely be minimum requirements to consider. Take a look at those requirements and ask yourself if they’re a good fit for whatever cloud service you’ll be adding on.

 

Know That You’ll Have Access to the Cloud When You Need It

 

Whether the cloud service is for you, your clients or both, you’ll want to be sure that you can easily and quickly access the service when you need it. As you’re considering cloud service providers, ask whether or not they have secondary servers, regular back-ups and a solid disaster recovery plan. We also suggest that you be sure there’s sufficient up-time during access periods, which includes your peak operation hours.

 

Full Data Access

 

In addition to having constant and consistent access to your data, you should also ask if you’ll have your data in a form you can actually use. Another good question we recommend is to ask if your cloud data will be returned to you in a fully usable format if you decide to part ways with your cloud service provider.

 

Secure Cloud Storage

 

Know exactly who will have access to your data and where your data will be stored. Any security measures that cloud service providers use should be in full compliance with the most recent data security laws. It’s best that you review data security measures and ask to be made aware of any changes to those measures as well as data breaches. You should also find out about cost remedies in the event that a data breach takes place.

 

Just as not all data centers are created equal, the same is true of cloud-based services. Titan Power is here to help you smoothly and efficiently implement cloud computing with your data center, so don’t hesitate to contact us if you need help.

Posted in Data Center Infrastructure Management | Comments Off

Not Even the U.S. Department of Energy Is Safe From Hackers

At Titan Power, we take security seriously and design all of our data center infrastructures with your confidence in mind. Technology is always evolving, and while savvy business owners can surely capitalize on that trend, unfortunately so can hackers. In 2013, The U.S. Department of Energy (DOE) faced two cyber attacks that breached secure servers and compromised the personal information of DOE employees. This acts as a very real warning for other businesses, as even the most secure servers can still fall victim to cyber threats.

 

Which Details Were Stolen

 

Most of us would probably be happy to go a lifetime without receiving a “cyber incident” notice from our employers, especially U.S. government workers. In 2013, the Department of Energy faced not one but two security breaches. The second attack compromised the personal information of some 14,000 employees, including:

 

  • Names of current and past employees
  • Their social security numbers
  • Their payroll data.

 

Although authorities maintain that no classified government data was targeted or stolen, these types of cyber attacks clearly pose very serious threats to personal privacy. This is especially true when sensitive information like social security numbers is involved, which can be a fast track to identity theft.

 

Who’s at Risk

 

Unfortunately, any business that has a human resources system can be at risk for cyber attacks. Hackers need only get access to your HR data management system to expose everything from social security numbers to physical addresses. For the savvy hacker, that’s a piece of cake. Prior to the second cyber attack on the Department of Energy, for example, there had been a string of security breaches involving several high profile companies. Some of those targeted included:

 

  • Apple
  • Facebook
  • Twitter
  • The New York Times
  • The Washington Post
  • The Journal.

 

Even more problematic is the fact that these hackers are often great at covering up their identities. It can be extremely difficult, if not impossible, to determine who is behind a cyber attack as the Department of Energy discovered firsthand.

 

What You Can Do

 

Unless you have the time, patience, and overhead to switch back to physical bookkeeping, the threat of cyber attacks will unfortunately always be part of the way that businesses of today and tomorrow look at security. The best way you can safeguard your employees and their information is to engineer, update, and manage your data centers with security in mind.

 

Contact Us

 

At Titan Power, we take pride in training our technicians on the most current data security protocols and building them into our systems. If you’re worried about a potential cyber breach or are interested in updating your current data management system to a more secure server, call us today at (800) 509-6170 to see how we can help.

Posted in Data Center Security | Comments Off

What’s Driving the Surge in SMB Data Center Spending

At Titan Power, we take pride in delivering the latest solutions for data center design and management to businesses of all sizes, even when those solutions are constantly evolving. One of the biggest shifts we’re seeing in the industry right now is the switch from on-premises to cloud-based and third party serviced data center management on a massive scale. One study reported that spending on data center services was expected to grow by 18.39% this year among small to medium sized businesses, but industry insiders think that figure could go higher thanks to several driving forces.

 

Expansion of Offsite SMB Data Center Management

 

Data center solution providers believe SMB cloud opportunities are among the biggest factors driving the trend in third party data center spending. More and more SMBs are choosing to offload data management jobs that are simply too costly in time. One insider remarked that the 18.39% figure was much too low, given that more and more businesses are choosing to leave data center management to third party pros, a trend that is only expected to grow. In fact, spending on data center management could increase by as much as 50% because:

 

  • Growth is only projected to accelerate over the next 3 – 5 years
  • Physical servers will reach their end of life in this time
  • Organizations will choose to replace them with third party and cloud-based solutions.

 

The SMB Spending Shift is Already Taking Hold

 

Some companies and pioneers are already making monumental investments in data center infrastructure. Google is one such example. Less than two years ago, Google invested a record breaking $1.6 billion in data centers.

 

From Philosophical to Practical

 

Cloud versus on-premises data infrastructure has long been considered more of a religious debate than a practical discussion. Now, we’re seeing practicality take over. More and more business owners agree that cost and growth scalability have the biggest impact on their decisions to switch to third party data center management.

 

What It All Means

 

The shift to cloud-based data center spending is no doubt an exciting development. For businesses that have the budget to do so, the change in data management means more time and overhead to focus on company growth rather than tedious server management tasks. Businesses need only seek the assistance of a savvy third party service provider that is skilled in cloud-based solutions, like Titan Power, to start reaping the benefits of off-site data center management.

 

Contact Us

 

Want to learn more about how Titan Power can help streamline your data center management? Call us today at (800) 509-6170 to see what we can do for you.

Posted in Technology Industry | Comments Off

Efficiently Reducing Costs While Improving Performance of Your Data Center

Cloud computing has quickly insinuated itself into the stream of day-to-day online services and activities. Here at Titan Power we want you to reap the full benefits of cloud computing all while improving the overall performance of your data center and successfully keeping your new costs as low as possible. Learn how to properly balance new practices for cloud computing with sensible tips.

 

Centralized Thinking

 

One of your first ideas for cutting costs in your data center while boosting efficiency for cloud computing may be to centralize data. Doing so will undoubtedly allow you to better maintain all of your equipment in a single location where you may not need as many employees to keep operations running smoothly. The only downside to this centralized method is that you’re sure to take a financial blow in terms of scalability. Improved performance demands improved equipment, which means more energy use and more expensive equipment.

 

Rather than follow the trend of centralized storage, you’ll be better off with distributed storage systems. Educate yourself on new technology that gives you the ability to improve your software performance, which you might find more efficient than centralized data storage.

 

Sluggish Performance

 

Data bottlenecks are common when it comes to cloud computing. There are more users than ever, each of whom is expecting better performance in addition to a service provider who can easily handle larger files of videos, photos and music files. In addition to handling larger files, your storage system is also expected to scale to every new user.

 

To keep bottlenecks at a minimum, you’ll want to avoid single points of entry at all costs. We also recommend that you skip adding caches and instead opt for a horizontally scalable system. Now you can easily divide data among each of your connections, which also gives you the option of choosing affordable low-energy hardware.
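The "divide data among each of your connections" advice above is essentially hash-based sharding: route each object to a storage node by hashing its key, so no single gateway handles all traffic. A minimal sketch, with placeholder node names:

```python
# Minimal hash-based sharding sketch. Node names and keys are placeholders.
import hashlib

NODES = ["store-a", "store-b", "store-c", "store-d"]

def node_for(key):
    """Pick a storage node deterministically by hashing the object key."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

for key in ("user42/photo.jpg", "user42/song.mp3", "user7/video.mp4"):
    print(key, "->", node_for(key))
```

One caveat on this simple modulo scheme: adding a node remaps most keys, so production systems usually prefer consistent hashing, which moves only a small fraction of the data when the node set changes.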

 

Supply and Demand

 

Much more will be expected of your data center in terms of storage once you start offering cloud-based services. In addition to dealing with data bottlenecks caused by data flowing through a single gateway of standard storage appliances, you may also have to decide whether or not to start charging users in order to meet the new financial demands of cloud computing.

 

You don’t have to feel as though you have no choice but to use new hardware that seems to offer a better performance. Look into commodity hardware instead. Commodity-component servers are more affordable, don’t use as much energy and can be just as good as some of the newer equipment on the market.

 

Take your time and focus on the smaller details when it comes to implementing cloud computing with your current data center services. Get in touch with us here at Titan Power for more tips for raising your data center into the clouds.

Posted in Data Center Build | Comments Off