
Controlling Rack Access for Data Center Security

Stringent security protocols are one of the most important aspects of properly running any data center.  With constant, round-the-clock advancements in technology, the focus of security protocols is often on things like cloud and cyber security, particularly because there have been many significant security breaches recently.  Cyber security is certainly important and nothing to ignore, but it is also important not to forget about physical security.  To provide an optimal, industry-acceptable level of security, data centers must provide security on multiple levels.  This dramatically reduces the risk of a security breach, allows data centers to remain compliant with industry regulations, and gives customers peace of mind that everything is being done to protect data integrity.  Ensuring proper physical security compliance will help data centers avoid costly data breaches, as well as the penalties that may follow.

So often, physical security efforts are focused on access to data center grounds and to the facility itself.  These efforts, while valuable and necessary, are not where physical security measures should stop.  Once inside the facility itself, there should not be unrestricted access to server racks.  A wide variety of individuals must pass through a data center on a daily basis, including internal engineers, external engineers, data center staff, cleaning staff and more.  Unfortunately, many data breaches are actually “inside jobs,” which is why security at the rack level is vitally important.

Colocation data centers must be particularly vigilant with rack level security because they often house multiple businesses’ equipment within the same data center, and some of those businesses may even be in competition.  It may sound like there is a simple solution – locked doors or cages for server racks – right?  Unfortunately, wrong.  Traditional locks can only be so complex, and if a threat is able to gain access to data center grounds or get inside a facility, they can likely defeat those locks.  To meet industry standards and comply with federal regulations, security simply must go beyond that, as Schneider Electric points out, “Further increasing the pressure on those managing IT loads in such locations, regulations concerning the way data is stored and accessed extends beyond cyber credentialing, and into the physical world. In the US, where electronic health records (EHR) have become heavily incentivized, the Health Insurance Portability & Accountability Act (HIPAA) demands safeguards, including “physical measures, policies, and procedures to protect a covered entity’s electronic information systems and related buildings and equipment, from natural and environmental hazards, and unauthorized intrusion.” Similar measures are also demanded, e.g., by the Sarbanes-Oxley Act and Payment Card Industry Data Security Standard (PCI DSS) for finance and credit card encryption IT equipment. In addition to building and room security, it has become vital to control rack-level security so you know who is accessing your IT cabinets and what they’re doing there.”

For the best security, custom rack enclosures can provide peace of mind because they are far harder to access than standard, “off the shelf” enclosures.  Additionally, many data centers are opting for biometric security, pin pads (where codes are changed frequently) or keycards.  Biometric locks do not use traditional keys; rather, they scan things like fingerprints or handprints.  Biometric locking systems have grown significantly in popularity because they provide truly unique access.  Keycards can get lost and pin codes can be shared, but a fingerprint or handprint cannot be easily shared or duplicated, so it is a far more sophisticated security measure.  Many worry about the consistency, accuracy and performance of biometric security, but it has become incredibly advanced, as Data Center Knowledge notes, “The time taken to verify a fingerprint at the scanner is now down to a second. This is because the templates – which can be updated / polled to / from a centralized server on a regular basis – are maintained locally, and the verification process can take place whether or not a network connection is present. The enrollment process is similarly enhanced with a typical enroll involving three sample fingerprints being taken on a terminal, with the user then able to authenticate themselves from that point onwards. This level of efficiency, cost effectiveness and all round reliability of fingerprint security means that a growing number of clients are now securing their IT resources at the cabinet level and integrating the data feed from the scanner to other forms of security such as video surveillance.”

These electronic locks that restrict rack access provide multiple levels of enhanced security.  For example, when a user scans a fingerprint or inputs a code, a central server validates authenticity and then allows or restricts access.  An additional advantage of this method is that the electronic system automatically generates a log detailing who has accessed what, and when.  This electronic tracking is far more convenient, as well as far more accurate, than manual tracking of access.  These electronic systems can be directly connected to data center facility security systems so that, should there be a problem, systems can go into automatic lockdown and alarms can be sounded in an instant.  There are also video surveillance options that come along with electronic security and monitoring.  Video surveillance can be programmed to turn on when biometric scanning is being performed, when pin codes are being entered, when security cards are being swiped, and more.  Additionally, video surveillance can be programmed so that, when someone is accessing a rack, it automatically captures an image of that person and sends it to the data center manager.  The data center manager can then choose to watch the surveillance as it happens for an enhanced level of security.  This level of security may also reduce the cost of, and need for, a physical security guard, particularly when each rack is monitored by video surveillance.  With this sort of security implemented at the rack level, there will be a detailed log of who is accessing which server and when, and should a problem arise, it will be immediately apparent at which server there has been a security breach.  Further, advanced electronic locking systems can be pre-set to only allow access at certain times.  For example, if certain racks should never be accessed “after hours,” they can be set to only allow access during pre-determined windows.
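To make the validate-then-log flow concrete, here is a minimal sketch of how a rack-level access controller might behave, assuming a simple in-memory policy table.  The rack names, credential IDs, time window and alert hook are all illustrative; they do not correspond to any vendor's actual product or API.

```python
from datetime import datetime, time

# Illustrative access policy: which credentials may open which racks,
# and during which hours ("after hours" access denied, as described above).
ACCESS_POLICY = {
    "rack-07": {
        "allowed_credentials": {"fp-4f2a", "fp-9c1d"},  # enrolled template IDs
        "allowed_window": (time(7, 0), time(19, 0)),    # access 07:00-19:00 only
    },
}

audit_log = []  # in a real system this would live on the central server


def notify_manager(rack_id, credential_id, now):
    # Placeholder: a real deployment might capture a camera image here and
    # push it to the data center manager, as described above.
    print(f"ALERT: denied access to {rack_id} by {credential_id} at {now}")


def request_access(rack_id: str, credential_id: str, now: datetime) -> bool:
    """Validate a scan against the policy, log the attempt, return the decision."""
    policy = ACCESS_POLICY.get(rack_id)
    granted = False
    if policy is not None:
        start, end = policy["allowed_window"]
        in_window = start <= now.time() <= end
        granted = credential_id in policy["allowed_credentials"] and in_window
    # Every attempt, allowed or not, is recorded: who, which rack, when, result.
    audit_log.append({
        "rack": rack_id,
        "credential": credential_id,
        "time": now.isoformat(),
        "granted": granted,
    })
    if not granted:
        notify_manager(rack_id, credential_id, now)
    return granted


# Example: a valid scan during business hours is granted and logged.
print(request_access("rack-07", "fp-4f2a", datetime(2016, 11, 3, 10, 30)))
```

The key design point from the paragraph above is that every attempt is logged whether or not access is granted, so the audit trail already exists when a question arises.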

Another advantage of advanced electronic locking mechanisms is that they can be easily and effectively monitored remotely.  Having on-site security staff is beneficial but not always possible and, as previously discussed, it is advantageous to have multiple levels of security, which is why remote monitoring is important.  Many government and industry regulations now set strict security parameters that data centers must comply with or face stiff penalties.  These standards are meant to protect financial, health and other sensitive information, and they require multiple levels of security, including rack level security.  Neglecting rack level security means that many data centers will not be in compliance – a major (and costly!) problem.

While the cost of implementation may seem prohibitive to some, many now recognize that the cost of a breach will likely be far higher.  The same level of security used for facility access points should also be used at the rack level when optimizing data center security protocols.  Whether you are retrofitting an existing data center or building a new one, and whether your facility has 1 rack or 100 racks, each rack should be secured separately.  Cyber security is a growing and complex arena, easily grabbing the attention of both the customer and the data center facility manager, but it is critically important that physical security not be neglected.  In an age where many businesses are forgoing their enterprise data center in favor of colocation, colocation providers must be stringent in protecting their customers’ data – not just for peace of mind and best practices, but to remain compliant with federal regulations.  If you think you are immune to a data breach, IBM Security’s most recent study will not put you at ease: it found that the global risk of a data breach in the next 24 months is 26 percent.  And the cost will not be small!  The average consolidated total cost of a data breach is $4 million.  While the cost to implement state-of-the-art rack level security is not trivial, it will continually pay for itself over time and will likely be far less than the cost of a security breach.

 


Strategies For Monitoring UPS Batteries & Preventing Failure

Aside from security, maximizing uptime is likely the top priority of just about any data center, regardless of size, industry or other factors.  Most businesses today run on data, and that data is being facilitated by a data center.  Businesses, and their employees and customers, depend on data being available at all times so that business processes are not interrupted.  Every second a data center experiences downtime, its clients experience downtime as well.  Data center managers and personnel are on a constant mission to prevent downtime, and they must be vigilant because downtime can occur for a variety of reasons, but one has been and remains the #1 threat – UPS battery failure.

A UPS (Uninterruptible Power Supply) is the redundant power supply that backs up a data center in the event of an energy problem such as a power failure or a catastrophic emergency.  An uninterruptible power supply is necessary in any size data center because no battery lasts forever and, unfortunately, even the most observant and effective data center managers cannot prevent every power failure.  The UPS contains a battery that will kick in should the primary power source fail so that a data center (and its clients) can experience continuous operation.  Unfortunately, the very thing that is supposed to provide backup power – the UPS – can sometimes fail as well.  Emerson Network Power conducted a 2016 study to determine the cost of and root causes of unplanned data center outages: “The average total cost per minute of an unplanned outage increased from $5,617 in 2010 to $7,908 in 2013 to $8,851 in this report… The average cost of a data center outage rose from $505,502 in 2010 to $690,204 in 2013 to $740,357 in the latest study. This represents a 38 percent increase in the cost of downtime since the first study in 2010…UPS system failure, including UPS and batteries, is the No. 1 cause of unplanned data center outages, accounting for one-quarter of all such events.”


Batteries lose capacity as they age, justifying the need for a preventive maintenance program. Image Via: Emerson Network Power

In order to properly form a strategy for UPS failure prevention, it is important to look at why UPS failure occurs in the first place.  At the heart of the UPS system is the battery that powers its operation.  UPS batteries cannot simply be installed and then left alone until an emergency occurs.  Even if a brand-new battery is installed and the UPS system is never needed, the battery has a built-in lifespan and will, over time, die.  So even if you think you are safe with your UPS system and your unused battery, if you are not keeping an eye on it, you may be in trouble when a power outage occurs.

Beyond basic life expectancy in ideal conditions, UPS battery effectiveness may be reduced, or batteries may fail outright, for other reasons.  If ambient temperatures around the UPS battery are too warm, the battery may be damaged.  Another cause of failure is “over-cycling” – when a battery is discharged and recharged so many times that its capacity is reduced over time.  Further, UPS batteries may fail due to incorrect float voltage.  Every battery brand is manufactured differently and has a specific charge voltage range that is acceptable.  If a battery is constantly charged outside the recommended range – whether undercharging or overcharging – its capacity will be reduced and it may fail during a power emergency.
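As a rough illustration of how these failure modes translate into automated checks, the sketch below flags undercharging, overcharging, excessive ambient temperature and over-cycling.  The threshold values are placeholders; the manufacturer's datasheet for your specific battery is the real authority.

```python
# Illustrative health check for the failure modes described above.
# All thresholds are example values, not any manufacturer's specification.

FLOAT_V_RANGE = (13.5, 13.8)   # acceptable float voltage for a string (example)
MAX_AMBIENT_C = 25.0           # ~77 F, the common rating discussed below
MAX_CYCLES = 200               # discharge/recharge cycles before capacity concern


def battery_warnings(float_voltage: float, ambient_c: float, cycles: int) -> list:
    """Return a list of warnings for out-of-range conditions."""
    warnings = []
    lo, hi = FLOAT_V_RANGE
    if float_voltage < lo:
        warnings.append(f"undercharging: {float_voltage} V below {lo} V")
    elif float_voltage > hi:
        warnings.append(f"overcharging: {float_voltage} V above {hi} V")
    if ambient_c > MAX_AMBIENT_C:
        warnings.append(f"ambient {ambient_c} C exceeds {MAX_AMBIENT_C} C rating")
    if cycles > MAX_CYCLES:
        warnings.append(f"over-cycling: {cycles} cycles exceeds {MAX_CYCLES}")
    return warnings


print(battery_warnings(float_voltage=14.1, ambient_c=28.0, cycles=50))
# -> ['overcharging: 14.1 V above 13.8 V', 'ambient 28.0 C exceeds 25.0 C rating']
```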

Fortunately, many of these UPS failures can be traced back to preventable human errors.  This means that data centers looking to prevent UPS failures and maximize uptime can do so by implementing and vigilantly following a UPS failure prevention strategy.  First, it is important to develop a maintenance schedule, complete with checklists for consistency, and actually stick to it.  Don’t let routine battery maintenance fall off of your priority list; while it may not seem urgent, it will feel very urgent if the power fails.

One of the first and most important things that a data center should implement in its strategy is proper monitoring of batteries.  Every battery has an estimated battery life determined by the manufacturer – some even boast a life cycle as long as 10 years!  But, as any data center manager knows, UPS batteries do not last as long as their estimated life cycle because of a variety of factors.  Just how long they will actually last will vary, which is why monitoring is incredibly important.  Batteries must be monitored at the cell level on a routine schedule, either quarterly or semi-annually, and it is important to also check each string of batteries.  By doing this on a routine schedule, you can determine whether a battery is near or past the end of its life cycle and make any necessary repairs or replacements.  If it appears a battery is nearing the end of its life cycle, it may be best to simply replace it so as not to risk a potential failure.  In addition to physically checking and monitoring UPS batteries, there are battery monitoring systems that can be used.  While physical checks are still critical, battery monitoring systems can provide helpful additional support that may prevent a UPS failure.  Schneider Electric describes how battery monitoring systems can be a useful tool, “A second option is to have a battery monitoring system connected to each battery cell, to provide daily automated performance measurements. Although there are many battery monitoring systems available on the market today, the number of battery parameters they monitor can vary significantly from one system to another.

A good battery monitoring system will monitor the battery parameters that IEEE 1491 recommends be measured. The 17 criteria it outlines include:

- String and cell float voltages, string and cell charge voltages, string and cell discharge voltages, AC ripple voltage

- String charge current, string discharge current, AC ripple current

- Ambient and cell temperatures

- Cell internal resistance

- Cycles

With such a system, users can set thresholds so they get alerted when a battery is about to fail. While this is clearly a step up from the scheduled maintenance in that the alerts are more timely, they are still reactive – you only get an alert after a problem crops up.”  Further, as you monitor your batteries, it is important to collect and analyze the data so that you can make informed decisions about how to best maximize battery life.
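A minimal sketch of the threshold-and-alert behavior described in the quote might look like the following.  The parameter names and limits are invented for illustration; in practice they would come from IEEE 1491 guidance and the monitoring vendor's documentation.

```python
from typing import Optional

# Illustrative thresholds over the kinds of parameters IEEE 1491 recommends
# monitoring. Names and (min, max) limits are placeholders, not vendor values.
THRESHOLDS = {
    "cell_float_voltage_v":     (2.20, 2.27),
    "cell_temperature_c":       (15.0, 30.0),
    "internal_resistance_mohm": (0.0, 0.35),  # rising resistance signals aging
    "ac_ripple_current_a":      (0.0, 5.0),
}


def check_reading(parameter: str, value: float) -> Optional[str]:
    """Return an alert message if a reading falls outside its threshold."""
    limits = THRESHOLDS.get(parameter)
    if limits is None:
        return None  # unmonitored parameter; a real system might log this
    lo, hi = limits
    if not (lo <= value <= hi):
        return f"ALERT: {parameter}={value} outside [{lo}, {hi}]"
    return None


# A daily automated sweep over one battery string's measurements.
readings = {"cell_float_voltage_v": 2.31, "internal_resistance_mohm": 0.42}
for param, value in readings.items():
    alert = check_reading(param, value)
    if alert:
        print(alert)  # in practice, feed this to whatever pages on-call staff
```

As the quote notes, such alerts are still reactive; the data collected this way is most valuable when analyzed over time to spot batteries trending toward failure.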

Next, it is important to properly store your battery when not in use to maximize its lifespan and ensure it functions properly when needed.  A UPS battery must be charged every few months while in storage or its lifespan will be diminished.  If you cannot periodically charge your UPS battery while in storage, most experts recommend storing it in cooler temperatures – 50°F (10°C) or less – which will help slow down its degradation.

To keep your UPS battery functioning in optimal conditions, ambient temperature should not exceed 77 degrees Fahrenheit and should generally stay as close to that as possible.  It is important not just to prevent temperatures from exceeding that threshold but also to prevent temperatures from fluctuating frequently, because fluctuation greatly taxes UPS batteries and reduces their life expectancy.  Your UPS should be kept in an area of your data center where temperatures are carefully monitored and maintained, to help ensure proper function in the event of an emergency.  Ideally, your UPS would be housed in an enclosure with temperature and humidity control.


An increase in the number of annual preventive maintenance visits increases UPS reliability. Image Via: Emerson Network Power

While routine maintenance will require attention and dedication, it is not without merit.  In fact, Data Center Knowledge notes that there are statistics that back up the argument that routine maintenance really does prevent UPS failure, “In one study of more than 5,000 three-phase UPS units and more than 24,000 strings of batteries, the impact of regular preventive maintenance on UPS reliability was clear. This study revealed that the Mean Time Between Failure (MTBF) for units that received two preventive maintenance (PM) service visits a year is 23 times better than a UPS with no PM visits. According to the study, reliability continued to steadily increase with additional visits completed by skilled service providers with very low error rates.” Data centers must implement their own unique UPS maintenance strategy, tailored specifically to individual needs, and remain vigilant in their follow through.  Implementing UPS maintenance best practices, including maintaining proper temperatures, maintaining proper float voltage, avoiding over-cycling, properly storing batteries, utilizing UPS battery monitoring systems, and performing routine visual inspections, will help significantly decrease the risk of UPS failure.


Private Cloud vs. Public Cloud vs. Hybrid Cloud

Cloud computing, in one form or another, is here and it is not going anywhere.  There are many good reasons for this – the cloud provides easy scalability, is less expensive than expanding infrastructure to add storage, is less expensive to maintain because it does not require additional power and cooling, makes project deployment easier and quicker, and makes it easy to build in redundancy and reliability.  While these benefits are significant, they are just the surface of the advantages of utilizing the cloud, and there is debate over what type of cloud is best – public, private or hybrid.

Both the public cloud and the private cloud offer a variety of advantages and drawbacks, and what works best for a data center must be decided on a case-by-case basis.  Cisco’s Global Cloud Index, which forecasts cloud usage for the years 2015-2020, provides some interesting insights into how cloud usage will transform data centers and enterprises going forward, “By 2020, 68 percent of the cloud workloads will be in public cloud data centers, up from 49 percent in 2015 (CAGR of 35 percent from 2015 to 2020)… By 2020, 32 percent of the cloud workloads will be in private cloud data centers, down from 51 percent in 2015 (CAGR of 15 percent from 2015 to 2020).”
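For readers unfamiliar with the metric, CAGR (compound annual growth rate) is the constant yearly rate that carries a starting value to an ending value over a number of years.  A quick sketch, using made-up numbers rather than Cisco's underlying workload counts, shows how a 35 percent CAGR compounds over the five-year forecast window:

```python
# CAGR: the constant yearly rate taking `start` to `end` over `years` years.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1


# A quantity growing 35% per year for five years multiplies ~4.5x overall.
growth = (1 + 0.35) ** 5
print(f"{growth:.2f}x")            # ~4.48x

# And the rate can be recovered from the endpoints (illustrative values):
print(f"{cagr(100, 448, 5):.1%}")  # ~35.0%
```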

A private cloud is essentially an internal, enterprise cloud that is privately managed and maintained.  The data center is responsible for hosting the private cloud within its own intranet, protected by the data center’s firewall.  The private cloud provides all of the efficiency, agility, and scalability of the cloud, but also provides better control and security.  This is a great option for a data center that already has a robust infrastructure and enterprise set up, but it does demand more than the public cloud: if a data center employs the private cloud, all management, maintenance and security falls squarely on the data center’s personnel.

One of the distinct advantages of the private cloud is how much control you have over how it works for your unique needs.  Private clouds can be configured to meet your needs, rather than you configuring your applications and infrastructure to meet the needs of the public cloud.  Many data centers have legacy applications that cannot always adapt well to the needs of the public cloud, but with the customizability of the private cloud, a data center can easily adapt the private cloud to meet the needs of the enterprise.

If your data center or enterprise prioritizes control, security, privacy and management visibility, and worries about the security and privacy risks of shared resources in a public cloud, the private cloud may be the right fit for you.  It provides peace of mind that you know exactly where your data is and how it is being managed and protected at all times.  However, it is important to note that while having control of cloud management is seen as an advantage by many enterprises, the challenge of adequately managing the cloud can be significant, as noted by RightScale, which conducted a study and survey of cloud computing trends for 2016, “26 percent of respondents identify cloud cost management as a significant challenge, a steady increase each year from 18 percent in 2013…Cloud cost management provides a significant opportunity for savings, since few companies are taking critical actions to optimize cloud costs, such as shutting down unused workloads or selecting lower-cost cloud or regions.”

The level of security provided by the private cloud may be particularly important for enterprises involved in healthcare or banking/finance because of the strict regulations and requirements placed on security and privacy.  If you possess and work with data that is restricted by security and privacy mandates like HIPAA, Sarbanes-Oxley, or PCI DSS, you cannot simply use the public cloud to secure your data.  For such highly-sensitive information, you are required to store your data on the private cloud to remain compliant or else face high penalties.

The public cloud also provides the fundamental benefits of cloud computing, as well as its own advantages, but offers less control over maintenance, management and security.  Some enterprises may see the requirement of less management and maintenance as an advantage because they simply do not have the resources or personnel to manage the cloud themselves.  By opting for the public cloud, your data is stored in a data center and that data center is responsible for the management and maintenance of the cloud data.  For enterprises that do not have extremely sensitive data, the trade-off of security and control for less management and maintenance may be completely acceptable.  While you do not have as much control of management and security, data does remain separate from other enterprises in the cloud.

The public cloud does save on hardware and maintenance costs that would typically be incurred by your business.  You pay for the public cloud to use storage capacity and processor power so that you do not have to manage or pay for that capacity or power on your own.  Because you are paying for a service, it is easy to scale up or down quickly without much preparation or change on your end. The public cloud often functions on a “pay-per-use” model so you can quickly make changes, scaling up or down in literally a matter of minutes. For small businesses that do not work with highly sensitive data, the public cloud may be ideal.  But ultimately, it all comes down to how much control you need over the management and security of your data.
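The pay-per-use point is easiest to see with a toy cost comparison.  The sketch below contrasts a fixed in-house capacity cost with hourly, usage-based billing for a bursty workload; all rates and figures are invented for illustration, not any provider's actual pricing.

```python
# Toy comparison of fixed capacity cost vs. pay-per-use billing.
# All figures are illustrative, not any provider's actual pricing.

FIXED_MONTHLY_COST = 12_000.0   # owning/operating in-house capacity
HOURLY_RATE = 0.50              # per instance-hour in the public cloud


def pay_per_use_cost(instance_hours: float) -> float:
    return instance_hours * HOURLY_RATE


# A bursty month: 10 instances around the clock, plus a 3-day spike of 200.
baseline_hours = 10 * 24 * 30
spike_hours = 200 * 24 * 3
cloud_cost = pay_per_use_cost(baseline_hours + spike_hours)

print(f"cloud: ${cloud_cost:,.0f} vs fixed: ${FIXED_MONTHLY_COST:,.0f}")
# cloud: $10,800 vs fixed: $12,000 -- and scaling down next month costs
# nothing, whereas fixed capacity must be sized for the spike all year.
```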

It is important not to forget that there is actually a third option – the hybrid cloud.  The hybrid cloud is a blend of both private and public cloud, offering enterprises a solution that may provide the best of both worlds.  With the hybrid cloud, enterprises can leverage the advantages of both the private and public cloud in the partial ways that best suit their needs and resources.  By doing this, all sensitive data can be managed with the private cloud, and the private cloud can be customized to suit any less-flexible applications.  Likewise, the public cloud can be used for information that is not as sensitive or governed by privacy and security mandates, and can also be used for on-demand scalability.

Hybrid cloud is a mix-and-match solution combining the best elements of both private and public clouds for enterprises with diverse needs.  This diversity is what will help many enterprises evolve and stay flexible as IT innovations emerge.  What is interesting about a hybrid cloud solution is that it serves both large and small organizations well because it offers flexibility, scalability, and security on an as-needed basis.  It allows organizations to slowly “dip their toes” in the public cloud pool while maintaining control, via the private cloud, over sensitive data that they are not yet ready to put in the public cloud.  RightScale’s survey of cloud computing trends for 2016 notes that hybrid cloud usage is on the rise, “In the twelve months since the last State of the Cloud Survey, we’ve seen strong growth in hybrid cloud adoption as public cloud users added private cloud resource pools. 77 percent of respondents are now adopting private cloud up from 63 percent last year. As a result, use of hybrid cloud environments has grown to 71 percent. In total, 95 percent of respondents are now using cloud up from 93 percent in 2015.”  Additionally, the hybrid cloud may be a very cost-effective solution, allowing enterprises to assign available resources to private cloud needs without having to retain the vast additional resources that might be necessary if using only a private cloud.

How enterprises use the cloud will depend heavily on resources, security and control needs, privacy restrictions, and scalability needs.  If you are struggling to decide what the right fit is for your enterprise, consider carefully what applications you intend to move to the cloud, how you currently use those applications, any regulatory concerns, your scalability needs, and your ability to adequately manage whatever your choice ultimately is.  If you have the infrastructure and resources to manage your cloud well, as well as security concerns, the private cloud may be the best option for your needs.  However, if you are a smaller organization with lower security concerns, offloading management responsibility by utilizing the public cloud may relieve a lot of the strain that a private cloud might place on your organization.  Whether using public cloud, private cloud or a hybrid of the two, one thing is certain – almost everyone is using the cloud.  And if they are not yet, they likely will be soon.

 


Do You Have a DCOI Compliance Strategy?


The recently established Data Center Optimization Initiative (DCOI) is an important mandate for federal data centers that encourages the sharing of information to optimize infrastructure and reduce inefficiency in data centers.  Nothing is more of a hot topic in data centers than the need to improve efficiency on all levels to remain sustainable and effective.  The White House describes the requirements of DCOI as follows, “The DCOI, as described in this memorandum, requires agencies to develop and report on data center strategies to consolidate inefficient infrastructure, optimize existing facilities, improve security posture, achieve cost savings, and transition to more efficient infrastructure, such as cloud services and inter-agency shared services.”

This initiative focuses on data center consolidation and optimization of existing data centers to reduce redundancy.  These measures will help make data centers more eco-friendly, which benefits the environment, and by improving efficiency they also provide significant cost savings.  Further, DCOI recognizes and encourages the utilization of the cloud to improve efficiency and scale operations without expanding physical footprint.  Undoubtedly, plans and goals will have to be established to meet the demands of DCOI, so early adoption is the best approach.  Schneider Electric explains the importance of complying with DCOI, “One of the key requirements for existing data centers is to “achieve and maintain” a PUE score of 1.5 or less…new, proposed data centers must be designed and operated at 1.4 or less with 1.2 being “encouraged”.  Another key requirement is deployment of data center infrastructure management tools (DCIM) in all Federal data centers since manual collection of PUE data will no longer be acceptable.  If Agency CIOs fail to achieve these scores and implement DCIM by September 30th, 2018, “Agency CIOs shall evaluate options for consolidation or closure…”. In other words, comply or be assimilated. Fortunately for these CIOs, legacy data centers often have plenty of room to improve infrastructure efficiency by reducing power and cooling energy losses to bring PUE scores within these limits.  In addition, DCOI targets are expected to result in the closure of approximately 52% of the overall Federal Data Center inventory.  So it’s important to try to make as many improvements as is feasible even if you’re meeting the required 1.5 (or 1.4 for new) …i.e., increase your odds of survival by being as good as you can be. Agencies should start with an efficiency assessment of the site in question.  Find out where you’re at now and identify areas for improvement.”  DCIM technology should be implemented to monitor and improve energy consumption, and every effort should be made by federal data centers to comply with DCOI going forward.
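Since PUE is simply total facility energy divided by the energy consumed by IT equipment, checking a facility against the DCOI limits is straightforward arithmetic.  The sketch below applies the thresholds from the quote to illustrative meter readings; it is a back-of-the-envelope check, not a substitute for the DCIM tooling the initiative requires.

```python
# PUE = total facility energy / IT equipment energy. A score of 1.0 would
# mean every watt goes to IT gear. DCOI limits from the quote above:
# 1.5 for existing facilities, 1.4 for new ones (1.2 "encouraged").
DCOI_LIMITS = {"existing": 1.5, "new": 1.4}


def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh


def dcoi_compliant(pue_score: float, facility_type: str = "existing") -> bool:
    return pue_score <= DCOI_LIMITS[facility_type]


# Illustrative annual meter readings, not real facility data.
score = pue(total_facility_kwh=1_800_000, it_equipment_kwh=1_100_000)
print(f"PUE = {score:.2f}, compliant: {dcoi_compliant(score)}")
# PUE = 1.64 -> not compliant; cutting cooling and power-distribution losses
# is where legacy sites usually find the headroom, as the quote notes.
```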


3 Trends in Data Center Design

Every year we see certain trends arise in data center design and construction and, as 2016 winds down, we are able to take a look back at the year and anticipate what may be ahead in 2017.  Data center design is constantly evolving as infrastructure changes and storage needs shift.  Data center design is an expanding facet of the data center industry because every data center must be flexible and constantly capable of adapting and updating to stay current.  Technavio points out just how important data center design is and will be looking forward, “This is why the global data center design market, which was valued at just $516 million in 2015, is expected to top $1.2 billion by 2020, growing at a cumulative average growth rate of 19.03%.” A data center that can scale on demand will be the data center that flourishes in 2017.

When it comes to data center design, one of the most common problems data centers encounter is outgrowing their existing space.  This is combatted in a number of ways, including increasing rack density, colocation, and usage of the cloud.  Colocation will be a big trend moving forward for many businesses that have simply outgrown their enterprise data centers, or do not want to take on the task of managing IT infrastructure and protecting data security in a world where threats are constantly evolving.  When businesses opt for colocation, it often opens the door to on-demand scalability and peace of mind that IT experts are managing data security.  Additionally, we will see an increased use of the cloud in data centers to meet growing data demands. CloudTweaks offers a concise explanation of why the cloud will be transformative in data center design moving forward, “While organizations continue to consolidate facilities to save money, their need for effective data management and storage have increased exponentially. The volume of digital data is growing at an unprecedented rate, doubling every two years. Today’s IT execs are under phenomenal pressure to deliver value, while maintaining cost and efficiency. This is where the cloud can be most effective. Through economies of scale, cloud vendors are able to deliver the same, if not better, performance than in-house data centers at a lower cost. Furthermore, the cloud provides a centralized computing system that enables data and applications to be accessible from anywhere, anytime, yielding operational efficiencies.”  Lastly, we will see a continued trend of making data centers more “green.”  This means design efforts to cool data centers more efficiently and to supply necessary power more efficiently.  These three trends are sure to be strong going into 2017 and will directly impact data center design.


Securing the Cloud in the Modern Data Center

Ask just about any client what one of the most important things they are looking for in a data center is and you will likely hear “security” over and over again.  Securing the traditional data center is a challenge unto itself, but now many data centers are a hybrid of traditional storage and cloud storage, which complicates security immensely.  Information Week describes the challenges faced in data center cloud security, as well as the strengths it will have moving forward, “Moving beyond traditional perimeter security into public, private, and hybrid cloud architectures stretches the capabilities of traditional security tools. It creates new security holes and blind spots that were not there previously. But cloud security is looking brighter by the day, and very soon cloud security tools will outmatch any type of non-cloud perimeter security architecture. In many ways, cloud security is gaining in strength based on a seemingly inherent weakness. Cloud service providers are in a unique position to absorb vast amounts of data. Because large clouds are geographically dispersed in data centers around the globe, they can pull in all kinds of security intelligence as data flows in and out of the cloud. This intelligence can then be used to track security threats and stop them far more quickly.”

The problem is not a static one; it is a fast-paced, growing challenge.  The more heavily the cloud is used, the more information it stores, the more security is needed – but also the more potential holes in security there are.  Security must scale at the same rate, or faster, than the growth of the data center.  Because the field is relatively new and rapidly evolving, there are not clear-cut standards for cloud security in place.  It is important for data centers to pay attention to what is happening in the industry and look to others, such as the U.S. government, for what is working in cloud security, which TechTarget elaborates on, “For example, cloud providers that handle confidential financial data should underscore their compliance with the Payment Card Industry Data Security Standard (PCI DSS) specification as proof of the integrity and security of their operations. PCI DSS does outline requirements related to cloud-specific aspects of security, stipulating that providers must segregate cardholder data and control access in addition to providing the means for logging, audit trails and forensic investigations. But the highly dynamic nature of most cloud-based applications — which often lack built-in auditing, encryption and key management controls — makes it expensive and impractical to apply the PCI standard to most cloud applications. Providers and enterprises seeking answers on cloud standards for security have found guidance from an unlikely source:  the U.S. government. Though not usually perceived as a leading-edge technology adopter, the public sector has been engaged in aggressive security standards development efforts to support its Cloud First initiative, which requires federal agencies to select a cloud service for new deployments when a stable, secure and cost-effective offer is available. The Federal CIO Council laid out 150 cloud security controls for its Federal Risk and Authorization Management Program (FedRAMP), which provides a baseline for common security requirements that agencies can use to verify that a prospective cloud provider supplies adequate cloud application security. Compliance will be validated by third-party assessment organizations. Using cloud-specific security requirements created by the National Institute of Standards and Technology (NIST), FedRAMP offers agencies a common set of cloud standards they can use to sanction a cloud provider. If the particular agency has additional security requirements, then the provider can build on its baseline controls to address these needs.”  The cloud is a cost-effective way for many data centers to scale to meet customers’ needs, but security protocols must be in place to ensure that, as scaling occurs, data continues to be properly secured.

 

 


3 Trends in Data Center Cooling

Data center power consumption is evolving all the time, becoming more efficient but, generally, growing.  While many data centers are making green initiatives and finding ways to make their energy usage as efficient as possible, data demands are constantly growing, rack density is being increased, and the need for effective cooling is growing along with it.  There are many approaches to data center cooling, and even single data centers implement a variety of approaches to best cool their facilities.  Large data centers from companies like Yahoo or Apple are setting the trend in green data center cooling initiatives, and small data centers are not only taking note but are also implementing those trends in their own facilities.  Below are three exciting trends in data center cooling.

  1. Liquid Cooling
    • Using liquid to cool, instead of air, is a great way to cool higher density racks and can be used in a variety of ways in the data center. TechTarget elaborates on the use of liquid cooling in data centers, “Now, new technologies can put 250 kW in a single rack, using liquid immersion cooling to play an important role for certain systems, such as high-performance computing, Cecci said. The pluses of liquid cooling include the ability to deploy it in specific areas — by row and rack — and it is very quiet and reliable, with few moving parts. Despite its benefits, liquid cooling is not in many data centers today, he said. “Most of these technologies — we will see them in the next two to three years,” Cecci said.”
  2. CRAC
    • CRAC (computer room air conditioner) cooling systems have been used for a considerable amount of time in data centers. While they may be the old standard, CRAC systems have continued to evolve with new strategies to remain an effective form of cooling in data centers, which TechTarget explains, “The easiest way to save money is to reduce the number of running CRAC units. If half the amount of cooling is required, turning off half the CRAC units will give a direct saving in energy costs — and in maintenance costs. Using variable-speed instead of fixed-speed CRAC units is another way to accomplish this, where the units run only at the speed required to maintain the desired temperature. The units run at their most effective levels only when they run at 100%, and some variable speed systems don’t run at a fully optimized rate when operating at partial load. Running standard, fixed-rate CRAC units in such a way as to build up “thermal inertia” can be cost-effective. Here, the data center is cooled considerably below the target temperature by running the units, and then they are turned off. The data center then is allowed to warm up until it reaches a defined point, and the CRAC units are turned back on.” (A minimal sketch of this on/off “thermal inertia” strategy appears after this list.)
  3. Bypass Air
    • Bypass air is any conditioned air that does not pass through IT equipment before returning to the cooling unit. In essence, it is a waste of cooled air, which has led many data centers to make efforts to reduce the problem.  Data Center Dynamics explains the problem and how data centers are fixing it to improve data center cooling, “The velocity of the cool air stream exceeds the ability of the server fans to draw in the cool air; as a result the cool air shoots beyond the face of the IT rack.  Cool supply air can join the return air stream before passing through servers, weakening cooling efficiency. Eager to combat the inefficiencies above and keep pace with steadily climbing data center temperatures, businesses often adopt hot aisle/cold aisle rack orientation arrangements, in which only hot air exhausts and cool air intakes face each other in a given row of server racks. Such configurations generate convection currents that produce improved airflow. Although superior to chaos air distribution, hot aisle/cold aisle strategies have proven only marginally more capable of cooling today’s increasingly dense data centers, largely because both approaches ultimately share a common, fatal flaw: They allow air to move freely throughout the data center. This flaw eventually led to the introduction of containment cooling strategies. Designed to organize and control air streams, containment solutions enclose server racks in sealed structures that capture hot exhaust air, vent it to the CRAC units and then deliver chilled air directly to the server equipment’s air intakes.”
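The “thermal inertia” approach described in the CRAC item above is classic hysteresis control: overcool the room to a low bound, switch the units off, let it warm to a defined point, then switch back on.  Below is a minimal sketch of that on/off logic, with illustrative temperatures and warming/cooling rates.

```python
# Hysteresis ("thermal inertia") control from the CRAC discussion in item 2:
# overcool to a low bound, switch off, warm to a high bound, switch back on.
# All temperatures and rates are illustrative, not engineering guidance.

COOL_TO_C = 20.0   # overcool to this temperature, then shut units off
WARM_TO_C = 25.0   # defined warm-up point at which units restart


def crac_should_run(room_temp_c: float, currently_running: bool) -> bool:
    if currently_running:
        return room_temp_c > COOL_TO_C  # keep cooling until overcooled
    return room_temp_c >= WARM_TO_C     # stay off until warm-up point reached


# Simulate: room warms 0.3 C/min with units off, cools 0.5 C/min with units on.
temp, running = 22.0, False
for _ in range(30):
    running = crac_should_run(temp, running)
    temp += -0.5 if running else 0.3

print(f"after 30 min: {temp:.1f} C, units running: {running}")
```

The gap between the two set points is what saves energy: the units cycle in long runs instead of chattering on and off around a single target temperature.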

Importance of Renewable Energy in Data Centers


“Renewable energy.”  “Clean energy.”  These may sound like buzzwords – trendy little catchphrases meant to grab your attention and sound good – but they are far more than buzzwords; they are the reality and the future of data centers.  As we push to become sustainable in all industries, one of the biggest focuses will likely be sustainable, renewable energy in the data center.  Large data centers use enormous amounts of energy, equivalent to what some small cities use, so it stands to reason that there would be a push to make that energy usage as clean as possible.  So, just how critical will it be that data centers focus on sustainability through renewable and clean energy going forward?  Very.  In fact, Data Center Knowledge notes that a recent study found that what consumers want, now and going forward, are data centers focused on sustainability, “A recent survey of consumers of retail colocation and wholesale data center services by Data Center Knowledge, found that 70 percent of these users consider sustainability issues when selecting data center providers. About one-third of the ones that do said it was very important that their data center providers power their facilities with renewable energy, and 15 percent said it was critical. Most respondents said their interest in data centers powered by renewable energy would increase over the next five years. More than 60 percent have an official sustainability policy, while 25 percent are considering developing one within the next 18 months.”

As data center space across the globe continues to grow rapidly, so will the amount of energy used.  That energy use is often not only bad for the environment but quite costly.  We have already seen large companies like Google and Apple focus on renewable energy and, as we often see, smaller data centers will likely follow in their footsteps.  Small and large data centers are undertaking renovations and making changes towards renewable energy because even the tiniest improvements in efficiency and sustainability save big bucks.  What do these renewable energy efforts look like?  There is a vast array of options and approaches, but Data Center Frontier elaborates on a few, “In broad terms, “clean” or “green” energy comes from renewable sources such as the sun (solar), wind, the movement of water in rivers and oceans (hydroelectricity), biofuels (fuel derived from organic matter), and geothermal activity. Today, there are big trends showing that tech giants are moving towards renewable energy sources in their green data centers. Digital Realty, along with certain major technology companies and other pioneers, are showing that clean energy can be used to power even the largest and most high-performance data centers. And as more organizations consider moving from traditional to cleaner sources of power, they are also showing that renewable energy can be cost-effective.”  This is not a fleeting trend.  Renewable energy is here to stay, it is the future of data centers, and all data centers should be making efforts, small and large, towards renewable energy for the future.

 


Are Data Center Silos Interfering With Growth?


Every business can experience information and data silos on some level, particularly when various applications and systems must communicate with each other.  But these silos are particularly evident, costly, and problematic in data centers.  Data Center Knowledge offers one example of the type of silo that can occur in a data center, “Electrical and mechanical systems in data centers are a perfect example of legacy IoT, he says, operating in silos, isolated from the IT systems they support. That isolation is the decades-old legacy, used to this day as the only method of securing these critical systems from intrusion.”  Not only are there electrical and mechanical silos in the data center, but data/information silos as well.  As certain tools and information are used in various ways by assorted applications, data silos emerge and become increasingly problematic.  So often, when information silos occur, duplication of information and processes occurs, which takes up more space and thus uses more energy.  Silos in data centers are truly a drain on resources.

The problem of data center silos is further exacerbated by the fact that one company may have multiple data centers in locations all over the world.  To avoid data center silos in both large and small data centers, there must be collaboration and open lines of communication.  Fortunately, many data centers are opting for convergence rather than expansion, finding ways to use existing space more effectively and efficiently.  This alone will help reduce information silos.  Data Center Knowledge explains how convergence is being actively applied in data centers and elaborates on the advantages to be found when data centers opt for convergence, “As we saw with many Datalink enterprise customers who moved from the silo model, IT data centers first began to incorporate server virtualization technology to logically represent multiple servers on one or more consolidated, physical systems with smaller data center footprints. The data centers also began to incorporate their own dedicated network and shared storage to support them. Suddenly, one physical server could be used to serve up the needs of multiple applications, which it often did with glowing results. But, a ripple effect of virtual server growth often expanded storage and network needs significantly. Suddenly, cost savings in one area could be offset by growing expenses in another…

The benefits for IT can lead to:

  • Less moving parts (and less individual vendor touch points) to manage or troubleshoot
  • Greater resource utilization at a lower cost
  • Faster application provisioning (one enterprise customer went from their prior three weeks to just 15 minutes to provision new applications)
  • Faster IT response to business priority changes or changing market conditions
  • Easier scaling and greater elasticity of the infrastructure
  • Related integration and cross-training of previously siloed IT teams, themselves, in order to align IT further to the business
  • A shorter pathway to on-demand services or private cloud environments to meet the IT needs of internal business units

  • A lower cost to support the growing data and application needs of the business (Another enterprise customer who runs its own SaaS business found itself able to offer better quality services to current and new customers at a lower overall cost to itself.)”


Should Enterprises Keep IT In House or Outsource Colocation?


Enterprise data centers may be a dying breed.  Today we are seeing more and more businesses opt for colocation over enterprise data centers because of the high cost and level of expertise needed to run an enterprise data center.  Additionally, cloud service providers are mitigating the need for enterprise data centers.  Data Center Journal explains the basic appeal of colocation to enterprises, “Many businesses don’t have the time and money to invest in the equipment, technology, security and staff to run a full data center. For those businesses, colocation can help them optimize their department and free up resources, giving employees the time and bandwidth to focus on more strategic business tasks. Colocation facilities provide the space, cooling, power and security for your server, storage and networking equipment, while giving IT managers access to high bandwidth, low latency and always-on connections… Large colocation facilities also offer significant benefits of scale. By utilizing large power and mechanical systems, the facility can provide high uptimes and speed as well as the ability to efficiently appropriate additional resources and quickly grow alongside your company.”

Colocation is certainly not free; it comes at a cost (particularly at the beginning) and carries its own set of security risks.  But while enterprise data centers will never completely disappear, they are certainly fading, and rapidly.  Data Center Knowledge discusses the trend of moving away from enterprise data centers towards colocation and cloud services, “As Liz Cruz, associate director with the market research firm IHS and the panel’s moderator, pointed out, hardware and infrastructure equipment sales into data centers are declining, while revenue colocation providers are raking in is growing in double digits, which means more and more companies choose outsourcing over their own data centers. Still, when she asked people in the audience to raise their hands if their companies had at least two-thirds of their IT capacity in colocation data centers, only a handful did. It’s cloud providers who are driving a lot of the revenue growth for colo companies – a lot more than enterprises, although enterprise data center spending is slowly waning. “Cloud providers are now the largest tenant of multitenant data center facilities,” Cruz said… For colocation providers, these hard-nosed enterprise users are not only a big growth opportunity; it’s a matter of longevity. The race to capture the hearts and minds of the enterprise is on, but they’re not only racing each other. They’re also racing Amazon, Microsoft, and a few others. Most colo providers have embraced public cloud as reality and have been using their ability to provide direct network access to cloud services from their facilities as a way to attract enterprises, pitching customers on the hybrid cloud, where a physical footprint the customer has full control of is supplemented with public cloud services, all under one roof in a colocation facility.”  If the trend continues, as we suspect it will, colocation data centers will continue to grow and work towards integration with cloud services to draw more and more businesses away from enterprise data centers.

 

 
