
Protecting Data Center Hardware and Avoiding Unexpected Downtime

Ensuring hardware is protected from unexpected physical events is an essential and often overlooked part of maintaining a data center. The strategies below will reduce the risk of downtime and damage to hardware.

Power Management

An Uninterruptible Power Supply (UPS) provides power when utility service fails, keeping essential equipment running at all times. To guarantee uptime through a prolonged outage, backup UPS units should also be installed on essential systems, and any remote monitoring equipment should be fitted with its own UPS so that data center operations can still be observed during an outage. A Power Distribution Unit (PDU) should be installed at the main power source to distribute power safely to critical loads. Power to all systems should run through a UPS connected to a PDU; together, the UPS and PDU keep equipment protected and active through any power fluctuations.
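Monitoring of the UPS itself can also be automated. The sketch below is a minimal example, assuming a UPS managed by Network UPS Tools (NUT) and its `upsc` command-line client; the UPS name and the alert thresholds are placeholders chosen for illustration, not recommendations.

```python
#!/usr/bin/env python3
"""Minimal UPS status check, assuming a UPS managed by Network UPS Tools (NUT).

The UPS name ("rackups") and the thresholds below are placeholders --
adjust them for your own environment.
"""
import subprocess

UPS_NAME = "rackups@localhost"      # hypothetical NUT UPS identifier
MIN_CHARGE_PERCENT = 80             # example threshold, not a standard
MIN_RUNTIME_SECONDS = 600           # example threshold: 10 minutes

def read_ups_variables(ups_name: str) -> dict:
    """Parse `upsc` output ("key: value" per line) into a dictionary."""
    output = subprocess.run(
        ["upsc", ups_name], capture_output=True, text=True, check=True
    ).stdout
    variables = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            variables[key.strip()] = value.strip()
    return variables

if __name__ == "__main__":
    ups = read_ups_variables(UPS_NAME)
    status = ups.get("ups.status", "unknown")
    charge = float(ups.get("battery.charge", 0))
    runtime = float(ups.get("battery.runtime", 0))

    if "OB" in status:  # NUT reports "OB" when the UPS is running on battery
        print("ALERT: UPS is on battery power")
    if charge < MIN_CHARGE_PERCENT or runtime < MIN_RUNTIME_SECONDS:
        print(f"ALERT: battery low (charge={charge}%, runtime={runtime}s)")
    print(f"status={status}, charge={charge}%, runtime={runtime}s")
```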

Cooling

Cooling needs depend on the particular environment at hand; although equipment may tolerate a wide range of temperatures, large fluctuations should be avoided. To ensure components are adequately cooled or heated, keep temperatures in a range from the high 60s to the low 70s Fahrenheit. Finding and eliminating hotspots in racks with tray fans is a good way to cut down on unnecessary air conditioning, though with proper organization and management of equipment this should rarely be needed. Airflow should be considered around all equipment, but certain devices, such as the UPS, are especially prone to degradation when exposed to excess heat. If the room is too compact to allow cool air to reach the UPS, it should be kept outside the area.
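Hotspots are easier to catch when rack-level readings are checked against the target band automatically. The sketch below is illustrative only: `read_rack_temperatures()` is a hypothetical stand-in for whatever sensor feed a facility exposes, and the 68 to 74 degree band mirrors the "high 60s to low 70s" guideline above.

```python
"""Flag racks whose inlet temperature drifts outside the target band.

`read_rack_temperatures()` is a hypothetical stand-in for a real sensor
feed (SNMP, IPMI, or an environmental monitor API).
"""

TARGET_LOW_F = 68.0    # high 60s
TARGET_HIGH_F = 74.0   # low 70s

def read_rack_temperatures() -> dict[str, float]:
    # Placeholder data; replace with real sensor polling.
    return {"rack-A1": 71.2, "rack-A2": 78.9, "rack-B1": 66.4}

def find_out_of_range(readings: dict[str, float]) -> list[str]:
    alerts = []
    for rack, temp_f in sorted(readings.items()):
        if temp_f > TARGET_HIGH_F:
            alerts.append(f"{rack}: {temp_f:.1f}F above target -- possible hotspot")
        elif temp_f < TARGET_LOW_F:
            alerts.append(f"{rack}: {temp_f:.1f}F below target -- possible overcooling")
    return alerts

if __name__ == "__main__":
    for alert in find_out_of_range(read_rack_temperatures()):
        print(alert)
```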

Organization

Organization and setup are an essential part of protecting critical equipment and avoiding long-term problems. If time is short and proper organization during setup isn't possible, arrangements should be made for a fully configured system. Poorly arranged systems can lead to situations in which spot fans cannot maintain ideal temperatures and air conditioning is overused. Consideration should also be given to the potential for water damage from the environment: avoid any room with water pipes or any area that could flood, such as a basement. In a rainy climate, installing an umbrella or drip shield over server racks can protect against unexpected leaks from the roof.

In short, well prepared power management and cooling systems are the best insurance against any unexpected downtime.

 

Posted in data center maintenance | Comments Off

Why Hyper-Converged Infrastructure Is the Future of IT


As the Internet of Things (IoT) continues to develop, the case for converged infrastructure grows stronger. The basic idea, developed during the early days of device integration, was that bringing device infrastructure together within a single, stable ecosystem would provide the streamlining necessary to handle data from multiple inputs across the whole system. As an approach, it worked well enough to provide much of the backbone for today's networked home and business experience.

As businesses move toward more and more reliance on the Internet of Things to manage their daily affairs, though, the influx of data from those devices is becoming overwhelming. That is leading to a new concept, hyper-converged infrastructure, which promises to provide solutions for handling this ever-growing influx of information.

Why It Is Necessary

According to the research firm IDC, the amount of data produced by IoT endpoints, devices capable of communicating information back to a central server via the network, is doubling every two years. This means that by 2020 there will be roughly 10 times as much data being processed worldwide as there was in 2013, with nearly as many points of information collected and communicated each year as there are stars in the known universe. Nor is that expansion likely to slow, with the number of IoT endpoint devices projected to grow from 10.3 billion to 29.5 billion in the same time frame. This kind of data glut necessitates a new way of sorting information if data processing is going to keep pace with the demand for it.
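The "roughly 10 times" figure follows directly from the doubling rate: seven years at one doubling every two years is 2^3.5, or about 11x. A quick check, with 2013 set to an arbitrary baseline of one unit of data:

```python
# Growth check: data volume doubling every 2 years, 2013 baseline = 1 unit.
baseline_year, target_year = 2013, 2020
doubling_period_years = 2

doublings = (target_year - baseline_year) / doubling_period_years   # 3.5
growth_factor = 2 ** doublings                                      # ~11.3

print(f"{doublings} doublings -> {growth_factor:.1f}x the 2013 data volume")
```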

Features of Hyper-Converged Infrastructure

Where converged infrastructure brought together the server, storage, and networking components into a streamlined package to be more cost-effective and efficient, hyper-converged infrastructure goes a step further, combining converged infrastructure with a hypervisor and management software to create a system that manages its resources as effectively as possible in every moment. The savings, in terms of time committed, are substantial:

100 hours shorter set-up time on average

Up to 10 hours per month in management time reduced

When this is delivered in a self-contained enclosure to complete the hyper-converged infrastructure, it also eliminates the need for dedicated server rooms. The result is a lean, efficient server setup that is designed to handle a large influx of information, and that can be easily replicated as your data needs grow. That is what makes it the future of information technology in the workplace.
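Put together, the two figures above translate into a concrete annual number. The sketch below is a back-of-the-envelope calculation using only the savings quoted in the list; the hourly labor rate is a placeholder assumption, not a figure from the article.

```python
# Rough annual time savings from the figures quoted above.
setup_hours_saved = 100          # one-time, per deployment
monthly_mgmt_hours_saved = 10    # up to 10 hours per month

first_year_hours = setup_hours_saved + 12 * monthly_mgmt_hours_saved  # 220 hours
hourly_rate = 75  # placeholder labor rate in dollars, not from the article

print(f"First-year hours saved: {first_year_hours}")
print(f"At ${hourly_rate}/hour, that is roughly ${first_year_hours * hourly_rate:,}")
```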

Posted in Data Center Design | Comments Off

Well-Maintained Cooling Equipment Helps Avoid Data Center Downtime


According to the Gartner Group, a provider of technology-related research and statistics, the average cost of data center downtime is $5,600 a minute, which amounts to a whopping $336,000 per hour. Obviously, this is devastating in terms of lost revenue and squandered productivity. To avoid downtime, data center customers say that regular service for the center's cooling system is as important as keeping the UPS (uninterruptible power supply) humming.
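The per-hour figure is simply the per-minute rate scaled up, and the same arithmetic extends to any outage length. A quick sketch (the example outage durations are illustrative, not part of the Gartner figure):

```python
# Downtime cost at the Gartner-cited average of $5,600 per minute.
cost_per_minute = 5_600

print(f"Per hour: ${cost_per_minute * 60:,}")          # $336,000

# Illustrative outage durations (minutes) -- example values only.
for outage_minutes in (15, 90, 240):
    print(f"{outage_minutes:>3}-minute outage: ${cost_per_minute * outage_minutes:,}")
```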

Struggling to Keep Up

Cooling equipment contains many mechanical parts that will eventually wear out and require replacement. It also consumes a considerable amount of energy in order to keep the data center within the ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) guideline of operating at no more than about 81 degrees Fahrenheit. Consequently, in addition to being subject to equipment failure, an aging cooling system may not be able to keep up with the heat load generated by the data center's current energy use.

Regular Maintenance is Key

Many data centers contract with service providers to keep essential equipment in top shape. A cooling equipment professional, for example, can examine your system and identify trouble spots such as loose wires, dirty filters, and worn humidifier bottles. Parts approaching the end of their useful life can be changed out before they have a chance to break down.

Making the Most of Your Cooling System

Regular maintenance benefits data center performance, which depends on the cooling system's efficiency. Because cooling equipment is so energy intensive, system design has seen significant efficiency improvements over the past several years. Efforts as small as clearing leaves and other debris from condenser coils can help, and replacing certain parts will also increase energy efficiency. When this work is handled through a regular maintenance agreement, energy savings are often the result.

Choosing a Cooling Service Contractor

Because data center cooling systems are designed with many complex components, the individuals who service the equipment should be factory-trained, certified technicians. These professionals can properly diagnose any problems and resolve them before they have the opportunity to turn into outright disasters. Certified technicians will not only see that your cooling system operates at maximum efficiency, they will ensure that the product warranties remain effective. They can also recommend any changes or improvements that will enable your data center to perform at its best so as to avoid the threat of debilitating downtime.

 

Posted in data center maintenance | Comments Off

Is Liquid Cooling Servers the Next Big Thing in Data Center Development?


Liquid cooling CPUs to enhance performance by improving the removal of waste heat is not a novel concept. In one form or another, it has been growing in popularity for years. Traditional liquid cooling systems are designed to serve single microcomputer rigs and terminals, but recent innovations in cooling technology have begun to make this option more and more appealing to large-scale operations. To understand whether or not this technology is likely to catch on among data centers, it is important to first understand how it works and what kinds of liquid cooling are available.

Direct vs. Total Liquid Cooling (DLC vs. TLC)

There are basically two methods for liquid cooling computer components: direct and total liquid cooling. In the first model, direct liquid cooling, a fully sealed heat sink is placed on top of the CPU or server board that needs cooling. The sink is essentially a metal plate beneath a liquid-filled tank; as it absorbs excess heat, the liquid is circulated out of the machine so the heat can be dissipated before the liquid returns.

The process is capable of absorbing 40 to 60 percent of the heat generated by CPUs, making it fairly efficient. Since the heat is dissipated by exposure to air, data centers using this technology do still need to be air conditioned, but the need for extensive fans to dissipate heat from solid heat sinks is reduced or eliminated.
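Those capture rates make the residual air-conditioning load easy to estimate. The sketch below uses a hypothetical 10 kW rack together with the 40 to 60 percent range quoted above and the roughly 90 percent figure cited later for total liquid cooling; the rack power value is illustrative.

```python
# Residual heat that still needs air cooling, for a hypothetical 10 kW rack.
rack_heat_kw = 10.0  # illustrative rack load, not from the article

capture_rates = {
    "Direct liquid cooling (40%)": 0.40,
    "Direct liquid cooling (60%)": 0.60,
    "Total liquid cooling (~90%)": 0.90,
}

for label, captured in capture_rates.items():
    residual_kw = rack_heat_kw * (1 - captured)
    print(f"{label}: {residual_kw:.1f} kW left for the room's air handling")
```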

The total liquid cooling approach is a bit different, because it involves no exposure to air-cooled components. Instead, the liquid used is mineral oil or another dielectric solution that is able to absorb heat from the server components, and the server is submerged in it. In the case of data centers and other commercial-scale deployments, this means laying the array on its side in a tub of the liquid.

Advantages to Total Liquid Cooling

TLC has the following advantages:

The ability to absorb up to 90% of the heat waste from systems

The ability to withstand power outages while still controlling heat

Maximum efficiency from your systems

Disadvantages to Total Liquid Cooling

The main disadvantage to this kind of cooling system is serviceability, since the liquid does create an obstacle to quickly changing components out or performing other forms of hardware maintenance. Each time the system needs to be serviced, the liquid must be removed and the components dried for handling.

Hybrid systems that encase individual servers may help mitigate this issue without decreasing efficiency, but the increased cost of setup complicates things. For now, the future of liquid cooling for data centers remains in question, but it certainly provides useful solutions to common data center issues.

Posted in Data Center Design | Comments Off

Micro Data Centers Provide a Pivot Point for Data Cost Controls


For a number of years now, the move among IT departments at most major corporations has been toward consolidation, and much of the development of cloud infrastructure has revolved around making that kind of application offloading cost-effective for client corporations. There has been quite a bit of success with this approach, but as it matures it is revealing a few distinct areas where moving toward a consolidated cloud platform for data processing just does not work out as well as having distributed data centers. Traditionally, this trade-off has made the difference between cloud applications being accessible or not, but thanks to the advent of micro data centers, there is now a way to blend your approach.

Micro Data Centers for Content Distribution

The main area where micro data centers can be seen operating effectively as cost controls today is in content distribution. Companies that deliver a large amount of data to clients, as well as companies that need to process large quantities of data in their industrial operations, have already begun to see that the bandwidth expense that comes with remote processing is a major source of overhead for the data center.

With a distributed network of micro data centers, the applications and data that need to reach client computers can be delivered over internal or local networks without a large overhead cost, while the micro data center itself manages any necessary communication back to the cloud, allowing regular updates without overconsuming bandwidth. This brings a variety of core benefits beyond bandwidth use alone (a rough cost sketch follows the list), including:

Better stability and speed for data processing

Redundant information storage

Distributed workloads to minimize the chances of company-wide downtime
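As promised above, here is a back-of-the-envelope estimate of the bandwidth savings. Every number in it (monthly data volume, per-GB transfer price, fraction of requests served locally) is a placeholder assumption for illustration only.

```python
# Rough bandwidth-cost comparison: all-remote processing vs. a local micro data center.
# All figures below are placeholder assumptions, not published prices.
monthly_data_gb = 50_000          # data delivered to clients each month
cost_per_gb = 0.08                # assumed per-GB transfer price in dollars
served_locally = 0.85             # fraction of traffic the micro data center absorbs

all_remote_cost = monthly_data_gb * cost_per_gb
with_micro_dc = monthly_data_gb * (1 - served_locally) * cost_per_gb

print(f"All traffic to the cloud: ${all_remote_cost:,.0f}/month")
print(f"With a micro data center: ${with_micro_dc:,.0f}/month")
print(f"Estimated savings:        ${all_remote_cost - with_micro_dc:,.0f}/month")
```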

Costs and Caveats

Processing data close to the point of use has always been the preferred method for controlling industrial applications because of the reduced latency in making decisions and applying controls. While micro data centers have some very enthusiastic supporters, including both industry journalists and thought leaders, there is a trade-off: the costs of setting up micro data centers to handle local processing are still substantial, meaning they only bring real savings when a large amount of bandwidth is at stake.

If trends in data center equipment costs and the movement toward manageable, hyper-converged systems continue, though, there is a chance that the trend toward centralization could reverse itself over the next couple of years, leading to a new era of powerful, compact, and widely distributed servers that handle remote operations and streamline IT infrastructure across various sectors of industry.

Posted in Data Center Build | Comments Off

Overcoming the 3 Biggest Barriers to Adopting Cloud Applications


It is clear at this point that cloud computing is here to stay, with most industries enthusiastically adopting cloud applications for day-to-day operations, outsourced data processing, and the consolidation of mission-critical applications, allowing a remote data center to put real CPU power behind any task a business needs to stay coordinated. Even with all the enthusiasm for centralized applications and their efficiency, though, there are still three significant barriers to getting cloud solutions adopted at many companies. Luckily, there are also key strategies for dealing with each of them.

Security and Compliance

With the advent of privacy laws such as HIPAA and other restrictions on how private data is handled, the need to be sure that any outsourced processes or remote systems handling company information are secure has become one of the main concerns among those who are slow to adopt cloud systems.

To allay these fears, concentrate on building an IT security plan that uses best practices for reducing human error at the points of data entry and the most secure options for VPN tunnels, in-flight encryption, and other cutting-edge security refinements. With the right research and the proper development of a system, cloud security can maintain the same standards attained with local systems, especially when a hybrid approach is used to protect the information in your company's private cloud from being accessed through the public cloud hosting your applications.
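As one small illustration of keeping data protected before it ever crosses into a public cloud, the sketch below encrypts a record client-side with the Python `cryptography` package's Fernet recipe (symmetric, authenticated encryption). It is a minimal sketch, not a complete key-management strategy; in practice the key would live in a secrets manager rather than in the script, and the example payload is invented.

```python
"""Minimal client-side encryption sketch using the `cryptography` package.

Assumes `pip install cryptography`. Key handling here is deliberately
simplified; a real deployment would store the key in a secrets manager.
"""
from cryptography.fernet import Fernet

# Generate (or load) a symmetric key. Fernet provides authenticated encryption.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"patient_id=12345; diagnosis=..."   # example sensitive payload

ciphertext = cipher.encrypt(record)           # safe to hand to a public cloud store
print("Encrypted payload:", ciphertext[:40], b"...")

# Only a holder of the key (e.g., systems inside the private cloud) can decrypt.
assert cipher.decrypt(ciphertext) == record
```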

Legacy System Support

Very few organizations manage to upgrade their entire system at once, and that's because it generally isn't a good idea. On top of the costs, new systems require training, and trying to retrain an entire workforce from the ground up in new procedures would bring most companies to a screeching halt. This makes legacy support a necessary part of any IT department's approach, and it has traditionally also been a barrier to moving toward cloud applications. Developing a plan that phases out legacy systems where possible while migrating the core components that must be kept onto cloud operations provides a gradual way forward, with the same kind of long-term planning used to overcome security concerns.

Costs and Budget Limitations

This is an area where the calculation may work out differently according to the structure and needs of individual companies. Generally, cost concerns have been the final hurdle to overcome when organizations move to the cloud, because the savings in on-site data center management and CPU use are offset by the new cost of bandwidth brought on by cloud applications. A careful review of which processes are cost-efficient in the cloud, combined with an approach that uses micro data centers to distribute bandwidth-intensive processes, smooths this out, though.
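A simple break-even comparison captures the review described above. Every figure in this sketch (on-site operating cost, cloud compute cost, data volume, bandwidth price) is an illustrative assumption; the point is the shape of the comparison, not the numbers.

```python
# Illustrative monthly cost comparison for one workload: on-site vs. cloud.
# All numbers are placeholder assumptions for the sake of the comparison.
onsite_cost = 9_000          # power, cooling, hardware amortization, admin time

cloud_compute_cost = 5_500   # compute/storage charges for the same workload
data_out_gb = 40_000         # data returned to users or branch offices
bandwidth_per_gb = 0.08      # assumed egress price in dollars

cloud_total = cloud_compute_cost + data_out_gb * bandwidth_per_gb
print(f"On-site: ${onsite_cost:,}/month")
print(f"Cloud:   ${cloud_total:,.0f}/month "
      f"(of which ${data_out_gb * bandwidth_per_gb:,.0f} is bandwidth)")
print("Cheaper option:", "cloud" if cloud_total < onsite_cost else "on-site")
```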

Posted in Cloud Computing | Comments Off

Cloud Control


Many people who work with computers on a daily basis have developed at least a passing acquaintance with the cloud. Most IT professionals are more knowledgeable than the average person about this relatively new technology, but the number one issue for everyone seems to be security. How safe is the information you send into the cloud, and how much control do you have once it is there? In basic terms, the cloud offers a pool of resources, such as networks, servers, and applications, that enables users to store and process information through data centers. Although it is still in its early stages, cloud computing offers speed and power with pay-as-you-go pricing; despite its popularity, however, there are management concerns, especially regarding security.

Calming Security Fears

Technology is constantly evolving and when a new utility emerges there is always a learning curve. The best defense against wariness is knowledge. Some users worry about communication links being a security issue, for example. The more you learn about the cloud and how it works, the less apprehensive you will be about using it. Once you become an experienced cloud user and are working with a reliable administrator, security will become less of an issue. Any concerns you might have should be brought to the attention of the cloud vendor.

Managing Shadow IT

Shadow IT is a term used to describe unauthorized cloud usage. It started with employees using cloud apps, sometimes on their mobile phones, without their employer's knowledge, which can be costly. A similar problem has developed in which costs balloon because every department in the company has its own cloud. This is where control becomes a major issue, and the obvious solution is to centralize authority.

Getting Centralized and Organized

The cloud can be extremely effective if company use is centralized. Normally, administration would fall to a company's IT department. Once central responsibility is assigned, the costs associated with the use of cloud resources can be better regulated. Companies need to know what kind of pricing is available and how they can manage it to best advantage. There will be cloud providers to work with and contracts to negotiate. Companies also need to know what kind of data is going into the cloud and whether they are running afoul of any regulations that might exist. The best way to put all of this together is to get centralized and organized. At that point, you will be in charge of cloud control.

Posted in Cloud Computing | Comments Off

Steps to Optimize the Flash Storage in Your Data Center


As flash-based storage becomes a larger and larger part of the marketplace for data storage solutions, new processes that take its unique properties into account are necessary. This is especially true at the enterprise level, where efficiency and optimal system design have a large effect on the performance and utility of data center resources. When you are integrating flash storage into your data center, these key steps will help you to find the right balance, whether your entire system is going to flash or you are adding a few new servers to an existing array.

Step 1: Reassess Your Metrics

Traditionally, data center performance has been measured in IOPS, or Input/Output Operations Per Second. With older, spindle-based hard drives, achieving a good benchmark number meant having a large assortment of drives that were capable of staging data for fast retrieval. Since flash storage has no moving parts and benefits from a different kind of physical architecture, the IOPS numbers go off the chart, and you need to use different metrics to assess your success.

Instead of focusing on IOPS, focus on these other key metrics that you are already assessing:

Bandwidth

Latency

Block size

That way, you are able to get a better picture of your data throughput, allowing you to see if access is streamlined and if your array is giving you the best possible performance.
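These metrics tie together in a simple relationship: throughput (bandwidth) is roughly IOPS multiplied by block size, which is why a raw IOPS number means little on its own. The sketch below works through that arithmetic with illustrative workload numbers.

```python
# Throughput (MB/s) is roughly IOPS x block size; the values below are illustrative.
workloads = [
    # (description,            IOPS,    block size in KB)
    ("small-block OLTP",       200_000,  4),
    ("mixed virtualization",    80_000, 32),
    ("large-block analytics",   10_000, 256),
]

for name, iops, block_kb in workloads:
    throughput_mb_s = iops * block_kb / 1024
    print(f"{name:>24}: {iops:>7} IOPS x {block_kb:>3} KB "
          f"= {throughput_mb_s:,.0f} MB/s")
```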

Step 2: Get Inside the Array

Administrators moving to flash need to understand that read/write cycles will run up more CPU usage under this kind of storage. Partially, this is the cost of doing business at a higher speed; partially, it is due to the nature of flash architecture. Either way, there are a few key changes you can make as a storage administrator to greatly improve your overall system performance:

Add more CPUs to handle the workload.

Check for unaligned write I/Os, especially on an older system where the hardware is unlikely to detect the correct settings automatically (a quick alignment check is sketched below).
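The sketch below is one simple way to spot misaligned partitions, assuming a Linux host where partition start sectors are exposed under /sys/class/block; the partition names and the 1 MiB alignment boundary are examples, not requirements of any particular array.

```python
"""Quick partition-alignment check for flash storage on a Linux host.

Assumes partition start sectors are exposed under /sys/class/block/<part>/start
(reported in 512-byte sectors). The device names below are examples.
"""
from pathlib import Path

SECTOR_BYTES = 512
ALIGNMENT_BYTES = 1024 * 1024   # 1 MiB boundary, a common flash-friendly alignment

def check_alignment(partition: str) -> None:
    start_path = Path(f"/sys/class/block/{partition}/start")
    if not start_path.exists():
        print(f"{partition}: no start sector exposed (not a partition?)")
        return
    start_bytes = int(start_path.read_text()) * SECTOR_BYTES
    aligned = start_bytes % ALIGNMENT_BYTES == 0
    print(f"{partition}: starts at byte {start_bytes:,} -> "
          f"{'aligned' if aligned else 'NOT aligned'} to {ALIGNMENT_BYTES // 1024} KiB")

if __name__ == "__main__":
    for part in ("sda1", "sdb1", "nvme0n1p1"):   # example partition names
        check_alignment(part)
```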

Step 3: Between the Array and the Server

The last step to take is to optimize server function and to ensure you have streamlined the process. If you are having server-wide issues, these steps are most likely to help:

Rearrange your hardware, to ensure you are complying with the rules of affinity.

Read into your operating system’s multi-pathing processes to find solutions.

Utilize hypervisor and in-guest tuning to attain optimum results.

Taking these steps will ensure the best possible performance out of your flash storage.

Posted in Data Center Infrastructure Management | Comments Off

Why Colocation Data Centers Have Issues Scaling Biometric Applications


The first and most general answer to this conundrum is actually fairly simple: because biometric processes are inherently hardware-focused, they do not scale easily, since large networks wind up with a vast array of endpoint machines to coordinate and collect information from. At the same time, biometric security applications are not only mission critical for many colocation data centers, they are often the point of setting them up. There are a few contributing factors to the current state of biometric applications that need to be addressed as part of the move to scalability: a lack of standardized deployments, limited options for integrating access control, and underdeveloped enterprise support.

First: Dealing With Standardized Deployments

Implementation is really the issue when it comes to standardized deployments, and it is partially because there are not many well-documented and consistent approaches. That means that colocation providers looking to streamline their data processing and ensure scalability need to find a plan for replicating the same procedures with the same biometric identifiers and profiles across all the locations where they provide support to companies. This ensures streamlined processing and the ability to reduce the number of customer identities that have to be handled.

Second: Integrating Access Control

The users who work with biometric security systems the most are the security guards themselves, who are tasked with the implementation of biometric screening on the ground. Since their training is typically less technical than that of the IT staff but they form the front line for questions and concerns from users, integrating their access and maintaining the actual biometric systems that feed your access control the data it needs is paramount if the system is going to operate effectively. That means a combination of better training and regular system check-ups by qualified technical personnel to ensure that everything operates efficiently on both ends of the system.

Third: Underdeveloped Enterprise Support

Since biometric security systems are typically developed and sold from a mindset that is preoccupied with providing security and access control in a single location, the systems themselves often fail to anticipate the kind of large-scale deployment needed by enterprise customers. The result is that at the level of the hardware, network integration becomes difficult because it was not anticipated. Before a real path forward for easily scaled biometric solutions is possible, a biometric system supplier will need to step up with hardware that is specifically designed to work with and for those customers with a need to control multiple levels of access in multiple locations while coordinating information across systems.

Posted in Data Center Design | Comments Off

An Overview of Network Function Virtualization


One of the fastest growing new technologies is network function virtualization, also known as NFV. First introduced in 2012, this modern approach to network services has the potential to revolutionize the field. Here’s a quick look at this networking strategy.

The Technology

With network function virtualization, network node functions are virtualized and then linked together. The end result is a communication and networking service that is almost entirely software-based rather than hardware-based. NFV has a wide variety of applications, ranging from digital security to interpersonal communications. Specific use cases include service chaining, group collaboration, voice over LTE, the Internet of Things, and advanced networking solutions.

The Creation

As an evolution of existing technology, NFV is still developing and changing rapidly. Currently, a number of organizations are working to standardize the architecture of these systems, while open source communities are building software to run on them. With so many companies, groups, and standards bodies involved, the technology is expected to be usable end-to-end in the near future. Many forward-looking organizations are paying close attention to these changes.

The Data Center

NFV is closely linked to another developing concept, the software-defined data center (also known as an SDDC). Many industry experts believe that SDDC represents the future of data management, and that NFV technology is a major stepping stone towards this paradigm. This is because SDDC requires a number of factors in order to move a function to the cloud, and NFV provides a way for many of these factors to be effectively realized.

The Benefits

One of the biggest potential advantages of NFV is that its high level of automation offers a great deal of agility. Unlike traditional network upgrades, which may take months to implement, an NFV adjustment can be completed very rapidly. In some cases connections can be made in mere seconds. Additionally, since the hardware used is highly uniform, the cloud can become a useful resource for many different types of applications. This allows organizations to receive the maximum possible benefit from their networks, and helps them adjust in a rapidly changing economy.

The Professionals

Smart IT professionals are keeping an eye on the gradual adoption and development of NFV, since it offers numerous possibilities that could benefit their organizations. In fact, some IT managers are actively influencing the creation of NFV technologies by communicating with vendors and developers about their needs and use cases. Forward-thinking professionals see a great deal of potential in network function virtualization.

Posted in Data Center Design, New Products | Comments Off