
Increasing Connectedness Puts Added Strain on Data Center Support Services

The internet has connected us all, making it possible for people and computers anywhere in the world to communicate. We want to be able to work from home or from our favorite coffee shop, with access to all the same tools we have at our desks at work. We want to play a video game in our living room as part of a team whose other players are scattered around the world, all through the internet. We want to bring our mobile devices to work and use them for both work and play.

These things are all possible and rely on data centers to make them happen. The more connectedness we require, the greater the demands we place on our rapidly aging cloud computing centers and their power and cooling infrastructures.

Changing Needs in the Workplace

More workers are bringing their own mobile devices into work, and employees increasingly want the option to work from home. Workers contribute to the IoT and demand mobility on the job, while companies increasingly use cloud computing to handle big data jobs and, in some cases, even desktop applications. These changes place growing demand on corporate networks and data centers.

Needs Are Rapidly Outgrowing Resources

The more we are connected, the more we realize we can do with our connectivity and the more we demand of our computing resources. We ask for more of our infrastructure daily and are beginning to hit the hard limits of some of the older centers. This will require major redesign projects for existing centers, and many new facilities to be built from the ground up in the next decade.

New Facilities Must Be Designed With the Future in Mind

As new computing centers destined to be nodes in the cloud are developed, they need to be designed with plenty of room for growth. As is always the challenge in the industry responsible for supporting the growth of technology, IT service leaders must anticipate where technology may be headed in the next ten years, and try to provide the infrastructure to support it. A good place to start will be to provide plenty of room for expansion of power supply, and state-of-the-art, energy efficient, adaptable climate control systems for the computer rooms.

The changing demands of the workplace include more personal mobile devices in the workplace, more employees working remotely, and the increasing use of cloud computing for mission-critical applications. These changes all put stress on our aging data centers, which will soon need to be updated to accommodate our growing communication and computing demands.

Posted in data center maintenance, Power Management | Comments Off

How (And Why) IT Facilities Should Reduce Energy Consumption

One of the single biggest users of electricity in a given organization is usually the IT department. Cutting back on this consumption is a high priority for many, but it can be difficult to find a way to do so without cutting back on performance. Here’s a closer look at why IT departments should reduce their energy usage—and how they can do it.

Going Green, Saving Money

Reduced electricity consumption is important for a number of reasons. One major one is simply that it helps out the bottom line; electricity costs money, and massive power consumption can lead to massive bills. Additionally, there is an environmental component, since much of the electricity produced today is taken from non-sustainable sources like fossil fuels. Cutting back on power consumption is good for the planet (and for public relations). But how can IT departments go about reducing their electricity use without compromising performance?

Integrate Operations

One excellent option is to integrate energy management. For many large scale operations, server rooms, data centers, and IT facilities are managed separately from the rest of the building. In many cases the IT manager, rather than facilities management, is responsible for things such as cooling, security systems, and power supplies. An integrated energy management system, however, gives the site manager control of these facilities along with the rest of the complex. This can lead to more unified and efficient management of processes, electricity, and security. Working in tandem with IT management, facilities managers can plan for growth and maximize existing resources.

Use of Space

Data centers can also ensure appropriate energy use with careful management of their available space. Often, businesses will build their IT centers with an eye towards the future, making them bigger than necessary in order to leave room to grow. While this is solid long-term planning, it means that parts of the data center may sit unused for years, requiring maintenance, lighting, cooling, and other expensive services with little to no benefit in exchange. Using this space for something else, or using prefabricated modules in order to allow for future growth patterns, is a good way to reduce this waste.

DCIM Software

Another important step that IT facilities can take is to implement the use of data center infrastructure management software, or DCIM. These programs collect data and monitor metrics such as resource usage, operational status, and electrical efficiency; they can even pinpoint problem spots (such as overloaded servers or inadequate cooling systems) and guide management towards more efficient solutions. Knowing what’s going on in a server facility makes it easier to identify problems and solutions.
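To make the efficiency metrics concrete, the calculation DCIM tools perform most often is Power Usage Effectiveness (PUE): total facility power divided by the power reaching the IT equipment. A minimal sketch, with illustrative readings:

```python
# Minimal sketch: computing Power Usage Effectiveness (PUE) from
# facility power readings, the kind of metric DCIM tools track.
# PUE = total facility power / IT equipment power; 1.0 is the ideal.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the PUE ratio; values near 1.0 indicate less overhead."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative readings: 500 kW total draw, 320 kW consumed by IT gear.
ratio = pue(500.0, 320.0)
print(f"PUE = {ratio:.2f}")  # the other 180 kW goes to cooling, lighting, etc.
```

Tracking this ratio over time shows whether changes to cooling or space usage are actually reducing overhead.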

While electricity usage can be high for IT centers, there are ways to reduce it. Smart planning can benefit both the planet and your organization’s budget.

Posted in Data Center Design, Facility Maintenance | Comments Off

Cooling Equipment Maintenance Service Contract is Critical to Data Center Uptime

Computer room air conditioning (CRAC) systems are critical to maintaining computing equipment efficiency and longevity, so maintaining your data center’s CRAC system is critical as well. A recent focus group led by Schneider Electric showed that data center customers consider a CRAC system maintenance plan to be even more critical than an uninterruptible power supply (UPS) maintenance plan: cooling systems have more moving parts that wear out over time, and are therefore more likely to suffer mechanical failures that need immediate expert attention.

There are strict industry standards governing the temperature and humidity to be maintained inside data center computer rooms to conserve power and prevent electrostatic discharge that can damage critical components and start fires. According to data center standards, the temperature in the computer room should be kept between 64 and 81° Fahrenheit, with a maximum dew point no higher than 59° F to prevent electrostatic discharge.
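A monitoring script can flag readings that drift outside that envelope. A minimal sketch using the ranges cited above (sensor names and values are illustrative):

```python
# Sketch of a compliance check against the computer-room ranges cited
# above: dry-bulb temperature between 64 and 81 °F and dew point no
# higher than 59 °F. Sensor names and readings are illustrative.

def in_range(temp_f: float, dew_point_f: float) -> bool:
    """True if a reading falls inside the cited environmental envelope."""
    return 64.0 <= temp_f <= 81.0 and dew_point_f <= 59.0

readings = [
    ("cold-aisle-1", 68.5, 52.0),
    ("cold-aisle-2", 83.1, 50.5),   # too warm
    ("hot-aisle-1", 79.0, 61.2),    # dew point too high
]

for sensor, temp, dew in readings:
    status = "OK" if in_range(temp, dew) else "ALERT"
    print(f"{sensor}: {temp} F / dew {dew} F -> {status}")
```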

Regular Service Minimizes Downtime and Maintains Equipment Warranties

Performing routine maintenance tasks on schedule extends the life of your AC equipment: removing dust buildup on components and in air filters, regularly spot-checking and adjusting temperature and humidity levels throughout the space, and proactively identifying and replacing parts that are near end of life before they break down unexpectedly and cause outages. This prevents breaks in service and loss of productivity, and following the maintenance schedule may be required to keep your equipment warranty valid.

Service Contract Means You Have an Emergency Plan in Place

In the event of a component failure, if you have a service contract in place you will have an immediate response to the problem from a trained professional who will diagnose the problem and immediately implement a solution, getting your data center back online in short order. This is critical for getting your customers’ applications up and running again as soon as possible and maintaining a good reputation as a reliable data center resource.

CRAC Professionals Can Help Optimize Your Cooling Equipment

Your power bill is an unglamorous but very real cost of doing business for an enterprise-class data center, and better managing that cost can improve your business’ bottom line. Professional CRAC service people can help you optimize the location of your cooling units and air humidifiers to provide complete and even climate control throughout the room. They can also help you set up a state-of-the-art electronic monitoring system to help you manage conditions in the room.

Simply put, having a service plan in place for your data center’s cooling equipment is the best insurance you can have against extended outages and costly down time.

Posted in Data Center Design, data center equipment | Comments Off

Hyperconverged Infrastructure – Industry Analyst Insights

When it comes to addressing customer demands, many technology companies are turning to Hyperconverged Infrastructure (HCI). Somewhat primitive at first, these technologies have evolved over the past few years, allowing frustrated customers to move away from traditional infrastructure and the operational overhead of supporting it. Over time, a plethora of IT companies have made the switch and now consider HCI a reasonable alternative to other approaches. If you have ever wondered about the benefits and finer points of HCI, the following article provides in-depth information about this relatively new, yet extremely powerful, infrastructure technology.

 

Increased Efficiency

Hyperconverged platforms are built so that users can combine storage, compute, and networking together. This can be done either virtually or physically, and the technology aims to make this type of networking extremely easy to use. HCI can also be delivered as consumable, web-scale building blocks, which lets infrastructure and operations leaders do their jobs much more quickly. Hyperconverged systems currently provide infrastructure leaders with more savings and efficiencies than ever.

The Traditional Marketing Approach

In recent times, hyperconverged infrastructure has been marketed largely by small-scale startups across the country. These startups champion HCI’s ability to help vendors build a complete product portfolio. HCI can also help vendors provide a wider variety of products and services to their customers. Studies have shown that HCI can work equally well for companies and organizations of all sizes.

Benefits of HCIs

First-generation hyperconverged products have a reputation for taking an appliance approach, which requires systems to be implemented on a single server that is then scaled by stacking identical units on top of each other. Newer HCI systems allow users to scale storage and compute resources separately.

In terms of systems management, HCI strives to reduce the amount of work involved. Instead of establishing individual islands of technology, data centers need a product portfolio that blends well with a broader management architecture. HCI allows system managers to do exactly that, and the systems can be managed through a single, policy-driven console to which each company can add its own policies.

Newer HCI systems also increase effective storage capacity through deduplication and compression.
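The storage-saving idea is straightforward: store each unique block once, keyed by a content hash, and compress what remains. A toy sketch of that deduplicate-then-compress flow (chunk size and data are illustrative, not how any particular HCI product implements it):

```python
# Sketch of block-level deduplication plus compression: store each
# unique chunk once, keyed by its content hash, and keep only
# references for repeats. Chunk size and data are illustrative.
import hashlib
import zlib

CHUNK = 4  # toy chunk size; real systems use kilobyte-scale chunks

def store(data: bytes):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    pool = {}          # content hash -> compressed unique chunk
    refs = []          # ordered list of hashes reconstructing the data
    for c in chunks:
        h = hashlib.sha256(c).hexdigest()
        if h not in pool:
            pool[h] = zlib.compress(c)
        refs.append(h)
    return pool, refs

pool, refs = store(b"abcdabcdabcdxyz!")
print(f"{len(refs)} chunks referenced, {len(pool)} stored uniquely")
```

Three of the four chunks are identical, so only two unique chunks hit storage; the references reconstruct the original data losslessly.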

The Future of Networking

Hyperconverged infrastructure is definitely a technological game-changer, but its true power lies in the fact that customers across the country are finding it far easier to store data and network. As time progresses, this all-in-one solution will continue to grow in popularity, especially through the growing number of partnerships between technology companies and various vendors. Unlike traditional storage methods, HCI will undoubtedly alter the networking landscape for years to come.

 

Posted in Hyper Converged Infrastructure | Comments Off

Is Hyper-Converged IT Ready to Deliver What You Need?

The trend in technology these days is to migrate all related applications to a single source. This seems intuitive: everyone would love to jump back and forth between system settings with the simple push of a button routed to one piece of equipment, rather than juggling the many connections shared between the different components they’re stuck dealing with now. You may be seeing this trend manifest itself in your own home. Today, smart TVs come equipped with streaming video software. Your kid’s gaming console now supports online applications that let you shop, watch TV programming, and even browse the internet. Yet for all these advances, you’re still swapping your cable remote for your TV remote or your console controller.

In the IT world, converged technology is closely comparable to what you likely have in your home right now: multiple controls managing access to multiple component systems working in concert to give you what you need. The single-source solution is what tech professionals are referring to when they talk about hyper-converged IT. It represents the proverbial magic box, offering access to every application you need without having to be supported by ancillary components.

Waiting for Practicality to Catch Up with Potential

The draw of a hyper-converged box is readily apparent: it offers a more effective yet less expensive solution for data access and management. Yet while the theory of improved IT via a single source presents unending potential, the actual practice of combining several component systems into a single physical piece of equipment presents its own unique challenges. While there are indeed hyper-converged systems on the market, developers are still searching for cost-effective, easily implemented solutions to the design challenges the technology presents.

While you may understand the need for such solutions in order to drive the cost of equipment down, you’re also in the precarious position of needing technology that you can rely on today. When setting up your IT, you typically consider three things:

What’s the most up-to-date technology available?

What system is the most reliable?

Which is most cost-effective?

The emphasis on optimal performance in the here and now often requires that you go with the technology that offers the highest level of performance across all three areas, rather than focusing on just one. In the case of converged vs. hyper-converged IT, the known reliability and performance of converged systems may ultimately be your best choice while you wait for hyper-converged technology to become better established.

The benefits of converged IT may only make you crave all the more the added benefit that a hyper-converged box could theoretically offer. At the end of the day, however, your business needs to rely on performance, not potential. While hyper-converged IT systems are beginning to make inroads into data centers across the world, the converged technology you currently rely on will likely remain your best option until the issues that can plague such systems are fully resolved.

Posted in Hyper Converged Infrastructure | Comments Off

What Can Be Done to Cut Down on Data Center Outages?

As more and more businesses depend on their data centers for continued operation, outages can cost thousands of dollars for every minute a center is offline. Even though the number and duration of power outages has decreased significantly in recent years, businesses continue to look for ways to minimize, if not completely eliminate, losses caused by forced downtime. While some circumstances are truly beyond anyone’s control and outages can happen despite your best efforts, their number can indeed be reduced by taking preventative measures.

In many cases of accidental outage, the cause was traced back to human error. Any company working on minimizing downtime is well advised to examine the main types of human error involved and take steps to minimize such occurrences. Important measures in error prevention include implementing design changes that make it easier to catch and stop mistakes before they result in power failure.

One type of error occurs when PDU cables are disconnected by accident. This can happen when the cables are already loose and an inadvertent touch disconnects them completely. One design feature that helps cables stay connected securely is a locking power cord that cannot accidentally come loose and disconnect. Investing in upgrading your PDU cables can prevent unwanted downtime and end up saving tens of thousands of dollars.

Color-coding PDU components and cables also helps workers keep track of all power feeds and avoid disconnecting the wrong cable. This is a simple change that will help prevent outages without involving major disruption and equipment replacement.

Other precautions concern planning the configuration of data center equipment in a way that will distribute power loads efficiently, instead of running the risk of overloading a particular line and experiencing an outage. One way to cut down on power inefficiencies and uneven distribution is to supply the same higher voltage power to each rack. In the long run, this approach reduces power supply issues.
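The balancing idea can be sketched as a simple greedy assignment: place each load on the least-loaded feed, and refuse any placement that would exceed a line’s capacity. The feeds, capacities, and rack draws below are illustrative:

```python
# Sketch of the load-distribution idea above: assign equipment to the
# least-loaded power feed so no single line approaches its limit.
# Capacities and draws (in amps) are illustrative.

def assign(loads, feeds, capacity):
    """Greedily place each load, largest first, on the least-loaded feed."""
    usage = {f: 0.0 for f in feeds}
    placement = {}
    for name, amps in sorted(loads.items(), key=lambda kv: -kv[1]):
        feed = min(usage, key=usage.get)
        if usage[feed] + amps > capacity:
            raise RuntimeError(f"no feed can take {name}")
        usage[feed] += amps
        placement[name] = feed
    return placement, usage

racks = {"rack-a": 12.0, "rack-b": 9.5, "rack-c": 7.0, "rack-d": 11.0}
placement, usage = assign(racks, ["feed-1", "feed-2"], capacity=30.0)
print(placement)
print(usage)
```

Placing the largest draws first keeps the feeds close to evenly loaded, so no single line is pushed toward its limit while another sits nearly idle.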

Overheating equipment is another cause of power outages. Data center equipment should be arranged efficiently and provided with adequate cooling mechanisms that can cool down the facility evenly. Installing and monitoring cooling equipment can help avert a significant portion of outages.

Companies concerned with losing large amounts of money to forced downtime in their data centers can take action to substantially cut down on the number of outages. While not all circumstances are controllable and outages will still happen, smart design and equipment choices can go a long way towards minimizing outages that happen due to human error or equipment malfunction.

Posted in Data Center Design, data center maintenance | Comments Off

The Advantages of Open Source for Networked Data Centers

From smart home applications to vast online commercial empires, today’s leading industries are increasingly dependent on extensive data networks. Unlike previous years, when data was generated and stored using proprietary software developed for specific hardware, today’s data must not only be stored but must also be capable of connecting and interacting with other data. Speed is also essential.

To keep up with rapidly evolving networking and data technologies, the ideal data center would use hardware that is easily upgradable and compatible with major, if not all available software. It would also have software that is not hardware-dependent and can evolve quickly and intuitively. The reality, however, is that many centers are struggling with legacy programs and outdated hardware that are inefficient, clunky and increasingly difficult to maintain.

One effective long-term solution being implemented by major data centers and network providers is promoting open source networking. The advantage of open source development is the resulting homogeneity, and therefore compatibility, of networks worldwide. Software and hardware produced as open source projects form a modular virtual construct in which each component can be used with any other component. Discarding the older practice of having one team design a system from top to bottom, this approach, known as disaggregation, has each team focus on producing a specific component, whether a hardware part or a piece of programming. This heightened focus on details results in a more streamlined process and higher quality components.

Open source design and production of hardware also offers the additional benefit of reducing manufacturing costs, and can obviate the need for some parts entirely. Increased modularization of data networking systems facilitates moving functions traditionally accomplished through hardware, such as cabling, to wireless networking and streaming. The more compatibility there is between various systems, the easier it is for those systems to communicate and interact. Open source networking is also conducive to creating interfaces that are intuitive for the average consumer.

The one consideration holding some companies back from embracing open source is the potential loss of competitive advantage. “If we use the same technology and share our innovations with competitors, how can we convince consumers to choose our product over theirs?” they reason. When it comes to networked applications, however, consumers are now less likely to buy products that cannot interact with other systems. In addition, the ability to offer higher speeds, better quality and more flexibility to upgrade is seen as offering a greater advantage than any that might be lost through open source sharing.

The use of open source development for networked data centers is an innovative and effective solution to the problems centers face in keeping up with the rapid speed of today’s technological evolution. As networking of products continues to increase, open source helps to achieve greater compatibility and flexibility.

Posted in Data Center Design, Data Center Infrastructure Management | Comments Off

The Importance of Tape in an Increasingly Data-Rich World

It isn’t talked about much, but tape is more important in today’s world of exploding data collection and storage than it was decades ago. Enterprise tape can now store 10 TB per cartridge at 360 MB/sec. Tape libraries have reached an astounding capacity of more than one exabyte. Magnetic particle technology allows an areal data density of 123 billion bits per square inch, which means an LTO cartridge can store as much as 220 TB of uncompressed data. The need and demand for tape has been rising for several reasons:

 

Increased regulation and improvements in technology have raised the demand to store data indefinitely. Tape is crucial to data backup and disaster recovery, and plays a significant role in cloud storage.

Increased demand for more efficiency and greater capacity in data storage is driving the manufacture of tape media, drives, and management software.

Tape is economical compared to other storage methods, and is becoming the preferred medium for archiving, especially active archiving, where data must be searched, stored, and retrieved easily.
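A bit of back-of-the-envelope arithmetic with the figures cited above shows the scale involved:

```python
# Back-of-the-envelope math from the figures cited above: a 10 TB
# cartridge, a 360 MB/sec drive, and a library of more than one exabyte.

TB = 10**12          # decimal terabyte, as storage vendors count
CARTRIDGE_TB = 10
DRIVE_MB_PER_SEC = 360

# How long does streaming a full cartridge take at the rated speed?
seconds = (CARTRIDGE_TB * TB) / (DRIVE_MB_PER_SEC * 10**6)
print(f"Full cartridge read: ~{seconds / 3600:.1f} hours")

# How many cartridges does a one-exabyte library hold at 10 TB each?
cartridges = 10**18 // (CARTRIDGE_TB * TB)
print(f"One-exabyte library: ~{cartridges:,} cartridges")
```

Reading a single cartridge end to end takes most of a working day, which is why tape suits archives rather than random-access workloads, while an exabyte library runs to roughly a hundred thousand cartridges.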

New Uses for Tape in a Changing Tech Landscape

The space-saving benefits of tape are clear from its use with NAS. When data comes into a NAS disk cache, it is written to tape; once the cache is full, the system evicts the oldest files and uses metadata to link to the data on tape. Searches for files work just as before, only now, when a read request occurs, the file is moved back from tape to the disk cache. Users will not even realize that a different storage process is being used, which avoids costly user retraining.
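That disk-cache-plus-tape flow can be sketched in a few lines: writes land in a bounded cache, the oldest file is migrated to tape when the cache fills, and a read of a migrated file recalls it transparently. All names here are illustrative; real tape-backed NAS systems are far more involved:

```python
# Sketch of the NAS disk-cache-plus-tape flow described above: writes
# land in a bounded disk cache; when the cache is full the oldest file
# is migrated to tape, leaving only metadata behind; a read of a
# migrated file recalls it from tape transparently.
from collections import OrderedDict

class TieredStore:
    def __init__(self, cache_slots: int):
        self.cache = OrderedDict()   # file name -> data (oldest first)
        self.tape = {}               # file name -> data migrated to tape
        self.slots = cache_slots

    def write(self, name: str, data: bytes):
        self.cache[name] = data
        self.cache.move_to_end(name)
        while len(self.cache) > self.slots:
            old, blob = self.cache.popitem(last=False)  # evict oldest
            self.tape[old] = blob                       # now lives on tape

    def read(self, name: str) -> bytes:
        if name not in self.cache:                      # recall from tape
            self.write(name, self.tape.pop(name))
        return self.cache[name]

store = TieredStore(cache_slots=2)
store.write("a.dat", b"1"); store.write("b.dat", b"2"); store.write("c.dat", b"3")
print(sorted(store.tape))      # the oldest file was migrated to tape
print(store.read("a.dat"))     # transparent recall back into the cache
```

The caller never distinguishes a cache hit from a tape recall, which is the property the post describes: the storage tier changes, the user experience does not.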

IT managers are looking for new ways to use tape to take advantage of the efficiency and operational value of its use. Venture capital is increasingly being funneled into this industry to discover new methods of use and innovative designs. This activity is likely to propel tape from just a medium of data backup to a primary form of storage for unprecedented amounts of data.

Analysts now believe that data storage will amount to 5,200 GB per person worldwide by the year 2020. As tape is cheaper and more energy efficient, this form of storage is likely to see an exponential increase in use in the years to come. As new tape designs are researched and developed, this growth may even reach the personal computing market.

Posted in Cloud Computing | Comments Off

Spot Data Center Problems Before They Happen

Given the importance of immediate access to information in today’s business world, being able to rely on your data center for round-the-clock availability is vital. Unfortunately, completely uninterrupted support is impossible. Far too many companies adopt a “wait-and-see” approach when dealing with problems that may arise in their data centers. If this is the method you choose to follow, you’re essentially guaranteeing that you’ll face costly periods of downtime in the future.

Condition-Based Maintenance Programs

Rather than reacting to data center component failures after they happen, why not monitor and address them before they even manifest? Impossible? Not if you implement a proactive approach to your data center management. Along with delivering impressive support and backup to end users, the technology supporting today’s data centers also offers a degree of self-diagnostic capability. Condition-based maintenance (CBM) programs now allow you to receive real-time alerts of potential problems with your data center hardware.

Factors such as temperature fluctuations and the amount of energy drawn during peak load times can be easily measured and used to gauge the toll they take on your equipment. As the signs of wear and tear begin to show, your CBM program lets you know not only that a problem may be pending, but also where it is localized, so your IT team knows where to look. A quick replacement of the worn components then ensures that eventual failures are avoided.
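The alerting half of a CBM program reduces to comparing live readings against per-component thresholds and naming the unit that is drifting. A minimal sketch (components, metrics, and limits are all illustrative):

```python
# Sketch of condition-based alerting: compare live readings against
# per-metric thresholds and flag the specific unit that is drifting,
# so technicians know where to look. All values are illustrative.

THRESHOLDS = {
    "intake_temp_f": 81.0,    # alert above this temperature
    "fan_draw_amps": 4.5,     # worn bearings often raise current draw
}

def check(component: str, readings: dict) -> list:
    """Return human-readable alerts for any metric over its threshold."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = readings.get(metric)
        if value is not None and value > limit:
            alerts.append(f"{component}: {metric}={value} exceeds {limit}")
    return alerts

alerts = check("crac-unit-3", {"intake_temp_f": 84.2, "fan_draw_amps": 4.1})
for a in alerts:
    print(a)
```

The key property is localization: the alert names the specific component and metric, not just the fact that something somewhere is wrong.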

Other Preventative Maintenance Methods

Yet while CBM programs are changing the way data centers are maintained, older centers that aren’t supported by such technology require increased effort on your part to avoid problems. You needn’t worry, though; a few simple, intuitive practices can be just as effective for preventative maintenance. These include:

Prioritizing your applications: Identify those applications that are most critical to your day-to-day operations, and then pinpoint the data center components that support them. Create a preventative care plan to ensure that those components are routinely inspected and serviced. This not only prevents downtime, but also allows you to optimize your maintenance budget.

Maintain detailed information on all your equipment: Learning the unique requirements of each of your data center components helps you know exactly what sort of maintenance is required and at what intervals. This allows you to create a care plan customized to meet the needs of each of your component systems.

Take advantage of manufacturer service plans: Equipment manufacturers will also typically offer service plans with their equipment. Purchasing them allows you the advantage of having an expert evaluation of your equipment done annually. This takes much of the guesswork out of preventative maintenance.

There’s no avoiding the fact that your data center components have a shelf life. Yet that doesn’t mean that you have to accept downtime as an inevitability. By adopting a proactive approach to your data center management, you can reduce the risks of experiencing downtime by dealing with small problems before they have the chance to become larger issues.

Posted in data center equipment, data center maintenance | Comments Off

The Data Center Has Changed Forever

Over the past few decades, virtually every aspect of the data center has changed. This massive shift in architecture is largely fueled by the applications that run in these centers. The previous standard of monolithic packages has largely been replaced by much smaller microservices, and the teams of professionals who manage and monitor these services are becoming increasingly diversified as well. Apps are no longer limited to fixed locations, but can now be easily distributed across a variety of clouds and data centers. While these changes have been exceptionally helpful to businesses and users, their rapid pace has also increased security threats.

Combating Security Threats

Security threats have the potential to undermine any data center, but new technologies are emerging in order to combat these threats and improve the user experience. This has led to the development of more comprehensive architecture at nearly all levels of the data center stack. These architectural elements include policy-based automation and management, hybrid cloud orchestration, and a multitude of infrastructure resources.

New Technologies

One of these new and increasingly popular resources is the HyperFlex™ system. It utilizes a unique hyperconverged infrastructure, which allows it to greatly simplify policy-based automation, storage, and computation.

Hybrid cloud orchestration offers a comprehensive solution for handling both on-premise and hybrid cloud workloads, combining Cisco ACI and UCS platforms to optimize and automate hybrid cloud provisioning from a single pane of glass.

Scale, security, and performance have also increased with the move beyond the original Nexus 9000 family of switches, giving customers a massive advantage over competing technologies.

The changes listed above are part of a trend known as “disruptive innovation.” Disruptive innovation has the potential to fuel growth and creation simultaneously.

The Origin and the Future of Disruptive Innovation

Disruptive innovation originated with the emergence of convergence over IP networks. This eventually led to the establishment of new and improved benchmarks for resource use and cost savings. As time progressed, the Application Economy emerged, blending the scale, programmability, and security of the older networks with newer software-based solutions. This transition allows companies and individuals to “future-proof” their data centers using real-time analytics and cloud-scale capacity.

The Value of Partnerships

In short, this major jump in technology is largely owed to partnerships between various companies and software developers. These partnerships allow companies to craft solutions that define the future and improve upon past technological models. As time passes, the world can expect to see increasing diversification among data center technologies and the workers who create and manage them.

Posted in Data Center Design | Comments Off