An Overview of Network Function Virtualization

One of the fastest-growing new technologies is network function virtualization, also known as NFV. First introduced in 2012, this modern approach to network services has the potential to revolutionize the field. Here's a quick look at this networking strategy.

The Technology

In network function virtualization, network node functions are virtualized and then linked together. The end result is a communication and networking service that is almost entirely software-based rather than hardware-based. NFV has a wide variety of applications, ranging from digital security to interpersonal communications. Specific use cases include service chaining, group collaboration, voice over LTE technology, the Internet of Things, and advanced networking solutions.
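
To make the idea concrete, here is a toy Python sketch of service chaining: a packet passes through an ordered list of software-based network functions. The function names, rules, and addresses are invented for illustration and are not part of any real NFV platform.

    # A minimal, illustrative model of NFV service chaining: traffic flows
    # through an ordered list of software-based network functions. All
    # names and addresses here are hypothetical.

    def firewall(packet):
        # Drop packets from a blocked source (toy rule).
        return None if packet.get("src") == "10.0.0.66" else packet

    def nat(packet):
        # Rewrite the source address, as a NAT function would.
        packet["src"] = "203.0.113.1"
        return packet

    def monitor(packet):
        # Record the packet before passing it along unchanged.
        print("observed:", packet)
        return packet

    def run_chain(packet, chain):
        # Pass a packet through each virtualized function in order.
        for vnf in chain:
            packet = vnf(packet)
            if packet is None:  # a function in the chain dropped the packet
                return None
        return packet

    service_chain = [firewall, nat, monitor]
    run_chain({"src": "192.0.2.10", "dst": "198.51.100.5"}, service_chain)

Because the chain is just software, reordering it or inserting a new function is a configuration change rather than a hardware change, which is where much of NFV's agility comes from.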

The Creation

As an evolution of existing technology, NFV is still developing and changing rapidly. Currently, a number of different organizations are trying to standardize the architecture of these systems, while open source communities are trying to create software that will run on them. Since a variety of separate companies, groups, and standards bodies are all working on NFV, the technology is expected to be usable end-to-end in the near future. Many forward-looking organizations are paying close attention to these changes.

The Data Center

NFV is closely linked to another developing concept, the software-defined data center (also known as an SDDC). Many industry experts believe that the SDDC represents the future of data management, and that NFV technology is a major stepping stone towards this paradigm. This is because an SDDC requires a number of capabilities in order to move a function to the cloud, and NFV provides a way for many of those capabilities to be realized effectively.

The Benefits

One of the biggest potential advantages of NFV is that its high level of automation offers a great deal of agility. Unlike traditional network upgrades, which may take months to implement, an NFV adjustment can be completed very rapidly. In some cases connections can be made in mere seconds. Additionally, since the hardware used is highly uniform, the cloud can become a useful resource for many different types of applications. This allows organizations to receive the maximum possible benefit from their networks, and helps them adjust in a rapidly changing economy.

The Professionals

Smart IT professionals are keeping an eye on the gradual adoption and development of NFV, since it offers numerous possibilities that could benefit their organizations. In fact, some IT managers are actively influencing the creation of NFV technologies by communicating with vendors and developers about their needs and use cases. Forward-thinking professionals see a great deal of potential in network function virtualization.

Posted in Data Center Design, New Products | Comments Off

The Digital Revolution Means Big Changes for IT Service Management

With the modern-day shift towards digitization of data and processes, IT departments worldwide are being called upon to rapidly grow their service catalogs to include all the new technologies their users may benefit from. IT managers must stay abreast of cutting-edge technologies applicable to their fields, and to truly be industry leaders they should proactively seek out new applications that may give their company a competitive advantage.

The three main areas of growth for IT service management (ITSM) are service catalog updating, legacy IT service modernization, and discovery of new technologies relevant to the field. This is plenty to keep IT professionals busy and provides an interesting challenge to even the most forward-thinking IT managers.

Service Catalog Updates

Many corporations have traditionally used service catalogs that relied upon local desktop applications and data storage, with automated regular data back-up systems in place. Many ITSM teams are now moving data storage to the cloud for several reasons: it is easier to regularly back up data stored in a centralized location, and it is more convenient to maintain and upgrade applications running in a single environment, where each user opens an instance of the same application, than to manage a separate application and license on every desktop.
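
As a rough illustration of the centralized approach, the sketch below pushes a backup to a single cloud bucket using the AWS SDK for Python (boto3). The bucket name and file path are placeholders, and configured AWS credentials are assumed.

    # A minimal sketch of centralized cloud backup: instead of each desktop
    # keeping its own backups, data is pushed nightly to one location.
    # The bucket name and path below are placeholders.
    import datetime
    import boto3

    s3 = boto3.client("s3")

    def backup_file(local_path, bucket="example-itsm-backups"):
        # Key each object by date so every night's backup is kept separately.
        stamp = datetime.date.today().isoformat()
        key = f"{stamp}/{local_path.lstrip('/')}"
        s3.upload_file(local_path, bucket, key)
        return key

    backup_file("/var/data/service_catalog.db")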

Legacy IT Service Modernization

Users become familiar with and dependent upon their favorite applications, and many resist change in this area. It is often better to prioritize user productivity and let that principle guide the progress of modernization. Some workflows will immediately benefit from application replacement, and users can usually recognize that fact. In instances where replacement will have long-term benefits but short-term drawbacks, such as requiring users to learn a new software suite, application replacement can be a hard sell.

Scouting New Technologies

ITSMs must be continually on the lookout for new applications and other industry trends that can give their company an edge over the competition. The goal is to provide maximum IT agility with minimum disruption to the company's operations. ITSMs should attend industry conferences and keep abreast of industry publications, but they would also do well to pay attention to the leading publications in other fields, which may offer insight into where IT may turn next and provide an edge over other companies.

By working to update the IT service catalog, continually striving to modernize legacy applications, and scouting new technologies, today’s ITSMs can contribute to the building of a cutting-edge business through restless innovation.

Posted in data center maintenance, Facility Maintenance | Comments Off

Strategic Management of Computing and Storage Use in a Hybrid IT Environment

Cloud computing is the practice of using an off-site network of computers or servers to process and store data. It is often seen as the solution to all local computing and IT challenges as companies move entire workflows from the ground to the cloud. But the cloud is not appropriate for all applications and server needs, and most business users will find that a hybrid solution is right for them, using both local machines and the cloud to store and manipulate their data. This hybrid approach to IT requires careful planning of which applications to run in each environment. Here are a few points to consider before moving an on-site application to the cloud.

Monitoring

You will need a global view of your applications and data in order to determine which applications and data streams belong in the cloud and which would be best on the ground. As each company's setup is unique, so is its ideal monitoring system. IT management will need to track computing time and data storage needs for each application in each environment. Applications with high computing time and low data storage needs belong in-house rather than in the cloud, and vice versa.
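
The rule of thumb above is easy to capture in code. The sketch below is purely illustrative: the thresholds and application profiles are made up, and a real monitoring system would feed in measured data.

    # Toy placement rule: compare each application's compute time and
    # storage needs against thresholds and suggest an environment.
    COMPUTE_THRESHOLD_HRS = 100   # monthly compute hours (placeholder)
    STORAGE_THRESHOLD_GB = 500    # storage footprint (placeholder)

    def suggest_environment(compute_hours, storage_gb):
        if compute_hours >= COMPUTE_THRESHOLD_HRS and storage_gb < STORAGE_THRESHOLD_GB:
            return "on-premises"   # compute-heavy, storage-light
        if compute_hours < COMPUTE_THRESHOLD_HRS and storage_gb >= STORAGE_THRESHOLD_GB:
            return "cloud"         # storage-heavy, compute-light
        return "review manually"   # mixed profile needs a closer look

    apps = {"render-farm": (900, 200), "archive-portal": (20, 4000)}
    for name, (hours, gb) in apps.items():
        print(name, "->", suggest_environment(hours, gb))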

Security

Data security is a concern for most corporations, and it should be considered both when choosing a cloud computing host and when deciding where each of your applications will run. Cloud computing facilities normally make their security policies clear, but corporations are ultimately responsible for the security of their own data, and if the cloud provider is hacked, the corporation may still retain some responsibility for the breach. So, depending on the level of security needed for your applications and their associated data, the cloud may not be the ideal choice.

Accessibility

Sending data to and retrieving it from applications in the cloud takes time, and communicating with the cloud can reduce users' productivity depending on how often they need access to different applications. Frequently used applications may be better off on local machines for this reason. There is also the issue of outages with your cloud computing service. You have no control over this variable, and even with a well-maintained provider, outages can last several hours, causing major headaches depending on your deliverables and what you use the cloud for.
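
To put numbers on that latency cost, a quick measurement helps. The sketch below times a simple request to a placeholder endpoint using only the Python standard library; a real comparison would use your actual cloud application's address.

    # Measure average round-trip time to a (placeholder) cloud endpoint.
    import time
    import urllib.request

    def round_trip_ms(url, attempts=5):
        timings = []
        for _ in range(attempts):
            start = time.perf_counter()
            urllib.request.urlopen(url, timeout=10).read(1)
            timings.append((time.perf_counter() - start) * 1000)
        return sum(timings) / len(timings)

    print("avg round trip:", round_trip_ms("https://example.com/"), "ms")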

Keeping monitoring, security, and accessibility in mind while determining what applications belong on the ground and in the cloud will serve IT professionals well, and allow them to manage and upgrade their hybrid IT environment with minimal outages or loss of productivity.

Posted in Cloud Computing | Comments Off

Preparing For Data Center Emergencies

One of the most important aspects of any data center is its reliability. In order to keep everything running smoothly, it’s imperative that maintenance teams and operations managers prepare for potential emergencies, both taking steps to prevent them and making plans for when they occur. Here are a few tips to help you prepare for unexpected problems.

Use the Best

There are many reasons that a data center might run into trouble. Some of the highest-risk failure and fault scenarios include generator failure, loss of UPS (uninterruptible power supply) backups, or the shutdown of major pieces of equipment such as chillers. One important step that facilities can take right from the start is to use only the highest-quality, most reliable equipment, in order to reduce the chances of such problems occurring in the first place.

Make Solid Plans

Another major precaution is the development of emergency operating procedures (EOPs). These are step-by-step guidelines that help employees on the scene isolate and resolve a problem both rapidly and safely. Where appropriate, they should also include escalation procedures that ensure workers with the right skill sets can be brought in when necessary.

Train Workers

Once EOPs are in place, it's important to ensure that employees are properly trained on them. This includes not only new employees but also veterans whose skills need refreshing. Emergency procedures should be regularly reviewed, and drills should be conducted when possible. For maximum efficacy, drills should be as realistic and detailed as possible.

Monitor Carefully

It’s also important for data centers to be able to quickly detect when incidents occur. Unlike in other industries, an emergency situation might not be immediately obvious; a solid monitoring system must be in place to help determine when problems are occurring and how to resolve them quickly. In a related vein, plans should be in place so that key stakeholders, management personnel, and potentially affected parties are notified at the correct time that something has gone wrong.
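
A monitoring-and-notification loop can be as simple as the sketch below. The sensor read and the alert delivery are stubbed out; in a real facility they would hook into the site's monitoring and paging systems, and the threshold is a placeholder.

    # Bare-bones incident detection: poll a sensor and notify stakeholders
    # when a reading goes out of range. Stubs stand in for real systems.
    import time

    TEMP_LIMIT_F = 81.0   # alert threshold (placeholder)

    def read_inlet_temp():
        return 72.5   # stub: replace with a real sensor query

    def notify(recipients, message):
        print("ALERT to", recipients, "-", message)   # stub: email/SMS/pager

    def monitor_loop(poll_seconds=60):
        while True:
            temp = read_inlet_temp()
            if temp > TEMP_LIMIT_F:
                notify(["ops-oncall", "facility-manager"],
                       f"inlet temperature {temp}F exceeds {TEMP_LIMIT_F}F")
            time.sleep(poll_seconds)

    # monitor_loop()  # runs forever in a real deployment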

Learn From Mistakes

Finally, each facility should ensure that they have procedures in place for reporting incidents and analyzing what went wrong. Without a full understanding of the root cause, complicating factors, and eventual solutions, it’s impossible to prevent a specific incident from occurring again. A failure analysis should be performed after an emergency in order to prevent future interruptions.

Data center managers, operators, and technicians should be well prepared for a wide range of emergencies—after all, careful preparation can both prevent problems and mitigate damage. With solid planning, there's no need to panic when things go wrong.

Posted in Data Center Design, data center maintenance, Data Center Security | Comments Off

Increasing Rate of Data Production Prompts Google to Rethink Data Center Storage

The rate of change for data storage needs in the cloud is huge. In an example from a Google research paper, YouTube users upload 400 hours of video every minute, requiring the addition of 1 million GB of data center storage per day. Currently, all storage is on spinning disks, a decades-old but reliable format. But this storage format was designed for traditional servers, not the high-volume data centers of today and tomorrow. Researchers in the field are seeking an answer to this problem, including a collaboration between Microsoft and the University of Washington exploring DNA-based encoding for data storage, and Google has a proposal as well.

View a Collection of Data Storage Devices as a Single Entity

Google's first proposed change is to treat a collection of individual data storage devices as a single storage system and consider its properties in aggregate. This approach calls for higher-level maintenance, re-balancing of data to make use of more disks (including new ones), and higher-level data backup and repair functions. Taking this approach requires an initial outlay of effort to redistribute data and implement new processes, with periodic updates and redistributions as new hardware is added and legacy machines are retired, but it creates a more robust data storage system in the long run.
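
As a toy model of that pool-level view, the sketch below tracks many disks as one system and redistributes used space in proportion to capacity whenever a disk is added. Real systems rebalance far more carefully; every number here is invented.

    # Treat a collection of disks as a single pool: adding a disk triggers
    # a rebalance that spreads used space evenly by capacity.
    class DiskPool:
        def __init__(self):
            self.disks = {}   # name -> {"capacity": GB, "used": GB}

        def add_disk(self, name, capacity_gb, used_gb=0.0):
            self.disks[name] = {"capacity": capacity_gb, "used": used_gb}
            self.rebalance()

        def utilization(self, name):
            disk = self.disks[name]
            return disk["used"] / disk["capacity"]

        def rebalance(self):
            # Spread total used space across all disks in proportion to
            # capacity, so the pool behaves like one big device.
            total_used = sum(d["used"] for d in self.disks.values())
            total_cap = sum(d["capacity"] for d in self.disks.values())
            for d in self.disks.values():
                d["used"] = d["capacity"] * total_used / total_cap

    pool = DiskPool()
    pool.add_disk("disk0", 1000.0, used_gb=900.0)
    pool.add_disk("disk1", 1000.0)   # new disk triggers a rebalance
    print({name: round(pool.utilization(name), 2) for name in pool.disks})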

Redesigning the Disk

Google proposes a new storage format designed specifically for data centers, suggesting a redesign of the disk itself to optimize for weight, heat, vibration, and potential handling by robotic automation systems. It also seeks to engage the entire industry in a conversation to develop agreed-upon specifications for a new industry-standard data center storage format.

Mixing Old Tech with New Tech, Changing the Mixture Over Time in Legacy Systems

After a new disk format is decided upon and implemented, data centers will probably phase it in slowly, adding disks of the new style when more storage is needed and replacing legacy disks with the new format as they near end of life. This gradual implementation will benefit data centers by preventing them from paying to replace hardware that is still fully functional. It will also provide time to work out the bugs in the new format while preventing catastrophic failure of any data center, thanks to the distribution of data over many disks of a variety of ages and designs.

Google's proposal to update the storage format currently used by data centers around the world involves a change in the way data is distributed and managed, along with the development of a completely new technology. It is a bold suggestion, and it will require buy-in from most of the industry in order to be implemented.

Posted in New Products | Comments Off

Increasing Connectedness Puts Added Strain on Data Center Support Services

The internet has connected us all. It has made it possible for people and computers anywhere in the world to communicate. We want to be able to work from home or our favorite coffee shop, with access to all the same tools we have at our desks at work. We want to be able to play a video game at home in our living room as part of a team with other players located all around the world, all through the internet. We want to bring our mobile devices to work and use them for work and play.

These things are all possible and rely on data centers to make them happen. The more connectedness we require, the greater the demands we place on our rapidly aging cloud computing centers and their power and cooling infrastructures.

Changing Needs in the Workplace

More workers are bringing their own mobile devices to work, and employees increasingly want to be able to work from home. Workers contribute to the Internet of Things (IoT) and demand mobility on the job, and companies are increasingly using cloud computing to handle big data jobs and, in some cases, even desktop applications. These changes place growing demands on corporate networks and data centers.

Needs Are Rapidly Outgrowing Resources

The more we are connected, the more we realize we can do with our connectivity, and the more we demand of our computing resources. We ask more of our infrastructure daily and are beginning to hit the hard limits of some of the older centers. This will require major redesign projects for existing centers, and many new facilities will need to be built from the ground up in the next decade.

New Facilities Must Be Designed With the Future in Mind

As new computing centers destined to be nodes in the cloud are developed, they need to be designed with plenty of room for growth. As is always the challenge in the industry responsible for supporting the growth of technology, IT service leaders must anticipate where technology may be headed in the next ten years and try to provide the infrastructure to support it. A good place to start is to provide plenty of room for expansion of the power supply, along with state-of-the-art, energy-efficient, adaptable climate control systems for the computer rooms.

The changing demands of the workplace include more personal mobile devices in the workplace, more employees working remotely, and the increasing use of cloud computing for mission-critical applications. These changes all put stress on our aging data centers, which will soon need to be updated to accommodate our growing communication and computing demands.

Posted in data center maintenance, Power Management | Comments Off

How (And Why) IT Facilities Should Reduce Energy Consumption

One of the biggest users of electricity in a given organization is usually the IT department. Cutting back on this consumption is a high priority for many, but it can be difficult to do so without cutting back on performance. Here's a closer look at why IT departments should reduce their energy usage—and how they can do it.

Going Green, Saving Money

Reduced electricity consumption is important for a number of reasons. One is simply the bottom line: electricity costs money, and massive power consumption can lead to massive bills. Additionally, there is an environmental component, since much of the electricity produced today comes from non-renewable sources like fossil fuels. Cutting back on power consumption is good for the planet (and for public relations). But how can IT departments reduce their electricity use without compromising performance?

Integrate Operations

One excellent option is to integrate energy management. For many large scale operations, server rooms, data centers, and IT facilities are managed separately from the rest of the building. In many cases the IT manager, rather than facilities management, is responsible for things such as cooling, security systems, and power supplies. An integrated energy management system, however, gives the site manager control of these facilities along with the rest of the complex. This can lead to more unified and efficient management of processes, electricity, and security. Working in tandem with IT management, facilities managers can plan for growth and maximize existing resources.

Use of Space

Data centers can also ensure appropriate energy use with careful management of their available space. Often, businesses will build their IT centers with an eye towards the future, making them bigger than necessary in order to leave room to grow. While this is solid long-term planning, it means that parts of the data center may sit unused for years, requiring maintenance, lighting, cooling, and other expensive services with little to no benefit in exchange. Using this space for something else, or using prefabricated modules in order to allow for future growth patterns, is a good way to reduce this waste.

DCIM Software

Another important step that IT facilities can take is to implement the use of data center infrastructure management software, or DCIM. These programs collect data and monitor metrics such as resource usage, operational status, and electrical efficiency; they can even pinpoint problem spots (such as overloaded servers or inadequate cooling systems) and guide management towards more efficient solutions. Knowing what’s going on in a server facility makes it easier to identify problems and solutions.
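
To give a flavor of the bookkeeping DCIM tools automate, the sketch below computes power usage effectiveness (PUE, total facility power divided by IT equipment power) and flags servers above a utilization threshold. All figures are illustrative.

    # Two DCIM-style checks: overall electrical efficiency and hot spots.
    def pue(total_facility_kw, it_equipment_kw):
        # PUE close to 1.0 means most power goes to computing, not overhead.
        return total_facility_kw / it_equipment_kw

    def overloaded(servers, cpu_limit=0.85):
        # servers: mapping of name -> average CPU utilization (0.0 - 1.0)
        return [name for name, cpu in servers.items() if cpu > cpu_limit]

    print("PUE:", pue(total_facility_kw=180.0, it_equipment_kw=100.0))  # 1.8
    print("hot spots:", overloaded({"web-01": 0.97, "db-01": 0.55}))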

While electricity usage can be high for IT centers, there are ways to reduce it. Smart planning can benefit both the planet and your organization’s budget.

Posted in Data Center Design, Facility Maintenance | Comments Off

Cooling Equipment Maintenance Service Contract is Critical to Data Center Uptime

Computer room air conditioning (CRAC) systems are critical to maintaining computing equipment efficiency and longevity, and therefore maintaining your data center's CRAC system is critical as well. A recent focus group led by Schneider Electric showed that data center customers consider a CRAC system maintenance plan to be even more critical than an uninterruptible power supply (UPS) maintenance plan, as cooling systems have more moving parts that can wear out over time and are therefore more likely to have mechanical failures that need immediate expert attention.

There are strict industry standards governing the temperature and humidity to be maintained inside data center computer rooms, both to conserve power and to prevent electrostatic discharge that can damage critical components and start fires. According to these standards, the computer room temperature should be kept between 64° and 81° Fahrenheit, with a dew point no higher than 59° F.
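
Those limits are straightforward to check programmatically. The sketch below compares a made-up sensor reading against the ranges cited above.

    # Check a computer-room reading against the limits quoted in the post.
    TEMP_RANGE_F = (64.0, 81.0)
    MAX_DEW_POINT_F = 59.0

    def check_reading(temp_f, dew_point_f):
        problems = []
        if not TEMP_RANGE_F[0] <= temp_f <= TEMP_RANGE_F[1]:
            problems.append(f"temperature {temp_f}F outside {TEMP_RANGE_F}")
        if dew_point_f > MAX_DEW_POINT_F:
            problems.append(f"dew point {dew_point_f}F above {MAX_DEW_POINT_F}F")
        return problems or ["within limits"]

    print(check_reading(temp_f=83.2, dew_point_f=61.0))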

Regular Service Minimizes Downtime and Maintains Equipment Warranties

Performing routine maintenance tasks on schedule extends the life of your AC equipment: removing dust build-up on components and in air filters, regularly spot-checking and adjusting temperature and humidity levels throughout the space, and proactively identifying and replacing parts that are near end of life before they break down unexpectedly and cause outages. This can prevent breaks in service and loss of productivity, and following the maintenance schedule may be required to keep your equipment warranty valid.

Service Contract Means You Have an Emergency Plan in Place

In the event of a component failure, a service contract ensures an immediate response from a trained professional who will diagnose the problem and implement a solution, getting your data center back online in short order. This is critical for getting your customers' applications up and running again as soon as possible and for maintaining your reputation as a reliable data center resource.

CRAC Professionals Can Help Optimize Your Cooling Equipment

Your power bill is an unglamorous but very real cost of doing business for an enterprise-class data center, and managing that cost better can improve your business's bottom line. Professional CRAC service people can help you optimize the location of your cooling units and air humidifiers to provide complete and even climate control throughout the room. They can also help you set up a state-of-the-art electronic monitoring system to track conditions in the room.

Simply put, having a service plan in place for your data center’s cooling equipment is the best insurance you can have against extended outages and costly down time.

Posted in Data Center Design, data center equipment | Comments Off

Hyperconverged Infrastructure – Industry Analyst Insights

When it comes to addressing customer demands, many technology companies are turning to hyperconverged infrastructure (HCI). Somewhat primitive at first, these technologies have evolved over the past few years, allowing frustrated customers to move away from traditional infrastructure and the operational overhead of supporting it. Over time, a plethora of IT companies have made the switch and now consider HCI a reasonable alternative to other methods. If you have ever wondered about the benefits and finer points of HCI, the following article will provide you with in-depth information about this relatively new yet extremely powerful infrastructure technology.

Increased Efficiency

Hyperconverged platforms combine storage, compute, and networking in a single system. This can be done either virtually or physically, and the technology aims to make this type of networking extremely easy to use. HCI can also be delivered in consumable web-scale blocks, which lets infrastructure and operations leaders perform their jobs much more quickly. Hyperconverged systems currently provide infrastructure leaders with more savings and efficiencies than ever.

The Traditional Marketing Approach

In recent times, hyperconverged infrastructure has been marketed largely by small-scale startups across the country. These startups champion HCI's ability to help vendors build a complete product portfolio. HCI can also help vendors provide a wider variety of products and services to their consumers. Studies have shown that HCI can work equally well for companies and organizations of all sizes.

Benefits of HCIs

First-generation hyperconverged products have a reputation for employing an appliance approach, which requires systems to be implemented on a single server that is then scaled by stacking systems on top of each other. Newer HCI systems allow users to scale storage and compute resources separately.

In terms of systems management, HCI strives to reduce the amount of work involved. Instead of establishing individual islands of technology, data centers need a product portfolio that blends well with a broader management architecture. HCI allows system managers to achieve this, and the systems can be managed through a single, policy-driven console that lets each company define its own rules.

Newer HCI systems also increase effective storage capacity through deduplication and compression.
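
To see why deduplication stretches capacity, consider the toy sketch below: data is split into fixed-size blocks, each block is hashed, and a given block is stored (compressed) only once. Real HCI systems do this at the storage layer with far more sophistication.

    # Block-level deduplication plus compression, in miniature.
    import hashlib
    import zlib

    def dedupe_store(data, block_size=4096):
        store = {}    # block hash -> compressed block (kept once)
        recipe = []   # ordered hashes needed to reassemble the data
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in store:
                store[digest] = zlib.compress(block)
            recipe.append(digest)
        return store, recipe

    data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # repeated content
    store, recipe = dedupe_store(data)
    print(len(recipe), "blocks referenced,", len(store), "actually stored")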

The Future of Networking

Hyperconverged infrastructure is definitely a technological game-changer, but its true power lies in the fact that customers across the country are finding it far easier to store data and network. As time progresses, this all-in-one solution will continue to grow in popularity, especially through the growing number of partnerships between technology companies and various vendors. Unlike traditional storage methods, HCI will undoubtedly alter the networking landscape for years to come.

Posted in Hyper Converged Infrastructure | Comments Off

Is Hyper-Converged IT Ready to Deliver What You Need?

The trend in technology these days seems to be to migrate all related applications to a single source. This seems intuitive, given that everyone would love to be able to jump back and forth between system settings with the simple push of a button routed to one piece of equipment, rather than the many connections shared between different components that they're stuck dealing with now. You may be seeing this trend manifest itself in your own home. Today, smart TVs come equipped with streaming video software. Your kid's gaming console now supports online applications that let you shop, watch TV programming, and even browse the Internet. Yet for all these advances, you're still having to swap out your cable remote for your TV remote or your console controller.

In the IT world, converged technology is closely comparable to what you likely have in your home right now: multiple controls providing access to multiple component systems working in concert to give you what you need. The potential of a single-source solution is what's being referred to when tech professionals talk about hyper-converged IT. It represents the proverbial magic box, offering access to every application you need without having to be supported by ancillary components.

Waiting for Practicality to Catch Up with Potential

The draw of a hyper-converged box is readily apparent: it offers a more effective yet less expensive solution to data access and management. Yet while the theory of improved IT via a single source presents unending potential, the actual practice of combining several component systems into a single physical piece of equipment presents its own unique challenges. While there are indeed hyper-converged systems available on the market, developers are still searching for cost-effective, easily implemented solutions to the design challenges the technology presents.

While you may understand the need for such solutions in order to drive the cost of equipment down, you're also in the precarious position of needing technology that you can rely on today. When setting up your IT, you typically consider three things (a simple scoring sketch follows the list below):

What’s the most up-to-date technology available?

What system is the most reliable?

Which is most cost-effective?
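
One simple way to combine those three questions is a weighted score, as in the sketch below. The weights and ratings are placeholders you would replace with your own priorities and assessments.

    # Score candidate systems on the three criteria above and combine the
    # ratings with weights reflecting your priorities (all numbers invented).
    WEIGHTS = {"up_to_date": 0.3, "reliability": 0.45, "cost": 0.25}

    def weighted_score(scores):
        # scores: mapping of criterion -> rating on a 1-10 scale
        return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

    candidates = {
        "converged":       {"up_to_date": 6, "reliability": 9, "cost": 7},
        "hyper-converged": {"up_to_date": 9, "reliability": 6, "cost": 6},
    }
    for name, scores in candidates.items():
        print(name, round(weighted_score(scores), 2))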

The emphasis on optimal performance in the here and now often requires that you go with the technology that offers the highest level of performance across all three areas, rather than simply focusing on one. In the case of converged vs. hyper-converged IT, the known reliability and performance of converged systems may ultimately be your best choice as you wait for hyper-converged technology to become better established.

The benefits of converged IT may only make you crave the added benefits that a hyper-converged box could theoretically offer all the more. However, at the end of the day, your business needs to rely on performance, not potential. While hyper-converged IT systems are beginning to make inroads into data centers across the world, the converged technology that you currently rely on will likely remain your best option until the issues that can plague such systems are fully resolved.

Posted in Hyper Converged Infrastructure | Comments Off