
Overcoming the 3 Biggest Barriers to Adopting Cloud Applications

Hand working with a Cloud Computing diagram

It is clear at this point that cloud computing is here to stay. Most industries are moving toward the enthusiastic adoption of cloud applications for day-to-day operations, outsourced data processing, and the consolidation of mission-critical applications, letting a remote data center put real CPU power behind any task your business needs to stay coordinated. Even with all the enthusiasm for centralized applications and their efficiency, though, there are still some significant impediments to getting cloud solutions adopted at many companies. Luckily, there are also key strategies for dealing with all three.

Security and Compliance

With the advent of privacy laws like HIPAA and other restrictions on the way private data is handled, the need to be sure that any outsourced processes or remote systems handling company information are secure has become one of the main concerns among those who are slow to adopt cloud systems.

To allay these fears, concentrate on building an IT security plan that uses best practices for reducing human error at the points of data entry and the most secure options for VPN tunnels, in-flight encryption, and other cutting-edge security refinements. With the right research and the proper development of a system, cloud security can maintain the same standards attained with local systems, especially when a hybrid approach is used to protect the information in your company’s private cloud from being accessed through the public cloud hosting your applications.
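To make the in-flight encryption piece concrete, here is a minimal Python sketch of a client that refuses anything weaker than TLS 1.2 and verifies the server certificate before talking to a cloud application. The host name app.example-cloud.com is a placeholder, not a real endpoint.

    import ssl
    from http.client import HTTPSConnection

    # Build a TLS context that enforces encryption in transit:
    # TLS 1.2 or newer, with certificate and hostname verification.
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    context.check_hostname = True
    context.verify_mode = ssl.CERT_REQUIRED

    # "app.example-cloud.com" stands in for your provider's endpoint.
    conn = HTTPSConnection("app.example-cloud.com", context=context)
    conn.request("GET", "/health")
    print(conn.getresponse().status)
    conn.close()

The same principle applies to the VPN tunnels between the private and public halves of a hybrid deployment: the transport is encrypted and the endpoints authenticate each other before any data moves.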

Legacy System Support

Very few organizations manage to upgrade their entire system at once, and for good reason. On top of the costs, new systems require training, and trying to train an entire workforce from the ground up in new procedures would bring most companies to a screeching halt. This makes legacy support a necessary part of any IT department’s approach, and it has traditionally also been a barrier to moving toward cloud applications. Developing a plan that phases out legacy systems as much as possible while migrating the core components that must be kept to cloud operations provides a gradual way forward, with the same kind of long-term planning used to overcome security concerns.

Costs and Budget Limitations

This is an area where the calculation may work out differently depending on the structure and needs of individual companies. Generally, cost concerns have been the final hurdle for organizations moving to the cloud, because the savings on on-site data center management and CPU use are offset by the new bandwidth costs that cloud applications bring. A careful review of which processes are cost-efficient in the cloud, combined with an approach that uses micro data centers to distribute bandwidth-intensive processes, smooths this out.

Posted in Cloud Computing | Comments Off

Cloud Control

Cloud computing flowchart with businessman

Many people who work with computers on a daily basis have developed at least a passing acquaintance with the cloud. Most IT professionals are more knowledgeable than the average person about using this relatively new technology, but the number one issue for everyone seems to be security. How safe is the information you send into the cloud, and how much control do you have once it is there? In basic terms, the cloud offers a pool of resources, such as networks, servers, and applications, that enable users to store and process information through data centers. Although it is still in its early stages, cloud computing offers speed and power with pay-as-you-go pricing, but despite its popularity, there are management concerns, especially regarding security.

Calming Security Fears

Technology is constantly evolving and when a new utility emerges there is always a learning curve. The best defense against wariness is knowledge. Some users worry about communication links being a security issue, for example. The more you learn about the cloud and how it works, the less apprehensive you will be about using it. Once you become an experienced cloud user and are working with a reliable administrator, security will become less of an issue. Any concerns you might have should be brought to the attention of the cloud vendor.

Managing Shadow IT

Shadow IT is a term used to describe unauthorized cloud usage. It started with employees using cloud apps, sometimes on their mobile phones, without their employer’s knowledge. This can be costly. A similar problem has developed whereby costs balloon because every department in the company runs its own cloud services. This is where control becomes a major issue, and the obvious solution is to centralize authority.

Getting Centralized and Organized

The cloud can be extremely effective if company use is centralized. Normally administration would fall to a company’s IT department. Once central responsibility is assigned, the costs associated with the use of cloud resources can be better regulated. Companies need to know what kind of pricing is available and how they can manage it to best advantage. There will be cloud providers to work with and contracts to negotiate. Companies also need to know what kind of data is going into the cloud and whether they are running afoul of any regulations that might exist. The best way to put all of this together is to get centralized and organized. At that point, you will be in charge of cloud control.

Posted in Cloud Computing | Comments Off

Steps to Optimize the Flash Storage in Your Data Center

Data center with network servers in futuristic room.

As flash-based storage becomes a larger and larger part of the marketplace for data storage solutions, new processes that take its unique properties into account are necessary. This is especially true at the enterprise level, where efficiency and optimal system design have a large effect on the performance and utility of data center resources. When you are integrating flash storage into your data center, these key steps will help you to find the right balance, whether your entire system is going to flash or you are adding a few new servers to an existing array.

Step 1: Reassess Your Metrics

Traditionally, data center performance has been measured in IOPS, or Input/Output Operations Per Second. With older, spindle-based hard drives, achieving a good benchmark number meant having a large assortment of drives that were capable of staging data for fast retrieval. Since flash storage has no moving parts and benefits from a different kind of physical architecture, the IOPS numbers go off the chart, and you need to use different metrics to assess your success.

Instead of focusing on IOPS, focus on these other key metrics that you are already assessing:

Bandwidth

Latency

Block size

That way, you are able to get a better picture of your data throughput, allowing you to see if access is streamlined and if your array is giving you the best possible performance.
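As a rough illustration of how these metrics hang together, here is a short Python sketch relating IOPS, block size, and latency; the numbers are placeholders, not benchmarks from any particular array.

    def effective_bandwidth_mb_s(iops, block_size_kb):
        """Throughput implied by an IOPS figure at a given block size."""
        return iops * block_size_kb / 1024.0

    def implied_latency_ms(iops, queue_depth):
        """Little's law: average latency = outstanding I/Os / IOPS."""
        return queue_depth / iops * 1000.0

    # Illustrative numbers only -- substitute figures from your own array.
    print(effective_bandwidth_mb_s(iops=200_000, block_size_kb=8))  # ~1562 MB/s
    print(implied_latency_ms(iops=200_000, queue_depth=32))         # 0.16 ms

Two arrays can post identical IOPS numbers yet deliver very different bandwidth and latency once block size and queue depth are taken into account, which is why the raw IOPS figure tells you so little on flash.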

Step 2: Get Inside the Array

Administrators moving to flash need to understand that read/write cycles will run up more CPU usage with this kind of storage. Partly, this is the cost of doing business at a higher speed; partly, it is due to the nature of flash architecture. Either way, there are a few key changes you can make as a storage administrator to greatly improve your overall system performance:

Add more CPUs to handle the workload.

Check for unaligned write I/Os, especially if you have an older system and your hardware is not likely to automatically detect settings.
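For the unaligned-write check, the arithmetic is simple enough to sketch in Python; the 4 KiB page size below is an assumption, so substitute the preferred I/O size your device actually reports.

    def is_aligned(partition_start_bytes, flash_page_bytes=4096):
        """A partition is aligned when its starting offset is a whole
        multiple of the device's preferred I/O size (4 KiB assumed here)."""
        return partition_start_bytes % flash_page_bytes == 0

    # A partition starting at sector 63 on 512-byte sectors is the classic
    # misaligned legacy layout; a start at sector 2048 is aligned.
    print(is_aligned(63 * 512))    # False -> expect read-modify-write penalties
    print(is_aligned(2048 * 512))  # True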

Step 3: Between the Array and the Server

The last step to take is to optimize server function and to ensure you have streamlined the process. If you are having server-wide issues, these steps are most likely to help:

Rearrange your hardware to ensure you are complying with the rules of affinity.

Read into your operating system’s multi-pathing processes to find solutions.

Utilize hypervisor and in-guest tuning to attain optimum results.

Taking these steps will ensure the best possible performance out of your flash storage.

Posted in Data Center Infrastructure Management | Comments Off

Why Colocation Data Centers Have Issues Scaling Biometric Applications

The first and probably most sweeping answer to this conundrum is actually fairly simple: since biometric processes are inherently hardware-focused, they don’t scale effectively, because large-scale networks wind up with a vast array of endpoint machines to coordinate and process information from. At the same time, though, biometric security applications are not only mission critical for many colocation data centers; they are often the point of setting them up. There are a few contributing factors to the current state of biometric applications that need to be addressed as part of the move to scalability: a lack of standardized deployments, limited options for integrating access control, and underdeveloped enterprise support.

First: Dealing With Standardized Deployments

Implementation is really the issue when it comes to standardized deployments, partly because there are not many well-documented and consistent approaches. That means that colocation providers looking to streamline their data processing and ensure scalability need to find a plan for replicating the same procedures, with the same biometric identifiers and profiles, across all the locations where they provide support to companies. This ensures streamlined processing and the ability to reduce the number of customer identities that have to be handled.

Second: Integrating Access Control

The users who work with biometric security systems the most are the security guards themselves, who are tasked with the implementation of biometric screening on the ground. Since their training is typically less technical than that of the IT staff but they form the front line for questions and concerns from users, integrating their access and maintaining the actual biometric systems that feed your access control system the data it needs is paramount if the system is going to operate effectively. That means a combination of better training and regular system check-ups by qualified technical personnel to ensure that everything operates efficiently on both ends of the system.

Third: Underdeveloped Enterprise Support

Since biometric security systems are typically developed and sold from a mindset that is preoccupied with providing security and access control in a single location, the systems themselves often fail to anticipate the kind of large-scale deployment needed by enterprise customers. The result is that at the level of the hardware, network integration becomes difficult because it was not anticipated. Before a real path forward for easily scaled biometric solutions is possible, a biometric system supplier will need to step up with hardware that is specifically designed to work with and for those customers with a need to control multiple levels of access in multiple locations while coordinating information across systems.

Posted in Data Center Design | Comments Off

An Overview of Network Function Virtualization

One of the fastest growing new technologies is network function virtualization, also known as NFV. First introduced in 2012, this modern approach to network services has the potential to revolutionize the field. Here’s a quick look at this networking strategy.

The Technology

Network function virtualization takes network node functions, virtualizes them, and links them together. The end result is a communication and networking service that is almost entirely software based, rather than being based on hardware. NFV has a wide variety of applications, ranging from digital security to interpersonal communications. Specific use cases include service chaining, group collaboration, voice over LTE technology, the Internet of Things, and advanced networking solutions.
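As a conceptual sketch only, and not a depiction of any particular NFV platform, service chaining amounts to treating each virtualized function as software that transforms traffic and running an ordered list of those functions. The tiny Python example below models a firewall and a NAT function chained together; the function names and packet fields are invented for illustration.

    from typing import Callable, List

    Packet = dict  # simplified stand-in for real network traffic

    def firewall(pkt: Packet) -> Packet:
        pkt["allowed"] = pkt.get("port") in (80, 443)
        return pkt

    def nat(pkt: Packet) -> Packet:
        pkt["src"] = "203.0.113.10"  # documentation-range address
        return pkt

    def run_chain(pkt: Packet, chain: List[Callable[[Packet], Packet]]) -> Packet:
        for vnf in chain:  # each virtual network function is just software
            pkt = vnf(pkt)
        return pkt

    print(run_chain({"src": "10.0.0.5", "port": 443}, [firewall, nat]))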

The Creation

As an evolution of existing technology, NFV is still developing and changing rapidly. Currently, a number of different organizations are trying to standardize the architecture of these systems, while open source communities are trying to create software that will run on them. Since a variety of separate companies, groups, and bodies are involved in creating NFV, the technology is expected to be usable end to end in the near future. Many forward-looking organizations are paying close attention to these changes.

The Data Center

NFV is closely linked to another developing concept, the software-defined data center (also known as an SDDC). Many industry experts believe that the SDDC represents the future of data management, and that NFV technology is a major stepping stone toward this paradigm. This is because an SDDC depends on a number of capabilities to move functions into the cloud, and NFV provides a way for many of these capabilities to be effectively realized.

The Benefits

One of the biggest potential advantages of NFV is that its high level of automation offers a great deal of agility. Unlike traditional network upgrades, which may take months to implement, an NFV adjustment can be completed very rapidly. In some cases connections can be made in mere seconds. Additionally, since the hardware used is highly uniform, the cloud can become a useful resource for many different types of applications. This allows organizations to receive the maximum possible benefit from their networks, and helps them adjust in a rapidly changing economy.

The Professionals

Smart IT professionals are keeping an eye on the gradual adoption and development of NFV, since it offers numerous possibilities that could benefit their organizations. In fact, some IT managers are actively influencing the creation of NFV technologies by communicating with vendors and developers about their needs and use cases. Forward-thinking professionals see a great deal of potential within network function virtualization.

Posted in Data Center Design, New Products | Comments Off

The Digital Revolution Means Big Changes for IT Service Management

IT Services

With the modern day shift towards digitization of data and processes, IT departments worldwide are being called upon to rapidly grow their service catalog to include all the new technologies their users may benefit from. IT managers must stay abreast of cutting-edge technologies applicable to their fields, and to truly be an industry leader, IT managers should be proactively seeking out new applications that may give their company a competitive advantage.

The three main areas of growth for IT service management (ITSM) are service catalog updating, legacy IT service modernization, and discovery of new technologies relevant to the field. This is plenty to keep IT professionals busy and provides an interesting challenge to even the most forward-thinking IT managers.

Service Catalog Updates

Many corporations have traditionally used service catalogs that relied upon local desktop applications and data storage, with automated regular data backup systems in place. Many ITSM teams are moving toward cloud computing for data storage for several reasons, including the ease of regularly backing up data stored in a centralized location and the convenience of maintaining and upgrading applications in a single environment, where each user opens an instance of the same application rather than each desktop having its own application and license to manage.

Legacy IT Service Modernization

Users become familiar with and dependent upon their favorite applications, and many resist change in this area. It is often better to prioritize user productivity and let that principle guide the pace of service modernization. Some workflows will immediately benefit from application replacement, and users can usually recognize that fact. In instances where replacement will have long-term benefits but short-term drawbacks, such as requiring users to learn a new software suite, application replacement can be a hard sell.

Scouting New Technologies

ITSMs must be continually on the lookout for new applications and other industry trends that can give their company an edge over the competition. The goal is to provide maximum IT agility with minimum disruption to the company’s operations. ITSMs should attend industry conferences and keep abreast of industry publications, but would also do well to pay attention to the leading publications in other fields which may give them insight into which way IT may turn next, providing them with an edge over other companies.

By working to update the IT service catalog, continually striving to modernize legacy applications, and scouting new technologies, today’s ITSMs can contribute to the building of a cutting-edge business through restless innovation.

Posted in data center maintenance, Facility Maintenance | Comments Off

Strategic Management of Computing and Storage Use in a Hybrid IT Environment

Hand working with a Cloud Computing diagram

Cloud computing is the process of using an off-site network of computers or servers to process and store data. Cloud computing is often seen as the solution to all local computing and IT challenges as companies move entire workflows from the ground to the cloud. But the cloud is not appropriate for all applications and server needs, and most business users will find that a hybrid solution is right for them, using both local machines and the cloud to store and manipulate their data. This hybrid approach to IT requires careful planning of what applications to run in which environment. Here are a few points to consider before moving an on-site application to the cloud.

Monitoring

You will need to be able to get a global view of your applications and data in order to determine which applications and data streams belong in the cloud and which would be best on the ground. As each company’s setup is unique, so is their ideal monitoring system. IT management will need to track computing time and data storage needs for each application in each environment. Applications with high computing time and low data storage needs belong in-house rather than in the cloud, and vice versa.
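A minimal sketch of the placement rule described above, with thresholds invented purely for illustration; in practice the cutoffs would come from your own monitoring data and cost structure.

    def suggest_placement(cpu_hours_per_day, storage_gb):
        """Rule of thumb from the text: compute-heavy, storage-light work
        stays on the ground; storage-heavy, compute-light work goes to the
        cloud. The thresholds below are arbitrary placeholders."""
        if cpu_hours_per_day > 10 and storage_gb < 100:
            return "on-premises"
        if cpu_hours_per_day <= 10 and storage_gb >= 100:
            return "cloud"
        return "review case by case"

    print(suggest_placement(cpu_hours_per_day=24, storage_gb=20))   # on-premises
    print(suggest_placement(cpu_hours_per_day=2, storage_gb=5000))  # cloud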

Security

Data security is a concern for most corporations and something that should be considered when choosing a cloud computing host and when deciding where each of your applications will be run. Cloud computing facilities normally make their security policies clear, but corporations are ultimately responsible for the security of their own data, and if the cloud is hacked the corporation may still retain some responsibility for the breach of security. So depending on the level of security needed for your applications and their associated data, the cloud may not be the ideal choice.

Accessibility

Sending and retrieving data from applications in the cloud takes time, and communicating with the cloud can reduce users’ productivity depending on how often they need access to different applications. Frequently used applications may be better off on local machines for this reason. There is also the issue of outages with your cloud computing service. You have no control over this variable, and even with a well-maintained company outages can last several hours, causing you big headaches depending on your deliverable and what you use the cloud for.
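One way to put a number on that accessibility cost is to time a small request to the cloud application. A rough Python sketch follows; the URL is a placeholder to be replaced with your own endpoint.

    import time
    from urllib.request import urlopen

    def round_trip_ms(url, samples=5):
        """Average wall-clock time for a tiny request -- a rough proxy for
        the latency the cloud adds to every user interaction."""
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            urlopen(url, timeout=10).read(1)
            timings.append((time.perf_counter() - start) * 1000)
        return sum(timings) / len(timings)

    # Placeholder endpoint -- point this at your own cloud application.
    print(round_trip_ms("https://app.example-cloud.com/ping"))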

Keeping monitoring, security, and accessibility in mind while determining what applications belong on the ground and in the cloud will serve IT professionals well, and allow them to manage and upgrade their hybrid IT environment with minimal outages or loss of productivity.

Posted in Cloud Computing | Comments Off

Preparing For Data Center Emergencies

Network switch and Ethernet cables, data center concept.

One of the most important aspects of any data center is its reliability. In order to keep everything running smoothly, it’s imperative that maintenance teams and operations managers prepare for potential emergencies, both taking steps to prevent them and making plans for when they occur. Here are a few tips to help you prepare for unexpected problems.

Use the Best

There are many reasons that a data center might run into trouble. Some of the highest-risk failure and fault scenarios include generator failure, loss of UPS (uninterruptible power supply) backups, or the shutdown of major pieces of equipment such as chillers. One important step that facilities can take right from the start is to ensure that they use only the highest-quality, most reliable equipment, in order to reduce the chances of such problems occurring in the first place.

Make Solid Plans

Another major precaution that should be taken is the development of emergency operating procedures (EOPs). These are step-by-step guidelines that can help employees on the scene isolate and resolve the problem both rapidly and safely. If necessary, these should also include escalation procedures that ensure workers with the right skill sets can be brought in when necessary.

Train Workers

Once EOPs are in place, it’s important to ensure that employees are properly trained on them. This includes not only new employees, but veterans who need their skills refreshed. Emergency procedures should be regularly reviewed, and drills should be conducted when possible. For maximum efficacy, drills should be as realistic and detailed as possible.

Monitor Carefully

It’s also important for data centers to be able to quickly detect when incidents occur. Unlike in other industries, an emergency situation might not be immediately obvious; a solid monitoring system must be in place to help determine when problems are occurring and how to resolve them quickly. In a related vein, plans should be in place so that key stakeholders, management personnel, and potentially affected parties are notified at the correct time that something has gone wrong.

Learn From Mistakes

Finally, each facility should ensure that they have procedures in place for reporting incidents and analyzing what went wrong. Without a full understanding of the root cause, complicating factors, and eventual solutions, it’s impossible to prevent a specific incident from occurring again. A failure analysis should be performed after an emergency in order to prevent future interruptions.

Data center managers, operators, and technicians should be well prepared for a wide range of emergencies; after all, careful preparation can both prevent problems and mitigate damage. With careful planning, there's no need to panic if things go wrong.

Posted in Data Center Design, data center maintenance, Data Center Security | Comments Off

Increasing Rate of Data Production Prompts Google to Rethink Data Center Storage

Networking communication technology concept, network and internet telecommunication equipment in server room, data center interior with computers and Earth globe in blue light

The rate of change for data storage needs in the cloud is huge. In one example from a Google research paper, YouTube users upload 400 hours of video every minute, requiring the addition of 1 million GB of data center storage per day. Currently, all of that storage is on spinning disks, a decades-old but reliable format. But this format was designed for traditional servers, not the high-volume data centers of today and tomorrow. Researchers in the field are seeking an answer to this problem, including a collaboration between Microsoft and the University of Washington exploring DNA-based encoding for data storage, and Google has a proposal as well.
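The arithmetic behind those figures is easy to check. The per-hour storage size below is an assumption, since the paper's cited figure is the 1 million GB per day total, but it shows how the numbers line up.

    HOURS_UPLOADED_PER_MINUTE = 400      # figure cited from the Google paper
    MINUTES_PER_DAY = 24 * 60
    GB_PER_HOUR_OF_VIDEO = 1.7           # assumed average stored size per hour

    hours_per_day = HOURS_UPLOADED_PER_MINUTE * MINUTES_PER_DAY   # 576,000 hours
    storage_per_day_gb = hours_per_day * GB_PER_HOUR_OF_VIDEO     # ~979,200 GB

    print(f"{storage_per_day_gb:,.0f} GB per day")  # close to the 1 million GB cited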

View a Collection of Data Storage Devices as a Single Entity

Google’s first proposed change is to think about individual data storage devices as a single storage system and consider all of its properties as an aggregate. This approach calls for higher-level maintenance, re-balancing of data to make use of more disks, including new ones, and higher-level data backup and repair functions. Taking this approach requires an initial outlay of effort to redistribute data and implement new processes, with periodic updates and redistributions as new hardware is added and legacy machines are retired, but it creates a more robust data storage system in the long run.
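A toy sketch of what treating the disks as one pool and re-balancing can look like: a greedy placer that always writes the next block to the disk with the most free space, so new and legacy disks level out over time. This is an illustration of the idea only, not Google's actual system.

    import heapq

    def place_blocks(blocks_gb, disk_free_gb):
        """Greedy pool-level placement: each block goes to whichever disk
        currently has the most free space."""
        heap = [(-free, disk) for disk, free in enumerate(disk_free_gb)]  # max-heap
        heapq.heapify(heap)
        placement = []
        for block in blocks_gb:
            neg_free, disk = heapq.heappop(heap)
            placement.append((block, disk))
            heapq.heappush(heap, (neg_free + block, disk))
        return placement

    # Two legacy 4 TB disks plus one new 10 TB disk, placing 1 TB blocks.
    print(place_blocks([1000] * 6, [4000, 4000, 10000]))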

Redesigning the Disk

Google proposes a storage format designed specifically for data centers, suggesting a redesign of the disk itself to optimize for weight, heat, vibration, and potential handling by robotic automation systems. The company also seeks to engage the entire industry in a conversation to develop agreed-upon specifications for a new industry-standard data center storage format.

Mixing Old Tech With New Tech, Changing the Mixture Over Time in Legacy Systems

After a new disk format is decided upon and implemented, data centers will probably phase in the new format slowly, adding disks of the new style when more storage is needed and replacing legacy disks with the new format as they near end of life. This gradual implementation of new technology will benefit data centers, preventing them from paying to replace hardware that is still fully functional. It will also provide time to work out the bugs in the new format and prevent catastrophic failure at any data center, thanks to the distribution of data over many disks of a variety of ages and designs.

Google’s proposal to update the storage format currently used by data centers around the world involves a change in the way data is distributed and managed, along with the development of a completely new technology. It is a bold suggestion, and it will require buy-in from most of the industry in order to be implemented.

Posted in New Products | Comments Off

Increasing Connectedness Puts Added Strain on Data Center Support Services

The internet has connected us all. It has made it possible for people and computers anywhere in the world to communicate. We want to be able to work from home or our favorite coffee shop, with access to all the same tools we have at our desk at work. We want to be able to play a video game at home in our living room as part of a team with other players located all around the world, all through the internet. We want to bring our mobile devices to work and use them for both work and play.

These things are all possible and rely on data centers to make them happen. The more connectedness we require, the greater the demands we place on our rapidly aging cloud computing centers and their power and cooling infrastructures.

Changing Needs in the Workplace

More workers are bringing their own mobile devices to work, and employees increasingly want to be able to work from home. Workers contribute to the IoT and demand mobility on the job, and companies are increasingly using cloud computing to handle big data jobs and, in some cases, even desktop applications. These changes place increasing demands on corporate networks and data centers.

Needs Are Rapidly Outgrowing Resources

The more we are connected, the more we realize we can do with our connectivity and the more we demand of our computing resources. We ask more of our infrastructure daily and are beginning to hit the hard limits of some of the older centers. This will require major redesign projects for existing centers, and many new facilities will need to be built from the ground up in the next decade.

New Facilities Must Be Designed With the Future in Mind

As new computing centers destined to be nodes in the cloud are developed, they need to be designed with plenty of room for growth. As is always the challenge in the industry responsible for supporting the growth of technology, IT service leaders must anticipate where technology may be headed in the next ten years, and try to provide the infrastructure to support it. A good place to start will be to provide plenty of room for expansion of power supply, and state-of-the-art, energy efficient, adaptable climate control systems for the computer rooms.

The changing demands of the workplace include more personal mobile devices in the workplace, more employees working remotely, and the increasing use of cloud computing for mission-critical applications. These changes all put stress on our aging data centers, which will soon need to be updated to accommodate our growing communication and computing demands.

Posted in data center maintenance, Power Management | Comments Off