
Data in Motion vs. Data at Rest

The tech industry loves to use catchy phrases to describe the various processes, innovations, and other aspects of data centers.  Every now and then, we think it is important to zero in on one of those phrases and explore what it means and how it impacts data center operations.  One of those phrases is “data in motion vs. data at rest.”  The two terms are somewhat self-explanatory, but their nuances and their impact on data centers are not.  Data in motion is data that is actively being used or transmitted; it is data in transit.  Data at rest is data that is not being actively used but is stored in a data center.  These two types of data present unique security challenges.  For example, data in motion may be traveling across the internet, which presents different security challenges than data at rest that, while not actively in use, may contain sensitive customer information.

When it comes to securing data of any kind, there are a variety of ways to prevent security breaches and cyber-attacks.  Encryption is certainly a must, and data at rest must be secured on a number of levels because it may be stored in multiple places, including databases, storage networks, file servers, or the cloud.  Data in motion is exposed to cyber-attacks at a number of points while in transit.  DataMotion explains the risks that may be encountered and describes some best practices for securing data: “Data is at rest when it is stored on a hard drive. In this relatively secure state, information is primarily protected by conventional perimeter-based defenses such as firewalls and anti-virus programs. However, these barriers are not impenetrable. Organizations need additional layers of defense to protect sensitive data from intruders in the event that the network is compromised. Encrypting hard drives is one of the best ways to ensure the security of data at rest. Other steps can also help, such as storing individual data elements in separate locations to decrease the likelihood of attackers gaining enough information to commit fraud or other crimes…Data is at its most vulnerable when it is in motion, and protecting information in this state requires specialized capabilities. Our expectation of immediacy dictates that a growing volume of sensitive data be transmitted digitally, forcing many organizations to replace couriers, faxes, and conventional mail service with faster options such as email. Looking ahead, it will also become increasingly important for the encryption service your organization uses to cover mobile email applications. The Radicati Group predicts that 80% of email users will access their accounts via mobile devices by 2018, but more than 35% of organizations currently using email encryption say their users currently lack the ability to send secure messages from their mobile email client.”  Securing data is a challenge that will never go away; data centers must look forward, anticipate potential future threats, and use multi-level encryption to ensure that data in motion and data at rest remain protected.
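To make the data-at-rest side concrete, here is a minimal sketch of encrypting a record before it is written to storage, assuming Python's third-party cryptography package is acceptable; the record, filename, and key handling below are purely illustrative.

```python
# A minimal sketch of encrypting data at rest, assuming the third-party
# "cryptography" package (pip install cryptography) is available.
from cryptography.fernet import Fernet

# In practice the key would come from a hardware security module or a
# managed key service, never from source code or the same disk.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer_id=1842, ssn=XXX-XX-XXXX"  # hypothetical sensitive record

# Encrypt before writing to storage so a stolen drive yields only ciphertext.
with open("record.enc", "wb") as f:
    f.write(cipher.encrypt(record))

# Decrypt only when the data is needed, inside the trusted perimeter.
with open("record.enc", "rb") as f:
    restored = cipher.decrypt(f.read())
assert restored == record
```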


The Internet of Things and How It Impacts Data Centers


From time to time we see new “catchphrases” or terminology pop up in the tech world, and suddenly they are being used everywhere.  One of those phrases is “the internet of things.”  As our world becomes increasingly automated and digitized, many aspects of our day-to-day lives are now controlled by, or taking place on, the internet.  Forbes explains exactly what “the internet of things” means: “Simply put, this is the concept of basically connecting any device with an on and off switch to the Internet (and/or to each other). This includes everything from cellphones, coffee makers, washing machines, headphones, lamps, wearable devices and almost anything else you can think of.  This also applies to components of machines, for example a jet engine of an airplane or the drill of an oil rig. As I mentioned, if it has an on and off switch then chances are it can be a part of the IoT.  The analyst firm Gartner says that by 2020 there will be over 26 billion connected devices… That’s a lot of connections (some even estimate this number to be much higher, over 100 billion).  The IoT is a giant network of connected “things” (which also includes people).  The relationship will be between people-people, people-things, and things-things.”

The reality is, whether people like it or not, the internet of things is taking over, and our world is being powered by the internet.  This impacts our day-to-day lives in different ways, but one thing data centers know is that it means more data.  Everything that becomes part of the internet of things involves data.  A coffee machine that is connected to the internet will require data communication and storage on some small level, and if a jet engine is connected, it is probably using a lot of data.  As more things join the internet of things, data center demands increase exponentially.  Data centers must begin to prepare now, because as time marches on, the internet of things will only grow.  Data Center Dynamics points out just how much this will impact data centers going forward: “The internet of things will force enterprise data center operators to completely rethink the way they manage capacity across all layers of the IT stack, according to a recent report by the market research firm Gartner… Where this becomes problematic for data centers is management of security, servers, storage and network, Joe Skorupa, VP and distinguished analyst at Gartner, said. “Data center managers will need to deploy more forward-looking capacity management in these areas to be able to proactively meet the business priorities associated with IoT,” he said in a statement.  For data center networks, the internet of things will basically mean a lot more incoming traffic. WAN links in data centers today are designed for “moderate” bandwidth requirements of human interaction with applications. Data from multitudes of sensors will require a lot more bandwidth than current capacity… Of course a lot more data will mean a lot more storage will have to be provisioned in data centers. In addition to pure capacity, companies will have to focus on being able to get and use data generated by the internet of things cost effectively. Because of the volume of data and the amount of network connections that carry it, there will be more need for distributed data center management and appropriate system management platforms.”  While end-users may take the convenience of the internet of things for granted, data centers do not have that luxury.  They must be vigilant in preparation and expansion to ensure they can accommodate the dramatically growing data needs that the internet of things presents.
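To see why “more things means more data” matters for capacity planning, consider a back-of-envelope estimate; every figure below is a hypothetical assumption, not a measurement from any real deployment.

```python
# Back-of-envelope IoT capacity planning. All figures are hypothetical
# assumptions; substitute your own sensor counts and reporting rates.
SENSORS = 500_000             # connected devices reporting to one site
BYTES_PER_READING = 200       # payload plus protocol overhead
READINGS_PER_MIN = 2          # reporting frequency per device

bytes_per_sec = SENSORS * BYTES_PER_READING * READINGS_PER_MIN / 60
print(f"Inbound traffic: {bytes_per_sec * 8 / 1e6:.1f} Mbit/s")

bytes_per_day = bytes_per_sec * 86_400
print(f"Raw storage per day:  {bytes_per_day / 1e12:.2f} TB")
print(f"Raw storage per year: {bytes_per_day * 365 / 1e12:.0f} TB")
```

Even at these modest per-device rates, a single site accumulates on the order of a hundred terabytes of raw sensor data per year, before replication or backups.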


The Cost of a Data Center Security Breach


If there is one thing a data center is concerned with, aside from maximizing uptime, it is security.  In today’s world we constantly hear news stories about security breaches exposing businesses and individuals to dangers such as identity theft, information loss, and other theft.  Security breaches are not just an embarrassing frustration; they are a costly one as well.  Large businesses can obviously suffer significant losses, but the losses experienced by small and medium-sized businesses are significant too.  Security Intelligence describes the growing risk of security breaches: “Every corner of the organization — from human resources to operations to marketing — is generating, acquiring, processing, storing and sharing more data every day. Cybersecurity threats have conditioned organizations to defend the full depth of this sensitive information and infrastructure from a global threat landscape…IBM and Ponemon Institute are pleased to release the “2015 Cost of Data Breach Study: Global Analysis.” According to our research, the average total cost of a data breach for the participating companies increased 23 percent over the past two years to $3.79 million.”  This growing problem and its increasing cost are a clear signal that data centers and businesses alike must pay careful attention to security measures to ensure that data is properly protected.

While upping your security protection will certainly involve an up-front investment, if you are protecting critical information such as health, financial, or social security records, the cost of a breach will be far more than the cost of protection.  For example, cloud security may be adequate for less critical information (e.g., social media), but stronger protection is better for more sensitive information.  If you have a smaller business and data center, you may think the risk of a security breach is smaller, but statistics show that security breaches are a growing reality for organizations of all sizes.  Data Center Knowledge points out the frequency of security breaches: “Roughly half of businesses in the U.S. (49 percent) and globally (52 percent) assume that their IT security will be breached sooner or later. This is a recognition of reality, as 77 percent of U.S. businesses and 82 percent globally have experienced between 1 and 5 separate data security incidents in the last year.”  Data Center Knowledge also notes that small and medium businesses that experience a security breach typically incur a loss, on average, of $86,500.  And the cost of liability for data breaches is growing, emphasizing the importance of protecting customers’ and users’ private information.  As Data Center Knowledge points out, “There’s legislation brewing that would make organizations far more accountable for breaches of personal information and require them to pay actual damages to individuals, something he thinks will reverse the trend toward cloud and colocation back to in-house.”
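One way to weigh protection spending against breach risk is a simple expected-loss comparison.  The sketch below treats the survey figures quoted above as rough inputs, which is itself an assumption (the 49 percent figure describes expectations, not a measured annual probability), and the security-spend number is purely hypothetical.

```python
# A hedged expected-loss comparison; probabilities and the spend figure
# are illustrative assumptions, not results from the studies quoted above.
breach_probability = 0.49       # treating "49% expect a breach" as a rough rate
avg_smb_breach_cost = 86_500    # average SMB loss cited by Data Center Knowledge
annual_security_spend = 25_000  # hypothetical cost of upgraded protection

expected_annual_loss = breach_probability * avg_smb_breach_cost
print(f"Expected annual breach loss: ${expected_annual_loss:,.0f}")

if annual_security_spend < expected_annual_loss:
    print("Upgraded protection costs less than the expected loss.")
```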


Advantages of Data Center Consolidation


In today’s data center world, there is a lot of discussion about increasing rack density, utilizing the space you have without relocating, and more.  Working with the space you have to accommodate growing data center needs and increasing infrastructure demands may require some creative thinking, but consolidation can be extremely beneficial.  Data center square footage does not come cheap, and running large data centers, or multiple data centers, uses a lot of energy and manpower.  Because of this, many data center managers are looking more closely at ways they can consolidate within their data center, or within their network of data centers, to save on overhead and energy use.

First, it is important to look at organizations that have multiple data centers.  This can happen as a result of a business acquiring other organizations that have existing data centers in place, or through gradual expansion of needs.  During growth, it can seem, or even actually be, less expensive to simply keep those additional data centers open, but in the long run it will not be.  Separate data centers require separate energy usage, separate rent or mortgage, separate personnel, separate infrastructure, and more.  Those things add up over time, and often what businesses find is that there are unnecessary redundancies that can be solved with consolidation.  The obvious concern with consolidation is downtime, which can lead to loss of critical data, loss of money, and general frustration.  Data Center Knowledge explains why consolidation is often the better choice, and which three areas to look at when beginning to consolidate: “In many cases, creating better efficiency and a more competitive data center revolves around consolidating data center resources. With that in mind, we look at three key areas that managers should look at when it comes to data center consolidation. This includes your hardware, software, and the users… There are so many new kinds of tools we can use to consolidate services, resources, and physical data center equipment. Solutions ranging from advanced software-defined technologies to new levels of virtualization help create a much more agile data center architecture… The software piece of the data center puzzle is absolutely critical. In this case, we’re talking about management and visibility. How well are you able to see all of your resources? What are you doing to optimize workload delivery? Because business is now directly tied to the capabilities of IT, it’s more important than ever to have proactive visibility into both the hardware and software layers of the modern data center. Having good management controls spanning virtual and physical components will allow you to control resources and optimize overall performance… Data center consolidation must never negatively impact the user experience. Quite the opposite; a good consolidation project should actually improve overall performance and how the user connects. New technologies allow you to dynamically control and load-balance where the user gets their resources and data. New WAN control mechanisms allow for the delivery of rich resources from a variety of points. For the end-user, the entire process is completely transparent. For the data center, you have fewer resource requirements by leveraging cloud, convergence, and other optimization tools.”  Every data center that has grown over time, or that has a network of data centers, should carefully consider where consolidation can occur to save money, improve efficiency, and improve overall quality of service.
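As a starting point for the hardware piece, utilization data can flag consolidation candidates.  The sketch below assumes a small hypothetical inventory; in practice the numbers would come from your DCIM or monitoring platform, and the threshold is an assumption to tune.

```python
# A minimal sketch for flagging consolidation candidates from utilization
# data. The inventory and threshold are hypothetical; a real survey would
# come from a DCIM or monitoring platform.
servers = [
    {"name": "db-01",  "cpu_util": 0.62, "site": "Phoenix"},
    {"name": "web-04", "cpu_util": 0.08, "site": "Phoenix"},
    {"name": "app-11", "cpu_util": 0.11, "site": "Denver"},
    {"name": "app-12", "cpu_util": 0.09, "site": "Denver"},
]

THRESHOLD = 0.15  # assumed cutoff for "underutilized"

for s in servers:
    if s["cpu_util"] < THRESHOLD:
        print(f'{s["name"]} at {s["site"]}: {s["cpu_util"]:.0%} CPU '
              f"- candidate for virtualization or consolidation")
```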


Is WAN Optimization the Future of Data Centers?


WANs, or wide-area networks, may not have been a priority in the past, but more and more data center managers are looking closely at WAN optimization as the future of data centers.  WAN optimization involves a series of techniques, such as data deduplication, traffic shaping, data caching, compression, and network monitoring, in an effort to speed interconnectivity.  TechTarget explains the importance of focusing on the WAN moving forward: “A data center interconnect has historically replicated data from a primary data center to a disaster recovery site or backup data center. However, virtualization and cloud computing are transforming the role of a data center interconnect, and wide area network (WAN) managers must adjust their approach to these increasingly critical WAN links… WAN managers need to understand the changing environment within data centers and prepare for an increased demand on the WAN links that interconnect multiple data centers… WAN optimization makes transfer protocols more efficient and reduces the volume of traffic through compression and deduplication.”

Every data center needs a WAN strategy that is unique and carefully configured to meet the data center’s specific needs.  As remote access and nationwide capability needs increase, connectivity and speed demands shift and become more and more important.  When WAN optimization is executed properly, bandwidth limitations are mitigated and access to applications is improved.  We have previously discussed the shift toward data center consolidation as a way to improve efficiency while lowering overhead and personnel costs, but data center consolidation means consolidating IT infrastructure as well.  With so many data centers consolidating IT infrastructure, there are fewer small data centers, which means greater distance between the end-user and the data center, and that can mean poor application performance from latency and network congestion.  With fewer but larger servers, traffic increases and WAN optimization becomes all the more important; data center consolidation can move forward effectively through WAN optimization.  Its importance will only continue to grow, as Data Center Knowledge notes: “This means that while the CIO is trying to exercise tighter control over the corporate wide-area network (WAN), users are expecting looser controls and the ability to access anything, anywhere, anytime with scant regard for security or the impact on network performance. Look into the usage logs of most corporations today and you will find hours spent on Facebook, Twitter and YouTube, for example. This usage is expensive. The study further concluded that social media networks could potentially be costing Britain up to $22.16 billion. The solution CIOs desire is a fully integrated single platform that delivers complete WAN optimization capabilities, the insight to allow management to keep its eye on exactly what traffic is traversing the network, and the flexibility to dynamically optimize it when and if required.”
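At the heart of WAN optimization’s deduplication is a simple idea: fingerprint chunks of traffic and send a short reference for anything the far side has already stored.  The toy sketch below uses fixed-size chunks for clarity; production optimizers use content-defined chunking, compression, and protocol acceleration on top of this.

```python
# A toy illustration of WAN deduplication: fingerprint fixed-size chunks
# and transmit a short reference for any chunk already cached remotely.
import hashlib

CHUNK = 4096
seen = {}  # fingerprint -> chunk, standing in for the remote cache

def optimize(stream: bytes):
    """Yield ('ref', digest) for known chunks, ('raw', chunk) otherwise."""
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in seen:
            yield ("ref", digest)   # 64-char reference replaces a 4 KB chunk
        else:
            seen[digest] = chunk
            yield ("raw", chunk)

payload = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # repetitive traffic
sent = list(optimize(payload))
raw_bytes = sum(len(c) for kind, c in sent if kind == "raw")
print(f"{len(payload)} payload bytes reduced to ~{raw_bytes} raw bytes on the wire")
```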


Data Centers Must Protect Against Arc-Flash

When you think about “protection” in a data center, you probably think about protecting critical data, protecting infrastructure, protecting uptime, and so on.  But it is also important to think about protecting data center workers.  Whether a data center is small or large, the large amount of electrical equipment means certain safety measures must be taken to ensure worker safety.  One hazard data centers must protect against is “arc-flash.”  Data center workers are in a conundrum of sorts: to work on or maintain certain electrical components without risk of arc-flash, power to those components must be turned off, yet retaining uptime often means that various electrical components cannot be shut off.  DataInformed explains what arc-flash is, and why it is such a significant concern in data centers: “An important electrical risk in the data center is arc-flash incidents. Arc-flash incidents, which are caused by arcing from an electrical fault, potentially creating a blast similar to an explosion, happen between five and 10 times a day in U.S. industry and result in one death every single workday.  Although data center design, permitting and construction are in adherence to modern electrical safety requirements, data center workers must be trained and competent, and must maintain compliance with all OSHA requirements to keep electrical safety in the data center at its current high standard.”

Not only is maximizing protection against arc-flash important for the peace of mind of both employer and employee, it also helps a data center remain OSHA compliant, which reduces liability and cuts down on costs.  The specifics of how a data center will implement arc-flash protection are complex and highly individualized.  Data centers that implement best practices, though, will ultimately improve both safety and uptime.  When designing infrastructure and preparing a data center, it is critical that an arc-flash analysis be completed before the data center is up and running at full capacity.  Data Center Knowledge elaborates on what is involved in an arc-flash analysis or study: “An arc flash study looks at all the electrical components, from the source at the power company, the whole way through to the plugs that you plug into your IT equipment,” Furmanski told us in an interview.  “They look at how all the circuit breakers are set up — it’s called a coordination study — and they look at the power going through.  They punch in all these formulas to figure out, will these breakers move fast enough if there’s an electrical short, or will they move too slowly and let the capability of an arc flash be created?”  If your data center has not recently had an arc-flash analysis, or you are not sure it ever has, it is incredibly important to complete one as soon as possible to maximize worker safety and uptime.
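For a rough sense of what such a study computes, the sketch below applies the Ralph Lee approximation for incident energy, one of the simplified methods referenced alongside IEEE 1584.  The inputs are hypothetical, and this is an illustration only; a real arc-flash analysis must be performed by a qualified engineer using the full standard and the facility’s actual coordination data.

```python
# Illustration only: the Ralph Lee method for estimating incident energy.
# A real arc-flash study uses the full IEEE 1584 procedure and measured
# coordination data, not this simplified approximation.
def lee_incident_energy(v_kv, i_bf_ka, t_s, d_mm):
    """Incident energy in cal/cm^2 (Lee method approximation)."""
    e_joules_cm2 = 2.142e6 * v_kv * i_bf_ka * (t_s / d_mm**2)
    return e_joules_cm2 / 4.184  # joules -> calories

# Hypothetical inputs: 0.48 kV bus, 30 kA bolted fault current,
# 0.1 s clearing time, 455 mm (18 in) working distance.
energy = lee_incident_energy(0.48, 30, 0.1, 455)
print(f"Estimated incident energy: {energy:.1f} cal/cm^2")
```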


2 Data Center Outages That Are Preventable

Data centers function with the continuous goal of maximizing uptime.  It is important to avoid outages at all costs while constantly trying to improve energy efficiency and maximize data storage and speed.  A variety of factors contribute to data center outages, but the bottom line is that, from time to time, they do happen.  The problem is that outages are not only frustrating; they can result in data loss and significant financial loss.  So, what is a data center to do?  Are these outages simply unavoidable, aggravating occurrences?  No.  In fact, Emerson Network Power notes just how preventable they can be: “According to the 2013 Study on Data Center Outages by the Ponemon Institute, sponsored by Emerson Network Power, 71% of survey respondents said some or all of unplanned outages experienced within the last 24 months were preventable.”  Below, we discuss two common types of data center outages that are, by and large, preventable.

  1. Human Error
    • Human error is, unfortunately, one of the most frequently cited causes of data center outages. It can be mitigated with simple measures such as shielding and correctly labeling Emergency Power Off buttons, which are often mislabeled or left unprotected.  Additionally, well-communicated operating instructions and procedures can help reduce errors that stem from a lack of information or training.  Finally, what may seem like a no-brainer: strict food and drink policies.  Even a small spill on critical equipment could lead to an outage, so it is important to have firm rules in place.
  2. UPS/Battery Failure
    • Power supplies can fail for a number of reasons: age, local power outages, storms, surges, and more. For this reason it is critical that an uninterruptible power supply (UPS) be used and, perhaps even more importantly, that it be redundant.  Have a power supply that is adequately sized for your entire capacity and power load, as well as a backup supply that is also adequate, and be certain to perform proper UPS and battery maintenance routinely (a simple capacity check is sketched below).  Green House Data describes the importance of a proper DCIM platform: “As data centers become more and more dense, they are drawing more power at each rack. Don’t allow your UPS design to fall below your average IT load. A Data Center Infrastructure Management (DCIM) platform can help you evaluate power draw throughout a given period. Redundant UPS systems are also a necessity to achieve the goal of 100% uptime.”
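Here is a minimal sketch of the kind of capacity check a DCIM platform automates: verifying that measured IT load stays within N+1 UPS capacity.  All figures are hypothetical.

```python
# A minimal sketch of a UPS capacity check under N+1 redundancy.
# All figures are hypothetical; real values come from DCIM monitoring.
UPS_UNITS = 3            # installed UPS modules
UNIT_CAPACITY_KW = 200   # rating of each module
peak_it_load_kw = 350    # peak draw observed over the period

# With N+1 redundancy, the load must be carried even if one unit fails.
usable_kw = (UPS_UNITS - 1) * UNIT_CAPACITY_KW

if peak_it_load_kw > usable_kw:
    print("WARNING: IT load exceeds N+1 UPS capacity - add capacity.")
else:
    print(f"OK: {usable_kw - peak_it_load_kw} kW of headroom under N+1.")
```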


The Future of the Data Center: Scale

What will the data center look like in 5 or even 10 years?  It may sound impossible to predict, but experts are weighing in with their predictions for the future of data centers.  The storage systems and servers of today will be a distant memory, and cloud computing will take on a whole new life.  While 5 or 10 years may sound far off, it is important for the data centers of today to start anticipating these changes and preparing for the future so they can stay ahead of the game and not fall by the wayside.  Storage needs are changing daily, so it is easy to understand that they will be significant in the future.  Many experts see most data centers becoming scale data centers by 2025.  Data Center Knowledge elaborates on what “scale data centers” are: “Scale data centers are data centers designed the same way web giants like Google, Microsoft, and Facebook design their facilities and IT systems today. Intel isn’t saying most data centers will be the size of Google or Facebook data centers, but it is saying that most of them will be designed using the same principles, to deliver computing at scale.”

Delivering computing at scale is not a simple concept or an easily achievable task, but it is necessary to meet the expected demands of the technology and users of the future.  Data Center Knowledge goes on to explain the future demands that will necessitate scale data centers: “Things like the three major forms of cloud computing (IT infrastructure, platform, or software delivered as subscription services), connected cars, personalized healthcare, and so on, all require large scale. ‘If you’re doing a connected-car type of solution, that’s not a small-scale type of deployment,’ Waxman said. ‘If you’re doing healthcare and you’re trying to do personalized medicine, that’s a large-scale deployment.’”  As data volumes increase, data centers must be able to scale non-disruptively, and infrastructure must be carefully managed so it can scale up on demand.  The costs of meeting these demands can be managed more easily by scaling data centers up gradually.  Schneider Electric also predicts that scale will define the future of data centers: “We’ll see a dominance of at scale wholesale data centers, with a movement towards at scale cloud providers and the verticalization and specialization of the smaller providers in between,” he says. “There will also be a secondary movement to the edge.” He defines “at scale” as at least 15MW or more, a size needed to support cost effective IoT and big data deployments — two of the drivers changing the market according to Doug. “Big data, derived in large from the IoT, is helping shape the way companies develop, improve and bring products to market and serve consumers and customers,” said Doug. “Ultimately, all that data resides in a data center where there must be enough power to process and analyze it.”


Is It Time to Adjust Your Data Center’s Chilled Water Temperature?

Data center cooling is a topic that could be discussed endlessly.  What works best for one data center may not work well for another, depending on a variety of factors including location, size, and type of building.  Cooling with water is an eco-friendly and exceptionally effective means of cooling, and many are finding that chilled water may be even more effective.  It remains the goal of most data centers to cool effectively while also being efficient and eco-friendly.  In a chilled water system, a chiller produces chilled water that is pumped to the CRAH (computer room air handler); air circulating over the chilled coils is cooled as heat is removed from the room.  The water then circulates back to be chilled again and sent through the system once more, making it a very efficient means of cooling a data center.

In the event of an outage, air-cooled chillers can actually return to operation more quickly, making redundancy easier to achieve as well.  Additionally, chilled water cooling is easily scalable and adaptable to the ever-changing needs of a data center.  In an effort to improve efficiency, many data centers are examining more closely just how cold the chilled water actually needs to be; if it can be adjusted by even a degree or two, a significant improvement in energy efficiency can be made.  Schneider Electric further examines the advantages of adjusting chilled water temperatures in data centers: “In a nutshell, that means many data centers don’t need to be as cool as they used to. Most data centers will find temperatures of 24°-25°C (75°-77°F) will suffer no difference in reliability vs. cooler temperatures… If temperatures inside the data center are higher than in the past, that means the temperature of the chilled water used to cool it – known as the set point for the chillers – can also be higher. As it turns out, that has a profound effect on cooling system efficiency. Raising the chilled water set point from the usual values of 7° to 10°C used in comfort cooling chilled water plants up to 18° to 20°C or higher can result in an operational expense savings of about 40%. That’s because less energy is required to cool the water year-round. In summer, higher evaporating temperatures mean compressors don’t have to work as hard, resulting in improved efficiency. In cooler months, users benefit from many more hours of economizer or “free cooling” operation. A higher set point also results in a capital expense savings of some 30% because chillers don’t have to be as large as at traditional temperatures.”  By re-examining what temperature your data center really needs to maintain to maximize uptime, you may be able to raise your chilled water temperature, saving significant expense and dramatically improving energy efficiency.
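To put the quoted 40% cooling-energy savings in perspective, here is a quick illustration with assumed plant figures; the consumption and utility rate are hypothetical, and only the savings fraction comes from the passage above.

```python
# Applying the ~40% savings figure quoted above to a hypothetical plant,
# purely to illustrate the scale involved. Plant figures are assumptions.
annual_cooling_kwh = 2_000_000  # assumed chiller plant consumption
price_per_kwh = 0.10            # assumed utility rate, USD
SAVINGS_FRACTION = 0.40         # from raising the set point per the quote

saved_kwh = annual_cooling_kwh * SAVINGS_FRACTION
print(f"Energy saved: {saved_kwh:,.0f} kWh/year")
print(f"Cost saved:   ${saved_kwh * price_per_kwh:,.0f}/year")
```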


Active Security Monitoring


Security Risks

In the wake of many high-profile data breaches, from government institutions to retailers, the data management world is evolving.  It now demands a more active security policy in order to reduce the amount of time that sensitive data is unknowingly exposed to malicious actors.  A strictly preventive security policy, although at times effective, opens the door to destructive malware: lax monitoring of data movement can leave compromised systems undetected and unpatched for indeterminate periods, unnecessarily exposing data systems.

Active Monitoring Systems

Although preventative measures are an essential part of security, actively monitoring data systems results in quicker detection of penetration by malicious software; breaches may go unnoticed for long periods if only preventative measures are taken in the data center.  It is a given that systems should be monitored on a daily basis, but dealing with a large flood of data and knowing how to prioritize it is nearly impossible for a large data center, especially in the face of remote access by authorized staff from various locations, who may unknowingly bring security risks into the operational environment.  With so much data coming in so quickly, from a variety of secure and insecure networks, the only practical answer for monitoring such a large-scale system of data transfer and network activity is software-based analytics.  Big data can be sorted and actively monitored in a meaningful manner through computational algorithms that rank malicious activity by potential risk, reducing false positives and non-factor threats that will be blocked by preventative systems, and giving security personnel a more focused view of malicious activity on the network.  Every detection can be stored and logged for future reference to increase the efficiency of automated detection systems.  Software-based analysis and monitoring of network activity helps identify issues as they stream in; with sorting by priority and potential risk, security personnel are able to catch threats immediately.  This reduces liability, since security breaches are detected on the fly, reducing the exposure of sensitive data and the time that malicious software has access to those systems.
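As a concrete illustration of risk-scored triage, the sketch below ranks events by a weighted score so analysts see likely-malicious activity first; the weights, flags, and sample events are invented for illustration, not drawn from any particular analytics product.

```python
# A minimal sketch of risk-scored alert triage: events are ranked so
# analysts see likely-malicious activity first instead of a raw flood.
# The scoring weights and sample events are hypothetical.
RISK_WEIGHTS = {
    "known_bad_ip": 50,
    "off_hours": 15,
    "sensitive_host": 25,
    "failed_auth_burst": 30,
}

events = [
    {"id": 1, "flags": ["off_hours"]},
    {"id": 2, "flags": ["known_bad_ip", "sensitive_host"]},
    {"id": 3, "flags": ["failed_auth_burst", "off_hours", "sensitive_host"]},
]

def score(event):
    return sum(RISK_WEIGHTS.get(f, 0) for f in event["flags"])

# Rank highest-risk first; log everything for tuning the detectors later.
for event in sorted(events, key=score, reverse=True):
    print(f'event {event["id"]}: risk={score(event)} flags={event["flags"]}')
```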
