
Should Data Centers Attempt to Achieve a Single Pane of Glass for DCIM?

Data centers run a lot of software and applications in order to perform all of the necessary tasks a data center completes.  For a data center to run properly there is not just one program to get everything done, there are many.  And, over time, more and more are added as technology and data center priorities change.  Data center infrastructure management, or DCIM, is a top priority for data center managers because without access to and analysis of the most current data regarding data center infrastructure, programs and applications, devastating problems can and will arise.  A data center manager who is not well informed may not know what the rack density maximums are and may not realize they are going to be exceeded until it is too late.  What happens when it is too late?  Power outages and downtime.  Downtime can be devastating for a business of any size, so it must be avoided at all costs.  Due to the urgency and importance of effective DCIM, many data centers and managers are on a quest to get their DCIM on a “single pane of glass.”  If they can see everything they need to see on a single pane of glass, they do not need to change applications or go rooting around for various data; it is all in one easy-to-access place.  But can it really be achieved?  Can everything a data center manager needs to know be located on a single pane, or is this a fruitless quest?

A single pane of glass for DCIM can be achieved but, most likely, application consolidation will need to take place.  Consolidation is a cost-effective and efficient way to achieve a single-pane-of-glass approach.  There are service providers who can consolidate applications and achieve a unified framework, and thus a unified view of applications.  A single pane of glass may be able to fit everything a data center manager needs to monitor operations, but can they maintain and optimize their data center from a single pane?  If not, how effective is it really?  By hiring a service provider experienced in consolidation and optimization, you can work closely with them for a smooth transition that does not just lead to more headaches, confusion and miscommunication.  If the approach is implemented effectively, all of the software tools can share data to offer the most current information about how the data center is functioning, but in attempting the transition some data centers find that compiling so much data into a single database for analysis often introduces errors.  While a single pane of glass is possible, it may be difficult to achieve, but with the help of an experienced professional, time and patience, your data center manager can get the view they hope for and improve DCIM.
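
As a rough illustration of what consolidating onto a “single pane of glass” means in practice, the hedged Python sketch below merges readings from several hypothetical monitoring sources (power meters, rack sensors) into one view and flags racks nearing limits. The source names, fields and thresholds are assumptions for illustration only, not a reference to any particular DCIM product.

```python
# Minimal sketch of consolidating DCIM data into a single view.
# All source names, fields, and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Reading:
    source: str      # e.g. "power-meter", "rack-sensor"
    rack_id: str
    metric: str      # e.g. "kw_draw", "inlet_temp_c"
    value: float

def unified_view(readings: list[Reading]) -> dict[str, dict[str, float]]:
    """Merge readings from many tools into one dict keyed by rack."""
    view: dict[str, dict[str, float]] = {}
    for r in readings:
        view.setdefault(r.rack_id, {})[r.metric] = r.value
    return view

def flag_problems(view, max_kw=8.0, max_temp_c=27.0):
    """One place to spot racks nearing the assumed power or temperature limits."""
    for rack, metrics in view.items():
        if metrics.get("kw_draw", 0.0) > max_kw:
            print(f"{rack}: power draw {metrics['kw_draw']} kW exceeds {max_kw} kW limit")
        if metrics.get("inlet_temp_c", 0.0) > max_temp_c:
            print(f"{rack}: inlet temp {metrics['inlet_temp_c']} C exceeds {max_temp_c} C limit")

if __name__ == "__main__":
    data = [
        Reading("power-meter", "rack-01", "kw_draw", 8.6),
        Reading("rack-sensor", "rack-01", "inlet_temp_c", 25.1),
        Reading("power-meter", "rack-02", "kw_draw", 4.2),
    ]
    flag_problems(unified_view(data))
```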


Environmental Impact of Data Centers

Data centers perform an invaluable job, but that job comes at a cost.  Data centers use a significant amount of energy and, while many are making efforts to be more energy efficient, the fact of the matter is that many data centers still consume a dramatic amount of energy each year.  This is not only costly but also hard on the environment.  The digital world has some very real and tangible energy side effects.  While it may not all be felt immediately, there is an environmental impact occurring as a result of data centers.  Time discussed just how much energy is being used by data centers and what to expect going forward: “IT-related services now account for 2% of all global carbon emissions, according to a new Greenpeace report. That’s roughly the same as the aviation sector, meaning all those Netflix movies the world is streaming and the Instagram photos they’re posting are the energy equivalent of a fleet of 747s rumbling for takeoff. Unless something is done to green the cloud, we can expect those emissions to grow rapidly—the number of people online is expected to grow by 60% over the next five years, pushed in part by the efforts of companies like Facebook to expand Internet access by any means necessary. The amount of data we’ll be using will almost certainly increase too. Analysts project that data use will triple between 2012 and 2017 to an astounding 121 exabytes, or about 121 billion gigabytes. ‘If you aggregated the electricity use by data centers and the networks that connect to our devices, it would rank sixth among all countries,’ says Gary Cook, Greenpeace’s international IT analyst and the lead author on its report. ‘It’s not necessarily bad, but it’s significant, and it will grow.’”

The fact of the matter is that data centers consume energy, a lot of energy, and that will not change.  But there are ways to reduce energy consumption and find cleaner ways to use energy that leave less of an environmental footprint.  Most data centers use a lot of energy, but they also waste a lot of energy, and that is simply unacceptable.  Energy efficiency must be made a priority in a data center, and a plan of action must be implemented immediately and on a full scale.  Consolidation must take place: fewer servers means less space and less to cool.  And with a high-density rack, cooling can be focused so that cooling efforts are not wasted.  Additionally, containment such as hot aisles and cold aisles is one way to help improve energy usage.  Another, greener option in terms of cooling is to house your data center in a cooler climate that can take advantage of natural cooling and reduce cooling needs within the facility.  Additionally, as more and more moves to the cloud, physical infrastructure can be reduced in data centers, which further reduces cooling needs.  Data centers need to examine their energy usage and look not only for ways to reduce it but also for green options that are more environmentally friendly so that a sustainable data center future can be achieved.
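
One common way to quantify the waste described above is power usage effectiveness (PUE): the ratio of total facility energy to the energy consumed by IT equipment alone. The short Python sketch below works through that arithmetic; the sample figures are made up for illustration.

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
# A PUE of 1.0 would mean every kWh goes to IT gear; higher values mean more
# energy is spent on cooling, power distribution losses, lighting, and so on.
# The figures below are illustrative only.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

if __name__ == "__main__":
    before = pue(total_facility_kwh=2_000_000, it_equipment_kwh=1_000_000)  # 2.0
    after = pue(total_facility_kwh=1_400_000, it_equipment_kwh=1_000_000)   # 1.4
    saved_kwh = 2_000_000 - 1_400_000
    print(f"PUE before containment: {before:.2f}")
    print(f"PUE after containment:  {after:.2f}")
    print(f"Annual facility energy saved: {saved_kwh:,} kWh")
```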


What To Consider During a Data Center Migration


Data center migration.  Those three words could probably send any data center manager running for the hills.  Alas, when a data center has outgrown its current facility or has to be moved for any other reason, a migration is a necessity.  A data center migration is a significant undertaking fraught with potential risks and hidden problems.  So, how does anyone ever accomplish it?  Through proper research, thorough planning that factors in contingencies, careful instruction to ensure that everyone is on the same page, and a lot of patience, a successful data center migration can be achieved.  When anticipating a data center migration there is no avoiding it: you simply have to face the fact that there will be surprises along the way.

For any data center, the primary concern with a migration is downtime.  Even seconds of downtime can be incredibly costly, so reducing or completely eliminating downtime is the name of the game.  While certain things can be moved during off-peak hours, some work may still occur during peak business hours.  For this reason it is very important to keep everyone well informed and on the same page at all times.  End users, support teams and anyone impacted by the migration should be informed of the timetable, schedule and anticipated plans regarding the migration.  Next, it is important to research what existing infrastructure there is, what legacy systems are in place, and what will be making the migration.  Once you have a thorough idea of what will be making the migration, it is important to anticipate what additional infrastructure and equipment will be needed in the new location so that a layout can be planned.  Plan a layout based on best practices in the new data center so that you can accommodate growth as needed.  Scalability is important for the lifespan of any data center, so while you are making the move it is an ideal time to ensure the new data center will be scalable.  When anticipating scalability you must see beyond current needs and standards and look forward to future rack density expectations.  With all of these things considered, you may also want to consider what option is best for efficient cooling.  Cooling is often one of the biggest expenses a data center experiences, so when determining layout you may want to think about implementing hot aisles/cold aisles and whether there are other layouts that may serve your data center better.  The final part of any data center migration is the migration itself.  For most migrations it is a good idea to enlist an experienced project manager with knowledge and expertise in data center migrations to ensure everything goes smoothly and that, once the migration has occurred, everything runs as it should.
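
To make the research and inventory step above concrete, here is a small, hedged Python sketch of a migration inventory: each asset records where it lives today, whether it is a legacy system, and its planned move window, so moves that must happen during business hours can be flagged for communication. The fields and sample entries are assumptions for illustration, not a prescribed format.

```python
# Hypothetical migration inventory: track what is moving, when, and what depends on it.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    current_rack: str
    legacy: bool                 # legacy systems often need special handling
    move_window: str             # e.g. "off-peak Sat 01:00-05:00"
    depends_on: list[str] = field(default_factory=list)

def moves_needing_peak_hours(assets: list[Asset]) -> list[Asset]:
    """Flag anything that cannot be moved off-peak so stakeholders can be warned."""
    return [a for a in assets if "off-peak" not in a.move_window]

if __name__ == "__main__":
    inventory = [
        Asset("erp-db-01", "rack-07", legacy=True, move_window="peak Tue 10:00-12:00",
              depends_on=["san-array-02"]),
        Asset("web-frontend", "rack-03", legacy=False, move_window="off-peak Sat 01:00-05:00"),
    ]
    for asset in moves_needing_peak_hours(inventory):
        print(f"Notify end users: {asset.name} moves during business hours ({asset.move_window})")
```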


How to Build An Immortal & Future-Proof Data Center

Designing and constructing a data center is no small undertaking.  Heating and cooling, energy efficiency, infrastructure management and scalability must all be considered.  A data center must be designed to meet current needs while being able to scale up or down in the future as needed.  The tricky part is that if you build a data center far bigger than current needs, it may not be energy efficient.  But if you build a data center that is just barely big enough and you expand over time, you may quickly outgrow your space and find yourself looking to move, which is costly and difficult.  So, how does one build the elusive immortal data center?  Do not fear, it can be achieved.

First things first: you have to determine the goals of your data center when designing an immortal data center.  One of the most important things to consider is scalability.  Without scalability, most data centers will outgrow themselves with time.  A data center must have dense server racks; do not run racks at only 50% capacity.  Increase the density of racks to maximize space.  But you cannot increase rack density without having sufficient cooling options.  Some data centers prefer to employ hot aisles and cold aisles to heat and cool the facility more efficiently.  Creating hot aisles and cold aisles, or specific zones for certain infrastructure, can drastically improve energy efficiency and reduce energy expenses.  If hot aisles or cold aisles are not ideal for your data center, consider adding a high-density room where cooling energy can be focused on the most demanding area and the rest of the data center can be kept at a warmer temperature.  This will help keep costs down while still efficiently cooling high-density server racks.  Additionally, when designing an immortal, future-proof data center, it is wise to look at greener and more efficient methods of cooling.  By implementing green methods you can save significantly over time while still effectively cooling the data center.  The last consideration for all data centers is the cloud.  The cloud is really the way of the future, and many data centers are converting to more and more cloud usage to reduce physical infrastructure.  This helps free up additional room in a data center and reduce heating and cooling needs.  The cloud is effectively infinitely scalable, so it offers immense potential for the future and is a significant tool to help any data center remain immortal.
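
As a rough illustration of the density-versus-cooling trade-off described above, the hedged Python sketch below checks whether planned per-rack power draw stays within the cooling capacity of a zone. The capacity figures and rack loads are invented for the example.

```python
# Hypothetical check: does planned rack density fit within a zone's cooling capacity?
# Rule of thumb used here: roughly 1 kW of heat removed per 1 kW of IT load.

def zone_cooling_ok(rack_loads_kw: list[float], zone_cooling_kw: float) -> bool:
    """Return True if the zone's cooling can absorb the total planned IT load."""
    total_it_load = sum(rack_loads_kw)
    return total_it_load <= zone_cooling_kw

if __name__ == "__main__":
    high_density_room = [12.0, 14.5, 13.0, 15.0]   # kW per rack, illustrative
    general_floor = [4.0, 3.5, 5.0, 4.5, 4.0]

    print("High-density room fits cooling:",
          zone_cooling_ok(high_density_room, zone_cooling_kw=50.0))   # False: overcommitted
    print("General floor fits cooling:",
          zone_cooling_ok(general_floor, zone_cooling_kw=30.0))       # True
```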


Importance of Circuit Protection in Data Centers

Data centers use a lot of power, have a lot of equipment and run around the clock.  With so much going on, circuit breakers have a lot to manage and can easily become overloaded without proper attention, maintenance and regulation.  When a circuit breaker gets overloaded, an arc flash can occur.  If you are not familiar with what an arc flash is, you should be.  Why?  They are very dangerous and pose a significant risk to data centers.  The Workplace Safety Awareness Council explains what an arc flash is and why it should be avoided at all costs: “Simply put, an arc flash is a phenomenon where a flashover of electric current leaves its intended path and travels through the air from one conductor to another, or to ground. The results are often violent and when a human is in close proximity to the arc flash, serious injury and even death can occur.”  This is an understandable concern for data center managers because an arc flash could endanger data center employees and possibly damage data center equipment and infrastructure.

To best achieve circuit breaker protection and prevent arc flashes from occurring, you have to begin by protecting the circuit breaker from overload.  You have to assess the load and then size the load to what each circuit breaker can manage.  With monitoring and the use of today’s technology, the load each circuit breaker is experiencing can be assessed, monitored and tweaked as needed.  While this is a start, it is not a complete solution because it does not protect against significant arc flashes.  Next, protection must be implemented for branch circuit breakers to ensure that there are no upstream or downstream problems if a circuit breaker is tripped.  Through the use of tools like ZSI, or zone selective interlocking, circuit breakers can better communicate with each other.  Upstream and downstream circuit breakers communicate so that, should a problem occur downstream, a message is sent to upstream circuit breakers to wait until the problem has been cleared.  When you better manage circuit breakers you protect your data center from downtime as well as other significant concerns such as injury to workers and damage to property; no price tag can be put on these things, they are truly invaluable.  In addition to implementing effective systems to protect circuit breakers from becoming overloaded or experiencing arc flashes, it is important to properly train personnel so that they know how to assess and maintain circuit breakers and know what to do should a problem occur.
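
To make the “assess the load, then size it to the breaker” step concrete, here is a small, hedged Python sketch that flags circuits running above the commonly cited 80% continuous-load guideline. The circuit names and readings are hypothetical; actual sizing must follow the applicable electrical code and the equipment’s ratings.

```python
# Hypothetical overload check using the common 80% continuous-load guideline.
# Circuit names, ratings, and measured loads are illustrative only.

CONTINUOUS_LOAD_FACTOR = 0.80

def overloaded_circuits(circuits: dict[str, tuple[float, float]]) -> list[str]:
    """circuits maps name -> (breaker_rating_amps, measured_load_amps)."""
    flagged = []
    for name, (rating, load) in circuits.items():
        if load > rating * CONTINUOUS_LOAD_FACTOR:
            flagged.append(f"{name}: {load:.1f} A exceeds 80% of {rating:.0f} A rating")
    return flagged

if __name__ == "__main__":
    readings = {
        "pdu-1/branch-03": (20.0, 17.5),   # over the 16 A guideline
        "pdu-1/branch-04": (20.0, 12.0),   # fine
        "pdu-2/branch-01": (30.0, 25.0),   # over the 24 A guideline
    }
    for warning in overloaded_circuits(readings):
        print("WARNING:", warning)
```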


How To Create a Successful Corporate IT Department

Corporate IT departments are complex and critical to the success of any business.  In today’s world, technology is constantly evolving, more and more is being done online and customers are demanding constant uptime.  A corporate IT department, run by a CIO, needs to stay on the cutting edge and meet demands to stay effective and successful.  The successful CIO will undoubtedly be a trendsetter by nature, driven by technology and all of its constant changes.  To be anything else will leave the CIO, and the entire IT department, reactive in its approach.

Because IT departments demand a lot of time and money from any business, their expenses and budget are often under great scrutiny.  Infrastructure changes often, which leads to constant expense, and in the corporate world the expense can be significant, particularly for companies with hundreds or thousands of employees.  One step toward a successful corporate IT department is to try to limit IT infrastructure expenses to around 50% of the budget, as opposed to the more common 70%.  In today’s workforce most companies are dealing with a wide range of technology knowledge among their employees, but the great majority are relatively tech-savvy.  This means that when a corporate IT department does not perform as it should, employees are well aware.  That does not mean, however, that there are not individuals in the workforce who are significantly less tech-savvy.  So, how does a corporate IT department manage such a wide range of knowledge and skill?  CIOs have an important task in choosing a product suite that suits the needs of their employee demographic while pushing technology and infrastructure forward to stay current.  But when choosing infrastructure, the budget must be an important consideration as well, so that infrastructure costs do not become astronomically out of proportion to the rest of the budget.  As more IT departments and data centers move toward automation and cloud computing, infrastructure can be greatly reduced, which frees up more money in the IT budget.  This is something that not only the CIO but also the CFO and the rest of management will be happy about.  To truly have the most successful IT department in a corporate setting, the CIO must lead the ship and carefully direct the path of the department through careful consideration of all products, well-executed management and training of corporate personnel, an eye on the bottom line of the budget, and, ultimately, a technology-driven, cutting-edge perspective that is focused on innovation.


Data Center Automation

Data centers are supposed to be technology driven and on the cutting edge of technological advances.  But when technology changes in the blink of an eye, data center infrastructure and programs can very quickly become outdated.  A data center is the core of any IT department, and when things are outdated they no longer run smoothly or independently and become quite problematic.  As a way to mitigate these issues, and hopefully avoid them altogether, many data centers are making the move toward automation to support cloud computing.  When automation works as it should, processes run smoothly and, ideally, on their own, performing the desired tasks and keeping a data center functioning as it should.

When it comes to automation, testing, more testing and, yes, even more testing is critical.  Everything must be checked to ensure that it is running properly before full implementation.  When automation is done correctly, it allows a data center to be more scalable, so that load can be actively balanced, and more efficient, which improves response time.  Because efficiency is the name of the game, more and more data centers are finding ways to reduce and improve infrastructure and ultimately cut overhead and overall data center expenses.  Cloud computing, coupled with automation, provides an opportunity to do just that, lowering expenses and remaining competitive without sacrificing quality of service.  By automating a data center, personnel are able to control all resources at both the software and hardware level.  Data Center Knowledge provides a great description of the different layers of automation available and what that means for data centers (a small, illustrative provisioning sketch follows the list):

The automation and orchestration layers

  • Server layer. Server and hardware automation have come a long way. As mentioned earlier, there are systems now available which take almost all of the configuration pieces out of deploying a server. Administrators only need to deploy one server profile and allow new servers to pick up those settings. More data centers are trying to get into the cloud business. This means deploying high-density, fast-provisioned, servers and blades. With the on-demand nature of the cloud, being able to quickly deploy fully configured servers is a big plus for staying agile and very proactive.
  • Software layer. Entire applications can be automated and provisioned based on usage and resource utilization. Using the latest load-balancing tools, administrators are able to set thresholds for key applications running within the environment. If a load-balancer, a NetScaler for example, sees that a certain type of application is receiving too many connections, it can set off a process that will allow the administrator to provision another instance of the application or a new server which will host the app.
  • Virtual layer. The modern data center is now full of virtualization and virtual machines. In using solutions like Citrix’s Provisioning Server or Unidesk’s layering software technologies, administrators are able to take workload provisioning to a whole new level. Imagine being able to set a process that will kick-start the creation of a new virtual server when one starts to get over-utilized. Now, administrators can create truly automated virtual machine environments where each workload is monitored, managed and controlled.
  • Cloud layer. This is a new and still emerging field. Still, some very large organizations are already deploying technologies like CloudStack, OpenStack, and even OpenNebula. Furthermore, they’re tying these platforms in with big data management solutions like MapR and Hadoop. What’s happening now is true cloud-layer automation. Organizations can deploy distributed data centers and have the entire cloud layer managed by a cloud-control software platform. Engineers are able to monitor workloads, how data is being distributed, and the health of the cloud infrastructure. The great part about these technologies is that organizations can deploy a true private cloud, with as much control and redundancy as a public cloud instance.
  • Data center layer. Although entire data center automation technologies aren’t quite here yet, we are seeing more robotics appear within the data center environment. Robotic arms already control massive tape libraries for Google and robotics automation is a thoroughly discussed concept among other large data center providers. In a recent article, we discussed the concept of a “lights-out” data center in the future. Many experts agree that eventually, data center automation and robotics will likely make their way into the data center of tomorrow. For now, automation at the physical data center layer is only a developing concept.
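
As a rough illustration of the software-layer automation described in the list above, the hedged Python sketch below watches a (simulated) connection count for an application and triggers provisioning of another instance when a threshold is crossed. It is a generic sketch, not the API of NetScaler, CloudStack, or any other product named above.

```python
# Generic threshold-based provisioning loop; not tied to any specific product.
# get_connection_count() and provision_instance() stand in for real monitoring
# and orchestration calls.

import random
import time

CONNECTION_THRESHOLD = 500   # illustrative limit per application instance

def get_connection_count(app: str) -> int:
    """Stand-in for a real load-balancer or monitoring query."""
    return random.randint(300, 700)

def provision_instance(app: str) -> None:
    """Stand-in for a real orchestration call (e.g. spin up another VM or app instance)."""
    print(f"Provisioning a new instance of {app}")

def autoscale_once(app: str) -> None:
    connections = get_connection_count(app)
    print(f"{app}: {connections} active connections")
    if connections > CONNECTION_THRESHOLD:
        provision_instance(app)

if __name__ == "__main__":
    for _ in range(3):
        autoscale_once("order-service")
        time.sleep(1)
```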

Introduction to VMware & Its Common Terms and Acronyms

Are you familiar with VMware?  If not, you should be.  VMware is a cloud computing and virtualization software vendor whose hypervisor runs on x86 hardware.  By going virtual, IT-related expenses can be greatly reduced, and with VMware multiple virtual machines can be run on a single physical machine.  VMware is a leader in the virtualization field and, with many IT departments choosing to utilize its technology, it is important to understand common VMware terms and acronyms so that everyone can be well informed and on the same page.  Recently, TechRepublic shared a list of 50 VMware terms and acronyms with which everyone in IT should familiarize themselves.  A short scripting example tying a few of these terms together follows the list.

 

  1. VM: Virtual Machine. Okay, that’s easy enough!
  2. ESXi: The vSphere Hypervisor from VMware. For extra trivia points, know that Elastic Sky was the original proposed name of the hypervisor and is now the name of a band made up of VMware employees.
  3. vmkernel: Officially the “operating system” that runs ESXi and delivers storage networking for VMs.
  4. VMFS: Virtual Machine File System for ESXi hosts, a clustered file system for running VMs.
  5. iSCSI: Ethernet-based shared storage protocol.
  6. SAS: Drive type for local disks (also SATA).
  7. FCoE: Fibre Channel over Ethernet, a networking and storage technology.
  8. HBA: Host Bus Adapter for Fibre Channel storage networks.
  9. IOPs: Input/Outputs per second, detailed measurement of a drive’s performance.
  10. VM Snapshot: A point-in-time representation of a VM.
  11. ALUA: Asymmetrical logical unit access, a storage array feature. Duncan Epping explains it well.
  12. NUMA: Non-uniform memory access, when multiple processors are involved their memory access is relative to their location.
  13. Virtual NUMA: Virtualizes NUMA with VMware hardware version 8 VMs.
  14. LUN: Logical unit number, identifies shared storage (Fibre Channel/iSCSI).
  15. pRDM: Physical mode raw device mapping, presents a LUN directly to a VM.
  16. vRDM: Virtual mode raw device mapping, encapsulates a path to a LUN specifically for one VM in a VMDK.
  17. SAN: Storage area network, a shared storage technique for block protocols (Fibre Channel/iSCSI).
  18. NAS: Network attached storage, a shared storage technique for file protocols (NFS).
  19. NFS: Network file system, a file-based storage protocol.
  20. DAS: Direct attached storage, disk devices in a host directly.
  21. VAAI: vStorage APIs for Array Integration, the ability to offload I/O commands to the disk array.
  22. SSD: Solid state disk, a non-rotational drive that is faster than rotating drives.
  23. VSAN: Virtual SAN, a new VMware announcement for making DAS deliver SAN features in a virtualized manner.
  24. vSwitch: A virtual switch, places VMs on a physical network.
  25. vDS: vNetwork Distributed Switch, an enhanced version of the virtual switch.
  26. ISO: Image file, taken from the ISO 9660 file system for optical drives.
  27. vSphere Client: Administrative interface of vCenter Server.
  28. vSphere Web Client: Web-based administrative interface of vCenter Server.
  29. Host Profiles: Feature to deploy a pre-determined configuration to an ESXi host.
  30. Auto Deploy: Technique to automatically install ESXi to a host.
  31. VUM: vSphere Update Manager, a way to update hosts and VMs with latest patches, VMware Tools and product updates.
  32. vCLI: vSphere Command Line Interface, allows tasks to be run against hosts and vCenter Server.
  33. vSphere HA: High Availability, will restart a VM on another host if it fails.
  34. vCenter Server Heartbeat: Keeps vCenter Server available in the event the host running vCenter fails.
  35. Virtual Appliance: A pre-packed VM with an application on it.
  36. vCenter Server: Server application that runs vSphere.
  37. vCSA: Virtual appliance edition of vCenter Server.
  38. vCloud Director: Application to pool vCenter environments and enable self-deployment of VMs.
  39. vCloud Automation Center: IT service delivery through policy and portals; get familiar with vCAC.
  40. VADP: vSphere APIs for Data Protection, a way to leverage the infrastructure for backups.
  41. MOB: Managed Object Reference, a technique vCenter uses to classify every item.
  42. DNS: Domain Name Service, a name resolution protocol. Not related to VMware, but it is imperative you set DNS up correctly to virtualize with vSphere.
  43. vSphere: Collection of VMs, ESXi hosts, and vCenter Server.
  44. SSH to ESXi host: The administrative interface you want to use for troubleshooting if you can’t use the vSphere Client or vSphere Web Client.
  45. vCenter Linked Mode: A way of pooling vCenter Servers, typically across geographies.
  46. vMotion: A VM migration technique.
  47. Storage vMotion: A VM storage migration technique from one datastore to another.
  48. vSphere DRS: Distributed Resource Scheduler, service that manages performance of VMs.
  49. vSphere SDRS: Storage DRS, manages free space and datastore latency for VMs in pools.
  50. Storage DRS Cluster: A collection of SDRS objects (volumes, VMs, configuration).
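
To connect a few of these terms (vCenter Server, ESXi host, VM), the hedged Python sketch below uses the community pyVmomi SDK to log into a vCenter Server and list the VMs it manages. The hostname and credentials are placeholders; this is a minimal example under those assumptions, not an official VMware reference.

```python
# Minimal sketch: list VMs managed by a vCenter Server using pyVmomi.
# Hostname and credentials are placeholders; install the SDK with `pip install pyvmomi`.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def list_vms(host: str, user: str, pwd: str) -> None:
    context = ssl._create_unverified_context()   # lab use only; validate certificates in production
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=context)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            print(f"{vm.name}: power state {vm.runtime.powerState}")
        view.Destroy()
    finally:
        Disconnect(si)

if __name__ == "__main__":
    list_vms("vcenter.example.com", "administrator@vsphere.local", "password")
```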

Data Center Capacity Management

Data center capacity management is a hot topic among data center managers and staff.  Effectively managing data center capacity is a challenging undertaking, and one that is constantly evolving, which keeps everyone on their toes.  Without a capacity management plan that involves the entire data center team, as well as regular auditing of infrastructure, usage and capacity, a data center could quickly run into problems.

One of the best ways to keep data center capacity management under control is to be well informed.  How does a data center manager and team go about doing this?  Regular data center capacity audits.  As soon as you begin audits you will establish a baseline, and as you evaluate your audits each month you will begin to see trends, identify ways to improve, and be able to anticipate whether you may exceed capacity, and when.  Your baseline will include information about how your storage, server and network infrastructure are used.  Within your data center infrastructure management (DCIM) tools, everything is monitored and data is collected to allow for improved operational support and more efficient capacity planning.  Capacity management is becoming more and more important as emphasis is put on lowering data center expenses and improving efficiency.  The logical step for many data center managers is to add more to the existing load within a data center rather than expanding or changing locations.  But the more you add, the more your infrastructure must be capable of handling, and handling efficiently.  When you can assess trends and more accurately predict future capacity needs, budgeting becomes more accurate.  This helps data center managers better manage expenses, investments, changes in infrastructure and more.  This approach improves operational transparency and allows data center managers to anticipate future needs rather than react to them as they occur.
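
As a simple illustration of turning monthly audit data into a forecast, the hedged Python sketch below extrapolates the average month-over-month growth in power usage and estimates when capacity would be exceeded. The numbers are made up; real capacity planning would use your own audit baseline and account for seasonality and planned growth.

```python
# Project when capacity will be exhausted from monthly audit data (illustrative figures).

def months_until_full(monthly_used_kw: list[float], capacity_kw: float) -> float | None:
    """Estimate months until usage reaches capacity at the average observed growth rate."""
    growth = (monthly_used_kw[-1] - monthly_used_kw[0]) / (len(monthly_used_kw) - 1)
    if growth <= 0:
        return None  # flat or shrinking usage: no projected exhaustion
    return (capacity_kw - monthly_used_kw[-1]) / growth

if __name__ == "__main__":
    audits = [310.0, 322.0, 338.0, 351.0, 365.0, 380.0]   # kW drawn, last six monthly audits
    remaining = months_until_full(audits, capacity_kw=450.0)
    if remaining is None:
        print("No capacity exhaustion projected from the current trend")
    else:
        print(f"Projected to reach capacity in about {remaining:.1f} months")
```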

Of course, another part of capacity management is to establish plans.  It is great to use the data collected from analyzing usage to manage capacity and plan for future needs but, beyond that, data center managers must formulate plans for different potential scenarios and inform all data center employees of their roles in those plans.  You cannot plan for just one scenario because, unfortunately, in an ever-evolving world, scenarios can change very quickly.  It is simply best practice to plan for multiple contingencies and involve the entire team so that when it is time to execute, it can be done efficiently and effectively.  This kind of anticipation, planning and execution is exactly the capacity management that prevents downtime and allows a data center to run as it should at all times.


Should You Relocate Your Data Center?

A lot of data centers are doing it, but is it right for you?  Relocation: how do you really know when it is time?  For data centers, there may be some tell-tale signs that it is time to relocate.  And if your business is being relocated, a data center or computer room may simply have no choice but to move with it.  Relocating a data center is no small task and requires a good deal of planning and preparation so that it can be executed smoothly, downtime can be avoided and important data protected.

A data center manager needs to be on the lookout for signs that it may be time to relocate so that potential problems can be avoided.  One of the first signs is a wide array of power problems.  Whether your data center is no longer energy efficient (especially in spite of energy efficiency improvements) or your power source is unstable or unreliable, it is probably time to relocate.  Energy usage, energy efficiency and the cost to power a data center are major topics of conversation for any facility, and the bottom line is that if you cannot achieve energy efficiency within the existing data center, it will dramatically impact the bottom line and cost more money than it should.  A data center designed for energy efficiency will not only be more eco-friendly but will save money, which is important to any business.  Additionally, an unreliable power supply is simply unacceptable in the data center business.  If the power supply fails and your backup power fails for some reason, your data center could experience small amounts of downtime or, worst case scenario, significant downtime.  Even small amounts of downtime come at a significant expense, and prolonged downtime could mean lost revenue, frustrated customers and, ultimately, a business having to shut its doors.  If your data center has any form of consistent power problems, it is time to relocate.  The next problem that could indicate it is time to relocate is that there is no room to grow.  As data centers grow in capacity and add more server racks and other assorted equipment, spare room can very quickly run out.  Once you have outgrown your space there is not much that can be done, which means it is probably time to relocate.  When choosing a location, do not simply accommodate growth that has recently occurred; anticipate additional growth that could occur in the future so that you will not have to move again right away.  An additional reason to consider relocating, especially if you have any of the aforementioned concerns, is that your location is not right.  The wrong location could mean a few different things: it may be inconvenient or too remote, it may be in an area where real estate is too pricey, or it may not be near the target market that you serve.  If any of these are true, they are probably frustrating and expensive issues, and the best way to solve a location problem is to relocate.  Determine whether the benefits and potential savings outweigh the costs and difficulty involved with relocating; a move may be just the right thing to mitigate problems for your data center.
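
For that final weigh-the-costs step, here is a hedged back-of-the-envelope Python sketch that compares one-time relocation costs against projected annual savings to estimate a payback period. All figures are placeholders; a real decision would also factor in downtime risk, lease terms and staffing.

```python
# Back-of-the-envelope relocation payback estimate; all figures are illustrative.

def payback_years(one_time_move_cost: float, annual_savings: float) -> float | None:
    """Years for annual savings (energy, real estate, etc.) to cover the move cost."""
    if annual_savings <= 0:
        return None
    return one_time_move_cost / annual_savings

if __name__ == "__main__":
    move_cost = 750_000.0          # migration services, new fit-out, downtime buffer
    yearly_savings = 220_000.0     # lower power, cooling, and real estate costs
    years = payback_years(move_cost, yearly_savings)
    if years is None:
        print("Relocation does not pay for itself on these assumptions")
    else:
        print(f"Estimated payback period: {years:.1f} years")
```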
