Data center migration. Those three words could send any data center manager running for the hills. Alas, when a data center has outgrown its current facilities or must be moved for any other reason, migration is a necessity. A data center migration is a significant undertaking fraught with potential risks and hidden problems. So, how does anyone ever accomplish it? Through proper research, thorough planning that factors in contingencies, clear communication to ensure that everyone is on the same page, and a lot of patience, a successful data center migration can be achieved. There is no avoiding it: when anticipating a data center migration, you simply have to accept that there will be surprises along the way.
For any data center, the primary concern with a migration is downtime. Even seconds of downtime can be incredibly costly, so reducing or completely eliminating it is the name of the game. While certain things can be moved during off-peak hours, some work may still have to occur during peak business hours. For this reason it is very important to keep everyone well informed and on the same page at all times: end users, support teams and anyone else impacted by the migration should know the timetable, schedule and anticipated plans. Next, it is important to research what infrastructure exists, what legacy systems are in place, and what will be making the migration. Once you have a thorough inventory of what is moving, anticipate what additional infrastructure and equipment will be needed in the new location so that a layout can be planned. Plan a layout based on best practices in the new data center so that you can accommodate growth as needed. Scalability is important for the lifespan of any data center, so the move is an ideal time to ensure the new facility will scale; look beyond current needs and standards to future rack density expectations. With all of this in mind, also consider which option is best for efficient cooling. Cooling is often one of a data center's biggest expenses, so when determining layout, think about implementing hot aisles/cold aisles and whether another layout might serve your data center better. The final part of any data center migration is the migration itself.
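As a rough illustration of the inventory step, a short script can tally which assets are actually making the move and pad the totals for growth. The asset names, wattages, and 1.5x growth factor below are hypothetical, not figures from the article:

```python
import math
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    rack_units: int    # height in U
    power_watts: int   # nameplate power draw
    migrating: bool    # legacy gear may be retired instead of moved

def plan_capacity(assets, growth_factor=1.5):
    """Sum space and power for migrating assets, padded for future growth."""
    moving = [a for a in assets if a.migrating]
    units = sum(a.rack_units for a in moving)
    watts = sum(a.power_watts for a in moving)
    return {
        "rack_units_needed": math.ceil(units * growth_factor),
        "power_watts_needed": math.ceil(watts * growth_factor),
    }

inventory = [
    Asset("web-01", 1, 350, True),
    Asset("db-01", 2, 700, True),
    Asset("legacy-tape", 4, 500, False),  # retired, not migrated
]
print(plan_capacity(inventory))  # {'rack_units_needed': 5, 'power_watts_needed': 1575}
```

Even a spreadsheet-level audit like this makes it obvious when the planned layout leaves too little headroom.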
For most migrations it is a good idea to enlist an experienced project manager with knowledge of data center migrations to ensure everything goes smoothly and that, once the migration has occurred, everything runs as it should.
Designing and constructing a data center is no small undertaking. Heating and cooling, energy efficiency, infrastructure management and scalability must all be considered. A data center must be designed to meet current needs while being able to scale up or down in the future as needed. The tricky part is that if you build a data center far bigger than current needs, it may not be energy efficient. But if you build a data center that is just barely big enough, you may quickly outgrow your space and find yourself looking to move, which is costly and difficult. So, how does one build the elusive immortal data center? Do not fear, it can be achieved.
First things first: when designing an immortal data center, you have to determine its goals. One of the most important considerations is scalability; without it, most data centers will outgrow themselves with time. A data center should run dense server racks: do not run racks at only 50% capacity. Increase rack density to maximize space. But you cannot increase rack density without sufficient cooling. Some data centers employ hot aisles and cold aisles to heat and cool the facility more efficiently; creating hot and cold aisles, or specific zones for certain infrastructure, can drastically improve energy efficiency and reduce energy expenses. If hot and cold aisles are not ideal for your data center, consider adding a high-density room where cooling can be focused on the most demanding area while the rest of the data center is kept at a warmer temperature. This keeps costs down while still efficiently cooling high-density server racks. Additionally, when designing an immortal, future-proof data center, it is wise to look at green, more efficient methods of cooling; implementing them can save significantly over time while still effectively cooling the facility. The last consideration for all data centers is the cloud. The cloud is the way of the future, and many data centers are shifting more and more workloads to it to reduce physical infrastructure. This frees up additional room in a data center and reduces heating and cooling needs. The cloud is effectively infinitely scalable, so it offers immense potential for the future and is a significant tool for keeping any data center immortal.
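To size cooling for denser racks, a useful rule of thumb is that virtually all power drawn by IT equipment is released as heat, and 1 kW of load corresponds to roughly 3,412 BTU/hr of cooling. A minimal sketch (the 10 kW rack is a made-up example):

```python
def cooling_load_btu_per_hour(rack_kw: float) -> float:
    """Nearly all power drawn by IT gear ends up as heat; 1 kW is about 3,412 BTU/hr."""
    return rack_kw * 3412

# A hypothetical high-density rack drawing 10 kW:
print(cooling_load_btu_per_hour(10))  # 34120
```

Summing this across planned racks gives the minimum cooling capacity a new layout, or a dedicated high-density room, has to deliver.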
Data centers use a lot of power, have a lot of equipment and run around the clock. With so much going on, circuit breakers have a lot to manage and can easily become overloaded without proper attention, maintenance and regulation. When a circuit breaker gets overloaded, an arc flash can occur. If you are not familiar with what an arc flash is, you should be. Why? They are very dangerous and pose significant risk to data centers. The Workplace Safety Awareness Council explains what an arc flash is and why it should be avoided at all costs: “Simply put, an arc flash is a phenomenon where a flashover of electric current leaves its intended path and travels through the air from one conductor to another, or to ground. The results are often violent and when a human is in close proximity to the arc flash, serious injury and even death can occur.” This is an understandable concern for data center managers because an arc flash could endanger data center employees and possibly damage data center equipment and infrastructure.
To achieve circuit breaker protection and prevent arc flashes, you have to begin by protecting the circuit breaker from overload. Assess the load, then size the load to what each circuit breaker can manage. With monitoring and today's technology, the load a circuit breaker is experiencing can be assessed, monitored and tweaked as needed. While this is a start, it is not a complete solution because it does not protect against significant arc flashes. Next, protection must be implemented for branched circuit breakers to ensure that there are no upstream or downstream problems if a circuit breaker is tripped. Through the use of tools like ZSIs, or zone selective interlocking systems, circuit breakers can better communicate with each other: should a problem occur downstream, the downstream breaker can signal upstream breakers to wait until the problem has been cleared. When you better manage circuit breakers you protect your data center from downtime as well as other significant concerns such as injury to workers and damage to property; no price tag can be put on these things, they are truly invaluable. In addition to implementing effective systems to protect circuit breakers from becoming overloaded or experiencing arc flashes, it is important to properly train personnel so that they know how to assess and maintain circuit breakers and what to do should a problem occur.
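The zone selective interlocking behavior described above can be sketched in a few lines. This is a simplified model of the signaling idea, not a representation of any vendor's actual relay logic:

```python
class Breaker:
    """Toy model of a breaker participating in zone selective interlocking."""
    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream
        self.restrained = False  # set when a downstream zone claims the fault

    def sense_fault(self):
        """On a fault in this zone, restrain upstream and trip locally."""
        if self.upstream:
            self.upstream.restrained = True  # "wait: the fault is in my zone"
        return self.trip()

    def trip(self):
        if self.restrained:
            return f"{self.name}: holding (downstream breaker will clear the fault)"
        return f"{self.name}: tripped"

main = Breaker("main")
branch = Breaker("branch-A", upstream=main)

print(branch.sense_fault())  # branch-A: tripped
print(main.trip())           # main: holding (downstream breaker will clear the fault)
```

The payoff is selectivity: only the breaker nearest the fault opens, so the rest of the facility stays up.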
Corporate IT departments are complex and critical to the success of any business. In today’s world, technology is constantly evolving, more and more is being done in the online environment and customers are demanding constant uptime. A corporate IT department, run by a CIO, needs to stay on the cutting edge and meet demands to stay effective and successful. The successful CIO will undoubtedly be a trendsetter by nature, driven by technology and all of its constant changes. To be anything else will leave the CIO, and the entire IT department, reactive in its approach.
Because IT departments demand a lot of time and money, their expenses and budget are often under great scrutiny. Infrastructure changes often, which leads to constant expense and, in the corporate world, the expense can be significant, particularly for companies with hundreds or thousands of employees. One step toward a successful corporate IT department is to try to limit IT infrastructure expenses to around 50% of the budget, as opposed to the more common 70%. In today's workforce most companies are dealing with a wide range of technology knowledge amongst their employees, but the great majority are relatively tech-savvy. This means that when a corporate IT department does not perform as it should, employees are more than aware. That does not mean, however, that there are not individuals in the workforce who are significantly less tech-savvy. So, how does a corporate IT department manage such a wide employee knowledge and skill base? CIOs have the important task of choosing a product suite that suits the needs of their employee demographic while pushing technology and infrastructure forward to stay current. When choosing infrastructure, the budget must be an important consideration as well, so that infrastructure costs do not become astronomically out of proportion to the rest of the budget. As more IT departments and data centers move toward automation and cloud computing, infrastructure can be greatly reduced, which frees up more money in the IT department budget. This is something that not only the CIO but also the CFO and the rest of management will be happy about.
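The 50% versus 70% guideline is easy to check against real numbers. Here is the arithmetic with hypothetical figures:

```python
def infra_share(infra_spend: float, total_it_budget: float) -> float:
    """Fraction of the IT budget consumed by infrastructure."""
    return infra_spend / total_it_budget

budget = 2_000_000                       # hypothetical annual IT budget
share = infra_share(1_400_000, budget)   # hypothetical infrastructure spend
print(f"infrastructure share: {share:.0%}")  # 70%

overage = (share - 0.50) * budget
print(f"reduce infrastructure spend by ${overage:,.0f} to hit the 50% target")
```

Tracking this single ratio each budget cycle makes it clear whether automation and cloud moves are actually freeing up money.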
To truly have the most successful IT department in a corporate setting, the CIO must lead the ship and carefully direct the path of the department through careful consideration of all products, well-executed management and training of corporate personnel, an eye on the bottom line of the budget, and, ultimately, a technology-driven, cutting edge perspective that is focused on innovation.
Data centers are supposed to be technology driven and on the cutting edge of technological advances. But, when technology changes in the blink of an eye, data center infrastructure and programs can very quickly become outdated. A data center is the core of any IT department and when things are outdated they no longer run smoothly or independently and become quite problematic. As a way to mitigate these issues and hopefully avoid them altogether, many data centers are making the move toward automation to support cloud computing. When automation works as it should, processes run smoothly and, ideally, on their own, to perform desired tasks and keep a data center functioning as it should.
When it comes to automation, testing, more testing and, yes, even more testing is critical. Everything must be checked to ensure that it is running properly before full implementation. When automation is done correctly, it allows a data center to be more scalable, so that load can be actively balanced, and more efficient, which improves response time. Because efficiency is the name of the game, more and more data centers are finding ways to streamline infrastructure and ultimately reduce overhead and overall data center expenses. Cloud computing, coupled with automation, provides an opportunity to do so, lowering expenses and remaining competitive without sacrificing quality of service. By automating a data center, personnel gain control of all resources at both the software and hardware level. Data Center Knowledge provides a great description of the different layers of automation available and what that means for data centers:
The automation and orchestration layers
- Server layer. Server and hardware automation have come a long way. As mentioned earlier, there are systems now available which take almost all of the configuration pieces out of deploying a server. Administrators only need to deploy one server profile and allow new servers to pick up those settings. More data centers are trying to get into the cloud business. This means deploying high-density, fast-provisioned, servers and blades. With the on-demand nature of the cloud, being able to quickly deploy fully configured servers is a big plus for staying agile and very proactive.
- Software layer. Entire applications can be automated and provisioned based on usage and resource utilization. Using the latest load-balancing tools, administrators are able to set thresholds for key applications running within the environment. If a load-balancer, a NetScaler for example, sees that a certain type of application is receiving too many connections, it can set off a process that will allow the administrator to provision another instance of the application or a new server which will host the app.
- Virtual layer. The modern data center is now full of virtualization and virtual machines. In using solutions like Citrix’s Provisioning Server or Unidesk’s layering software technologies, administrators are able to take workload provisioning to a whole new level. Imagine being able to set a process that will kick-start the creation of a new virtual server when one starts to get over-utilized. Now, administrators can create truly automated virtual machine environments where each workload is monitored, managed and controlled.
- Cloud layer. This is a new and still emerging field. Still, some very large organizations are already deploying technologies like CloudStack, OpenStack, and even OpenNebula. Furthermore, they’re tying these platforms in with big data management solutions like MapR and Hadoop. What’s happening now is true cloud-layer automation. Organizations can deploy distributed data centers and have the entire cloud layer managed by a cloud-control software platform. Engineers are able to monitor workloads, how data is being distributed, and the health of the cloud infrastructure. The great part about these technologies is that organizations can deploy a true private cloud, with as much control and redundancy as a public cloud instance.
- Data center layer. Although entire data center automation technologies aren’t quite here yet, we are seeing more robotics appear within the data center environment. Robotic arms already control massive tape libraries for Google and robotics automation is a thoroughly discussed concept among other large data center providers. In a recent article, we discussed the concept of a “lights-out” data center in the future. Many experts agree that data center automation and robotics will eventually make their way into the data center of tomorrow. For now, automation at the physical data center layer is only a developing concept.
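The software-layer pattern quoted above, provisioning another instance when an application crosses a connection threshold, can be sketched as follows. The application names, threshold, and provisioning hook are illustrative, not an actual load-balancer API:

```python
def needs_scaling(app_connections, threshold=1000):
    """Return the apps whose connection count exceeds the threshold."""
    return [app for app, conns in app_connections.items() if conns > threshold]

# Hypothetical per-app connection counts sampled from a load balancer:
current_load = {"storefront": 1450, "reporting": 120, "api": 990}

for app in needs_scaling(current_load):
    print(f"provisioning a new instance of {app}")  # storefront only
```

A real deployment would drive this from load-balancer metrics and trigger the provisioning workflow instead of printing.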
Are you familiar with VMware? If not, you should be. VMware is a cloud computing and virtualization software vendor whose hypervisor runs on x86 hardware. By going virtual, IT-related expenses can be greatly reduced, and with VMware multiple virtual machines can be run on a single physical machine. VMware is a leader in the virtualization field, and with many IT departments choosing its technology, it is important to understand common VMware terms and acronyms so that everyone can be well informed and on the same page. Recently, TechRepublic shared a list of 50 VMware terms and acronyms with which everyone in IT should familiarize themselves.
- VM: Virtual Machine. Okay, that’s easy enough!
- ESXi: The vSphere Hypervisor from VMware. For extra trivia points, know that Elastic Sky was the original proposed name of the hypervisor and is now the name of a band made up of VMware employees.
- vmkernel: Officially the “operating system” that runs ESXi and delivers storage networking for VMs.
- VMFS: Virtual Machine File System for ESXi hosts, a clustered file system for running VMs.
- iSCSI: Ethernet-based shared storage protocol.
- SAS: Drive type for local disks (also SATA).
- FCoE: Fibre Channel over Ethernet, a networking and storage technology.
- HBA: Host Bus Adapter for Fibre Channel storage networks.
- IOPs: Input/Outputs per second, detailed measurement of a drive’s performance.
- VM Snapshot: A point-in-time representation of a VM.
- ALUA: Asymmetrical logical unit access, a storage array feature. Duncan Epping explains it well.
- NUMA: Non-uniform memory access, when multiple processors are involved their memory access is relative to their location.
- Virtual NUMA: Virtualizes NUMA with VMware hardware version 8 VMs.
- LUN: Logical unit number, identifies shared storage (Fibre Channel/iSCSI).
- pRDM: Physical mode raw device mapping, presents a LUN directly to a VM.
- vRDM: Virtual mode raw device mapping, encapsulates a path to a LUN specifically for one VM in a VMDK.
- SAN: Storage area network, a shared storage technique for block protocols (Fibre Channel/iSCSI).
- NAS: Network attached storage, a shared storage technique for file protocols (NFS).
- NFS: Network file system, a file-based storage protocol.
- DAS: Direct attached storage, disk devices attached directly to a host.
- VAAI: vStorage APIs for Array Integration, the ability to offload I/O commands to the disk array.
- SSD: Solid state disk, a non-rotational drive that is faster than rotating drives.
- VSAN: Virtual SAN, a new VMware announcement for making DAS deliver SAN features in a virtualized manner.
- vSwitch: A virtual switch, places VMs on a physical network.
- vDS: vNetwork Distributed Switch, an enhanced version of the virtual switch.
- ISO: Image file, taken from the ISO 9660 file system for optical drives.
- vSphere Client: Administrative interface of vCenter Server.
- vSphere Web Client: Web-based administrative interface of vCenter Server.
- Host Profiles: Feature to deploy a pre-determined configuration to an ESXi host.
- Auto Deploy: Technique to automatically install ESXi to a host.
- VUM: vSphere Update Manager, a way to update hosts and VMs with latest patches, VMware Tools and product updates.
- vCLI: vSphere Command Line Interface, allows tasks to be run against hosts and vCenter Server.
- vSphere HA: High Availability, will restart a VM on another host if it fails.
- vCenter Server Heartbeat: Will keep vCenter Server available in the event the host running vCenter fails.
- Virtual Appliance: A pre-packed VM with an application on it.
- vCenter Server: Server application that runs vSphere.
- vCSA: Virtual appliance edition of vCenter Server.
- vCloud Director: Application to pool vCenter environments and enable self-deployment of VMs.
- vCloud Automation Center: IT service delivery through policy and portals; get familiar with vCAC.
- VADP: vSphere APIs for Data Protection, a way to leverage the infrastructure for backups.
- MOB: Managed Object Reference, a technique vCenter uses to classify every item.
- DNS: Domain Name Service, a name resolution protocol. Not related to VMware, but it is imperative you set DNS up correctly to virtualize with vSphere.
- vSphere: Collection of VMs, ESXi hosts, and vCenter Server.
- SSH to ESXi host: The administrative interface you want to use for troubleshooting if you can’t use the vSphere Client or vSphere Web Client.
- vCenter Linked Mode: A way of pooling vCenter Servers, typically across geographies.
- vMotion: A VM migration technique.
- Storage vMotion: A VM storage migration technique from one datastore to another.
- vSphere DRS: Distributed Resource Scheduler, service that manages performance of VMs.
- vSphere SDRS: Storage DRS, manages free space and datastore latency for VMs in pools.
- Storage DRS Cluster: A collection of SDRS objects (volumes, VMs, configuration).
Data center capacity management is a hot topic among data center managers and staff. Effectively managing data center capacity is a challenging undertaking and one that is constantly evolving which keeps everyone on their toes. Without a capacity management plan that involves the entire data center team, as well as regular auditing of infrastructure, usage and capacity, a data center could quickly run into problems.
One of the best ways to keep data center capacity management under control is to be well informed. How does a data center manager and team go about doing this? Regular data center capacity audits. As soon as you begin audits you establish a baseline, and as you evaluate your audits each month you will begin to see trends, identify ways to improve, and be able to anticipate whether, and when, you may exceed capacity. Your baseline will include information about how your storage, server, and network infrastructure are used. With data center infrastructure management (DCIM) tools, everything is monitored and data is collected, allowing improved operational support and more efficient capacity planning. Capacity management is becoming more and more important as emphasis is put on lowering data center expenses and improving efficiency. The logical step for many data center managers is to add more to the existing load within a data center rather than expanding or changing locations. But the more you add, the more your infrastructure must be capable of handling, and handling efficiently. When you can assess trends and more accurately predict future capacity needs, budgeting becomes more accurate, which helps data center managers better manage expenses, investments, changes in infrastructure and more. This approach improves operational transparency and allows data center managers to anticipate future needs rather than react to them as they occur.
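As a hedged sketch of how monthly audit data turns into a forecast, a simple least-squares trend line can estimate when utilization crosses capacity. The six monthly utilization percentages are invented for illustration:

```python
def linear_trend(values):
    """Least-squares slope and intercept for evenly spaced samples."""
    n = len(values)
    x_mean = (n - 1) / 2
    y_mean = sum(values) / n
    denom = sum((x - x_mean) ** 2 for x in range(n))
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in zip(range(n), values)) / denom
    return slope, y_mean - slope * x_mean

def months_until_full(utilization_pct, capacity=100.0):
    """Months from the latest audit until the trend crosses capacity."""
    slope, intercept = linear_trend(utilization_pct)
    if slope <= 0:
        return None  # usage is flat or shrinking
    crossing = (capacity - intercept) / slope
    return crossing - (len(utilization_pct) - 1)

audits = [62, 64, 67, 69, 72, 74]  # hypothetical % of capacity used each month
print(f"~{months_until_full(audits):.0f} months of headroom left")
```

With real audit data, the same projection flags, months in advance, when it is time to budget for more capacity rather than react to an outage.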
Of course, another part of capacity management is to establish plans. It is great to use data collected from analyzing usage, manage capacity and plan for future capacity needs but, beyond that, data center managers must formulate various plans for different potential scenarios and inform all data center employees of their roles in said plans. You cannot plan for just one scenario because, unfortunately, in an ever-evolving world, scenarios can change very quickly. It is simply best practice to plan for multiple contingencies and involve the entire team so that when it is time to execute plans it can be done efficiently and effectively. This form of anticipation, planning and execution is exactly the kind of capacity management that prevents downtime from occurring and allows a data center to run as it should at all times.
A lot of data centers are doing it but is it right for you? Relocation – how do you really know when it is time? For data centers, there may be some tell-tale signs that it is time to relocate. And, if your business is being relocated a data center or computer room may simply have no choice but to relocate. Relocating a data center is no small task and requires a good deal of planning and preparation so that it can be executed smoothly, downtime can be avoided and important data protected.
A data center manager needs to be on the lookout for signs that it may be time to relocate so that potential problems can be avoided. One of the first signs is a wide array of power problems. Whether your data center is no longer energy efficient (especially in spite of energy-efficiency improvements) or your power source is unstable or unreliable, it is probably time to relocate. Energy usage, energy efficiency and the cost to power a data center are major topics of conversation, and the bottom line is that if you cannot achieve energy efficiency within the existing facility, it will cost more money than it should. A data center designed for energy efficiency will not only be more eco-friendly but will save money, which is important to any business. Additionally, an unreliable power supply is simply unacceptable in the data center business. If the power supply fails and your backup power fails as well, your data center could experience small amounts of downtime or, worst case, significant downtime. Even small amounts of downtime come at a significant expense, and prolonged downtime could mean lost revenue, frustrated customers and, ultimately, a business having to shut its doors. If your data center has any form of consistent power problems, it is time to relocate. The next sign is that there is no room to grow. As data centers grow in capacity and add more server racks and other equipment, spare room can very quickly run out. Once you have outgrown your space there is not much that can be done, which means it is probably time to relocate. When choosing a location, do not simply accommodate growth that has recently occurred; anticipate additional growth so that you will not have to move again right away.
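To put rough numbers on the downtime argument, a back-of-the-envelope calculation shows how quickly even short outages add up. The $5,000-per-minute figure is purely hypothetical; actual per-minute costs vary widely by business:

```python
def downtime_cost(minutes: float, cost_per_minute: float) -> float:
    """Direct revenue impact of an outage of the given length."""
    return minutes * cost_per_minute

COST_PER_MINUTE = 5000  # hypothetical loss rate in dollars
for minutes in (1, 15, 60):
    cost = downtime_cost(minutes, COST_PER_MINUTE)
    print(f"{minutes:>3} min outage: ${cost:,.0f}")
```

Running the numbers this way makes it easier to weigh relocation costs against the risk of staying on unreliable power.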
An additional reason to consider relocating, especially if you have any of the aforementioned concerns, is that your location is not right. A wrong location could mean a few different things: it may be inconvenient or too remote, it may be in an area where real estate is too pricey, or it may not be near the target market that you serve. If any of these are true they are probably frustrating and expensive issues, and the best way to solve a location problem is to relocate. Determine whether the benefits and potential savings outweigh the costs and difficulty of relocating a data center; a move may be just the right thing to mitigate problems for your data center.
When you think of a healthcare facility and the IT challenges it may face, you probably first think of large hospitals. And, while they certainly face a number of IT challenges, many other healthcare facilities, such as clinics, outpatient practices, dental offices and more, face IT challenges as well. Today, medical records, patient files, physician notes, prescription orders and more are all kept on secure servers and transmitted via secure electronic file-sharing methods. Paper files are quickly becoming a thing of the past, and this means that all healthcare facilities, large or small, have to find efficient and effective means of managing their IT. Many small healthcare facilities are short on space as it is, so a dedicated server room is likely not practical or even possible. Rather than maintaining a dedicated server room, many healthcare facilities could simply opt for a server-room-in-a-box instead.
A server-room-in-a-box is small but still effectively houses IT equipment in one location, ideally designed for security and energy efficiency. It can be housed in furniture that coordinates with the rest of the decor in a healthcare facility so it blends right in. The housing can be on caster wheels or completely locked down, depending on each healthcare facility's specific needs. When you hear about a server-room-in-a-box you may worry about overheating and energy efficiency, but a server-room-in-a-box is self-cooled with fan options and ventilation, so you do not need any special equipment to keep it running properly. Also, if you have been in a server room you know that it can be noisy at times. A server-room-in-a-box is actually quiet because it muffles noise from the equipment; ideally you should aim to muffle as much noise as possible (approximately 90%) so that you can create an ideal healthcare facility setting and work without interruption. The server-room-in-a-box you choose should have an integrated PDU to make power distribution easy. If you are wondering whether the capacity will be enough, they come in a variety of capacities ranging from 12U to 38U, and, if necessary, you could always add an additional unit should capacity needs increase. Healthcare facilities face different challenges than a typical office setting: it is not just employees visiting the facilities each day but patients and outside personnel as well. With that in mind, as well as reduced space needs, a server-room-in-a-box is an ideal approach to managing IT needs in a healthcare facility.
Today, many data centers are finding that, rather than trying to make an existing location work, it is better to consolidate data centers or move to a new location. Moving to a new location provides the opportunity to have more room, better energy efficiency and an environment optimized for the data center's needs. Any time a data center faces a big move, it will undoubtedly cause some stress and concern. After all, a move this large poses many potential risks when it comes to data loss and downtime. Whether a data center runs critical components of a retail location, retail websites, healthcare systems, warehouse systems or any important operations that would impact staff and personnel if down for very long, downtime must be minimized to protect a business from significant lost revenue, frustration and major problems.
It can be difficult for a data center manager to properly assess the heating and cooling needs as well as the power and capacity needs of an existing data center. Because of this, anticipating possible problems or needs during a move, so as to be properly prepared, can prove incredibly challenging. One problem many data centers encounter during a big move is miscommunication or a lack of communication altogether. Operational information silos are both unproductive and detrimental during a data center move. If communication breaks down, important messages may not be relayed, things may not be done as they should, and downtime while everything is sorted out is likely. To avoid this, have all the key people involved in the move present at every meeting; even if some do not seem necessary for a given meeting, their presence keeps everyone on the same page. The next problem many data centers experience during a move is a lack of proper planning. When planning a data center move it is important to anticipate all needs as well as possible unforeseen ones. Have contingency plans in place for the unexpected so that, should something unforeseen arise, you can quickly and efficiently mitigate the problem without loss of critical information or downtime. Next, it is wise to test critical data center equipment and infrastructure to ensure that everything is current and in proper working order. A breakdown in data migration due to faulty equipment or outdated products should never occur if proper preparation takes place. Before the big move, test everything, replace anything that is outdated or not working, and then test again to ensure that data can migrate properly. Lastly, when moving to a new data center, it is important to ensure that the new center can meet all heating and cooling needs and, most importantly, capacity needs with room to grow.
While energy efficiency and lower costs of operation are important, the last thing you want when moving to a new data center is to have to move again in a few years because the needs of a data center were not properly anticipated. Leave room to grow so that a data center move is not just a short term solution but a long term solution for your data center.