Hack Your Way to the Truth about Hacking


It doesn’t matter how big or small your business is, there’s always a possibility that your website could be hacked. Worse, weeks or months might go by before you even realize that something is amiss with your site. No matter what business you’re in or how tight your budget is, you want to make sure that you, your website, and your customers are properly protected against digital attacks. Besides firewalls and antivirus programs, knowledge is one of the most effective tools for protecting yourself against hackers.


The Cost of Hacking


One thing to always keep in mind about your business website is that it’s always open, so there’s always a chance it could be hacked. If you run a small- to medium-sized business, you can expect to pay more than $180,000 on average if your website is attacked.


Beyond the direct financial cost of hacking, future jobs can be lost as well. It’s estimated that at least 500,000 American jobs are lost every year to the costs of hacking and cyber espionage. So it’s not only businesses that pay for a hacker’s misdeeds; potential future employees do as well.


Beyond businesses and potential employees, the web designer may also pay the price of a hack, since they built the website that was compromised in the first place. Should clients spread the word, the designer could lose out on future business, just as the company they originally designed the website for does.


The Ripples Caused By Hacking


Should your site ever be compromised, you can immediately expect a loss in revenue, because you have to shut down while you get everything taken care of. Even if you open a temporary site while rebuilding, your customers may be uneasy about doing further business with you, especially if their credit or debit card information was stolen during the hack.


Just as you have to do everything you can to protect your business’s reputation after a cyber-attack, search engines have to protect their reputation and their users. That means your search engine rankings can take a serious hit after it’s discovered that you’ve been hacked. Should your site be blacklisted by search engines, you might rank significantly lower in search results than you did before the hack.


Businesses that have been hacked also have to consider the very real possibility that some customers will take legal action if their financial information was stolen. Now you have to take time away from rebuilding your business to go to court, and you may have to hire a lawyer and pay legal fees on top of it.


And to add insult to injury, if your business accepts credit cards, the card issuer might fine you hundreds of thousands of dollars for the security breach.


After the Dust Settles


You can still feel the effects of getting hacked months, or even years, after it happened. Should the media get hold of the story, they might mention the breach every time your name comes up in the future.


If sensitive employee information was also exposed, social security numbers, health care records, and even home addresses might have been compromised. Now your employees have to worry about identity theft at any point in the future, adding frustration on top of being associated with a business that’s been hacked.


How it Happens


There are several ways that your website can be hacked, including:


  • Content management systems (CMS). A CMS often needs broad permissions to operate, which can create back doors into your site. Old or un-updated CMS versions are a very common entrance for a hacker.
  • Plugins. Plugins are widely used in CMSs to make adding content, custom code, or images easier. Some plugins, however, can leave you vulnerable to a hack.
  • Insecure passwords. There are computer programs designed to churn through combinations of characters until they find the one that grants access to your site.
  • Old code. Outdated or poorly written code can also act as a gateway for hackers. Think about updating any old plug-ins or themes on your website.
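The “insecure passwords” point above can be made concrete with a little arithmetic. This is a rough, illustrative sketch, not real cracking tooling, and the guess rate is purely an assumption:

```python
# Illustrative only: how quickly an attacker guessing passwords at a fixed
# rate can exhaust a keyspace. The guess rate below is an assumption.

GUESSES_PER_SECOND = 1_000_000_000  # assumed rate for an offline attack

def worst_case_days(alphabet_size: int, length: int) -> float:
    """Days needed to try every possible password of the given length."""
    keyspace = alphabet_size ** length
    return keyspace / GUESSES_PER_SECOND / 86_400

# 8 lowercase letters vs. 12 mixed letters, digits, and symbols
print(f"8 lowercase letters: {worst_case_days(26, 8):.4f} days")
print(f"12 mixed characters: {worst_case_days(94, 12):,.0f} days")
```

The short, lowercase password falls in minutes; the longer, mixed one takes billions of days at the same rate, which is why length and variety matter more than cleverness.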


Some of the warning signs that your site might have been hacked are:


  • Sudden surges in traffic from odd locations
  • Massive uploads of spam
  • Malware warnings
  • A spike in 404 error messages
  • A sluggish site
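A couple of these warning signs can be checked automatically from your server’s access logs. The sketch below assumes a common Apache-style log format, and the thresholds are arbitrary illustrations, not a substitute for a real security scanner:

```python
# Scan access-log lines for two warning signs: a spike in 404 errors and a
# surge of requests from a single source. Log format and thresholds are
# assumptions for illustration.
from collections import Counter

def scan_log(lines, err_threshold=50, ip_threshold=500):
    errors_404 = 0
    hits_per_ip = Counter()
    for line in lines:
        parts = line.split()
        if len(parts) < 9:
            continue  # skip malformed lines
        ip, status = parts[0], parts[8]
        hits_per_ip[ip] += 1
        if status == "404":
            errors_404 += 1
    warnings = []
    if errors_404 > err_threshold:
        warnings.append(f"{errors_404} 404 errors")
    for ip, hits in hits_per_ip.items():
        if hits > ip_threshold:
            warnings.append(f"{hits} requests from {ip}")
    return warnings
```

Run against a day’s log file, anything this returns is worth a closer look.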


If you even suspect that your site might have been hacked, act as quickly as possible to mitigate any damage that might have been done. Get in touch with your hosting provider and inform them of your suspicions before you change all of your passwords. You can also hire someone to professionally “scrub” your website, which might cost upwards of $200.


To protect yourself from being hacked, it’s a good idea to change your passwords often, install security plug-ins and make sure that your website is always up to date. At the end of every month you’ll want to take a close look at your website and get rid of any themes or plug-ins that you aren’t using so that they don’t become a security liability in the future.


Think of business website protection like insurance—while you might not ever need it, it’s still a good thing to have and one of the best ways to save yourself time and money in the future. There’s no way to make your site completely hack-proof, but you can most definitely make it hard for hackers to make off with ill-gotten gains.



Posted in Technology Industry | Comments Off

Considering the Fire Concerns of a Data Center

Data centers have become the warehouses of the 21st century. Millions of bits of information flow through their servers daily, and corporations and organizations rely on them for optimal performance. A corporation’s data is among its most valuable capital and should be protected accordingly. When it comes to data protection, the old adage that “it’s better to be safe than sorry” certainly applies.

Any data loss or network interruption can be catastrophic to an organization. Yet many often think that once they have their data flowing through a server farm, concerns about its safety are unwarranted. That attitude can be dangerous, as there are still a number of ways outside of conventional methods through which data can be lost. Often a single layer of protection isn’t sufficient, and there are some things that even a firewall can’t stop. Thus, it’s encouraged to consider any and all aspects of your data’s safety, even when it’s under the care of a data center. This is certainly one area of business where overkill is underrated.

A Threat from the Center Itself?

When considering the security of their data, most focus on protecting it from intrusion, and with good reason; cyber crime and the theft of intellectual property can be disastrous. Yet many overlook where their data is being stored and the potential risk that the data center itself can pose.

Try leaving a refrigerator door open for more than five minutes and see what happens. Most are surprised to find that the room actually gets warmer. That’s due to the extra energy the refrigerator’s motor has to consume to keep its contents cool against the influx of outside air. That energy is given off by the unit as heat, and its effects are noticeable.

The same principle of thermodynamics applies to a data center. Servers in constant use expend a lot of energy, and that energy heats up the machine housings. Now imagine many of those servers stacked one atop the other from floor to ceiling, arranged in row after narrow row, filling the room. Couple that with the miles of wiring connecting the machines, comprised mostly of copper and other alloys that are terrific conductors of heat, and one can imagine the immense heat that builds up inside the rooms of a data center. The introduction of even a small amount of combustible material to such a hot environment could easily produce a literal fireball.

Most data centers have measures in place to combat these potential hazards. The rooms housing the servers must be well ventilated to help transfer some of the heat outside. Yet in the event of a fire, these ventilation systems can also delay the response: the rapid airflow can carry smoke away from the source before a smoke detector can trigger any sort of fire suppression system.

Aside from fire, water is the next most harmful substance to a server, yet most data centers employ some form of sprinkler system as part of their fire suppression. Water raining down on the servers during a fire can often cause more damage than the fire itself. Often, the suppression systems themselves are the real hazard. There have been accidents at data centers where alarms, triggered incorrectly or by an inconsequential amount of smoke, doused the servers with sprinklers and damaged or destroyed all of a center’s assets, causing data loss and crippling the operations of entire companies and organizations.

How to Mitigate These Risks

Corporations face a bit of a dilemma: finding data centers to house their information and run their servers that don’t in and of themselves pose a threat. The answer comes only from investing serious time into researching how a data center handles the risk of fire and what methods it employs to extinguish one should it occur.

Because the mission-critical nature of data centers requires that extended downtime be avoided at all costs, simply turning servers off to let them cool isn’t a practical way to mitigate the risk of fire. Given that a center can’t eliminate the risk entirely, the key to protecting a center and its servers from fire is to find a way to immediately identify a fire at its source and suppress it without damaging the surrounding servers.

Recent advances in fire suppression have turned such ideas into possibilities. To better identify the source of fires, photoelectric smoke detectors have been created that better detect combustion particles in the air. The detectors can be placed on ceilings or within air ducts. These new systems also help prevent false alarms from causing sprinkler damage, as they use microprocessors that determine in real time whether what the alarm is sensing is real.

New suppression solutions have also been created to replace conventional sprinkler systems. Rather than using water, these systems deploy gas-based, waterless agents that extinguish the fire in its earliest stage. These agents work either by absorbing the heat from the source of the fire or by depleting the oxygen in the area, essentially choking the fire out. Once settled, they leave no residue on the machines, and performance is unhindered.

Data centers have helped countless organizations increase their operational effectiveness and protect their data, but there are safety trade-offs to weigh when considering renting space within a center. A little research into a data center’s safety practices can help you identify the providers that offer the effectiveness you need without the safety risks.


Posted in Uncategorized | Comments Off

Looking Into The Future Automation Of Data Centers

Data storage demand is as high as ever, pushing server providers to seek innovative solutions that are still barely able to keep up with the rapid growth. Server providers have the delicate task of maintaining fast, safe, and reliable user access, not to mention basics such as preventing equipment from overheating. Automation appears to be the future of servers, as purely manual control of this incredible amount of data will likely no longer be reasonable. The solution is expected to be a mix of artificial intelligence running in software and equipment best described as robots. While current technology and engineering are still not quite up to the task, the vast resources being poured into research and development will soon bring this revolution to reality.

What is Fueling the Incredible Growth of Data Storage and Server Demands?

In 2012 the world’s stored digital data was estimated at just under three zettabytes, a number expected to soar past eight zettabytes by 2015. A zettabyte is a billion terabytes; many new computers ship with one or two terabytes of storage, an amount once limited to supercomputers. Data storage has grown at such a fast clip that the tech world has had to rush to coin names just to quantify it. As a frame of reference, a zettabyte can store roughly two billion years of music! So what is fueling this frantic data growth?

  • Growing Worldwide Access to the Internet. Roughly 40% of the global population now has access to the web, representing 250% growth since the mid-2000s. Continuing economic development in the third world will add more and more users over the next decade.
  • Vast Amounts of Video. By 2016 over half of web traffic will involve internet video, creating massive storage demand. YouTube users alone upload somewhere in the area of fifty hours of video a minute, 24 hours a day.
  • Increasing Mobile Device Use. Smartphones, tablets, and other mobile devices are quickly becoming internet users’ primary way of surfing the web, dramatically raising the average amount of time spent online.
  • Booming Online Commerce. Online retail purchases are set to exceed in-store purchases over the next ten years, further boosting data demands.

The above are just a few of the many contributors to increasing data demands. Government and business internet usage are also increasingly adding to total digital data, leading both to develop their own storage server facilities.
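The storage figures quoted above can be sanity-checked with quick arithmetic. The conversion uses decimal (SI) prefixes, and the music estimate assumes roughly 1 MB per minute of audio, an illustrative approximation:

```python
# Back-of-the-envelope check of the zettabyte scale described above.
TB = 10**12  # terabyte (decimal prefix)
ZB = 10**21  # zettabyte

print(f"{ZB // TB:,} terabytes per zettabyte")  # one billion

minutes = ZB / 10**6                # minutes of music at ~1 MB/min
years = minutes / (60 * 24 * 365)   # convert minutes to years
print(f"~{years / 10**9:.1f} billion years of music per zettabyte")
```

That lands at about 1.9 billion years, consistent with the “two billion years of music” figure.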

How Will Servers Keep Up With Storage Demands?

Google, Facebook, and other organizations that handle massive amounts of data continue to build bigger and more advanced data centers throughout the globe, and are still struggling to keep up. Building more and more data centers is not a feasible long-term solution to data demand; the development of new technology is a necessity. The following are some of the current and anticipated trends that will move data storage forward:

  • Automated Monitoring and Fixing of Processes. Identifying the root of server performance problems can be a time-intensive task, as is fixing them. Newer technologies that automatically find and fix faulty processes are growing in popularity. Developing programming and artificial intelligence technology will build on current systems to allow for more complex fixes and for performing daily tasks that currently have to be done manually.


  • Increased Energy Efficiency. Energy use has been a consistent thorn in the side of server operators, with some experiencing as much as a 90% waste rate. This problem is not only expensive but has drawn the ire of environmental advocates, who are also upset that most data centers rely on gas-powered generators in the event of a power outage. Google, Facebook, and other internet giants have turned to building data centers in areas like Sweden and Greenland to take advantage of natural cooling and nearby hydroelectric power.


  • All-in-One, Portable Data Centers. AOL and other online operations are experimenting with small, portable, unmanned data centers that can supplement high-demand areas or step in when a main data center is damaged. Their smaller size requires that they function on their own as much as possible. Advances in robotics and AI have the potential to fuel widespread use of small, self-reliant data centers.

The Dawn of the “Lights-Out” Data Center

AOL’s vision of a “lights-out” data center has attracted the attention of other major web players. A “lights-out” data center is one designed to be completely human-free in day-to-day operation, save for significant equipment breakdowns. Data center robotics currently focus on a rail system that lets robotic equipment move throughout a data center to relocate servers, perform minor repairs, clean, and use integrated software to handle processing problems. Current data centers require administration and engineering staff to be responsible for a high number of servers; robotics has the potential to cut operating costs and reduce employee workloads.

The development and use of robotics is currently too expensive for widespread use, as the upfront costs of building a lights-out center make it an uneasy investment. In the near future, however, those upfront costs will likely be more than made up for by higher efficiency, decreased labor costs, and the ability to control robotics remotely from anywhere in the world.

Given the staggering growth in data needs, server operators have no choice but to adapt in order to meet demands and provide the performance that users expect. Robotic technology, innovative software, and artificial intelligence will likely be the foundation of data center improvements, potentially revolutionizing one of the most important resources in the world.

Posted in Data Center Construction | Comments Off

The Damage Caused by Downtime

Data center reliability has helped increase the operational effectiveness of thousands of companies around the globe. Improvements in technology and the design of server farms, as well as in the facilities that house them and the training of the people who oversee them, have allowed colocation centers to reasonably offer the holy grail of network reliability: the seldom-achieved 99.999% uptime.

Yet it’s often that exceptional reliability that leads organizations into trouble. 99.999% still isn’t 100%, and too many companies take for granted how difficult dealing with that remaining 0.001% can be. This is only compounded when corporations let the security they feel in their data center’s reliability convince them to shift resources normally dedicated to network maintenance to other areas. While this line of thinking may produce the desired results in the short term, it takes only one instance of peak-hour downtime to reveal just how flawed the idea truly is.

Given the vast resources some organizations have in staff and production capability, it’s remarkable how easily their operations can grind to a halt during network downtime. This is most often due to poor training of employees on how to handle manual processes and what to do in the event of a server failure. While this typically isn’t an issue for organizations that use a third-party colocation provider whose sole role is server maintenance, it can spell disaster for those who operate an in-house data center, where limited resources often contribute to extended downtime.

The Difference 0.001% Can Make

This is an issue facing large and small businesses alike. To put a monetary value on what that cumulative 0.001% represents, downtime cost companies $26.5 billion in revenue in 2012, a 38% increase over 2010. Based on the companies that reported server outages, that breaks down to roughly $138,000 lost per hour of downtime. And since those numbers only cover outages that were reported, one has to wonder whether the actual loss isn’t much higher. Experts estimate that if every data center in the world went dark, $69 trillion would be lost every hour.
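Quick arithmetic puts those percentages in perspective, using the reported $138,000-per-hour loss figure and the standard availability “nines” levels:

```python
# Downtime per year at various availability levels, priced at the reported
# average per-hour loss. A pure arithmetic sketch, no real outage data.
HOURS_PER_YEAR = 24 * 365        # 8,760
COST_PER_HOUR = 138_000          # reported average loss per hour of downtime

for nines in ("99.9", "99.99", "99.999"):
    downtime_hours = HOURS_PER_YEAR * (1 - float(nines) / 100)
    print(f"{nines}% uptime -> {downtime_hours * 60:7.1f} min/year, "
          f"~${downtime_hours * COST_PER_HOUR:,.0f} lost")
```

Even five nines leaves a little over five minutes of downtime a year; at three nines, the same per-hour figure implies losses well over a million dollars annually.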

That last number points to an oddity in these figures. If data center reliability has actually increased, why the increase in lost revenue? One might expect the numbers to go the other way. Despite the potential for network outages, demand for the services data colocation centers provide is growing at an astronomical rate. Currently, data centers worldwide use as much energy annually as the entire country of Sweden. That usage will only increase with time, as IP traffic is believed set to hit 1.4 zettabytes by 2017.

Another contributing factor is the virtual devaluing of data storage. In 1998, the average cost of 1 GB of hard drive space was $228. Just nine years later, the same amount of space was valued at $0.88. As storage and transmission needs continue to grow exponentially, enormous data warehouses are becoming more common. Eventually, this leads to an increased need for data colocation, as more companies must dedicate increased attention to server maintenance.

The Myth of 100% Network Reliability

In anticipation of such demand for colocation services, many may think the next logical step in data center evolution is to reach the 100% reliability plateau. Unfortunately, that goal is beyond reach (even five nines is only rarely achieved). The reason is that there are almost as many causes of server failure as there are bits of data passing through, and chief among them is a problem for which there is no solution.

The most common cause of downtime is, by far, human error, accounting for almost 73% of all instances. Whether from lack of operational oversight, poor server maintenance technique, or simply poor training, people can’t seem to get out of their own way when it comes to guaranteeing the performance of their servers. Companies can and should invest resources in mitigating this, but eliminating it altogether remains an illusion.

Aside from human error, downtime can be caused by literally anything. Anyone who doubts this need only do a quick internet search for “squirrels” and “downtime” to see how often these furry little rodents bring parts of the business world to a standstill by chewing through cables and limiting a data center’s operational capacity.

Focus on Preparation, not Prevention

Given that downtime is an inevitability, an organization is much better served by turning its efforts toward dealing with downtime rather than trying to eliminate it. Most companies aren’t doing all they can: 87% admitted that a data center crash resulting in extended downtime or data loss could be catastrophic to their businesses, yet more than half of American companies admit they don’t have sufficient measures in place to deal with such an outage.

Given this information, organizations looking either to establish an on-site data center or to rent space from a third-party colocation company should consider these two questions:

  • What degree of network reliability can the data center deliver? Taking into account what’s actually feasible, can it guarantee operational effectiveness to the highest of current standards?
  • What needs to be done to improve in-house performance in the inevitable event of downtime? Are sufficient efforts being placed on maintaining comparable operational capacity as well as on running data recovery once the network is back up?

As corporations and organizations continue to rely more and more on their data colocation services for optimal performance, understanding the limits of data center reliability and being prepared for network outages is the key to avoiding huge revenue losses from downtime.

Posted in Uninterruptible Power Supply | Comments Off

How to Select the Hard Drive That is Right for You

Selecting and purchasing a hard drive used to be a simple task. People would generally find a few of the highest-capacity drives they could afford, pick the fastest of the group, and be done. Today, however, the market is filled with many different types of storage devices built on different and more complex technology. The sheer variety can make it difficult to select the device that will work best for your needs. The choice between a solid state drive (SSD) and a conventional hard disk drive (HDD), typically connected over an interface such as Serial ATA (SATA), only adds to the problem, as do other factors such as random access performance, cost, sequential performance, reliability, and density.

Many factors make selecting the right drive a challenge. Understanding the intricacies of each storage feature will help tease out which device better suits your needs. The fundamental question is which is better: the conventional hard disk drive (HDD) or the solid state drive (SSD)?

Technology: HDD

A standard HDD contains multiple disks, known as platters, which are covered in a magnetic coating and rotated at high speed. Drive heads read and write data by sensing and changing the magnetization of the material beneath them.

Reading and writing data this way requires a lot of mechanical work. Even though the ideas behind the HDD are simple, building high-capacity drives at reasonable prices poses several problems for manufacturers.

During normal operation, the platters must spin and the heads must seek to an exact point before the drive can actually do anything. This takes time and is one of the main reasons for performance bottlenecks in PCs. It’s also a problem in netbooks and laptops, because moving all of those components creates a constant power drain.

Drive heads are positioned very close to the platter; only the thickness of a human hair separates them. A shock or electrical bump at the wrong time can make them collide, damaging the drive and losing data. Although manufacturers do a lot to prevent these collisions, there is always some risk.

None of these HDD issues are present in SSDs, although the price difference between the two is significant.

Technology: SSD

Solid state drives are technologically very different from HDDs. They forgo the platters and heads completely, replacing them with a simple, non-moving memory chip. The chips vary from drive to drive, but the majority use flash memory, the same technology found in MP3 players, cameras, and memory cards. The advantage of flash memory is its ability to store data without any power.

The more efficient technology in SSDs is much more expensive than that in HDDs. SSDs on the market typically pair low capacities with a high price tag.

SSDs make up for the high price with excellent performance. A hard drive takes seconds for its platters to reach full speed, whereas an SSD is ready to go immediately. An SSD also doesn’t have to move heads around or wait for a platter to rotate to exactly the right point before it can reach data. An SSD can be up to fifty times faster than a regular HDD, and equipping a PC with one can more than halve the boot time, which is very beneficial for some users.


When it comes to the underlying technology, SSDs are clearly superior to HDDs, but that superiority comes with a trade-off: higher cost and lower capacities.

An SSD can read large chunks of data at over 500 MB per second, write at over 300 MB per second, and access random data in 0.1 milliseconds. That’s a dramatic difference from the fastest HDDs, which manage only about 150 MB/s for sequential reads and writes, with random data access at around 17 milliseconds. SSDs let Windows boot and run faster, load programs and games in seconds, consume less power, and put out less heat, noise, and vibration. Used in the right context, the SSD is superior.
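Plugging the figures above into a quick calculation shows where each drive type wins. This is an idealized sketch: the random-read case counts only access latency and ignores transfer time:

```python
# Idealized read-time comparison using the throughput and latency figures
# cited above (500 vs. 150 MB/s sequential; 0.1 vs. 17 ms random access).
GB = 10**9

def seq_read_seconds(size_bytes, mb_per_s):
    """Time to read a contiguous file at a given sequential throughput."""
    return size_bytes / (mb_per_s * 10**6)

def random_read_seconds(n_reads, latency_ms):
    """Time for many scattered small reads, latency-dominated."""
    return n_reads * latency_ms / 1000

print(f"Sequential 10 GB -> SSD {seq_read_seconds(10 * GB, 500):.0f} s, "
      f"HDD {seq_read_seconds(10 * GB, 150):.1f} s")
print(f"100k random reads -> SSD {random_read_seconds(100_000, 0.1):.0f} s, "
      f"HDD {random_read_seconds(100_000, 17):.0f} s")
```

Sequential reads are only about three times faster on the SSD, but the random-access workload, the kind an operating system generates at boot, is faster by more than two orders of magnitude.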

Choosing between equally priced drives, an SSD will provide only about 3% of the capacity of an HDD. Higher-capacity SSDs exist, but they are prohibitively expensive. Unless money is no object, the first desktop drive should be a standard HDD: the performance won’t match an SSD, but it’s adequate for most tasks, and the money saved can buy a more significant speed boost elsewhere in the computer. If you’re optimizing an existing PC, an SSD is the way to go. Even a small-capacity SSD will help Windows load more quickly, since data and other applications stay installed on the regular hard drive.

SSDs are great for laptops, depending on their intended use. If a laptop is a desktop replacement running lots of applications, an SSD won’t really have the capacity to help. If the laptop’s use is basic (email, word processing, browsing), even a low-capacity SSD is worth it: it improves overall system performance, reduces weight and noise, and increases battery life.

There are many things to consider when selecting the right drive for your PC environment. Don’t get sucked in by new fads or hypes. Know your personal requirements and PC environment and price out drives from there.



Posted in data center equipment | Comments Off

The Future of Data Center Design and the Open Compute Project

Recently, Intel and Facebook banded together to create the next generation of data center rack technology. The prototype features an innovative architecture called the photonic rack. The new design improves every aspect of the data center rack, including cost and reliability, by implementing a disaggregated rack environment, an Intel switch, and silicon photonics technology.


What Are Silicon Photonics?


Silicon photonics uses photons of light to move gigantic amounts of data at extremely high speeds over a thin optical fiber. Traditionally, data has been moved as electrical signals sent over copper cable. Intel’s prototype can move data at up to 100 gigabits per second, fast enough that components can work together even when they are not in close proximity to one another.
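For context, 100 gigabits per second converts to bytes as follows. This assumes an ideal line rate with no protocol overhead, using decimal prefixes:

```python
# Convert the 100 Gb/s link rate to bytes per second and put it in context
# with a 1 TB transfer. Ideal line rate, no protocol overhead assumed.
link_gbps = 100
bytes_per_second = link_gbps * 10**9 / 8     # bits -> bytes: 12.5 GB/s
terabyte = 10**12

print(f"{bytes_per_second / 10**9:.1f} GB/s")
print(f"1 TB in {terabyte / bytes_per_second:.0f} seconds")
```

At that rate a full terabyte crosses the link in 80 seconds, which is why physical placement of components within the rack stops being a constraint.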


New Options in Design


Intel has created a rack that separates components onto their own server trays: one tray for Atom CPUs, one for Xeon CPUs, and another for storage. The benefit is that when a new generation of CPUs hits the market, the user can swap out the CPU tray instead of waiting for an entirely new server and motherboard design.


This design approach enables independent upgrading of the compute, network, and storage subsystems, an ability that will define data center designs for the next ten years. Intel’s photonic rack allows for fewer cables with greater bandwidth, better power efficiency, and longer reach than today’s copper-based connections. These new technologies make hardware much more flexible and, when coupled with silicon photonics, enable interconnection without much concern over physical placement.


Rack Disaggregation

The term ‘rack disaggregation’ refers to separating the resources that exist in a rack (storage, networking, compute, and power distribution) into discrete modules. In a traditional data center rack, each server has its own group of resources. When a rack is disaggregated, resources are grouped by type and can be upgraded at their own pace without affecting the others.


Disaggregation not only increases the lifespan of each resource, it enables IT managers to replace individual resources rather than the entire system. This modularity makes data centers much more flexible and serviceable, which in turn lowers the total cost of an infrastructure overhaul. The arrangement also improves thermal efficiency because it allows for more optimal placement of components within a rack.
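The idea of independently upgradable resource trays can be sketched in a few lines of code. The class and field names here are invented for illustration and don’t correspond to any Intel or OCP interface:

```python
# Sketch of rack disaggregation: model a rack as independent resource
# trays, each of which can be upgraded without touching the others.
# Names are hypothetical, purely to illustrate the modularity idea.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Tray:
    kind: str        # "compute", "storage", or "network"
    generation: int  # hardware generation currently installed

@dataclass
class Rack:
    trays: Dict[str, Tray] = field(default_factory=dict)

    def upgrade(self, kind: str, new_generation: int) -> None:
        """Swap one tray for newer hardware; the other trays are untouched."""
        self.trays[kind] = Tray(kind, new_generation)

rack = Rack({k: Tray(k, 1) for k in ("compute", "storage", "network")})
rack.upgrade("compute", 2)  # a new CPU generation arrives
print(rack.trays["compute"].generation, rack.trays["storage"].generation)  # 2 1
```

Contrast this with a monolithic server, where the equivalent of `upgrade` would mean replacing the whole `Rack` object at once.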


Connector Design


Today’s optical interconnects generally use a connector called MTP. The MTP connector was never optimized for data communication applications; it was designed in the 1980s for telecommunications. State of the art when it was created, MTP has not kept up to date: many of its parts are individually expensive and easily contaminated by dust.

New Connector Design


In the last 25 years there have been significant advances in materials and manufacturing technology. Drawing on these advances, Intel, with the help of optical fiber and cable specialists, designed a brand new type of connector. It includes a telescoping lens to help prevent dust contamination, uses fewer parts, fits up to 64 fibers in a smaller form factor, and costs less than MTP.


New Innovations


The new Intel prototype uses silicon photonics technology as well as distributed input/output based on Intel’s Ethernet switch silicon. The prototype supports Xeon processors as well as next-generation system-on-chip (SoC) Atom processors. These innovations dovetail nicely with other ongoing Open Compute projects. The SoC/memory module was created alongside the CPU/memory ‘group hug’ module specification proposed by Facebook. The existing OCP Windmill board specification, which supports 2S Xeon processors, is planned to be modified so that signal and power delivery interface with the OCP Open Rack v1.0 specification for power delivery through 12V bus bars, and, for networking, with a tray-level mid-plane board that contains the switch mezzanine module.


Intel is also planning to contribute a design for a photonic receptacle to the Open Compute Project, and plans to work with Corning and Facebook to standardize the design.


Other New Features


Intel has been highly involved in the Open Compute Project and has contributed several innovations, including new storage technologies, racks, and systems. Specifically, Intel has been working to finalize the Decathlete board specification for a general-purpose, dual-CPU motherboard with a large memory footprint, aimed at enterprise adoption.



What is the Open Compute Project?

The Open Compute Project is an initiative announced in 2011 by Facebook to openly share data center product designs. The initiative, led by Frank Frankovsky, began after Facebook redesigned its data center in Prineville, Oregon. The full design is still a long way from production use, but some published aspects have already been used successfully in the Prineville center to increase energy efficiency.

The future of data center design is being created right before our eyes. If successful, the Open Compute Project will enable rapid technological progress that makes data safer and more accessible than ever before. With the new rack, connector, and photonics, data will be much easier to store, use, and share. Collaboration is the key, which is why the Open Compute Project has been so widely embraced by companies across the globe. The future is coming; are you ready?

Posted in Data Center Design | Comments Off

Data Protection Solutions and Virtualization

In today’s modern world, protecting data from hackers and viruses has never been more important. There are many different types of data protection available on today’s market, each with its own features, which can make it difficult to select the one that is right for you.

User Friendly?

When it comes to data protection products, ease of use is the bottom line. This matters because the person responsible for data protection may not be very knowledgeable, or may not be available to manage the product every day.

When selecting a data protection product, look for a system that can be managed from a single user interface. The interface should be simple enough that daily tasks take only a couple of minutes. Another must: the product should send alerts to your mobile devices and inbox with a simple, straightforward report on recent activities and their status.

The ease-of-use requirement coincides well with historical reporting. If the system user understands what the solution is doing daily, then he or she will be able to understand what the system has been doing over the previous days, weeks, and months. This allows for future planning with very little hands-on work. For example, given a report of growth over the last four months, the user can predict when the system will run out of space and know when to purchase more tapes. Although simple, this example shows that data protection products should help plan for future usage and future budgets, in addition to operating well in the present.
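The growth-report example above amounts to a simple linear projection. Here is a minimal sketch; the monthly figures and pool size are made up for illustration:

```python
# Capacity-planning sketch: given monthly used-capacity readings,
# derive a linear growth rate and estimate when the pool fills up.
# All numbers below are invented for illustration.

def months_until_full(used_tb, total_tb):
    """Project months until capacity is exhausted, assuming linear growth.

    used_tb: list of monthly used-capacity samples (TB), oldest first.
    total_tb: total pool capacity (TB).
    Returns None if usage is flat or shrinking.
    """
    growth_per_month = (used_tb[-1] - used_tb[0]) / (len(used_tb) - 1)
    if growth_per_month <= 0:
        return None
    return (total_tb - used_tb[-1]) / growth_per_month

# Four months of reports: 40, 44, 48, 52 TB used out of a 100 TB pool.
print(months_until_full([40, 44, 48, 52], 100))  # 12.0 -> plan a purchase within a year
```

Real products do this projection for you, but the arithmetic behind the dashboard is usually no more complicated than this.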

Another feature that can dictate whether the solution will be easier or harder to manage is a single-server footprint versus a master/media footprint. The same goes for automatic client software updates, because manually updating systems across an entire infrastructure takes time away from IT administrators.

A good data protection product should have administrative time-savings built in to help reduce the cost of operations.

The Life-Cycle of Entering Data

All data is not created equal, and it shouldn’t all be stored the same way. IT organizations sometimes drive up the cost of storage because they treat all data the same and store it on the same media. Techniques like hierarchical storage management (HSM) and long-term archive management allow for flexible data storage across different tiers, governed by specific policies. These systems let administrators migrate and store data on the tier most appropriate for it: older, less frequently accessed data can move to a slower, much less expensive platform such as tape, while newer, frequently accessed data stays on faster, more expensive storage. Automated data archiving also helps organizations comply with data retention policies while reducing the cost of that compliance.

Look for data storage systems that reduce overall cost through automated, policy-based data life-cycle management. Moving data to the most appropriate tier helps an organization stay cost effective while still meeting service-level requirements.

Hierarchical Storage Management (HSM)

Hierarchical storage management (HSM) is a technique that stores data by automatically moving it between high-cost and low-cost storage media based on how the data is used. High-speed storage devices (hard disk drive arrays) are much more expensive than slower devices (optical discs, magnetic tape drives). Ideally, all data would be available on high-speed devices all of the time, but in the real world this is extremely expensive. HSM offers a solution: store the bulk of the organization’s data on slower devices and copy it to faster drives when needed. HSM monitors how, and how frequently, data is used and ‘decides’ which data should move to slower drives and which should stay on faster ones. Frequently used files live on fast drives but are eventually migrated to a slower medium (like tape) if they go unused for a certain period. The biggest advantage of HSM is that the total amount of data stored can be much larger than the capacity of the high-speed disk storage available.
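The core of that ‘decision’ is just a policy applied to access times. Here is a minimal sketch; the tier names and the 90-day threshold are illustrative, not any particular product’s policy:

```python
# Minimal HSM-style tiering decision: files untouched for longer than
# a threshold become candidates for migration to slow storage (tape);
# everything else stays on fast disk. Threshold and tier names are
# invented for illustration.

import time
from typing import Optional

FAST_TIER = "disk"
SLOW_TIER = "tape"
DEMOTE_AFTER_DAYS = 90
SECONDS_PER_DAY = 86400

def choose_tier(last_access_epoch: float, now: Optional[float] = None) -> str:
    """Return the tier a file should live on, based on its last access time."""
    now = time.time() if now is None else now
    idle_days = (now - last_access_epoch) / SECONDS_PER_DAY
    return SLOW_TIER if idle_days > DEMOTE_AFTER_DAYS else FAST_TIER

now = 1_000_000_000.0
print(choose_tier(now - 10 * SECONDS_PER_DAY, now))   # disk  (10 days idle)
print(choose_tier(now - 120 * SECONDS_PER_DAY, now))  # tape  (120 days idle)
```

Production HSM systems layer recall-on-access, quotas, and per-dataset policies on top, but the demotion decision at the center is this simple comparison.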


Virtualization

The technology surrounding virtualization has drastically helped IT organizations of every size reduce their costs by shortening application provisioning times and improving server utilization. These cost reductions, however, can disappear quickly in the face of virtual machine sprawl. The connection between physical and logical devices also becomes very hard to track and map, creating virtual environments more complex than anything seen before. In these complex environments, backing up and restoring data can become very difficult. For example, backing up or restoring data for a group of virtual machines residing on one physical server can bring every other operation on that server to a halt, including data protection services.

Data Reduction

Data reduction technology is the first line of defense against rapidly expanding data volumes and costs. Solutions include progressive-incremental backup, data compression, and data deduplication, which together can help organizations reduce their backup storage capacity requirements by as much as 95 percent. Efficient tape management and utilization can also help reduce capacity requirements.
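Deduplication is the easiest of these to demonstrate. The toy sketch below uses fixed-size chunks and a hash set; real products typically use variable-size chunking and more elaborate indexes, so treat this purely as an illustration of the idea:

```python
# Toy block-level deduplication: split a byte stream into fixed-size
# chunks, keep one copy of each unique chunk, and report the reduction.
# Real deduplication engines use variable-size chunking; this is only
# a sketch of the principle.

import hashlib

def dedupe_stats(data: bytes, chunk_size: int = 4096):
    """Return (total_chunks, unique_chunks) for fixed-size chunking."""
    seen = set()
    total = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        seen.add(hashlib.sha256(chunk).hexdigest())
        total += 1
    return total, len(seen)

# 100 identical 4 KB blocks dedupe down to a single stored chunk:
data = b"A" * 4096 * 100
total, unique = dedupe_stats(data)
print(total, unique)  # 100 1
```

Backup streams are full of this kind of repetition (the same OS files across many machines, the same files across many nightly backups), which is where the large reduction figures come from.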


Selecting a data protection service can be arduous, but if you understand the services offered and which are most useful to you, the selection will be much easier.

Posted in data center equipment | Comments Off

Ways to Ensure Security Within a Data Center

If you are designing a data center, one of the most critical issues to focus on is security. There is an abundance of threats that the data center will face, some of which can be prevented with proper planning. Hackers, disgruntled or careless workers, and even weather-related disasters can all wreak havoc on a data center. Not only can this lead to data loss, but also to theft of sensitive information and many other types of damage. In order to prevent these intrusions, it is a good idea to act before something occurs. When the data center is prepared for an emergency scenario, the damage will likely have less of an impact. Although it might cost more to do everything you can to increase security, the investment could pay off very well in the long run by addressing (or even preventing) complications.


The risks that data centers face include everything from seasoned criminals breaking in to employees that don’t pay attention on the job. By taking into account the multitude of problems that could threaten the data center, you will be less likely to experience a devastating intrusion or accident. When workers are aware of the various problems, accidental damage will be less likely and they will be more vigilant. Also, when you do everything you can to keep criminals out, they could decide not to target the facility, and if they do, you will be ready.

Landscaping and Windows

If you want the data center to have a quieter presence, you can employ some landscaping tactics to blend in better. By strategically planting trees or placing boulders, you can make the building less noticeable. Not only will this deter unwanted attention, but it will complement the facility as well. After all, who doesn’t appreciate nice landscaping?

Although windows might seem like a good idea, they should actually be avoided in a data center. The center shouldn’t be designed with typical rooms, where windows are often important. If you absolutely have to include windows, make sure to use laminated, blast-resistant glass. Weak windows are a huge vulnerability that any potential intruder is likely to notice.

Entry Points

Every point of entry and exit should be watched at all times. Make sure that you know exactly who is going through entry points and record their movements. Not only could this come in handy in the event of a fire, but if a security issue ever arises, you will know which people were in different parts of the building at a certain time. Furthermore, you should reduce the number of entry points. Not only will this be less expensive, but much easier to control as well. For example, it could be a good idea to only have a front and back entrance.


Make sure you have top-notch authentication. For example, with biometric authentication, such as scanning a fingerprint, the building will be much more secure and the likelihood of unauthorized access will be significantly reduced. In parts of the building that are not as sensitive, simpler means of authentication (such as a card) could work. The level of authentication should depend on how sensitive a particular part of the building is. If a room houses highly important information and equipment, it is wise to employ strict authentication.

Video Cameras

Fortunately, many people understand the benefits of using video cameras to improve security. Not only are they worth the cost, but they work well. If something happens to the data center, it is extremely helpful to have the ability to see what happened. In order to make security cameras more effective, strategically place them throughout the premises. Plus, you can consider hidden and motion-sensing cameras to further enhance security. It is smart to keep track of all the footage at another location.

Keep it Clean

Security and cleanliness don’t always appear to go together, but people have a tendency to spill food and drinks. When food and drinks are spilled on computer equipment, they can damage it, so make sure everyone understands the importance of eating in a designated area. There are already too many things that can go wrong in a data center; spilled food shouldn’t be one of them.

Doors and Walls

Another way to reduce the likelihood of an intrusion is to use doors that only have handles on the inside. Exits are essential, in case there is ever a fire, but keep the handles on one side. Also, set up an alarm system so that if one of these doors is opened, security will be aware.

Although some people might not devote too much attention to the walls, it is crucial to ensure that nothing will remain hidden on them (and the ceilings). Make sure that there are no points of access that are not visible and that walls go from beneath the floor to slab ceilings.

Get Started

If you want to ensure that everything is done to protect your data center, you should always focus on the numerous ways to improve security. If you haven’t thought about security as much as you should have, start researching and preparing as much as possible. Even after the data center has been operating for years, there are still ways to make the facility safer and more secure. Because there are countless threats that will never go away, security is a constant concern. By remaining vigilant, taking advantage of new technology and tactics, and doing everything you can to prevent and deal with threats, your data center is less likely to suffer from a future attack.

Posted in data center maintenance, Facility Maintenance | Tagged , , , | Comments Off

Should You Build Your Own Data Center or Partner With Someone Who Already Has One?

There’s nothing quite like the satisfaction that comes with doing things on your own, but you might be better off working with someone else for certain tasks, such as data storage. Even if you have a small business, there are circumstances in which you’ll want to either pay for data center services or partner with someone who already has a well-established data center. So how do you decide between maintaining your own data center and partnering up with someone who already has one?
How Much Data Do You Have?
If you have a massive amount of data to keep up with, then you might be better off partnering up with someone who has a data center and an equally massive amount of data that needs to be kept safe. By pairing up with someone who already has the same needs as you, you won’t have to worry about doing a lot of trial and error as you’re trying to figure out all of the requirements necessary for keeping such a large amount of data safe, and that’s especially true if you have sensitive information from your customers and clients.
If you don’t have very much data that you need to store in a data center, then you’ll probably be fine with building your own data center. Just make sure that that data center has all of the security features necessary to truly keep your information safe and that it can grow as your business grows.
How Sensitive is the Information You’re Storing?
Sensitive information calls for an equally sensitive data center. If you have credit card numbers, bank accounts, social security numbers or any other kind of sensitive financial or personal information from your customers and clients, then you might want to strongly consider partnering with someone who has an extremely well-protected data center. Not only can cyber criminals do a lot of damage to your customers and clients if they manage to get ahold of their information, they can do a lot of damage to your reputation as well. How many people are going to want to do business with you if they can’t trust you to keep their information safe?
Even if cyber criminals don’t steal financial and personal information, they might infect your data center with a virus that could wipe everything out or corrupt your data, which also won’t do any favors for your reputation. Identity theft and viruses can cost you a lot of money: you have to inform all of your customers and clients, and then pay professionals to clean up the damage done by the criminals. Eliminate that risk by partnering with someone who runs a secure data center.
What is the Upkeep and Run-Time of Keeping Your Data Safe and Secure?
If your data requires an abundance of upkeep and run-time, you have to ask yourself whether you have the time and resources necessary to take care of all of that. Individuals who aren’t very technologically savvy or who have a limited amount of time on their hands might want to think about letting someone else handle the run-time and upkeep of the data center. One mistake in either could cost you time and money. Something else to think about is that as time passes and your needs grow, you might need more upkeep and run-time in order to keep your data center properly up and running. If your business is growing along with your run-time and upkeep, you might not be able to keep up with everything on your plate. Don’t just consider your present needs when thinking about data center requirements; think about the future and your individual goals as well when deciding whether you’d prefer to have your own data center or partner with someone who already has one.
Do You Plan on Building a Larger Database?
If your current data center is more of a starter data center, you should look into co-location. One of the many joys of co-location is that you can find a facility that is built specifically to your individual needs and requirements. Something else to think about with co-location is that data centers have their own air conditioning, security systems, generators and constant professional monitoring, all of which can cost millions of dollars if you were to buy them on your own.
Another good thing about co-location is that it’s a great way to manage your risk if your main data is kept at your main office. If anything unfortunate should ever happen to your office, all of your data will still be safe and sound at the co-location facility. One good approach is to use your main office for backups and recovery while the co-location center serves as the main location where your data is kept.
With co-location, you can rent out property for a long period of time while being able to upgrade to building a bigger and better business. If you choose to build your own data center, the needs of your business might quickly outpace your data center capabilities, which can potentially cost you several lucrative business opportunities.
There are more resources than ever that allow you to take care of your data storage needs yourself, but unless you have the education and experience necessary to handle each and every one of your data needs, you’ll be better off partnering with someone who already has a data center and the experience and resources necessary to take good care of your data and your customers.

Posted in Data Center Design, Partnerships | Tagged , , | Comments Off

Google’s Going Green

Non-renewable Resources

Non-renewable resources such as petroleum, uranium, coal, and natural gas provide us with the energy used to power our cars and houses, offices, and factories.  Without energy we wouldn’t be able to enjoy the luxuries that we are used to living with in our everyday lives.  Our factories would shut down, our cars wouldn’t work, and our houses would remain unlit.  The problem with using non-renewable resources is that they can run out, and if they do we will have to find other ways to get power.

As of 2010 the world’s coal reserves were estimated to last until May 19, 2140, and petroleum reserves were estimated to hold out until October 22, 2047.  While these dates may seem far off, they are coming fast upon us.  If the petroleum reserves run out, as predicted, in 34 years, many of us will feel the effects.  Not only will we not have gas for cars, trucks, and planes, but commodities such as heating oil, paint, photography film, certain medicines, laundry detergent, and even plastic will cease to exist.

While there has been a push to preserve petroleum by converting cars to run on natural gas, this isn’t a perfect solution, as the natural gas reserve is only expected to last until September 12, 2068.  Although coal energy will last us a little bit longer, in less than 150 years it is predicted that this resource will run out as well, at which point we will be forced to make a change in our energy use, unless we make that change now.

Green Power

Many big businesses are making the shift to renewable resources such as water, solar, and wind energy in an attempt to preserve these quickly depleting resources.  Not only are these alternative power sources easily accessed and sustainable, but they also leave less environmental impact and can be domestically produced, lessening our dependence on foreign energy sources.  Google is one company making the transition to “green power.”

Green power, according to the EPA, is any form of energy obtained from an indefinitely available resource whose generation will have no negative environmental impact.  This includes wind power; wave, tidal, and hydropower; biomass power; landfill gas; geothermal power; and solar power.  According to Google Green, Google’s goal is to power their company with 100% renewable energy, both for environmental reasons, and for the business opportunity provided in helping accelerate the shift to green power in order to create a better future.

One way Google has begun the transition is by piloting new green energy technology on many of its campuses.  One of the first pilot programs began in 2007, when it installed the largest corporate solar panel installation at the time, generating 1.7 MW, at the Mountain View campus in California.  Today that installation has been expanded to 1.9 MW and generates 30% of the energy needed to power the building it was built on.

Google’s Wind Power Investments

Along with solar power, Google has recently begun using wind power as well.  On June 6, 2013 it signed a contract with O2, an energy company and wind farm developer in Sweden, to purchase 100% of the power output from a new wind farm being built in northern Sweden for the next 10 years.  The 24-turbine project, which is expected to be fully operational by 2015, will provide 72 MW of power to Google’s Hamina, Finland data center, furthering the company’s effort to use only carbon-free, renewable energy sources.

Because of Europe’s integrated electricity market and Scandinavia’s shared electricity market, the shift to wind power has been relatively simple.  Google has been able to buy 100% of the wind farm’s electricity output with a guarantee that the energy purchased will be used to power its data center alone.  The wind farm’s electricity will be traded through Nord Pool, the leading power market in Europe.

Shortly after signing the contract with O2, Google made another wind farm investment, this time domestic, in Amarillo, Texas.  On September 17, 2013 the company announced its purchase of 240 MW of wind power from the Happy Hereford wind farm, which will be fully operational by the end of 2014.  This is Google’s largest such purchase to date, and it will supply the grid serving Mayes County, Oklahoma, where another of the company’s data centers is located.  Unlike its investment in Sweden, Google is unable to send the power directly from the wind farm to its data center because of local energy regulations, so it is taking a different course of action: by selling the purchased energy into the wholesale energy market, Google puts it to use in the surrounding area, thus decreasing its carbon footprint.

Other Investments

Google has taken this route before due to domestic power regulations, and while the Texas contract is its first wind power purchase in the United States, the company has invested equity in several other wind farms throughout the state.  One of these is the Spinning Spur Wind Project, a 161 MW wind farm consisting of 70 wind turbines of 2.3 MW each.  The farm spans 28,426 acres and is located about 30 miles west of the Happy Hereford project.  When finished, the project will provide power for around 60,000 homes in the Oldham County area.

Overall, Google has invested more than $1 billion in 2 GW of carbon-free, clean power projects and pilot programs.  Other investments include a solar power project in South Africa; photovoltaic projects in both California and Germany; and SolarCity, which provides solar panels for thousands of home rooftops.  By investing in these projects and purchasing power both directly and indirectly, Google furthers its goal of relying entirely on renewable, carbon-free energy, helps others do the same, and thus lessens everyone’s carbon footprint.

Posted in Technology Industry | Tagged , , , | Comments Off