Wednesday, 18 October 2017


How to Avoid Data Loss from Natural Disasters like Storms

There are various human-induced and natural causes that can lead to data loss. Mishandling of data by users can be reduced by following some safety tips and practices. Natural disasters, however, are uncontrollable, and several of them can cause severe data loss.

Sudden power failure is one of the prime causes of data loss. Such failures are often triggered by storms, hurricanes, tornadoes, and similar events, and a single disaster can knock out power for days at a time, producing a whole season of outages. This is especially common in coastal and tropical areas, where even advanced technology and tools cannot prevent the loss. The government sector, which depends heavily on data centers, is among the hardest hit. If agencies are not well prepared for such situations, their data centers can easily be taken out by high winds and flooding. Hurricane Sandy showed exactly this: data centers along the east coast from Virginia through New York to New Jersey were badly affected, and the damage to public power left large amounts of important information unavailable.

However, various measures can be taken to minimize the loss. For example, with proper planning and a cloud-based disaster recovery mechanism, organizations can ride out power outages caused by storms. Public multi-tenant clouds can be a great help to government agencies that run their own data centers to house in-house applications, and such a recovery mechanism can be deployed at low cost.

The following steps can help data centers plan and execute a disaster recovery mechanism effectively, with minimal or no interruption to the production environment.

1.  Identify your Critical Applications

Identify your critical Web-based applications. Critical Web applications are those that have to run almost continuously. Determine their dependencies and the minimal hardware they need to operate. Note down these findings and include them in the plan; they will be needed in step two.
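One practical way to record these findings is as a simple machine-readable inventory. The sketch below is a minimal illustration in Python; the application names, dependencies, and hardware figures are hypothetical placeholders, and the fields would be adapted to whatever your own plan actually tracks.

```python
# Minimal sketch of a critical-application inventory (all names and numbers
# are hypothetical). Recording dependencies and minimum hardware needs here
# feeds directly into the cloud provider decision in step two.
from dataclasses import dataclass, field

@dataclass
class CriticalApp:
    name: str
    dependencies: list = field(default_factory=list)  # databases, queues, auth services, etc.
    min_vcpus: int = 1
    min_ram_gb: int = 1
    max_downtime_minutes: int = 60  # how long the business can tolerate an outage

inventory = [
    CriticalApp("citizen-portal", ["postgres", "auth-service"],
                min_vcpus=4, min_ram_gb=16, max_downtime_minutes=15),
    CriticalApp("payments-api", ["postgres", "message-queue"],
                min_vcpus=2, min_ram_gb=8, max_downtime_minutes=5),
]

# Applications with the tightest downtime tolerance should be recovered first.
for app in sorted(inventory, key=lambda a: a.max_downtime_minutes):
    print(f"{app.name}: {app.min_vcpus} vCPUs, {app.min_ram_gb} GB RAM, "
          f"depends on {', '.join(app.dependencies)}")
```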

2.  Decide on a good Cloud Service Provider

Based on the technical requirements and the nature of the business, identify an appropriate cloud service provider (CSP). It is recommended to run the same hypervisor in-house as the one used by the CSP; this pays off in the long run and makes the process much easier, quicker, and cheaper.
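As a rough illustration of that recommendation, the sketch below checks which candidate providers support the hypervisor already used in-house. The provider names and hypervisor lists are made-up placeholders; the real comparison would use information from the CSPs' own documentation.

```python
# Rough sketch: check which candidate CSPs support the hypervisor already used
# in-house, since matching them keeps migration simpler, quicker, and cheaper.
# Provider names and their hypervisor lists are hypothetical placeholders.
in_house_hypervisor = "KVM"
candidate_csps = {
    "provider-a": {"KVM", "Hyper-V"},
    "provider-b": {"VMware ESXi"},
    "provider-c": {"KVM"},
}

compatible = [name for name, hypervisors in candidate_csps.items()
              if in_house_hypervisor in hypervisors]
print("CSPs matching the in-house hypervisor:", ", ".join(compatible) or "none")
```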

3.  Automatically Mirrored Virtual Machines

You can either set up the data centers based on the hypervisor currently used for virtualization, or you can set up the remote virtual machines (VMs) manually. In either case, make sure you have a mirrored VM for every production system that needs emergency backup.
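A simple way to keep this honest is a script that compares the production inventory against the recovery site. The sketch below is a minimal, hypothetical example; the VM names are placeholders, and in practice both lists would come from your hypervisor or CSP API rather than being hard-coded.

```python
# Minimal sketch: confirm that every production VM needing emergency backup has
# a mirrored copy at the recovery site. The VM names are hypothetical; in
# practice both inventories would come from your hypervisor or CSP API.
production_vms = {"web-01", "web-02", "db-01"}
recovery_site_vms = {"web-01", "db-01"}

missing_mirrors = production_vms - recovery_site_vms
if missing_mirrors:
    print("VMs without a mirror at the recovery site:", ", ".join(sorted(missing_mirrors)))
else:
    print("All production VMs are mirrored at the recovery site.")
```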

4.  Handling Failures in case of Disaster

Once the tested virtual machines are in place, select a mechanism for handling failures during a disaster. Be careful when choosing this technology: anything that depends on Internet Domain Name System (DNS) records should be avoided, because a DNS change can take hours to propagate and leaves the overall system down in the meantime. Choose a technology that can detect issues in the primary data center and redirect users to the recovery solution almost instantly.
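As a rough sketch of such a mechanism, the example below probes a primary health endpoint and, after several consecutive failures, switches traffic at the load-balancer level instead of touching DNS. The URL, the failure threshold, and the switch_backend() helper are hypothetical placeholders for whatever traffic manager you actually use.

```python
# Minimal sketch of a failover trigger that avoids DNS changes: a health probe
# watches the primary site and, after repeated failures, tells the load
# balancer to send traffic to the recovery site instead. The URL and the
# switch_backend() helper are hypothetical placeholders.
import time
import urllib.request

PRIMARY_HEALTH_URL = "https://primary.example.gov/healthz"  # hypothetical endpoint
FAILURE_THRESHOLD = 3        # consecutive failed probes before failing over
PROBE_INTERVAL_SECONDS = 30

def primary_is_healthy(timeout: float = 5.0) -> bool:
    """Return True if the primary data center answers its health check."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers connection errors, timeouts, and HTTP errors
        return False

def switch_backend(target: str) -> None:
    """Placeholder: call your load balancer's API to point traffic at `target`."""
    print(f"Redirecting users to {target}")

failures = 0
while True:
    if primary_is_healthy():
        failures = 0
    else:
        failures += 1
        if failures >= FAILURE_THRESHOLD:
            switch_backend("cloud-recovery-site")
            break
    time.sleep(PROBE_INTERVAL_SECONDS)
```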

5.  Regular Failover Tests

This is the final step: perform regular failover tests. An end-to-end failover test should be run on a regular schedule. Depending on your policies and on the failure being simulated, the test may be small, covering an individual application, or a scheduled full-site failover. In every case the process should be properly documented, including the steps taken to perform the test and its result. If the plan does not work, the documentation can be reviewed to identify what went wrong; you can then make adjustments and test again. Repeating this cycle is what eventually produces a bulletproof failover plan.
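One lightweight way to keep that documentation consistent is to record every test run in a structured log. The sketch below is a minimal example; the file name, step descriptions, and results are hypothetical, and a real record would follow your own test plan.

```python
# Minimal sketch of documenting a failover test run. Each run is appended to a
# structured log so that, if the plan fails, past tests can be reviewed to see
# what went wrong. File name, steps, and results are hypothetical.
import json
from datetime import datetime, timezone

test_record = {
    "test_date": datetime.now(timezone.utc).isoformat(),
    "scope": "single application: citizen-portal",  # or "full site failover"
    "steps": [
        {"step": "disable primary data center network", "result": "ok"},
        {"step": "confirm traffic is served from the recovery site", "result": "ok"},
        {"step": "restore primary and fail back", "result": "failed: stale database replica"},
    ],
    "overall_result": "failed",
}

# Append to a running log (one JSON record per line) for later review.
with open("failover-test-log.jsonl", "a", encoding="utf-8") as log:
    log.write(json.dumps(test_record) + "\n")
```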

Most of the time we cannot control natural disasters, but we can take precautionary measures to limit the damage they do to our data. A single emergency can take a data center down, yet a fairly simple plan is all it takes to prevent a disaster. That plan can be built in-house or outsourced; either way, a proper plan is what keeps data-loss situations such as storm-related power outages under control.

With ever-growing technology, most organizations keep their databases secured to keep the business running. With the evolution of Big Data, Hadoop, and NoSQL, security mechanisms also need to evolve.

Of these, Big Data is the fastest-emerging technology. It involves terabytes, petabytes, and even exabytes of data in many formats being transferred into many new and different software packages, and it gives businesses some of the fastest and most frequent results. Security is a prime concern when Big Data is introduced into an organization, and maintaining a high level of security is difficult because of the cost. It may be the right time to start fully adopting modern security practices.