Six Disaster Recovery Truths, Learned The Hard Way

Here are six of the hardest lessons IT pros have learned when it comes to disaster recovery, and how these tech travesties could have been avoided.

There are some things in life you’ve got to learn the hard way. Failed disaster recovery doesn’t have to be one of them. Plenty of IT pros will happily tell you the mistakes they’ve made – and how you can avoid them.

Here are the biggest pitfalls that doom the data of businesses worldwide, no matter their size, and how you can easily avoid suffering from the same mistakes.

1. Human Error Can Cause Big Damage

Human error is a massive cause of data disasters. It can be as simple as a cleaner unplugging the server, clueless building managers turning off the power at the breaker, or someone knocking over an important NAS drive.

Human error can come from your end too – an accidental deletion when you get overconfident, or installing something that clashes with everything else and corrupts your system. This is where a good disaster recovery plan is invaluable.

2. All Hardware Fails

Hardware failure is another major cause of data loss. By keeping all your data on a single server, SAN, or hard disk, you’re at massive risk if it suddenly decides to throw in the towel.

The longer you keep hardware past its life expectancy, the greater the risk of everything failing at once. Avoid this by planning on the assumption that your hardware *will* fail, and preparing accordingly.

3. Not Protecting Against Hostile Parties

These days, ignoring ransomware or disgruntled employees is simply playing with fire. There are plenty of tales from IT pros who haven’t put up defenses, and paid dearly (and literally) for it.

Ask yourself what defenses you’ve put up against this scenario: a rogue employee deletes all your important files from shared storage, keeps a copy of the business data on a personal hard drive, and uses it as a bargaining chip. What would you do? Could a sysadmin delete several of your virtual machines with a click of the mouse, making sure they’re never recovered?

Perhaps the worst scenario is ransomware, where you have no idea who the perpetrator is. Ransomware is malware that encrypts your business data and demands a ransom to get it back – and many strains will also hunt down and encrypt any backups they can reach.

A good way to make yourself ransomware-ready is to have a backup strategy, then protect those backups themselves with a dedicated ransomware-protection feature – for example, immutable or offline copies that an attacker can’t overwrite.
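As a rough sketch of the idea (function names and paths here are illustrative, not from any particular product), even something as simple as stripping write permissions from a finished backup raises the bar against in-place tampering:

```python
import os
import shutil
import stat


def backup_read_only(source: str, backup_dir: str) -> str:
    """Copy a file into the backup area, then make the copy read-only."""
    dest = os.path.join(backup_dir, os.path.basename(source))
    shutil.copy2(source, dest)
    # Owner/group/others: read-only (mode 0o444). This blocks simple
    # overwrites; real ransomware protection needs more, such as offline
    # or immutable offsite storage.
    os.chmod(dest, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)
    return dest
```

This won’t stop an attacker with admin rights, but it does stop the most common case: malware blindly rewriting every file it can open for writing.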

4. Not Having Backups

Speaking of which, one of the biggest mistakes many IT professionals still make is not having any backups in place. Most data loss scenarios can be solved simply by having a solid backup plan.

Accidental file deletion? Hardware failure? Ransomware infection? Natural disaster? A great backup scheme can protect you against all of these.
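For illustration, the heart of a backup job can be as small as copying your data into a timestamped snapshot folder. This sketch (all names are hypothetical) leaves out the retention, compression, and replication a real scheme would add:

```python
import shutil
import time
from pathlib import Path


def snapshot(data_dir: Path, backup_root: Path) -> Path:
    """Copy the whole data directory into a new timestamped snapshot."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_root / stamp
    # copytree fails if dest exists, so each run gets its own folder;
    # keeping multiple snapshots protects against deletions you only
    # notice days later.
    shutil.copytree(data_dir, dest)
    return dest
```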

5. Onsite and Offsite Backups

That said, part of a great backup scheme is having at least two different kinds of backup, with at least one of them stored offsite. If you don’t keep backups physically offsite – whether with a third-party backup storage provider, or in a public or private cloud – then a fire, flood, or theft at your primary site takes the originals and the backups with it, and you’re not really protected.
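To make the two-destination idea concrete, here’s a minimal sketch where the “offsite” target is simulated as just another directory; in a real deployment it would be a remote server or a cloud bucket, but the replication logic is the same:

```python
import shutil
from pathlib import Path


def replicate(source: Path, onsite: Path, offsite: Path) -> None:
    """Copy one file to both an onsite and an offsite destination."""
    for target in (onsite, offsite):
        target.mkdir(parents=True, exist_ok=True)
        # copy2 preserves timestamps, which helps when auditing
        # which snapshot a restored file came from.
        shutil.copy2(source, target / source.name)
```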

6. Backups Need To Be Tested

We recently wrote an article on how many IT pros prioritize backup features over recovery. The difference is best put by these two dictionary entries:

Backup: A copy of your data that has been tested.

Hope: A copy of your data that has not been tested.
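Turning hope back into a backup can be as simple as a scheduled job that reads the backed-up copy and compares its checksum against the original, alerting on any mismatch. A minimal sketch (function names are illustrative):

```python
import hashlib
from pathlib import Path


def sha256(path: Path) -> str:
    """Checksum a file in chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_backup(original: Path, backup: Path) -> bool:
    """True only if the backup matches the original byte for byte."""
    return sha256(original) == sha256(backup)
```

A checksum match proves the copy is intact; a full test restore to a spare machine proves you can actually use it, which is the part most teams skip.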
