In today’s online business world, when access to data is interrupted, the ability to conduct business is interrupted with it. In this digitally transformed era, data is king.
Common causes for data center downtime
Contrary to what many expect, the most common causes of interrupted access to data are not IT system failures. A 2016 report by the Ponemon Institute found power supply failure to be the number one cause of data center downtime, accounting for 25% of all incidents, with human error second at 22%. Cybercrime, also accounting for 22% of incidents in 2016, is a growing threat. Only 4% of downtime was attributable to IT equipment failure.
Ponemon also reports that the average cost per outage was $740,357, with business disruption, missed revenue, and reduced productivity driving the financial losses. With an average downtime of 95 minutes, that equates to an economic loss of roughly $7,800 per minute.
Another study, by the Uptime Institute, suggests the problem may be even worse. Uptime’s analysis of data center outages found that more than 70% were directly attributable to human error, with inadequate staff training cited as one of the biggest data center oversights.
Accidental (human-caused) and cyber-related events are often detected soon after they occur. The ability to recover data and resume operations from moments before the event minimizes both data loss and the economic cost to the business.
Creating multi-tiered backup strategies
A snapshot backup freezes a copy of the data at a given point in time. This enables businesses to remain online and recover from outages by resuming from a previous point in time. However, snapshot backups create additional load on a system: each snapshot adds incremental performance overhead and increases storage capacity consumption.
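To make that overhead concrete, here is a minimal copy-on-write sketch in Python. The class and method names are illustrative, not any vendor’s API: each snapshot costs nothing at creation, but every subsequent write must first preserve the old block for each open snapshot, which is exactly the incremental load and capacity consumption described above.

```python
import time

class SnapshotVolume:
    """Toy copy-on-write volume: snapshots are cheap until blocks change."""

    def __init__(self):
        self.blocks = {}     # block_id -> current data (live volume)
        self.snapshots = {}  # snapshot timestamp -> {block_id: preserved data}

    def snapshot(self):
        """Freeze a point-in-time view; consumes no space until writes occur."""
        ts = time.time()
        self.snapshots[ts] = {}
        return ts

    def write(self, block_id, data):
        # Copy-on-write: keep the old block for every snapshot that has not
        # yet saved its own copy. Each open snapshot adds this extra work.
        for preserved in self.snapshots.values():
            if block_id not in preserved:
                preserved[block_id] = self.blocks.get(block_id)
        self.blocks[block_id] = data

    def read_at(self, snapshot_ts, block_id):
        """Read a block as it was when the snapshot was taken."""
        preserved = self.snapshots[snapshot_ts]
        if block_id in preserved:
            return preserved[block_id]
        return self.blocks.get(block_id)
```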
Consequently, choosing a backup frequency is a balance between the cost of an outage and system performance. The Recovery Point Objective (RPO) is the time value that quantifies this decision: it identifies the amount of data loss (and therefore financial loss) acceptable when recovering from an outage. The larger the RPO, the greater the potential financial cost.
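As a rough illustration of that trade-off, the sketch below multiplies a chosen RPO by the approximate per-minute loss figure cited above. The numbers are assumptions for illustration, not a sizing formula:

```python
# Rough RPO trade-off sketch: a larger RPO means cheaper, less frequent
# backups, but more work potentially lost when recovering from an outage.
COST_PER_MINUTE = 7_800  # approx. Ponemon average economic loss per minute

for rpo_minutes in (1, 15, 60, 24 * 60):
    worst_case_loss = rpo_minutes * COST_PER_MINUTE
    print(f"RPO {rpo_minutes:>5} min -> up to ${worst_case_loss:,} of lost work")
```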
There is an alternative backup option capable of recovering data with an RPO of seconds to minutes, in which every change to the data is tracked by its time of change. This is referred to as continuous data protection (CDP), or real-time backup. The system keeps a history of all changes by saving each change along with a timestamp. As a result, the state of the data can be recovered exactly as it was seconds before a change was made: a time machine, in essence. In some cases, CDP even improves system performance and saves storage capacity compared with snapshots, because change data is written sequentially, and storage devices perform most efficiently when accessed sequentially.
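A minimal sketch of the CDP idea in Python (the names are illustrative, not any product’s API): every write is appended sequentially to a timestamped journal, and recovery replays the journal up to the chosen instant, stopping just before the unwanted change.

```python
import time

class ChangeJournal:
    """Toy continuous-data-protection journal: append-only, time-indexed."""

    def __init__(self):
        self.entries = []  # (timestamp, block_id, data), appended in order

    def record(self, block_id, data):
        # Sequential append is why CDP can be gentle on storage devices.
        self.entries.append((time.time(), block_id, data))

    def recover_to(self, point_in_time):
        """Rebuild the data as it was at point_in_time (the 'time machine')."""
        state = {}
        for ts, block_id, data in self.entries:
            if ts > point_in_time:
                break  # stop just before the unwanted change (e.g. ransomware)
            state[block_id] = data
        return state
```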
A multi-tiered backup strategy incorporating tape, snapshots, and CDP should be considered to address the full spectrum of business outage scenarios, spanning rare (natural disasters/terrorism), possible (environmental), and near-certain (human error/cyber-attack) events.
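One way to picture such a tiering policy is the mapping below. The scenario-to-tier pairings and RPO ranges are illustrative assumptions, not a recommendation for any specific environment:

```python
# Illustrative multi-tier mapping: each tier trades RPO against cost and reach.
BACKUP_TIERS = {
    "human error / cyber-attack (near-certain)": "CDP (RPO: seconds to minutes)",
    "environmental outage (possible)":           "snapshots (RPO: minutes to hours)",
    "natural disaster / terrorism (rare)":       "off-site tape (RPO: hours to days)",
}

for scenario, tier in BACKUP_TIERS.items():
    print(f"{scenario:45s} -> {tier}")
```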
Several vendors offer CDP and snapshot technology in combination. One such vendor is DataCore Software, with our software-defined storage solution.
Ready to see how DataCore can help your organization create a successful multi-tiered backup strategy? Request a live demo with one of our expert solution architects today.