3 measures to avoid needing outage insurance…

Insurance covering computer outages and their consequences for the business has boomed in recent years. It covers losses, but it does not solve every problem. At the risk of restating the basics, here are 3 measures to avoid problems, and the need for insurance, in the first place…

By Florian Malecki, Executive Vice President, Arcserve

Cloud computing is becoming ubiquitous, and more and more businesses are exposed to incidents that cause potentially catastrophic downtime. According to Gartner, this downtime costs an average of $5,600 per minute. And that figure does not include the additional losses, which are not all strictly financial.

These indirect costs include, for example, business interruptions that force IT teams to drop their regular work to get the business running again.

This is one of the reasons why cloud outage insurance has boomed in recent years. These policies cover customers for short-term cloud, network and platform failures lasting up to 24 hours.

These outages are also frequent. Cloud insurance provider Parametrix reports that one of the three major public cloud providers – Microsoft Azure, AWS and Google Cloud – experiences an outage of at least 30 minutes every three weeks on average.
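Taken at face value, those two figures already give a sense of the scale. Here is a back-of-the-envelope calculation, as a small Python sketch, using only the numbers cited above:

# Rough annualized downtime cost from the figures cited above:
# Gartner's $5,600/minute average and Parametrix's roughly one
# 30-minute outage every three weeks.
COST_PER_MINUTE = 5_600              # USD, Gartner average
OUTAGE_MINUTES = 30                  # per incident, Parametrix
WEEKS_BETWEEN_OUTAGES = 3

incidents_per_year = 52 / WEEKS_BETWEEN_OUTAGES
annual_cost = incidents_per_year * OUTAGE_MINUTES * COST_PER_MINUTE

print(f"~{incidents_per_year:.1f} incidents per year")
print(f"~${annual_cost:,.0f} per year in direct downtime cost alone")

That comes to roughly $2.9 million a year in direct costs, before counting any of the indirect losses described above.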

Insurance can be a real safety net for companies, but it is not a complete solution. It is important to remember that it cannot keep the business running during an incident.

Yes, it will cover short-term losses. But it will not cover the impact on customers, the damage to the brand, or the erosion of loyalty when the company can no longer deliver its services.

Rather than relying entirely on cloud outage insurance, here are three tips organizations should follow to deal with these outages before they happen.

1- Have a solid recovery plan.

It is a common mistake to think that data is automatically protected and secure once it is moved to the cloud. Yet the fire at the OVHcloud data center (Europe’s largest cloud service provider) last year destroyed huge amounts of customer data, affecting government agencies, e-tailers and even banks.

Backing up data, whether to the cloud or on-premises, is an essential first step in any disaster recovery plan. But it is only the first step. A plan for recovering that data quickly in an emergency is just as important.

Think of the company as a cruise ship. Just as the ship must test its lifeboats regularly, the recovery plan must be tested regularly by simulating incidents to make sure it works. As part of this, snapshots should also be tested regularly and any problems fixed.
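A drill of this kind can be largely automated. The sketch below is a minimal, tool-agnostic example in Python: the restore step is passed in as a placeholder for whatever the backup product actually provides, and the expected checksums are assumed to have been recorded at backup time.

# Minimal recovery-drill sketch: restore the latest backup into an
# isolated staging directory, verify file integrity, and time the run.
import hashlib
import time
from pathlib import Path
from typing import Callable

def sha256(path: Path) -> str:
    # Hash a restored file so it can be compared to a known-good value.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def run_drill(restore_fn: Callable[[Path], None],
              staging: Path,
              expected: dict[str, str]) -> float:
    # restore_fn is a placeholder: wrap your backup tool's restore here.
    start = time.monotonic()
    restore_fn(staging)
    bad = [name for name, digest in expected.items()
           if sha256(staging / name) != digest]
    elapsed = time.monotonic() - start
    if bad:
        raise RuntimeError(f"Integrity check failed after restore: {bad}")
    print(f"Drill passed in {elapsed:.0f}s")
    return elapsed

Running such a drill on a schedule, and treating a failed drill as seriously as a failed backup, is what makes the lifeboat analogy hold.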

2- Implement a backup and recovery solution.

Cloud data security is not only the provider’s responsibility, but also the customer’s.

Cloud providers usually promise to secure their infrastructure and services. But securing operating systems, platforms and data is up to the user.

Data security is not guaranteed by default. Regardless of the cloud platform used, the files always belong to the company, not the provider, and so does the responsibility for protecting them. Many providers therefore recommend that their customers use third-party software to protect their data.

A reliable cloud backup and recovery solution can secure that data fully while maintaining the necessary level of control. It is therefore worth implementing a solution that protects data by backing it up automatically every 15 minutes and offering multiple restore points. This keeps data protected at all times, with quick access and 24/7 visibility.
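To make that cadence concrete, here is a deliberately simplified sketch of a 15-minute backup loop that keeps multiple restore points. It uses plain directory copies for clarity; real backup products work incrementally and far more efficiently, so treat this as an illustration of the schedule, not of any particular product.

# Simplified 15-minute backup loop with multiple restore points.
import shutil
import time
from datetime import datetime, timezone
from pathlib import Path

INTERVAL_S = 15 * 60     # one backup every 15 minutes
KEEP_POINTS = 96         # retain 24 hours' worth of restore points

def backup_loop(source: Path, dest_root: Path) -> None:
    while True:
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        shutil.copytree(source, dest_root / stamp)    # new restore point
        points = sorted(p for p in dest_root.iterdir() if p.is_dir())
        for old in points[:-KEEP_POINTS]:             # prune beyond retention
            shutil.rmtree(old)
        time.sleep(INTERVAL_S)

With 96 points retained, any state from the past 24 hours can be restored to within 15 minutes.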

3- Be proactive and data resilient.

Many companies don’t test their recovery plans, and some don’t even have one. Adopting such plans, and testing them regularly, is therefore essential.

The point is to be proactive rather than reactive, in other words: to be data resilient.

A data resilience strategy ensures business continuity in the event of an interruption. It is built around recovery point objectives (RPO) and recovery time objectives (RTO), and regular testing is required to confirm that these objectives can actually be met.

The RPO determines the backup frequency: it is the company’s tolerance for data loss.

If a company can accept losing the last 24 hours of data, its RPO is 24 hours. Other organizations, such as those in the financial and healthcare sectors, cannot tolerate such a loss at all; their RPO may be set to just a few milliseconds.

RTO measures the acceptable downtime between data loss and recovery: how long the company can be down before it suffers serious damage. The RTO thus determines the necessary investment in a disaster preparedness plan. If the RTO is one hour, the company must invest in solutions that let it resume operations within the hour.
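Putting the two objectives together, the outcome of a recovery drill can be scored against them automatically. The sketch below uses illustrative values, a 24-hour RPO and a one-hour RTO echoing the examples above; the thresholds are assumptions, not recommendations.

# Score a recovery drill against illustrative RPO/RTO objectives.
RPO_HOURS = 24     # tolerate losing at most the last 24 hours of data
RTO_HOURS = 1      # must be back up within one hour

def meets_objectives(hours_since_last_backup: float,
                     measured_recovery_hours: float) -> bool:
    # Both objectives must hold for the drill to count as a pass.
    rpo_ok = hours_since_last_backup <= RPO_HOURS
    rto_ok = measured_recovery_hours <= RTO_HOURS
    return rpo_ok and rto_ok

# Example: last backup 6 hours ago, service restored in 45 minutes.
print(meets_objectives(6.0, 0.75))    # True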

Determining the RPO and RTO and then implementing the necessary solutions to achieve them are the keys to data resilience.

We live in a world where cyber threats and natural disasters are on the rise. Every day, unprepared companies get hurt. That is why more and more of them are buying cloud outage insurance. But it is important to understand that this type of insurance is not a protection plan on its own. It is far better to think of it as a complement to the backup and recovery efforts already in place, not a full-fledged replacement.
