
Multi-cloud: Pros and Cons

Aug 12, 2021

The pandemic of the previous year gave a boost to global digitization, including the growth of cloud services. End-user spending on public cloud services is predicted to reach $397.5 billion by 2022, roughly 47% more than the $270 billion spent in 2020. Such a rise in popularity is understandable, since clouds offer benefits for businesses and individual users alike: easy access from any location, time and money savings, storage of large amounts of data, and more.

Of particular note is the multi-cloud strategy, which has become the main trend of 2021 in cloud computing. By definition, multi-cloud is the use of multiple cloud computing and storage services in a single network architecture: cloud assets, software, applications, and more are distributed across several cloud environments. A typical multi-cloud architecture utilizes two or more public clouds as well as private clouds, and aims to eliminate reliance on any single cloud provider or instance.

According to the Flexera report, 92% of responding enterprises follow a multi-cloud strategy (82% use a hybrid cloud and 10% use multiple public clouds), with 3-4 different clouds in use on average. But is the multi-cloud strategy really a cure-all for digitization issues? Let's review the pros and cons.

So, by using different clouds, you can:

  • Avoid vendor lock-in, which means more independence in data portability and more flexibility in switching cloud service providers;
  • Customize and optimize processes and spending by choosing, from different cloud providers, the separate services that perform best for certain tasks;
  • Minimize downtime risks, since you have spare computing capacity in case one of the clouds in use goes down (see the failover sketch after this list);
  • Enhance data privacy in hybrid cloud environments by storing the most sensitive data on a private cloud while keeping all other workloads on public clouds.
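To make the downtime point concrete, here is a minimal failover sketch in Python. It is only an illustration under assumed conditions: the health-check URLs are hypothetical placeholders, and a real setup would more likely use DNS-level failover with health-checked records than application code.

```python
# Minimal failover sketch: route to a standby on a second cloud when the
# primary is unhealthy. Both URLs are hypothetical placeholders.
import requests

PRIMARY = "https://app.primary-cloud.example.com/health"  # e.g. Azure-hosted
STANDBY = "https://app.standby-cloud.example.com/health"  # e.g. AWS-hosted

def pick_active_endpoint() -> str:
    """Return the primary endpoint if it answers healthily, else the standby."""
    try:
        if requests.get(PRIMARY, timeout=3).status_code == 200:
            return PRIMARY
    except requests.RequestException:
        pass  # primary unreachable: fall through to the standby
    return STANDBY

print("Routing traffic to:", pick_active_endpoint())
```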

Still, like anything else, the multi-cloud strategy has its challenges. 62% of surveyed IT leaders say that legacy system integration is one of the biggest problems they encounter when migrating to multiple clouds, while 61% struggle with adopting new technologies. Besides, you have to be skilled in data migration and complex cloud network management, since the strategy involves distributing large amounts of data across various clouds. And you should be careful when choosing cloud providers with data security in mind. Let's look at some challenges in detail, with real examples:

- You work with more than one cloud provider. This kind of infrastructure model demands much more attention from the very beginning of your project's architecture planning. You or your DevOps engineers need deep expertise in a large number of technologies on both sides, depending on your tasks. If, for example, you have an Azure production environment and need to back up important files from Azure Storage to AWS S3, you can do this fairly easily (see the sketch below). If you want to build your own load-balancing solution for your web application using Nginx, feel free to configure similar environments inside virtual machines or Docker containers. But if you want to create a reserve environment on another hosting using a unique vendor's solution, be careful: it may take substantially more time (especially if you don't have a proper description of your project's architecture). And once it's done, it has to be maintained and monitored. Be patient, and budget for the risks and the time.
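As a rough illustration of that Azure-to-S3 backup, here is a hedged Python sketch using the azure-storage-blob and boto3 SDKs. The container and bucket names are hypothetical, credentials are assumed to come from the environment, and a production job would stream large blobs rather than holding them in memory.

```python
# Sketch: copy every blob from an Azure Storage container into an S3 bucket.
# Assumes the azure-storage-blob and boto3 SDKs are installed; the container
# and bucket names below are hypothetical placeholders.
import os
import boto3
from azure.storage.blob import BlobServiceClient

azure = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
container = azure.get_container_client("production-files")  # hypothetical container
s3 = boto3.client("s3")  # AWS credentials taken from the environment/profile

for blob in container.list_blobs():
    data = container.download_blob(blob.name).readall()  # pull bytes from Azure
    s3.put_object(Bucket="my-backup-bucket", Key=blob.name, Body=data)  # copy to S3
    print(f"copied {blob.name}")
```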

- It is challenging to use certain vendor-specific technologies. For example, you use Azure Cosmos DB and are satisfied with it. But times change, and you decide to reduce downtime risks by adding AWS infrastructure. You know about AWS DynamoDB: it is a NoSQL DB service, just like Cosmos DB, but the two are not compatible. You can't, for example, configure replication between those NoSQL solutions, which is perfectly possible with MongoDB running in virtual machines on both Azure and AWS. Nor can you verify the consistency of a DB backup you made yourself against a different NoSQL provider. You can migrate away from Cosmos DB, but just take a look at this manual: https://aws.amazon.com/blogs/database/migrate-from-azure-cosmos-db-api-for-mongodb-to-amazon-documentdb-using-the-online-method A workable alternative is to use virtual machines inside AWS or Azure: they can be identical inside (you can configure your environment with Ansible, for example, for Linux-based virtual machines on any hosting). It's also possible to deploy prepared Docker images to similar container services in both clouds. But the moment you adopt a vendor-locked technology (like those NoSQL solutions or a cloud's log collectors), your hybrid infrastructure becomes much more complex.
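One common way to soften this kind of lock-in is to hide the vendor-specific calls behind a thin abstraction of your own, so only one small module changes when a store is added or swapped. Below is a hedged Python sketch of that idea; the class names are illustrative, DynamoDB is accessed through boto3, and Cosmos DB is reached through its MongoDB-compatible API via pymongo.

```python
# Sketch: keep application code vendor-neutral by talking to one interface,
# with one adapter per NoSQL backend. Class names here are illustrative.
import boto3
from pymongo import MongoClient  # Cosmos DB exposes a MongoDB-compatible API

class DocumentStore:
    """Minimal interface the rest of the application depends on."""
    def put(self, key: str, doc: dict) -> None: ...
    def get(self, key: str) -> dict | None: ...

class DynamoStore(DocumentStore):
    def __init__(self, table_name: str):
        self.table = boto3.resource("dynamodb").Table(table_name)
    def put(self, key, doc):
        self.table.put_item(Item={"pk": key, **doc})
    def get(self, key):
        return self.table.get_item(Key={"pk": key}).get("Item")

class CosmosMongoStore(DocumentStore):
    def __init__(self, uri: str, db: str, collection: str):
        self.coll = MongoClient(uri)[db][collection]
    def put(self, key, doc):
        self.coll.replace_one({"_id": key}, {"_id": key, **doc}, upsert=True)
    def get(self, key):
        return self.coll.find_one({"_id": key})
```

With this shape, application code receives a DocumentStore and never imports boto3 or pymongo directly, so adding the second cloud means writing one adapter rather than rewriting every call site.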

- It is complicated to manage access to your resources, as well as to create them. If you don't use IaC (Infrastructure as Code) to describe all your hosts, connections, and permissions, you'll spend a lot of time on trivial operations such as creating resources and granting permissions to them. Remember every single time that manual management is much more expensive than automated management. Even if your DevOps specialist is not yet fluent in IaC (quite possible if they use it only occasionally and haven't committed its subtleties to muscle memory), you should ask them to learn it well and use it in the project, especially if you run the full set of environments for the application development lifecycle, dev/stage/prod (even in a single instance of each). Most projects are not giants, so they need a fairly limited number of resources. It may seem faster to create and configure all of them by hand than to describe the whole process as code with tools such as Terraform, Azure Resource Manager templates, or AWS CloudFormation. But if the same, almost standardized process with a similar number and type of resources has to be repeated 3, 10, or 50 times, it becomes obvious that DevOps will waste a lot of time on manual configuration. And while they configure a bunch of resources manually, the risk of mistakes grows because of the human factor. So, to conclude this part: in most cases you should use IaC for your projects, work with DevOps engineers who are hungry to develop their skills, and monitor the number of your resources, your billing, and your permission levels. If that gets out of control, it can be quite painful. A minimal IaC sketch follows below.
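As an example of what "resources as code" looks like, here is a minimal, hypothetical Pulumi program in Python (Pulumi is one option alongside the Terraform/ARM/CloudFormation tools mentioned above). The resource names and tags are illustrative; running `pulumi up` against stacks named dev, stage, and prod would recreate the same bucket definition in each environment instead of three rounds of manual clicking.

```python
# Hypothetical Pulumi program: one declarative definition, reused per stack.
import pulumi
import pulumi_aws as aws

env = pulumi.get_stack()  # stack name, e.g. "dev", "stage", or "prod"

# Declaring the bucket once as code makes every environment reproducible.
bucket = aws.s3.Bucket(
    f"app-assets-{env}",  # illustrative resource name
    tags={"environment": env, "managed-by": "pulumi"},
)

pulumi.export("bucket_name", bucket.id)
```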

- Last, but definitely not least: this is much more expensive than you might suppose, and not only because infrastructure billing roughly doubles. It's about DevOps specialists too. It would be ideal to have a single highly skilled expert who knows both AWS and Azure inside out, but that is rarely realistic. More likely, you will need several team members who can communicate with each other easily and constantly. This can cause big expenses, because you can't let low-qualified specialists manage your complex infrastructure. So pay attention to any unnecessary complexity in your infrastructure.


As you can see, the multi-cloud strategy is rather controversial and hides various technical issues that you may face while integrating it into your business. A multi-cloud environment is complicated to manage and requires great effort and expertise. So weigh all the pros and cons before adopting it. Are its benefits worth the time, money, and effort they require?
