Cloud Compute Lock-In: Truth and Fiction

Containers offer incredible advantages. They let us create a single deployment artifact that bundles not only our application code but also the entire operating-system environment it runs in.


Less code with higher-level services

The identical artifact, the container image, is rolled out sequentially through the development, testing, and staging environments. Once all tests, security checks, and validations have passed, it is promoted to production and made available to the customer or user. Updating is just as straightforward: we push a new container image and run the same promotion, one cohesive operation from testing all the way to production.
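As an illustration, here is a minimal sketch in Python of what such a promotion can look like, shelling out to the docker CLI; the registry URL, image name, and stage tags are hypothetical placeholders, not a prescribed setup.

```python
"""Minimal sketch of promoting one container image through environments.

Assumes the `docker` CLI is installed and authenticated; registry, image
name, and stage tags below are illustrative placeholders.
"""
import subprocess

REGISTRY = "registry.example.com/shop-backend"  # hypothetical registry/image
STAGES = ["dev", "test", "staging", "prod"]

def run(*cmd: str) -> None:
    """Run a command and fail loudly if it does not succeed."""
    subprocess.run(cmd, check=True)

def promote(digest: str, to_stage: str) -> None:
    """Re-tag the exact same image (addressed by digest) for the next stage."""
    source = f"{REGISTRY}@{digest}"
    target = f"{REGISTRY}:{to_stage}"
    run("docker", "pull", source)
    run("docker", "tag", source, target)
    run("docker", "push", target)

# After tests, security checks, and validations pass for one digest,
# the identical artifact moves on; nothing is rebuilt along the way:
# promote("sha256:…", "prod")
```

Promoting by digest rather than by tag guarantees that the bytes reaching production are exactly the bytes that passed testing and staging.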

More focus on business logic

Together with Functions-as-a-Service and other higher-level services, this has drastically lowered the amount of code that locks us into a specific cloud or service provider. Even as cloud providers keep expanding the number and complexity of their service offerings, those higher-level services usually require less direct integration code, which reduces lock-in on the compute side even further.
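To make this concrete, here is a minimal sketch following AWS Lambda's Python handler convention; the quote-calculation logic is an invented placeholder, but it illustrates how thin the provider-specific layer can become.

```python
# Minimal sketch: with Functions-as-a-Service, the provider-specific surface
# shrinks to a thin entry point. The Lambda handler convention is real; the
# business logic below is an illustrative placeholder.
import json

def calculate_quote(items: list[dict]) -> float:
    """Provider-neutral business logic: plain Python, trivially portable."""
    return sum(item["price"] * item["quantity"] for item in items)

def handler(event, context):
    """The only Lambda-specific code: unpack the event, pack the response.
    Porting to another FaaS provider means rewriting just this adapter."""
    items = json.loads(event["body"])["items"]
    return {
        "statusCode": 200,
        "body": json.dumps({"total": calculate_quote(items)}),
    }
```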

Containers combine OS and application code into one code artifact

The landscape changes significantly when it comes to managing data. As data volumes keep growing, and as creating, computing on, parsing, translating, and integrating that data becomes more prevalent, transitioning between providers or exploring new services becomes increasingly complex.

Data has always played a central role in the decision to switch providers, with only a few exceptions. This is even more pronounced today, as cloud providers introduce machine learning tools and other data-centric products and services, developments that exacerbate the data challenges further.

While some machine learning tools and services have standardized interfaces, the real challenge remains the substantial volume of data that needs to be transferred between providers, particularly when minimizing downtime is a critical factor.
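A quick back-of-envelope calculation illustrates the scale of the problem; all numbers here are illustrative assumptions, not measurements.

```python
# Back-of-envelope sketch of why data volume dominates a migration.
def transfer_days(terabytes: float, gbit_per_s: float,
                  utilization: float = 0.7) -> float:
    """Days needed to move `terabytes` over a `gbit_per_s` link, assuming
    only `utilization` of the nominal bandwidth is usable in practice."""
    bits = terabytes * 8e12  # 1 TB = 8e12 bits (decimal units)
    seconds = bits / (gbit_per_s * 1e9 * utilization)
    return seconds / 86_400

# 500 TB over a dedicated 10 Gbit/s interconnect at 70% utilization:
print(f"{transfer_days(500, 10):.1f} days")  # -> roughly 6.6 days
```

And while such a transfer runs, the source data keeps changing, which is precisely where the downtime question comes in.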

This isn't to suggest that replacing the compute layer with another provider is a straightforward task. It remains a substantial undertaking for most companies, necessitating a deep understanding of each cloud provider, their respective services, and the intricacies of the transition process.

Compute lock-in is typically not a massive issue

A significant challenge that many customers encounter when planning such a transition is attempting to run Kubernetes, for example through managed offerings such as EKS, across multiple cloud providers. While this approach may initially appear to simplify moving between providers, in practice it often leads to a more cumbersome onboarding experience with the first provider.

The overhead associated with managing a Kubernetes cluster is far from negligible, especially for simpler web applications where such complexity may not be necessary. Furthermore, these transitions typically occur infrequently, if at all, and when they do, they often involve only certain parts of the system. As a result, you may find yourself with a suboptimal setup on both providers. This situation not only increases management overhead costs but also leads to higher infrastructure expenses, as you may be unable to leverage the most effective cost optimizations.

Conclusion

There are numerous tools out there, from CDK to Terraform, that help with creating compute infrastructure, and competent companies that can help you set it up in the way that works best for a specific provider: with as little overhead as possible, while keeping the ability to move some compute workloads over when necessary or reasonable. If there are any open questions, let us know; we're happy to help.
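As a taste of what that can look like, here is a minimal AWS CDK stack in Python; the stack name, function name, asset path, and runtime are illustrative placeholders rather than a recommended architecture.

```python
# Minimal sketch of provider-specific infrastructure as code with the
# AWS CDK (Python bindings, CDK v2). Names and paths are illustrative.
from aws_cdk import App, Stack, aws_lambda as lambda_
from constructs import Construct

class ApiStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # One focused function instead of a whole cluster to operate.
        lambda_.Function(
            self, "QuoteHandler",
            runtime=lambda_.Runtime.PYTHON_3_12,
            handler="app.handler",
            code=lambda_.Code.from_asset("src"),
        )

app = App()
ApiStack(app, "ApiStack")
app.synth()
```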

Start your individual journey with ORBIT.