“Docker Swarm vs. Kubernetes,” “OpenShift vs. Kubernetes” – while people were busy arguing over the best orchestration tool, K8s became mainstream. According to a RightScale report, adoption of K8s among enterprises skyrocketed from 27% in 2018 to an impressive 65% in 2020. All top cloud providers – Amazon, Microsoft, and Google – integrated the system into their platforms. Pokémon GO, Airbnb, and The New York Times are just a few of its users.
Yes, Kubernetes is all the rage now, mainly because of its out-of-the-box reliability, scalability, and automation. But don't fall into the trap of thinking that it's a magic bullet that will solve all of your infrastructure troubles. In some cases, it is overkill that brings lots of new headaches instead, making your project harder to develop, maintain, and deploy.
Let’s take a peek at the ups and downs you can expect when rolling out your project with Kubernetes. Below is a “view from 30,000 feet” overview that we’ve put together based on our experience with the tool.
So, what are the Kubernetes secrets that make us love it so much? It's all about how scalable, secure, and flexible your infrastructure can be, allowing your teams to work transparently, develop quickly, and deploy regularly. Combine an automation tool like Terraform with Kubernetes, and you get a fault-tolerant, portable home for your project that you can roll out on almost any platform.
You can place your infrastructure and application definitions into Git, script out the operations you want to automate, and let the tooling do the job. Your version control system becomes a single source of truth, making deployment perfectly transparent. This is essential for big, distributed teams, since everyone stays aware of all changes in the codebase. Work is no longer a black box; it's a true team effort. Finally, if anything goes south, you can easily roll back to the last stable version, minimizing downtime for your business.
Keywords: ArgoCD, FluxCD, Faros, GitKube, Werf, JenkinsX, WeaveCloud
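As a minimal sketch of such a GitOps setup, assuming Argo CD is installed in the cluster, an `Application` object ties a namespace to a Git repository; the repository URL, path, and names below are placeholders for illustration:

```yaml
# Hypothetical Argo CD Application: the cluster continuously syncs
# itself to whatever is committed to the Git repository below.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app              # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/your-org/infra.git  # placeholder repo
    targetRevision: main
    path: k8s/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true           # delete resources removed from Git
      selfHeal: true        # revert manual drift back to the Git state
```

With this in place, a rollback is just a Git revert: the repository stays the single source of truth, and the cluster follows it.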
Kubernetes describes everything as files and objects, which makes it easy to scale your infrastructure up and down on the go, just by changing a couple of parameters. It's also automatable – Kubernetes can adjust the scale of your infrastructure to the workload your application experiences over the course of a day.
Keywords: HPA, VPA
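Those "couple of parameters" are literal: a HorizontalPodAutoscaler (HPA) is a short declarative object. The deployment name and thresholds below are illustrative:

```yaml
# Hypothetical HPA: scales the "web" Deployment between 2 and 10
# replicas, targeting 70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # placeholder Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Kubernetes then adds and removes replicas throughout the day as the measured CPU load crosses the target.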
Kubernetes provides you with security tools right out of the box, which is handy since security control becomes more challenging as teams get bigger.
Keywords: OPA, KubeLinter, Kube-bench, Kube-hunter, Terrascan, Clair, Trivy, Falco, Checkov
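One built-in example is the NetworkPolicy resource. The default-deny policy below (the namespace name is illustrative) blocks all pod traffic in a namespace until you explicitly allow it – a common starting point for locking down a growing team's workloads:

```yaml
# Default-deny NetworkPolicy: selects every pod in the namespace
# and permits no ingress or egress until further policies allow it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production    # placeholder namespace
spec:
  podSelector: {}          # empty selector = all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```

Note that enforcement depends on the cluster's network plugin; a CNI that supports NetworkPolicy (such as Calico or Cilium) must be installed.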
Kubernetes works with all the top cloud providers – AWS, Microsoft Azure, and Google Cloud – making development much more accessible and business owners happier. You can hand the project over to your client with a single command, or switch cloud providers in a snap. Your application won't even notice.
Keywords: Cloud Controller Manager
Kubernetes provides you with more than enough resources for the basic needs of an average application. But if you feel like adding more – you can. Or, even better, write an operator to control resource groups right from the shell.
Tools: Operator SDK
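Custom resources are the usual entry point for extending Kubernetes this way. The CustomResourceDefinition sketch below (the group and names are hypothetical) teaches the API server about a new object kind; an operator built with Operator SDK would then watch and reconcile these objects:

```yaml
# Hypothetical CRD: after applying this, "backups.example.com" objects
# can be managed with kubectl like any built-in resource.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string   # e.g. a cron expression
```

Once applied, `kubectl get backups` works just like `kubectl get pods`, and your operator supplies the behavior behind the object.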
It's always a pleasure to work with fast-growing software. Kubernetes is just that. It is developing rapidly, responding to new challenges, and solving old problems. It gets better every day.
Kubernetes has become one of the most significant open-source projects in the world. We're talking about tens of thousands of people, with thousands of active contributors. Its repository holds around 80,000 pull requests, 150,000 commits, and more than a million contributions. Good luck finding another project with the same momentum!
Under the care of the Cloud Native Computing Foundation (CNCF), the K8s community runs KubeCons with thousands of guests and hundreds of developers. Such a massive knowledge exchange is crucial, considering how complex the system is.
You can put almost every process inside, and even outside, of your infrastructure on a conveyor belt. While automation is not an out-of-the-box Kubernetes feature, the community provides you with all sorts of tools to make it happen. For example, a Kubernetes cluster can be set up automatically on any cloud platform using Terraform.
Tools: Terragrunt, Hashicorp Terraform registry
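As a sketch, provisioning a managed GKE cluster with Terraform can be as short as the fragment below; the project ID, region, and cluster name are placeholders, and a real setup would add node pools, networking, and state storage:

```hcl
# Hypothetical Terraform config: provisions a small GKE cluster.
provider "google" {
  project = "my-project-id"   # placeholder project
  region  = "europe-west1"
}

resource "google_container_cluster" "primary" {
  name               = "demo-cluster"
  location           = "europe-west1"
  initial_node_count = 2
}
```

A single `terraform apply` then creates the cluster, and the same configuration can be versioned in Git alongside the application.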
It has its ups and downs and requires lots of maintenance and manual tweaking, but it works right out of the box, which is vital for continuous delivery. Its self-healing capabilities are truly impressive as well.
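Self-healing is driven largely by probes. In the illustrative pod-template fragment below (the container, image, and `/healthz` endpoint are hypothetical), the kubelet restarts the container whenever its health endpoint stops answering:

```yaml
# Fragment of a Deployment pod template: the kubelet kills and
# restarts the container if the liveness probe fails repeatedly.
containers:
  - name: web                    # placeholder container
    image: example.com/web:1.0   # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz           # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
      failureThreshold: 3        # restart after 3 consecutive failures
```

Combined with replica counts and rescheduling onto healthy nodes, this is what lets a cluster recover from most failures without human intervention.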
Kubernetes automation makes it possible to deliver new features from codebase to production in almost no time. From the very first commit to actual production, Kubernetes is there to automate every step you want to implement.
The separation of infrastructure and application makes the software more maintainable. Sometimes we have to change the environment daily, and our application should not be affected by that. With pluggable interfaces for networking and storage, Kubernetes helps your application adapt and grow transparently, no matter what hardware is underneath at any given moment.
The only way to learn Kubernetes is the hard way. The problems with the system stem from its complexity, bottomless codebase, and numerous components. Even its documentation is tricky, and would probably benefit from its own documentation just so you can figure out where to start reading.
Because of its complexity, every team ends up with a unique set of troubles with Kubernetes, and their own cheat sheets and workarounds as a result. We're sharing ours below, but keep in mind that our challenges may not align with yours.
Kubernetes tries to be a universal solution for distributed applications. It consists of numerous components, and many of them are complicated on their own. The variety makes Kubernetes very challenging to get your head around, and incredibly hostile to rookies.
There are hundreds of thousands of lines of code in its core (excluding comments) that contain all sorts of issues, like duplicative or even dead logic. This makes Kubernetes challenging to maintain and highly bug-prone.
Kubernetes is challenging to operate, configure, and even understand. It's a genuinely tricky tool. You have to learn a whole pile of concepts just to kick-start it for the first time. And even after you do that, you won't be able to use it because each of its individual components is difficult as well; so you'll need to go back to the documentation and read further. Yes, Kubernetes is lots and lots of reading.
Sorry guys, there is no magic here. The documentation for Kubernetes is so complicated and extensive that, in order to find something, you have to know where to look. Before you can even begin to learn Kubernetes, you first have to figure out what it is that you need to learn.
Even developers might fall victim to its complexity, especially those who have never built distributed systems before. It can take time, and some blunders, before your teams get used to it.
Old projects can quickly become a nightmare since it can be almost impossible to split legacy applications into microservices. Trying to do so might take a lot of time and cost businesses a pretty penny.
The autoscaling leaves a lot to be desired, especially the horizontal kind. You have to plan it out at the very beginning and do it carefully, because a single wrong decision could bring down your whole project. So, while you can rely on vertical autoscaling, we don't suggest you try to automate the horizontal one.
Paradoxically, as big as Kubernetes is, it is practically helpless on its own. You'll quickly find yourself bringing in more supporting tools and practices: GitOps workflows, CI/CD pipelines, deployment tools, tests, monitoring, logging, etc. As a result, the infrastructure becomes bloated, making things complex and costly; plus, it's difficult to explain to the business why you need all of this additional stuff. The team gets bigger too, and support takes longer, making negotiations with the business even harder.
As a result of Kubernetes's flawed architecture, building tools for it is not a simple job, so almost every piece of software integrated into it is packed with bugs.
Concentrating on infrastructural work becomes tricky, since you're constantly troubleshooting and fixing third-party issues. Complex tools require familiarity before you can understand how they work and how they break. And before getting your hands dirty with fixes, you'll have to find out what exactly is broken. This is annoyingly laborious, because Kubernetes's logs are flooded with data: a cluster of 10 machines can generate gigabytes of information hourly, making the logs nearly unreadable.
Managed public clouds are full of bugs as well.
Having a single cluster for the distributed applications inside a project sounds appealing. Setting up and supporting one cluster rather than ten is ten times easier and cheaper. Luckily, Kubernetes lets you build such an architecture. Sadly, it comes with some unfortunate peculiarities that can affect the entire infrastructure down the line if not addressed at the very beginning.
For multi-tenancy, you have to plan carefully, giving special attention to the aspects of the project that you won't be able to redo later without starting over from scratch. Resources shared among applications are one of these aspects. If, for some reason, one of your project's shared storage backends fails, there is a significant chance that it will trigger a cascading failure across the whole environment.
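Per-namespace limits help contain that blast radius. A ResourceQuota like the illustrative one below (namespace and values are placeholders) caps what a single tenant can consume, so one runaway application cannot starve its neighbors:

```yaml
# Hypothetical ResourceQuota: caps one tenant's namespace in a
# shared multi-tenant cluster.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a        # placeholder tenant namespace
spec:
  hard:
    requests.cpu: "4"        # total CPU requested by all pods
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    persistentvolumeclaims: "5"
```

Quotas only fence off compute and storage requests, though; truly shared dependencies, like a common storage backend, still have to be isolated by design.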
Circling back to disputes like “Docker Swarm vs. Kubernetes” and “OpenShift vs. Kubernetes,” it’s true that K8s has all sorts of problems that have not yet been resolved. It’s also true that you’ll have to learn Kubernetes the hard way. It will probably cost you nerves, time, and money, but if you run a complex project, K8s may be exactly what you need and, in the long run, it could save the business a fortune.
Special thanks for contribution go to Artem, Lead Systems Engineer with EPAM Anywhere, and Maxim, Senior Systems Engineer with EPAM Anywhere.