Kubernetes isn't complicated, and I can prove it to you. But the bigger problem is if you skipped past being insulted by the claim that you aren't engineering, because that's the real thing holding this industry back.
I speak in a lot of hyperbole, but people that know me typically know that I choose my words carefully. For instance, there's a whole swath of common folk that will read the title and think I said "you just aren't an engineer." Read that difference carefully; I don't have time to walk you through how words are important.
If you aren't an engineer, nothing can help you here. If you want to do engineering, you're going to realize that Kubernetes isn't complicated by the end of this. If you've made it this far, I can give you a bone to chew on about just how uncomplicated it is: Kubernetes started as largely a collection of bash scripts proving the concept. Just think about how unmanageable that would be if it was doing anything actually complex. I'm not implying it's not more complex now; I'm saying the idea, the core of it, is mostly a few bash scripts.
No seriously, go watch Kubernetes: The Documentary. The prototype was simple, and the actual released version took 3 months for 7 main people with some help from others. That was the pitch. Gall's law applies.
Let's get back to our deficiency in thinking it's complex. The core problem is we didn't think; we were told, or we were intimidated and we assumed, but we were not given space for comparison and critical analysis. We felt it was complex, we were uncomfortable. This is very typical of us all when we have figured out a previous way to deploy our app, however trivial or real. Did you use Heroku? Heroku's entire ethos is to be simple, to just deploy your thing. Heroku also isn't real. It works because we don't have users. It works because our app isn't doing anything, so we don't have to learn anything to make it appear in a web browser.
The moment, and I mean the very second that you have to face anything real with your app, this goes out the window. The illusion is broken. We wanted to imagine it was simple and we paid the price. Now our app is real and we need to come to terms with reality while everything is on fire. Kubernetes isn't the answer, though. I'm not saying that. That's not the point here. Let's ignore Kubernetes for now and think deeply about how we solved this fact of users and uptime and reliability and just routing a request to the webserver from a place without Kubernetes. Maybe we do this instead. We'll come back to that, keep it in mind.
Take a step back. Do we know how we would deploy an application to a single server? Let's go through the list of things we need to deploy an app, manually, and have it run without us being logged into the server constantly.
- First we need a server, which means we have to install an OS and configure it.
- We need to have some way to log into the server, probably SSH, which also means we need to manage passwords or keys. Both, realistically. Even if you forget the password a moment after you've authorized the key, you save that in a vault somewhere in case you lose the key.
- Now you need a web server. Apache httpd? Nginx? HAProxy? (Have you even heard of HAProxy?) Caddy?
- OK, now you have to run the application. Node? Maybe you need an Express project. Install Node, set up your package. Python? Run your WSGI module. PHP? Fine, there's lots of options. Go? Just run the binary? Do you compile on the server, too? Ugh.
- Static frontend. OK, is it built on server? Is it aware of its domain, or does it just assume it can be served from any domain, so you can serve it on "dev.yourdomain.com" or whatever without code changes?
- Wait, what version am I running? Do I need to upgrade, or is that just an assumption of a script I run to get it to run the latest of what's in git? How do I roll back? I have to invent that?
- What if the app crashed? OK, I run it in supervisord, now it restarts. Maybe I get fancy and I run it in docker compose.
- Where's the database? Same box? Some cloud service? A separate VM I also configure? Is it publicly addressed? Oh hell, what's security like around this?
- Finally, my service is running. Wait, is it? Do I have to monitor it? Do I run Nagios? Icinga? Zabbix? Xymon? How do I tell the service is healthy? Oh, I suppose I should use all the telemetry and log aggregation stuff, right? Oh hell.
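To make one bullet from that list concrete: "restart when it crashes" plus "what version am I running, how do I roll back" can be sketched with nothing more than a systemd unit and a versioned-directory convention. This is an illustration, not a recommendation; the app name, user, and paths are all hypothetical:

```ini
# /etc/systemd/system/myapp.service  (hypothetical name and paths)
[Unit]
Description=My app, restarted automatically on crash
After=network-online.target
Wants=network-online.target

[Service]
User=myapp
# Deploy each version into its own directory and point a "current"
# symlink at it; rollback is just moving the symlink and restarting.
ExecStart=/srv/myapp/current/bin/myapp
Restart=on-failure
RestartSec=2

[Install]
WantedBy=multi-user.target
```

And that covers exactly one bullet. Every other bullet in the list needs its own answer, which is the point.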
Let's say you skipped half of that, because you run a blog. Who cares about most of this, just run it off a Raspberry Pi. Great.
Now your box died.
I want to be clear here, Kubernetes isn't involved here, and not all of this is necessary. Do you need observability? Maybe. Do you need the thing to restart when it crashes? Probably. Do you need DNS? TLS? Yes. Absolutely. How complicated is this?
The point here isn't that Kubernetes isn't complicated; the point is that Kubernetes is only as complicated as anything else that solves the same problems, but we missed that because we were too busy ignoring reality. You still have to learn something to solve all the problems you actually need to solve. That's not going away. Kubernetes isn't making it worse today; it's just different from what you know. Could it be simpler? Sure. Heroku and its replacements exist. If you don't care about your app, just use those, but when your site is down and your customers are waiting for a fix, you have to have an answer, and Kubernetes is only one of a dozen ways to solve that. Do you need to scale? OK, load balancers exist.
Let me tell you a story. A long time ago, I ran a PHP site for a company in the 2000s era. Cloud wasn't even a term yet. We had physical servers. We ran a virtual IP on VRRP across two servers running HAProxy, because it was by far the most performant. We had 12 million unique visitors every month on an ecommerce site. It needed to run. We had a redundant MySQL cluster, and it was multi-region. I built it by hand, by myself. The other devs just knew PHP. That's fine, we all have our roles, I don't diminish their contributions. I made the architecture sing. Borg (the predecessor to Kubernetes) didn't exist even internally at Google yet. This was just how things were done, in one of a dozen ways. Tools existed to make it easier for folks, like cpanel, but you gave up performance, freedom, and time/money.
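For anyone who hasn't seen VRRP in practice: a virtual IP floating across two load balancer boxes, like the setup described above, is typically expressed today with keepalived. A minimal sketch, with the interface name, router ID, and addresses all hypothetical:

```conf
# /etc/keepalived/keepalived.conf on the primary HAProxy box
# (interface, router ID, and addresses are hypothetical)
vrrp_instance VI_1 {
    state MASTER            # the backup box says BACKUP
    interface eth0
    virtual_router_id 51
    priority 100            # the backup uses a lower priority, e.g. 90
    advert_int 1
    virtual_ipaddress {
        203.0.113.10        # the shared VIP that clients connect to
    }
}
```

If the MASTER stops advertising, the BACKUP claims the VIP and traffic keeps flowing. That's the entire failover story, and it predates anything cloud-shaped.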
I want you to picture me in a colo staring at the crash cart (look it up if you don't know, it's worth it, I promise). I'm not saying this for a "back in my day" scenario. Part of this is important for understanding some basic and very recent history, but more so I want you to imagine the complexity and the day to day work. I had alerts paging me for outages of all kinds via script and xymon checks and I was on it. Huge pain, but worth it. I didn't make 6 figures, I didn't have a team, and I didn't have the luxury of being able to figure anything out with a quick search. Remember that Stack Overflow was launched in 2008 and didn't take off immediately (nor was it particularly useful for several years outside specific niches).
Now think. Think critically. Is Kubernetes more complicated than that amalgamation? Answer some questions here. What documentation did I need to make so I could handle issues at 3 AM? When I got more people to help, what training did they need? Could they read docs? Where? Who wrote those? When we did an upgrade to any of these systems, how did we do that? What was the cost? People in our industry are often satisfied with the deployment story, but we have to maintain it. How did we do that in this environment? Day 2 operations are still very important for any dev work.
Kubernetes isn't the answer to all problems. I don't want to sell you Kubernetes. This article mentions Kubernetes a lot, but it isn't about Kubernetes at all. It's about engineering. Engineering is about the lifecycle and craft of software. I'm a dev, but a lot of what I've described here is about operations. I hate operations. My career is mostly built on automating operations out of my life so I can focus on dev while holding the small cognitive load left in operations for when problems arise. Admittedly, I like Kubernetes because it allows me the greatest degree of automation. I can update DNS, TLS certs, monitoring endpoints, autoscaling, and so much more with a single API. Is it perfect? ROFL! NO! Nothing will ever be. I don't even think Kubernetes will last 10 years from now. It will be replaced with something higher leverage. I might even write it. Who knows!
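To illustrate what "a single API" means here: with cert-manager and external-dns installed in a cluster (both are add-ons, not built-ins, so this is an assumption about the setup), one Ingress manifest can drive the DNS record and the TLS certificate at once. A sketch, with the hostname and issuer name hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    # cert-manager watches this annotation and provisions the certificate
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: app.example.com   # external-dns creates this DNS record
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
  tls:
    - hosts: [app.example.com]
      secretName: myapp-tls   # cert-manager fills this Secret
```

Compare that to the bullet list earlier: DNS, TLS, and routing, which were each their own manual chore, collapse into one declarative object you can diff, review, and roll back.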
The point is that Kubernetes isn't complicated. Engineering is about trade-offs and understanding costs. People that say Kubernetes is complicated aren't doing that analysis. They aren't engineering. We should do the work. Maybe a simple Raspberry Pi is enough. Maybe it's enough for now and we need to be prepared to upgrade soon. Maybe ECS is fine. I've told a couple companies as much. Literally, "don't use Kubernetes, use Cloud Run or ECS and you'll be ready for Kubernetes when it's needed." Maybe that's the answer for you. Let's just not pretend that the problem is Kubernetes being complicated. It's time we did engineering and had a better answer.