> What's the really simple K8s?
It's k3s. You drop a single binary onto the node, run it, and you have a fully functional one-node k8s cluster.
And microk8s from Canonical.
In ascending order of functionality and how much complexity you need:
- Docker Compose running on a single server
- Docker Swarm cluster (typically multiple nodes, can be one)
- Hashicorp Nomad or K3s or other light Kubernetes distros
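For the simplest rung of that ladder, a single-server deployment can be just one file (the service name and image here are illustrative):

```yaml
# docker-compose.yml — one web service on the host's port 80
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    restart: unless-stopped
```

`docker compose up -d` on one box, and you have most of what many small apps actually need.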
Big question is which feature subset you want to replicate.
Kubernetes means everything to everyone. At its core, I think it’s being able to read/write distributed state (which doesn’t need to be etcd), with all the components (especially container hosts) able to follow said state. But the ecosystem has expanded significantly beyond that.
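That "components follow the shared state" core is the level-triggered reconcile loop. A toy sketch in Python (the state store and app names are made up for illustration; a real agent would watch etcd or any shared store and run forever):

```python
# Toy "distributed state": desired replica counts per app,
# as a container host would read them from a shared store.
desired = {"web": 3, "worker": 2}

# Actual state on this host.
actual = {"web": 1}

def reconcile(desired, actual):
    """One pass of the loop: nudge actual state toward desired state."""
    for app, want in desired.items():
        have = actual.get(app, 0)
        if have < want:
            actual[app] = have + 1   # start one container
        elif have > want:
            actual[app] = have - 1   # stop one container
    for app in list(actual):
        if app not in desired:
            del actual[app]          # garbage-collect removed apps

# Loop until converged.
while actual != desired:
    reconcile(desired, actual)

print(actual)  # {'web': 3, 'worker': 2}
```

The point is that the host never receives imperative commands; it just keeps converging on whatever the shared state says.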
> What's the really simple K8s?
I think K8s couples two concepts: the declarative-style cluster management, and infrastructure + container orchestration. Keep CRDs, remove everything else, and implement the business-specific stuff on top of the CRD-only layer.
This would give something like DBus, except cluster-wide, with declarative features. Then, container orchestration would be an application you install on top of that.
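A CRD-only layer would look much like registering a type does today. This is standard CRD syntax with a made-up group and kind, just to show how small the "declarative typed state" core is:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: queues.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: queues
    singular: queue
    kind: Queue
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                depth:
                  type: integer
```

Everything else, including container orchestration, would be controllers watching types like this.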
Edit: I see a sibling mentioned KCP. I’ve never heard of it before, but I think that’s probably exactly what I’d like.
One of the issues is that the open-source Helm charts (or whatever) for something like Grafana don't come with good config out of the box. A while back I spent a significant amount of time reading blogs to get Grafana to use up-to-date indexing algorithms, better settings, and so on.
Considering these companies make money when you use their hosted solution, this is not surprising, and it just goes to show TANSTAAFL.
I like to use operators for intra-cluster infra, they tend to offer a "sorta-managed" experience. I'll use a Helm chart deployed by ArgoCD to provision the operator, then go from there - mainly because I try to limit Helm usage as much as possible.
It's a tradeoff, of course, since operators consume some of your cluster's resources, but I get you.
Of course things can be simpler.
- Remove abstractions like CNI and CRI; just make these things built-in.
- Remove unnecessary things like Ingress; you can always deploy nginx or whatever reverse proxy directly. Probably remove persistent volumes too, since they add a lot of complexity.
- Use some automatically working database, not a separate etcd installation.
- Get rid of the control plane. Every node should be both control plane and worker node. Or maybe 3 of the worker nodes should be control plane; whatever, the deployer shouldn't have to think about it.
- Add the stuff that everyone needs: centralised log storage, centralised metric scraping and storage, a simple web UI, central authentication. It gets reimplemented in every Kubernetes cluster.
The problem is that it won't be serious enough and people will choose Kubernetes over simpler solutions.
Some people want their k8s logs to be centralized alongside non-k8s logs. Standardizing log storage seems like a challenging problem. Perhaps they could add built-in log shipping, but even then, the transfer format needs to be specified.
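On the transfer-format point, candidates do exist. A single log record in roughly the shape of the OpenTelemetry log data model (the pod name attribute follows OTel semantic conventions) looks like:

```json
{
  "timeUnixNano": "1700000000000000000",
  "severityText": "ERROR",
  "body": { "stringValue": "connection refused" },
  "attributes": [
    { "key": "k8s.pod.name", "value": { "stringValue": "web-7d4f9" } }
  ]
}
```

So the format problem is arguably closer to solved than the built-in-storage problem.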
Adding an IdP is pretty standard in k8s... What do you actually want to do differently?
I want to add users via manifests, so those users can use logins/passwords/pubkeys out of the box, without installing Dex or Keycloak or delegating to other systems.
Think about a Linux installation: I don't need an IdP to create Unix users for various people.
Right now it's super complicated in Kubernetes and even requires third-party extensions for kubectl.
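To make the ask concrete, something like this hypothetical manifest is what I mean. Note that no such built-in `User` resource exists in Kubernetes today; this is a sketch of the wished-for UX, with an invented API group:

```yaml
# Hypothetical: Kubernetes has no built-in User resource like this.
apiVersion: auth.example/v1
kind: User
metadata:
  name: alice
spec:
  passwordHash: "$argon2id$..."        # hypothetical field
  sshPublicKeys:
    - "ssh-ed25519 AAAA... alice@laptop"
  groups: ["developers"]
```

`kubectl apply` that, and alice can log in. That's the Unix-user-level simplicity I'm after.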
IMO this is what keeps people from building systems that might challenge Kubernetes. Everyone wants to say Kubernetes is too complex, so we built something that does much less. I respect that! But I think it usually fails to grok what Kubernetes is and why it's such an interesting and vital rallying point, one that's so thoroughly captured our systems-making. Let's look at the premise:
> That’s why I like to think of Kubernetes as a runtime for declarative infrastructure with a type system.
You can go build a simple way to deploy containers or ship apps, but you are missing what I think allows Kubernetes to be such a big tent, and a core useful platform for so many. Kubernetes works the same for all types, for everything you want to manage. It's the same desired-state management + autonomic systems patterns, whatever you are doing. An extensible platform with a very simple common core.
There are other takes and other tries, but managing desired state for any kind of type is a huge win that allows many people to find their own uses for kube; that is absolutely the cornerstone of its popularity.
If you do want less, the one project I'd point to that is Kubernetes without the Kubernetes complexity is KCP. It's just the control plane. It doesn't do anything at all. This to me is much simpler. It's not finding a narrowly defined use case to focus on; it's distilling the general system down into its simplest parts. Rebuilding a good, simple, bespoke app-container-launching platform around KCP would be doable, and would maintain the overarching principles that make Kube actually interesting.
I seriously think there is something deeply rotten with our striving for simplicity. I know we've all been burned, and so often we want to throw up our hands, and I get it. But the way out is through. I'd rather dance the dance and try to scout for better further futures than reject and try to walk back.
Everything in infrastructure is a set of trade-offs that work in both directions.
If you want better monitoring, metrics, availability, orchestration, logging, and so on, you pay for it with time, money, and complexity.
If you can't justify that cost, you're free to use simpler tools.
Just because everyone sets up a Kubernetes / Prometheus / ELK stack to host a web app that would happily run on a single VPS doesn't mean you need to do the same, or that this is now the baseline for running something.