Show HN: GitOps Template for Kubernetes

66 points | posted 3 days ago by pmig | 33 comments

vanillax

3 days ago

" It also works work existing Kubernetes clusters if just want to use GitOps for some applications. Just make sure that the argocd and glasskube-system namespaces are not yet in use. See: https://github.com/glasskube/gitops-template/ "

I assume this statement is for running this?

    glasskube bootstrap git --url <your-repo> --username <org-or-username> --token <your-token>

I think I'd like to understand what the Argo CD / GitOps template is and how it's different from argocd-autopilot. Maybe some pictures of how Argo is deploying apps, etc.

linkdd

3 days ago

IIUC, it's basically "manage your Glasskube packages from Git, thanks to ArgoCD".

The `glasskube install` command does a bunch of stuff that ends up as resources in your Kubernetes cluster, which are then interpreted by the Glasskube operator.

The "GitOps template" makes use of Argo CD and Git to do what `glasskube install` would have done.

hadlock

3 days ago

Thanks. It sounds more like Glasskube is a plugin for ArgoCD, IIUC.

I am not super thrilled about critical applications like Argo getting a plethora of plugins, or we end up looping back to Jenkins and plugin hell.

linkdd

3 days ago

Off-topic: To be honest, after trying almost all the CI/CD offerings out there (CircleCI, GitHub Actions, GitLab CI, Travis, etc.), I've started to believe that none of them actually did it better than Jenkins (despite all its flaws).

On-topic: Glasskube isn't really an ArgoCD plugin as it can work standalone, but in 2024, can you really propose a package manager for k8s without having some integration with ArgoCD and GitOps in general?

If you want to migrate, having interoperability between the tools can make the process smoother. And if you don't want to migrate but still want to benefit from a centralized, curated, audited repository of Kubernetes packages, so that your "Powered by ArgoCD" GitOps setup is easier to manage, that's what the GitOps template proposes.

In Debian, you can just `apt install <that big thing i don't want to write a deploy script for>`. Imagine doing that with the usual big operators you want in your cluster (cert-manager, a HashiCorp Vault operator, Istio or an NGINX ingress controller or Envoy or ...).
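With Glasskube the idea is that it stays just as short (assuming the package exists in the package repository; cert-manager is one of the examples above):

    glasskube install cert-manager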

mtndew4brkfst

2 days ago

ArgoCD is not synonymous with GitOps (for Kube or in general) nor do they have a monopoly on effective implementation thereof.

I certainly prefer FluxCD, myself.

pmig

3 days ago

Thanks linkdd. Exactly: Glasskube in "GitOps mode" will output these package custom resources as YAML so you can commit them to Git, and Argo pulls these resources from Git into the cluster.
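To make that concrete, a committed resource looks roughly like this (a sketch; the exact apiVersion and fields depend on your Glasskube version):

    apiVersion: packages.glasskube.dev/v1alpha1
    kind: ClusterPackage
    metadata:
      name: cert-manager
    spec:
      packageInfo:
        name: cert-manager
        version: v1.14.2+1  # pinned in Git, so update automation can propose bumps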

pmig

3 days ago

I just set up an Argo CD Autopilot repository (https://github.com/pmig/argocd-autopilot) as a comparison. Autopilot gives you a great opinionated scaffold for internal applications.

Our template already includes update automation through the Renovate integration, and easier integration of third-party applications.

vanillax

3 days ago

I mean, Renovate on GitHub is one file and an app integration. It takes very little effort to set up. What exactly do you mean by easier integration of third-party apps? Why wouldn't someone just use https://operatorhub.io/?

pmig

3 days ago

If you are not on OpenShift, OLM and the Operator Hub are quite overkill. I wrote a more detailed explanation on our website: https://glasskube.dev/docs/comparisons/olm/

TL;DR: If you are already using OpenShift, make use of the Operator Hub; otherwise Glasskube is the more lightweight and simpler solution with similar concepts.

vanillax

3 days ago

Huh? You don't need OpenShift to use Operator Hub.

    kubectl create -f https://operatorhub.io/install/flink-kubernetes-operator.yam...

pmig

3 days ago

Yes, you can get started by executing this command.

Our `bootstrap git` subcommand is similar to argocd-autopilot. I'm giving it a try right now to be able to better state the differences and follow up on this question.

raffraffraff

3 days ago

How would I integrate it into an existing setup that uses tools like Terraform, along with Helm?

See, there's a bunch of stuff that I deploy using Terraform (VPC, DNS, EKS) and a bunch of stuff that I deploy with FluxCD. But in between, there's some awkward stuff like the monitoring stack that requires cloudy things and Kube things, and they're tightly coupled.

Right now I end up going with Terraform and the awful Helm provider. Many of these Helm charts have sub-charts, nested values, etc., but thankfully the monitoring stack doesn't change that much. It's still not ideal, but it works as a one-shot deployment that sets up buckets, policies, IAM roles for service accounts, more policies for the roles, and finally the Helm charts with values from the outputs of those cloud resources.

pmig

3 days ago

Instead of using Terraform's Helm provider, you can simply install the Glasskube package controller and provision the package custom resources via Flux. This is also how we manage our internal clusters.
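A minimal sketch of the Flux side, assuming the package custom resources are committed under ./packages in the Git repository Flux already watches:

    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: glasskube-packages
      namespace: flux-system
    spec:
      interval: 10m
      sourceRef:
        kind: GitRepository
        name: flux-system
      path: ./packages
      prune: true  # clean up packages that were removed from Git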

I recommend giving Glasskube a try with Minikube locally and joining our Discord to interact with the community.

raffraffraff

3 days ago

Does Glasskube create any of the AWS resources? Or does it have a Terraform provider that's better than the Helm provider? If neither is a "yes", then I didn't get my point across, or you didn't parse out the important point.

I want a single code project that describes and deploys my monitoring stack. With Terraform and the Helm provider, I can create cloud resources with Terraform and deploy the Kube resources using the Terraform Helm provider, using values that come from the outputs of the Terraform cloud resources, in a single operation.
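For concreteness, the pattern I mean is roughly this (made-up resource names, chart value path from memory):

    resource "aws_s3_bucket" "loki_chunks" {
      bucket = "my-loki-chunks"
    }

    resource "helm_release" "loki" {
      name       = "loki"
      repository = "https://grafana.github.io/helm-charts"
      chart      = "loki"

      # the output of the cloud resource feeds straight into the chart values
      set {
        name  = "loki.storage.bucketNames.chunks"
        value = aws_s3_bucket.loki_chunks.bucket
      }
    }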

I don't think Glasskube can replace Helm in this instance. Wouldn't I have to split my monitoring stack into cloud ops and Kube ops, and manually paste outputs from Terraform into Glasskube configs?

momothereal

3 days ago

What is awful about the Helm provider for Terraform?

raffraffraff

3 days ago

When you make a change to a gigantic values YAML, it shows you the worst possible diff: the entire block removed, a whole new block added, even for a one-line change.

https://github.com/hashicorp/terraform-provider-helm/issues/...

Any time I touch the monitoring stack, which has several Helm releases with large values blocks (kube-prometheus-stack, Promtail, Loki, Mimir), it's absolutely nightmarish. The plan can be hundreds of lines that have to be diffed manually.

linkdd

3 days ago

I'd say both Helm and Terraform :)

raffraffraff

3 days ago

;)

I've tamed Terraform, mostly. Individual providers can be awful, though.

slederer

3 days ago

Very interesting. Where do you see more value:

a.) in Kubernetes setups that operate the same software stack, with ongoing updates and regular releases, or

b.) in Kubernetes setups that frequently install new or diverse software?

pmig

3 days ago

A package manager for a software stack that does not change often (a) can move managed services like databases, message queues, or even more complex observability tools inside the cluster with the same convenience as a managed service.

If your setup becomes more complex and changes often (b), I would recommend breaking it up into smaller pieces.

For both scenarios it makes sense to use Git to keep track of revisions and previous states of your Kubernetes cluster, and to incorporate review processes with pull requests.

cianuro_

3 days ago

Can’t this be achieved using the app of apps pattern in ArgoCD?

pmig

3 days ago

Teams can also build applications with the app-of-apps pattern in Argo CD or with Flux Kustomizations; both feature concepts of dependencies. But those packages can then not be published and shared between different clusters and organizations that don't share a Git repository.

linkdd

3 days ago

The ultimate goal is a centralized, curated, and audited repository, similar to Debian's official apt repository.

Sure, you could make your own .deb, your own repository, and manage dependencies yourself. But do you really want to?

esafak

3 days ago

How do you handle updates, manually?

iwwr

3 days ago

Why do you need an in-cluster operator/pod?

pmig

3 days ago

There are multiple reasons and limitations with current tooling that we want to overcome.

We have abstracted all packages as custom resources and have a controller that reconciles these resources, which (1) enables drift detection. Additionally, we use admission controllers (2) to validate dependencies and package configurations before they are applied to the cluster. The custom resources also store and update the status of installed packages.

iwwr

3 days ago

Genuinely interested: what problems did you have with the standard reconciliation mechanism provided by ArgoCD and by k8s itself? I understand the advantage of the operator approach, but it might be hard to show the state in ArgoCD, and it somewhat breaks the idea of GitOps.

Can we benefit from your project in a more limited but agentless way? Limiting the types and CRDs we allow in k8s makes operations better, especially with the aggressive upgrade cycle that k8s already imposes.

pmig

3 days ago

A deeper integration into Argo CD (similar to how Helm is integrated) will be needed in order to display all status conditions.

I don't think the idea of GitOps is broken: if the Glasskube package controller and all custom resources are versioned, you will always get a reproducible result.

> Can we benefit from your project in a more limited but agentless way?

We are building a central package repository with a lot of CI/CD testing infrastructure to increase the quality of Kubernetes packages in general: https://github.com/glasskube/packages

rajishx

2 days ago

Pretty interesting :)

Is Glasskube a reboot of Jenkins X?