Go-Safeweb

187 points, posted 4 days ago
by jcbhmr

77 Comments

pushupentry1219

4 days ago

Not sure how I feel about the HTTPS/TLS related bits. These days anything I write in Go uses plain HTTP, and the TLS is done by a reverse proxy of some variety that does some other stuff with the traffic too including security headers, routing for different paths to different services, etc. I never run a go web application "bare", public facing, and manually supplying cert files.

ongy

4 days ago

I suspect this is partially from google's internal 0 trust cluster networking.

I.e. even if the communication is entirely between components inside a k8s (or borg) cluster, it should be authenticated and encrypted.

In this model, there may be a reverse proxy at the edge of the cluster, but the communication between this service and the internal services would still be HTTPS. With systems like cert-manager it's also incredibly easy to supply every in-cluster process with a certificate from the cluster-internal CA.

-- Googler, not related to this project
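A sketch of what that zero-trust model can look like on the Go side, assuming cert-manager (or similar) has mounted a key pair and the cluster CA bundle at the illustrative paths below: the server presents its own certificate and requires clients to present one signed by the internal CA.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

// mTLSConfig builds a server-side TLS config that both presents its
// own certificate and requires clients to present one signed by the
// cluster CA. All three file paths are illustrative; in a cluster
// they would typically be mounted by something like cert-manager.
func mTLSConfig(certFile, keyFile, caFile string) (*tls.Config, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	return &tls.Config{
		Certificates: []tls.Certificate{cert},
		ClientCAs:    pool,
		ClientAuth:   tls.RequireAndVerifyClientCert,
		MinVersion:   tls.VersionTLS12,
	}, nil
}

func main() {
	cfg, err := mTLSConfig("server-cert.pem", "server-key.pem", "cluster-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	srv := &http.Server{Addr: ":8443", TLSConfig: cfg, Handler: http.NotFoundHandler()}
	// Cert and key are already in TLSConfig, so the file arguments stay empty.
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```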

kevinmgranger

4 days ago

The policy(?) change came ever since "SSL added and removed here ;-)", right?

That's when I remember seeing a broader shift towards app-terminated TLS.

cyberpunk

4 days ago

Why wouldn’t you use istio or cilium for this?

ongy

4 days ago

This might be me being daft, but I never quite understood the appeal of doing this with istio. Or maybe it's partially just the timing of when I started to care about things in the k8s world (rather recently).

My understanding of that model is that the services themselves still just do unauthenticated HTTP, this gets picked up on the client side by a sidecar, packed into mTLS/HTTPS, authed+unpacked on the server sidecar, then passed as plain HTTP to the server itself.

This is great when we have intra-host vulnerabilities, I guess. But it doesn't allow you to e.g. have code sanitizers that are strict about using TLS properly (Google does this).

And while it is a real gain over simple unauthed with untrusted network between nodes, with cilium taking care of the intra-node networking being secure, I don't quite see how the added layer is more useful than using network policies strictly.

(besides some edge cases where it's used to further apply internal authorization based on the identity. Though that trusts the "network" (istio) again.)

cyberpunk

3 days ago

For us it’s compliance related first rather than any real security upgrade; we must use mtls between all services (finance) and it’s simply less to manage to use a service mesh.

The cloud provider could read the memory of a k8s node and in theory capture the session keys of two workloads on the same node, and we can’t really protect against that without something like confidential computing.

We get some other benefits for free by using istio though like nice logs, easily sending a chunk of traffic to a new release before rolling it everywhere, or doing fully transparent oauth even in services which don’t support it (oauth2proxy and istio looks at the jwts etc).

liveoneggs

4 days ago

in the modern world, extra network hops, novel userland network stacks, and additional cycles of decrypting/re-encrypting traffic make your apps go faster, not slower.

nine_k

4 days ago

Not sure if it's ironic or not. Because it should not be.

AES-NI gives you encryption at the speed of memcpy basically. Userland network stacks are faster because they don't incur the kernel call cost. With that, if your NIC drivers support zero-copy access, an extra hop to a machine in the same rack over a 10G link is barely noticeable, may be shorter than an L3 miss.

The cost of this is mostly more hardware and more power used, but not much or any additional latency.

grogenaut

4 days ago

Why add another layer if you aren't already using istio or cilium?

cyberpunk

4 days ago

Because it’s zero-configuration auto-mTLS between all the services in your cluster (or intra-node if Cilium) instead of managing a TLS cert for every service?

sofixa

4 days ago

Zero to little configuration at the point of use, but a lot of upfront configuration, maintenance, and fun issues when you need something slightly less traditional (e.g. something that needs raw TCP or, heaven forbid, UDP). Different trade-offs for different situations.

cyberpunk

4 days ago

I still think it’s far less than managing tls per service.

Every component needs a different tls configuration, vs one time installing istio.

Raw TCP is supported by istio even with mtls, you just have to match in your VirtualServices on SNI instead of Host header.

We routinely mix tcp and http services on the same external ports, with mtls for both.

I don’t really see how UDP is relevant to a conversation about TLS

sofixa

4 days ago

> one time installing istio

And never update it afterwards?

> UDP I don’t really see how is relevant to a conversation on tls

You might have UDP services alongside your TCP/HTTP behind TLS.

cyberpunk

4 days ago

At least in our org, security lets us know when it's time to patch various components, and it's typically just a devops chore to bump a helm chart version and merge.

I don't really understand your point; you're trying to say managing a single helm release for istio is more effort than (in my case, for example) manually managing around 40 TLS certificates (and yes, we have an in-house PKI with our own CA that issues via ACME/certbot etc. also) and the services that use them? It's clearly not?

Just templating out the config files for e.g. Cassandra or ES or Redis or whatever component takes several times the effort of ./helm install istio.

sofixa

4 days ago

Istio is a notorious pain to maintain, because it has a bunch of dependencies around Kube clusters, so you can't just helm install istio every time there's a new release.

cyberpunk

4 days ago

That’s not my experience at all and I’ve run hundreds of clusters across multiple cloud providers and on bare metal.

You absolutely can helm upgrade istio, why not?

Can you give any actual examples of this?

karmarepellent

4 days ago

I have use cases for both approaches (letting a reverse proxy handle TLS, letting the application listen on an external socket and handling TLS in the application).

I find it is easier to configure an application with a reverse proxy in front when different paths require e.g. different Cache-Control response headers. At the end of the day I do not want to replicate all the logic that nginx (and others) already provide when it integrates well with the application behind it.

Other commenters suggest that both ways (with or without an additional reverse proxy) add "tons of complexity". I don't see why. Using a reverse proxy is what we have done for a while now. Installation and configuration (with a reasonable amount of hardening) is not complex, and a lot of resources exist to make it easier. And leaving the reverse proxy out and handling TLS in the application itself should not be "complex" either. Just parse a certificate and private key and supply them to whatever web framework you happen to use.
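In Go's standard library, that last step can be as small as this sketch (the file paths are placeholders for whatever your CA or ACME client writes out):

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

// newTLSServer wires a certificate/key pair into an http.Server.
// The addr and file paths here are illustrative.
func newTLSServer(addr, certFile, keyFile string, h http.Handler) (*http.Server, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	return &http.Server{
		Addr:    addr,
		Handler: h,
		TLSConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			MinVersion:   tls.VersionTLS12,
		},
	}, nil
}

func main() {
	srv, err := newTLSServer(":8443", "cert.pem", "key.pem", http.NotFoundHandler())
	if err != nil {
		log.Fatal(err)
	}
	// Empty file arguments: the certificate is already in TLSConfig.
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```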

eptcyka

4 days ago

And implement cert reloading if your application reaches any kind of respectable uptime.
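In Go, one common way to do this is via tls.Config.GetCertificate, so new handshakes pick up a rotated certificate without a restart. A rough sketch (the reload trigger, e.g. SIGHUP or a file watcher, is left to the caller):

```go
package main

import (
	"crypto/tls"
	"sync"
)

// certReloader hands out the most recently loaded certificate.
type certReloader struct {
	mu   sync.RWMutex
	cert *tls.Certificate
}

// Reload re-reads the key pair from disk; call it when your cert
// files are rotated (from a SIGHUP handler, file watcher, etc.).
func (r *certReloader) Reload(certFile, keyFile string) error {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return err
	}
	r.mu.Lock()
	r.cert = &cert
	r.mu.Unlock()
	return nil
}

// GetCertificate plugs into tls.Config; it runs once per handshake,
// so every new connection sees the latest certificate.
func (r *certReloader) GetCertificate(*tls.ClientHelloInfo) (*tls.Certificate, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.cert, nil
}
```

Wire it up with `&tls.Config{GetCertificate: reloader.GetCertificate}` instead of a static `Certificates` slice.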

effdee

4 days ago

The phrase "SSL added and removed here" from an NSA slide comes to mind.

pushupentry1219

4 days ago

To be clear I meant something like Caddy, or nginx not a solution like cloudflare or GCP doing my TLS

arccy

4 days ago

once you outgrow a single machine, unsecured network connections become an issue again

bayindirh

4 days ago

While I understand the sentiment, this makes bare installations too hard.

A big project not handling HTTPS itself (like docmost) adds tons of complexity on the server side. Now I have to install that service as a container to isolate it, then add a reverse proxy on top, etc.

That leads to resource inflation when I just want to use a small VM for that single task. Now, instead I deploy a whole infrastructure to run that small thing.

NhanH

4 days ago

Handling HTTPS in the project also adds tons of complexity in the long run though: TLS/SSL library versions, cert handling. Instead of having one way to deal with all of them (at the proxy layer, or sometimes at the network layer), I have to deal with each individual piece of software's way of managing those.

kortilla

4 days ago

I think you’re vastly overestimating the complexity of pointing a TLS library to a CA.
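For what it's worth, on the Go side that amounts to a few lines; a sketch, with an illustrative CA bundle path:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

// clientForCA returns an HTTP client that trusts only the given CA
// bundle (PEM), e.g. an internal/company CA.
func clientForCA(caFile string) (*http.Client, error) {
	pem, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		return nil, fmt.Errorf("no certificates found in %s", caFile)
	}
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}, nil
}
```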

hnlmorg

4 days ago

Their point isn’t about the complexity of installing a certificate. It’s doing it successfully and securely at scale.

Everything is easy until you have to do it at scale.

9dev

4 days ago

Do that for a bunch of different applications and you hit interesting issues. For example the Java TLS stack, which doesn’t accept a PEM certificate on its own, but needs the full certificate chain. Kibana, however, requires the full certificate chain including the root certificate, which isn’t usually a part of the certificate itself, and Elasticsearch complains about an invalid certificate if you point it to the same one.

So even for two apps from the same vendor, which are commonly deployed together, you need bespoke TLS file variants. Scale that to more applications, and you’ll find out the hard way that you are vastly underestimating the complexity of operating a software ecosystem.

kortilla

3 days ago

I’ve done it. What you’re describing is like an hour of work. Moving TLS outside of the application is possibly the dumbest reason to spend the resources and complexity on a side car.

thayne

4 days ago

For a single application, it's not too bad. When you have dozens of applications that all have different mechanisms to install a CA, rotate certs, etc., and some of those don't have a good way to automate rotating the certs, then it becomes a pain.

bayindirh

4 days ago

I don't think so. We run a horde of machines and a plethora of services, which are custom and/or have a very narrow install base due to the niche they serve.

99% of them use system-wide PKI store for CA and their certificates, which is under /etc. All of them have configuration options for these folders, and have short certificate lives because of the operational regulations we have in place.

In the worst case, we distribute them with saltstack. Otherwise we just use scp, maybe a small wrapper script. Managing them doesn't take any measurable time for us.

...and we have our own CA on top of it.

thayne

4 days ago

> 99% of them use system-wide PKI store for CA and their certificates, which is under /etc.

Consider yourself lucky then.

For self-hosted third party software, I've seen requirements to provide it in an environment variable, upload it to a web form over plain http on localhost, specify an AWS secret service secret that contains it, put it in a specific location that is bind mounted into a container, create a custom image (both VM and container), etc.

bayindirh

4 days ago

I mean, tons of "old-school" services handle these things fine for the last two decades, at least. It can't be that hard. It's just a TLS library, and some files in a specific format at the end of the day.

redundantly

4 days ago

In my experience, most people have an extremely hard time wrapping their minds around how to configure TLS/HTTPS services and fail completely at understanding how it works.

yjftsjthsd-h

4 days ago

> Now, I have to install that service as a container to isolate that, then need to add a reverse proxy on top, etc.

Why? I've run plenty of normal non-containerized apps that bind localhost:1234 and then are reverse proxied by nginx or caddy or whatever.

(I agree that you would need a reverse proxy, I think that's kinda the point, it's the container thing I don't get)

bayindirh

4 days ago

Because some of the applications are "container native" and do not support configuration of IP/Port binding. Why? UNIX philosophy and working traditions be damned.

Exhibit A: Docmost: https://docmost.com/docs/self-hosting/environment-variables

I can understand the reverse proxy, but I want to run things on a small VPS or a Raspberry Pi, etc. I want to use minimum resources so I can run more things per server.

If this thing were being installed at work, I could build a great wall in front of it, but for personal things, I'd rather have fewer moving parts and just deploy and forget the thing.

sofixa

4 days ago

> Because some of the applications are "container native" and do not support configuration of IP/Port binding. Why? UNIX philosophy and working traditions be damned.

I'm not sure I follow. Docmost runs in a container, on a port which is configurable. By default, as with all containers, that port is local to the container. The container orchestrator (be it the docker CLI, docker-compose, Swarm, Nomad, Kubernetes, or Podman) is the one you instruct to expose a port from the container network to your host network. Docker tries to be smart and easy and will expose it on all interfaces by default, but even that is configurable, let alone on the more advanced options.

bayindirh

4 days ago

Let me try to explain myself clearer.

First, any enterprise/homelab installation with significant resources is outside of my scope. I don't care about them. When you don't pay for the infra/power cost, all bets are off. You can install a quadruple-redundant 300-node K8s cluster sitting in two different data centers in different cities, connected to their own power generation equipment, for a simple docmost instance.

I'm talking about small fry installations, for personal reasons, on small equipment. Think Raspi5/N100 NUC boxes as the target hardware.

I have limited resources, want no heat/noise at my home, or want to pay as little as possible for a good VPS. Now, for something like docmost, I need a container runtime and an ingress controller/reverse proxy, because I want to open this service to a VPN or the web somehow.

I don't have much qualms with containers, as long as there are no cut corners. We'll talk about two examples here.

First is Wiki.js. Comes with 3 containers: DB, Wiki, Updater. OK. They are not very resource-heavy, so acceptable. On top, the Wiki container handles its own HTTPS and TLS certificates via Let's Encrypt (and allows custom ones as well). It's the nice one. I can expose it or hide it; I can handle my certificates or it can handle them itself. Nice, batteries included. Install in 10 seconds, update occasionally, enjoy.

Second is docmost. It's a nice tool. Allows me to create a space and put a bunch of people in to collaborate. Installs on a desktop in 10 seconds and works. Nice. I want to open it to the outside. Forward a couple of ports? No. There's no HTTPS. Now I need to terminate TLS somewhere. Traefik? Too big for the job. Apache? Doable, but needs half an hour from the get-go. Can I limit it to localhost for local installations, for myself? No. As long as I run it, it binds to 0.0.0.0. So, if I want to install it locally to my desktop, I still need to add a firewall in front of it, because the whole LAN can see it, and no, I don't trust any devices on my network (not people, but the apps).

This is my problem. I want batteries-included solutions which can adapt to my circumstances. I don't have the desire to run n servers for m services because they are too lazy to be flexible and adapt to their environments. It's not that I can't do it, quite the contrary, but I don't want to, because I want to live my life and have time for the people and things I care about.

The services I use must be able to adapt to my environment. This is how bare-metal services and daemons in the UNIX world work. I can fit a whole rack of services in a powerful enough server and isolate them if required. It saves space, power, time and sanity. Littering servers with services in boxes, and putting more boxes on top of that because their developers are lazy, makes my blood boil.

gr4vityWall

4 days ago

> Think Raspi5/N100 NUC boxes as the target hardware.

I don't think those would be negatively impacted by running Docker or Caddy. Isn't the performance cost of containers minimal these days?

I had similar thoughts in the past, but when I looked up performance comparisons between running something on docker vs without a container, the resulting performance was practically identical.

bayindirh

4 days ago

The problem is not containerization per se; I run containers both on my personal systems and at work. With aggressive compiler optimization, the performance difference can be trimmed down to 2%. I think for most of us it's in the 5%-6% band, and that's OK for non-loaded servers.

My qualm is about trimming fairly standard features and offloading them somewhere else.

A single HTTP service + Traefik (or an Apache/NGINX reverse proxy) is heavier than a single HTTPS service. Plus it adds more moving parts for smaller installations. If I were running an API farm, I could add all kinds of bells and whistles and it would be lighter overall, but this is not a valid reason for stripping fairly simple features from applications which will be used by small teams on small hardware.

Plus, these additional layers can sometimes conflict (A requires B, C requires D, where B & D are the same thing but neither can accommodate A & C at the same time), requiring a completely new system to run the service, which is wasteful from my perspective.

gr4vityWall

4 days ago

> A single HTTP service + traefik (or Apache/NGINX reverse proxy) is heavier than a single HTTPS service.

How heavy are we talking, and what would be the measured impact?

I have worked on small teams (3~4 people) where we had to use our own infrastructure for regulatory reasons. I also self-host a few things as a hobby. I don't think Nginx or Caddy were ever a bottleneck, and at the human level, they saved more resources than not using them. I don't remember the last time I exposed something to a network using their bundled http server directly rather than a reverse proxy. I don't like wasting computational resources of course, but % wise, 'optimizing' them by not using containers or a reverse proxy wouldn't net any visible gains - there's usually other low hanging fruit that gives you more for your time.

sofixa

4 days ago

IO performance is the most severely impacted, but it's still fine for most uses.

sofixa

4 days ago

> Traefik? Too big for the job. Apache, doable but needs half an hour from get go.

What do you think Traefik is, if you think Apache is OK? It's similar in size and footprint, just modern (you can point it directly at Docker or any number of sources and it auto-configures itself). I've run Traefik with Docker containers behind it on Raspberry Pi 3s; this isn't supercomputer territory.

> As long as I run it, it binds to 0.0.0.0. So, if I want to install it locally to my desktop, I still need to add a firewall in front of it, because whole LAN can see it, and no, I don't trust any devices on my network (not people, but the apps)

https://stackoverflow.com/questions/22100587/docker-expose-a...

Come on, this took literally a few seconds to Google.

> This is my problem. I want batteries included solutions, which can adapt to my circumstances.

Which is it, batteries included or flexible? It's hard to be both, and funny you complain about that, Traefik is a perfect example of a tool that does both well.

> The services I use must be able to adaptable to my environment. This is how bare-metal services, daemons in UNIX world work

Ah, so you don't need systemd/an init and service system? Or libc? Or iptables/firewall? You just plop an app and everything magically works how you want it to?

I think you have fundamental knowledge gaps, and instead of trying to address them, or think about why would a project prefer to not reinvent the wheel everyone already has anyway, you prefer to rant that it wasn't like this in the good old days. Good old days with cgi-bin and php-fpm that needed a reverse proxy in front anyways, so nothing has changed other than an abundance of documentation and examples and flexibility.

rjh29

4 days ago

Global vs local variables.

nirui

4 days ago

> Now, I have to install that service as a container to isolate that, then need to add a reverse proxy on top, etc.

You can setup a Traefik (or some other ingress service) instance in a container and let it handle all the reverse proxying thingies for you. And if you do it right, the services should automatically register to the ingress service as they start up, and a port/HTTP route should be automatically assigned to them.

Doing it the old bare-metal way is harder and probably always will be, since you directly interact with OS facilities that were probably designed for something other than what you're trying to run. Container management services such as Docker and Kubernetes abstract away a lot of these complexities.

Funny enough, Traefik is written in Go... guess we've gone some (maybe not full) cycle on this one.

bayindirh

4 days ago

Honestly, doing things on a bare server and interacting with the OS is easier because it involves less moving parts and everything is in a more accessible state.

Containers are not bad per se, but cutting corners just because "this will run in a container, so they can just add another HTTPS terminator" is just carelessness IMHO. Because not all of us have homelabs at home to install an onion of services to run a simple service open to outside.

A good example of this is Wiki.js. It's designed as container-native, but handles its own ingress, HTTPS and Let's Encrypt certificates. I have no qualms with it, but when another tool just cuts corners and tells you "It's easy to install, but bring your own secure ingress layer on top", it gets ugly.

Because it adds moving parts and, most importantly, wastes resources for a 3-person installation on small hardware, etc. Keep in mind, these are tools designed for small user bases. They're not enterprise software.

On my day job, we call 80 machine clusters "small". But this is not about things I install/manage at my job.

treflop

4 days ago

While I am not ready to recommend that everyone install Traefik, this is false.

You can get a single node Docker “cluster” going with Traefik in 15 seconds. There is no maintenance except updating occasionally. It doesn’t use much more resources. You do not need to install any third party tools. There is no onion of services. You literally just boot up Traefik plus your app.

This has been doable since at least 2019 by just installing Docker via your OS’ package manager.

I’ve started using containers before 99% of people and so got to see the fundamentals build up. You do not need to skip directly to “Kubernetes.” That’s like needing to wash your clothes so you skip directly to buying an industrial washing machine and then lamenting how all washing machines are overkill.

bayindirh

4 days ago

Traefik plus my service is two layers. My service has a DB hidden behind it, it's three layers. I put a VPN in front of it, and now it's four.

My service doesn't take much resources, also the DB I use is light by itself. I added traefik, which is also light, and the VPN daemon which is also light.

However, these four layers are not light. They're heavier. I'd rather not have Traefik in front and have a lighter stack, because for that many resources, I can run another server on another port, which can save me another VPS (money, maintenance time, documentation and interconnection).

I mean, we were using jails before Linux had containers. I'm not new to system administration or computers in general.

I don't get angry because things are complicated/hard. I get angry because we waste resources and write bad software because we somehow think "worse is better".

Things add up. Light becomes heavy, easy becomes meaninglessly complex. This shouldn't be like that.

nirui

4 days ago

Are you mad because everybody is downvoting your 42133422 post to shit, so you're downvoting everyone who replied to you? Don't do that, because it will only give you more downvotes.

> it involves less moving parts

Every sane person who has experienced a few server updates would not make a claim like that. One day you run `apt upgrade -y`, and minutes later 30% of your clients' PHP websites have somehow gone down? "Less moving parts"?

In fact, Docker was invented to address this exact "it worked on my machine, why not yours" problem, by actually introducing "less moving parts".

> cutting corners just because "this will run in a container, so they can just add another HTTPS terminator" is just carelessness IMHO

Stop trying to misrepresent things. No one is trying to solve a problem by wrapping an HTTPS terminator in "another HTTPS terminator". Also, for real, how can you terminate HTTPS twice?

Also, from your another post:

> Traefik plus my service is two layers. My service has a DB hidden behind it, it's three layers. I put a VPN in front of it, and now it's four.

Why stop there, if you start from user's keyboard as a layer, then there's 5, and screen 6, then countless Internet cable and routers. Man! so many layers, it's crazy!

Also, that's not how you structure things. You have a reverse proxy, and then a service, 2 layers total.

And VPN? Oh boy you'll probably be surprised that Docker has cross-node communication built right in (https://docs.docker.com/engine/network/drivers/overlay/), so you don't even need VPN to connect to your database.

Alright, alright, please, just admit that containerization is currently not a technology you can take full advantage of, and then go ahead and learn more about it. It will be a useful tool for you, as it has been for many, I'm sure.

bayindirh

4 days ago

> Are you mad

No!?

> everybody is down voting your 42133422 post to shit

They can, we don't have to agree on everything.

> so you're down voting everyone who replied you?

You can't downvote replies to your own comments, plus I don't downvote people because they have a different view than mine. Heck, I don't downvote as a principle, unless the comment needs a flag, too.

> No one is trying to solve a problem by wrapping HTTPS terminator with "another HTTPS terminator"

Ow. I say "people don't add HTTPS functionality to their code because they expect it to be done elsewhere, and that's bad practice". Do you read what I have written?

> Why stop there, if you start from user's keyboard as a layer...

Can you just calm down?

> And VPN? Oh boy you'll probably be surprised that Docker has cross-node communication built right in (https://docs.docker.com/engine/network/drivers/overlay/), so you don't even need VPN to connect to your database.

Who said I use a VPN to connect to my database? I have a closed/dark network of hosts on different locations sitting behind a NATs forming a personal network. This is what that VPN is about.

> Alright, alright, please, just admit that currently containerization is not a technology you can correctly take advantage of, and then you go ahead and learn more about it. It will be an useful tool for you, as it did for many, I'm sure.

Alright, alright, please just admit that you didn't understand a single bit of my posts and can't paint a picture of what I'm talking about, and then go ahead and read it all again. It will be a useful exercise for you, as it has been for many, I'm sure.

Now I need to clean my keyboard of this coffee, and need to handle my OpenStack cluster. Oh, boy...

Thank you.

:)

user

4 days ago

[deleted]

nox101

4 days ago

What does this mean?

> NG1: Safe API Completeness
>
> Creating safe APIs for all the corner cases might result in a bloated codebase. Our experience shows that this isn’t necessary.

To me "Safe API" means it's hard to use the API incorrectly. I see lots of poorly designed APIs where it's easy to use the API incorrectly and then you don't know you've used it incorrectly until (1) the edge case you weren't aware of appears (2) you get pwnd by an exploit because you used it wrong.

ongy

4 days ago

My reading is that it's supposed to provide an API that's safe to use, but might not let you do everything technically possible within the HTTP spec.

kele

4 days ago

That was the intention of that paragraph.

kittikitti

4 days ago

The only reason this is noteworthy is that a bunch of big tech employees worked on it.

ssahoo

4 days ago

This is basically the Helmet of Go?

EvanHahn

4 days ago

Helmet author here. This is a similar project, but not the same.

Helmet covers HTTP response headers, and that's it. This project seems to have a wider scope, covering things like auth and error handling.

efilife

2 days ago

Your last comments were 10 months ago and 10 years ago. A rare occurrence. HN at its finest

zeroq

4 days ago

I always thought that the "rubber" was a missed opportunity.

caust1c

4 days ago

[flagged]

kyrra

4 days ago

Any project that does not have official headcount gets that tagline. Even some projects that are funded internally may get that moniker externally, as there's no guarantee they will be maintained.

Most of the time these are projects that individual engineers go through the pain of open sourcing.

L3viathan

4 days ago

> as there's no guarantee it will be maintained

As opposed to official projects like Allo, Wave, Reader, etc. that will?

kyrra

4 days ago

I'm talking open source here. When it's funded, there will be people to help fix issues and put out new releases. This includes things like Go, Flutter/dart, k8s, and others.

Yes, any of those could be defunded at any moment if it was no longer advantageous for Google to support it, but for now, they are getting support and new releases.

sofixa

4 days ago

> Should google put this on all their products

They have a few products you pay for and that are thus supported (for consumers, YouTube Premium and Google One, which is Drive storage + support; for organisations, Google Cloud and Google Workspace), but for everything else, yeah. Unless you pay for One, you get no support from them on the random free stuff like Keep or Maps - you get what you pay for (in reality much more than that; something like Maps is a massive effort everyone can use for free).

denysvitali

4 days ago

As a Google One customer, I can guarantee that you get no support from them either.

The real support comes from talking with someone who works at Google and can direct you to the right team...

warkdarrior

4 days ago

It's an open-source library (or collection of libraries) for Go HTTP servers. Apache license, so it does not matter if Google supports it or not.

HumanOstrich

4 days ago

Until you have an equivalent team of volunteers ready to maintain the project, I think it does matter.

jerf

4 days ago

It's not really that sort of project. In principle, this is all stuff you should already be doing. But, statistically speaking, $YOU aren't. Even if they disappear today, you're probably still better off picking this up and starting with it as-is than you are with your current server.

While I'm sure this is the sort of project that could benefit from ongoing improvements, it is also not going to decay away into utter uselessness if nobody commits to it in a week or two.

I understand the need for projects to be maintained, but I think some people have been badly burned by the Javascript world, or possibly some other language environments, and don't realize that those environments aren't the norm, but one of the extremes. Go is quite possibly at the other extreme. It does not generally decay. Again, I'm not saying that means it's totes cool to pick up a security-based project with a last commit from seven years ago and just assume it's still cutting edge, but this sort of Go code doesn't require a dozen commits a month just to tread water.

HumanOstrich

4 days ago

As someone stuck in JavaScript hell, I think you have a great point. How nice it would be to not touch a project for a year, come back, and not have to rewrite it just to upgrade dependencies.

azornathogron

4 days ago

A team of 1 volunteer would already be equivalent to the resources that Google puts into this particular project.

asgr

4 days ago

[flagged]