Golang optimizations for high‑volume services

67 points, posted 2 months ago
by der_gopher

13 Comments

ad_hockey

2 months ago

I've been thinking about trying an alternative JSON library, but I'm interested to hear opinions on whether jsoniter is still recommended. There are 208 open issues on the repo, and an open question about whether it's still maintained[1].

I'd particularly like to know if anyone has done a performance comparison with the new API coming in the stdlib[2], which feels like a better bet. That blog post says:

The Marshal performance of v2 is roughly at parity with v1. Sometimes it is slightly faster, but other times it is slightly slower. The Unmarshal performance of v2 is significantly faster than v1, with benchmarks demonstrating improvements of up to 10x.

[1] https://github.com/json-iterator/go/issues/706

[2] https://go.dev/blog/jsonv2-exp
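In case anyone wants to repeat the comparison, here's a rough sketch of the kind of benchmark I have in mind, pitting jsoniter's drop-in config against the stdlib. The Payload type and sample JSON are placeholders; once GOEXPERIMENT=jsonv2 is available to you, a third case against encoding/json/v2 could presumably be added the same way.

    // bench_test.go: run with `go test -bench . -benchmem`
    package bench

    import (
        "encoding/json"
        "testing"

        jsoniter "github.com/json-iterator/go"
    )

    // Payload is an invented example; swap in your own request/response shapes.
    type Payload struct {
        ID    int64    `json:"id"`
        Name  string   `json:"name"`
        Tags  []string `json:"tags"`
        Score float64  `json:"score"`
    }

    var raw = []byte(`{"id":42,"name":"alice","tags":["a","b","c"],"score":9.5}`)

    func BenchmarkStdlibUnmarshal(b *testing.B) {
        for i := 0; i < b.N; i++ {
            var p Payload
            if err := json.Unmarshal(raw, &p); err != nil {
                b.Fatal(err)
            }
        }
    }

    func BenchmarkJsoniterUnmarshal(b *testing.B) {
        api := jsoniter.ConfigCompatibleWithStandardLibrary
        for i := 0; i < b.N; i++ {
            var p Payload
            if err := api.Unmarshal(raw, &p); err != nil {
                b.Fatal(err)
            }
        }
    }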

jftuga

2 months ago

I'd be curious to know transactions per second (or other metrics) before and after the suggested changes.

theHurzzen

2 months ago

Indeed. The post would be more interesting with proper metrics to back up the impact of each change.

aranw

2 months ago

I'm currently working on a project that uses an OpenAPI library which decided to use a non-standard JSON encoder. The developer experience definitely suffers when you can't use common encoding/json patterns in your own code. Simple operations become unnecessarily awkward.
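Roughly the kind of thing I mean: patterns like struct tags and a custom MarshalJSON method that the standard encoder honors, but that a non-standard encoder may silently ignore. The Event and Timestamp types here are made up for illustration.

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // Timestamp controls its own wire format via the json.Marshaler interface,
    // which encoding/json picks up automatically.
    type Timestamp time.Time

    func (t Timestamp) MarshalJSON() ([]byte, error) {
        return json.Marshal(time.Time(t).UTC().Format(time.RFC3339))
    }

    type Event struct {
        Name string    `json:"name"`
        At   Timestamp `json:"at"`
    }

    func main() {
        out, _ := json.Marshal(Event{Name: "deploy", At: Timestamp(time.Now())})
        fmt.Println(string(out)) // {"name":"deploy","at":"..."}
    }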

foobarkey

2 months ago

Interesting tips; I've been looking into Go performance recently as well. However, making sure the Postgres WAL doesn't grow seems like putting an unnecessary constraint on things and then defeating it.

vrnvu

2 months ago

My first thought: controlling allocations and minding constraints is, honestly, engineering that all services should care about, not only "high-volume" ones.

ashf023

2 months ago

I'm definitely in favor of not pessimizing code and assuming you can just hotspot-optimize later, but I would say to avoid reusing objects and sync.Pool unless it's really necessary. Go doesn't provide any protections around this, so it does increase the chance of bugs, even if it's not too difficult to do right.
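To make the bug risk concrete, here's a minimal sketch (bufPool and render are invented names); the comments mark the two places where nothing in the language stops you from getting it wrong.

    package main

    import (
        "bytes"
        "fmt"
        "sync"
    )

    var bufPool = sync.Pool{
        New: func() any { return new(bytes.Buffer) },
    }

    func render(name string) string {
        buf := bufPool.Get().(*bytes.Buffer)
        buf.Reset() // easy to forget: a pooled buffer may still hold a previous caller's data
        fmt.Fprintf(buf, "hello %s", name)
        out := buf.String() // copy out before Put; keeping buf.Bytes() after Put races with the next user
        bufPool.Put(buf)
        return out
    }

    func main() {
        fmt.Println(render("world"))
    }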

Yokohiii

2 months ago

What are the options? Repeated allocations are a huge performance sink.

ashf023

2 months ago

I mean, do it if it's worth it. But the parent seemed to imply everyone should be doing this kind of thing. Engineering is about tradeoffs, and sometimes the best tradeoff is to keep it simple.
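For example, before reaching for sync.Pool, a capacity hint or a buffer that never leaves the function often gets most of the win with none of the sharing hazards. The Row type and function names below are invented for illustration.

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
    )

    type Row struct {
        ID   int64  `json:"id"`
        Name string `json:"name"`
    }

    // Preallocate when the length is known up front, so append never grows and copies.
    func ids(rows []Row) []int64 {
        out := make([]int64, 0, len(rows))
        for _, r := range rows {
            out = append(out, r.ID)
        }
        return out
    }

    // Reuse one buffer across iterations by resetting it; it never escapes this
    // function, so there is no pooling or locking to reason about.
    func encodeRows(rows []Row) ([][]byte, error) {
        var buf bytes.Buffer
        enc := json.NewEncoder(&buf)
        payloads := make([][]byte, 0, len(rows))
        for _, r := range rows {
            buf.Reset()
            if err := enc.Encode(r); err != nil {
                return nil, err
            }
            payloads = append(payloads, append([]byte(nil), buf.Bytes()...))
        }
        return payloads, nil
    }

    func main() {
        rows := []Row{{ID: 1, Name: "a"}, {ID: 2, Name: "b"}}
        fmt.Println(ids(rows))
        if payloads, err := encodeRows(rows); err == nil {
            fmt.Println(len(payloads), "payloads encoded")
        }
    }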

Yokohiii

2 months ago

Your initial judgement about using sync.Pool is quite overboard. The average Go dev will spin up goroutines without much thought and pull in mutexes to avoid trouble. That's a hard thing to manage; using sync.Pool is comparatively easy.

To me it looks like the general sentiment is that Go enabled concurrency, which should be leveraged, while it also simplified memory management, which should therefore be ignored. But memory management has a direct impact on latency and throughput; simply ignoring it is like enabling concurrency just because someone said it's cool.
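A quick way to put numbers on that: testing.AllocsPerRun gives a first signal on allocations per call before reaching for pprof. The concat functions below are invented for illustration.

    package main

    import (
        "fmt"
        "strings"
        "testing"
    )

    // Each += builds a brand-new string, so allocations scale with the input.
    func concatNaive(parts []string) string {
        s := ""
        for _, p := range parts {
            s += p
        }
        return s
    }

    // strings.Builder grows one internal buffer instead.
    func concatBuilder(parts []string) string {
        var b strings.Builder
        for _, p := range parts {
            b.WriteString(p)
        }
        return b.String()
    }

    func main() {
        parts := []string{"alpha", "beta", "gamma", "delta", "epsilon"}
        fmt.Println("naive allocs/op:  ", testing.AllocsPerRun(100, func() { concatNaive(parts) }))
        fmt.Println("builder allocs/op:", testing.AllocsPerRun(100, func() { concatBuilder(parts) }))
    }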

Ameo

2 months ago

I was curious to read this, but the massive full-page, ugly-on-purpose, AI-generated, NFT-looking banner image at the top of the page turned my stomach to the point where there's no way I'd even consider it, even if the article isn't AI-generated (which it probably is).