zimpenfish
7 hours ago
At a previous job, there was an argument over a code review where I had done some SQL queries that fixed a problem but were not optimal. The other side were very much "this won't work for 1000 devices! we will not approve it!" whereas my stance was "we have a maximum of 25 devices deployed by our only customer who is going to leave us next week unless we fix this problem today". One of the most disheartening weeks of my software development life.
(That was also the place where I had a multi-day argument over the precise way to define constants in Perl because the different ways varied in performance. Except it was a long-running mod_perl server process, the constants were only defined at startup, and it made absolutely zero difference once the process had been running for an hour or more.)
JohnBooty
2 hours ago
Ugggggggggggh. My favorite part of the job is scaling/optimization, and my least favorite part of the job is... this.
There are a LOT of "engineers" who understand that one thing might be faster than another thing, but lack the chops/wisdom to understand when it actually matters.
mewpmewp2
24 minutes ago
The best example I've seen is someone wanting to optimize a frontend JS for loop that gets triggered at most once per page load and runs at most 100 iterations, replacing it with complicated data structures that leave any reader of that code wondering why it was done that way and not just with a plain for loop.
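Roughly this shape (a sketch with made-up names and data, not the actual feature):

    // Plain version: readable, runs once per page load over at most ~100 items.
    interface Item { sizeBytes: number }

    function totalBytes(items: Item[]): number {
      let total = 0;
      for (const item of items) {
        total += item.sizeBytes;
      }
      return total;
    }

    // "Optimized" version: a pre-bucketed Map plus a cached running sum,
    // maintained on every update, to avoid re-summing ~100 numbers once per
    // page load. The speedup is unmeasurable; the extra bookkeeping is what
    // every future reader pays for.
    class SizeIndex {
      private buckets = new Map<string, number>();
      private cachedTotal = 0;

      add(key: string, sizeBytes: number): void {
        this.buckets.set(key, (this.buckets.get(key) ?? 0) + sizeBytes);
        this.cachedTotal += sizeBytes;
      }

      total(): number {
        return this.cachedTotal;
      }
    }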
lofaszvanitt
6 hours ago
This is why everyone uses microservices and React. We don't know how these work, but we are netfligs/farcebook level companiez, so we must.
meribold
6 hours ago
> I had to have a multi-day argument over the precise way to define constants in Perl
Couldn't you have used whatever the other person was suggesting even if the change was pointless?
zimpenfish
6 hours ago
These days, I would, yeah, but this was a long time ago and I was a lot more invested in not letting nonsense go past without a fight.
JohnBooty
2 hours ago
Yeah it's one of those things where it's like...
...okay, I can let this nonsense go today with very little impact, but if I do... will the team need to deal with it 100 times in the future? (And more broadly, by silence am I contributing to the evolution of a bad engineering culture ruled by poor understandings?)
It is very very difficult to know where to draw the line.
bbarnett
7 hours ago
I actually like having room for optimization, especially when running my own infra, servers included.
As an example, I can think of half a dozen things I can currently optimize just in the DB layer, but my time is being spent (sensibly!) in other areas that are customer facing and directly impacting them.
So fix what needs to be fixed; but if there were a major load spike due to onboarding new clients/users, I could have the DB handling 100x the traffic in a matter of hours. That's a nice ace in my back pocket.
And yes, if I had endless time I'd have resolved all issues.
willvarfar
7 hours ago
Usually a good trick is to run small deployments at high logging levels. Then, as soon as there are performance issues, you can dial down the logging and get the hours of respite needed to actually make a bigger algorithmic improvement.
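Something like this minimal sketch (made-up names; any real logging library has an equivalent switch):

    // A log level that can be turned down at runtime, so a small deployment can
    // run verbose by default and be dialled back under load without a redeploy.
    type Level = "debug" | "info" | "warn" | "error";
    const order: Record<Level, number> = { debug: 0, info: 1, warn: 2, error: 3 };

    let currentLevel: Level = "debug";

    export function setLevel(level: Level): void {
      // e.g. flipped from an admin endpoint or a signal handler under load
      currentLevel = level;
    }

    export function log(level: Level, makeMessage: () => string): void {
      // Passing a closure keeps expensive message formatting off the hot path
      // once the level has been dialled down.
      if (order[level] >= order[currentLevel]) {
        console.log(`[${level}] ${makeMessage()}`);
      }
    }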
nine_k
7 hours ago
If that optimization is mere hours of work, I would go for it outright. BTW when you have an overwhelming wave of signups, you are likely to have more pressing and unexpected issues, and badly lack spare hours.
Usually, serious gains that are postponed would require days or weeks of effort. Maybe mere hours of coding proper, but much longer spent testing, migrating data, etc.
xandrius
7 hours ago
I think that's the pitfall: there are infinite things a skilled developer can do within "mere hours of work".
The key is to find which ones are the most effective use of one's limited hours.
I developed a small daily game, and it has now grown to over 10K DAU, so I've started going back to pick the low-hanging fruit that just didn't make sense to touch when I had only tens of players a day.
Jach
5 hours ago
We might all be operating under different ideas of what "matter of hours" means. Oftentimes in a project, at least for me, the range of things that I can start and finish in mere hours is not actually infinite, but rather constrained. Only the simplest and smallest things can be done that fast or faster. So many other things just take longer, at least a day and a half, and can take even longer when other teams are involved. More experienced developers can learn how to break things down into increments, so while an entire big feature might take weeks to do, you're making clear chunks of progress in much smaller units. Those chunks are probably not mostly sized in mere-hours pieces, though...
You're right that priority matters. Just beware of priority-queue starvation. Still, if some newly discovered bug isn't urgent, even if I think it'd be rather easy (under an hour) to fully address, I'd rather not break my current flow and just keep working on the thing I had earlier decided was highest priority. A lot of the time something will prevent direct progress and break the flow anyway; having smaller items available to quickly context-switch to and finish is a good use of those gap times.
The "DB handling 100x the traffic" example above isn't quite well defined. I wonder if it's making queries return 100x faster? Or is it making sure the queries return at roughly their current speeds even if there's 100x more traffic? Either way, I can make arguments for doing the work proactively rather than reactively, but I'd at least write down the half a dozen things. Then maybe someone else can do them, and maybe those things can be done in around half a dozen tiny increments of 30 minutes or less each, instead of all at once in hours.
zimpenfish
5 hours ago
> Or is it making sure the queries return at roughly their current speeds even if there's 100x more traffic?
From what I remember, it was this. The DB was MySQL, and a whole bunch of stuff would have been less efficient with 1000 devices instead of 25. But on the other hand, the system was broken, the customer was threatening to cancel everything, and fixing the DB stuff was going to take a fair amount of rearchitecting (not least dumping MySQL for something less insane) that we didn't have the time or resources to do in a hurry.
nine_k
5 hours ago
Exactly. Going for something that shaves 5% off your hosting bill or your build time only makes sense when you are already colossal. But something that halves your bill, or your build time, or your latency, is likely impactful if your project reaches the scale where the proceeds from it can support you.
And very often an optimization improves things not by a factor of 1.1 but by a factor of 10 or more.
OTOH, it's worth being mindful of your "tech debt" as "tech credit" you took on to be able to do more important things before it's time to repay it. Cutting your hosting bill from $100 to $50 may be much less important than doubling your revenue from $500 to $1000.
zimpenfish
7 hours ago
> if there was a major load spike due to onboarding of new clients/users
This company was a hardware company with reasonably complex installation procedures. Even going full whack, I doubt they could have added more than 20 new devices a week (and even then there'd be a hefty lead time to get the stuff manufactured, shipped, etc.).
soco
7 hours ago
like != need
renatovico
7 hours ago
Totally agree with your view.
It's the big design up front moment again. Maybe because of the current economy, we need to focus more on being profitable in the short term. I think that's great: always focus on optimizing for now, and test against the specs (specs in the sense of the customer's requirements).
throwaway48540
7 hours ago
I really don't think that's the case. If you ask the CEO/CTO of a startup, they would fire the guy who did the latter instead of the middle approach. Longer-term stability and development velocity are very important concerns in engineering management. This is pure inexperience: it wouldn't take an experienced engineer long to set up anyway; they've probably done it 5 times in the last month and have a library of knowledge and templates. I can't call a guy who isn't capable of setting up a project this way swiftly "senior" without seeing it as a problem.
dambi0
7 hours ago
What is the middle approach in this scenario? What specifically are the templates you refer to?
throwaway48540
7 hours ago
The middle approach would be implementing the infrastructure in a basic way that is simple but provides benefit and can be expanded.
dambi0
6 hours ago
How does that relate to the scenario and what of the templates?
zimpenfish
6 hours ago
> The middle approach would be implementing the infrastructure
That's great if you're doing the implementing, but it doesn't really work when you're coming into an existing infrastructure (or codebase) that other people manage.
Arnt
6 hours ago
To spend the time thinking about performance, and then not write the code.
mewpmewp2
6 hours ago
I mean, sure, you would want to do that, but the above was a very specific situation with an existing feature set and an imminent novel problem.
SebFender
4 hours ago
In my last company that was always the golden rule: manage the present challenge with an eye on the future, but let's get things done.
anal_reactor
5 hours ago
I'm stunned to realize that most developers just blindly follow whatever the newest, hottest "good practices" are and completely ignore the actual goals the code should achieve. Literally yesterday I had an argument with a coworker because he wouldn't buy the argument "High availability costs us $2000 per year extra, downtime costs us $100 per day, so if the database breaks once per year, there's no point having high availability." Some time ago someone important from upper management announced that we need to pump out features faster to stay afloat as a business, even if they're a bit broken, and two hours later the most senior programmer on my team introduced new bureaucracy "because that's the best practice" and wouldn't understand that what he's doing is directly at odds with the company's goals. The same guy also often accuses me of "not seeing the bigger picture". I gave up trying to reason with coworkers because it's just pain and changes nothing in the long run besides exhausting me.
mewpmewp2
4 hours ago
On the other side of the argument, does $2000 a year really seem like a lot for high availability?
Does downtime really cost only $100 per day? How was that calculated? How much does your business make? It would seem it should make more than 365 * $100 = $36,500 to be in a position to hire people in the first place.
Database downtime would potentially:
1) Break trust with customers.
2) Take away focus from your engineers for a day or more, which is also a cost. If an engineer costs $200 per day, for example, and you have 4 engineers involved, that's already $800, not to mention the increased odds of stress and burnout. And in the US, engineers would of course cost much more than that; I was just picking a more favourable number.
3) If it's a database that breaks, it's likely to cause further damage due to data issues, which might take an indefinite amount of time to fix.
Overall, in most businesses it would seem a no-brainer to pay $2000 a year for high availability and avoid any odds of the database breaking. It's very little compared to what you pay your engineers. A rough back-of-envelope version is sketched below.
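Back-of-envelope sketch, using the thread's numbers plus one assumed figure (the data-repair time); the trust cost from point 1 is left unpriced:

    // Compare the yearly HA premium against the expected yearly cost of an
    // outage, counting engineer time as well as lost revenue.
    const haCostPerYear = 2000;          // quoted HA premium
    const outagesPerYear = 1;            // quoted failure rate
    const revenueLostPerOutage = 100;    // "$100 per day", one day of downtime
    const engineersInvolved = 4;
    const engineerCostPerDay = 200;      // deliberately low; US rates are higher
    const dataRepairDays = 3;            // assumption, not from the thread

    const expectedOutageCost =
      outagesPerYear *
      (revenueLostPerOutage +
        engineersInvolved * engineerCostPerDay * (1 + dataRepairDays));

    // 100 + 4 * 200 * 4 = 3300 on these numbers, before pricing lost trust.
    console.log(expectedOutageCost > haCostPerYear
      ? "high availability pays for itself"
      : "high availability does not pay for itself");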
anal_reactor
2 hours ago
The whole thing is a side project with 5 customers total. If it dies it'll take a while before anyone notices.
mewpmewp2
2 hours ago
It's a side project, but you're calling them coworkers and talking about a company with company goals?
KronisLV
2 hours ago
> I gave up trying to reason with coworkers because it's just pain and changes nothing in the long run besides exhausting me.
What are you even supposed to do in such situations?
It might be not possible to make them see things your way or vice versa.
You probably don't want to get the person fired, and going to look for a new place of employment just because of one or two difficult people doesn't seem all that nice either.
You can get the management on your side (sometimes), but even then that won’t really change anything about similar future situations.
To me, it seems like there aren’t that many options, whenever it’s anything more abstract than being able to point to a sheet of test results and say “A is better than B, we are doing A”.
I've had fundamental disagreements with other people as well; nowadays I don't really try to convince them of anything and try to limit how deep those discussions get, because they're often quite pointless whenever they're not purely data-based. That doesn't actually fix anything, and the codebases do get less pleasant to work with at times for someone like me.