runako
4 days ago
One of the more detrimental aspects of the Cloud Tax is that it constrains the types of solutions engineers even consider.
Picking an arbitrary price point of $200/mo, you can get 4(!) vCPUs and 16GB of RAM at AWS. Architectures are different etc., but this is roughly a mid-spec dev laptop of 5 or so years ago.
At Hetzner, you can rent a machine with 48 cores and 128GB of RAM for the same money. It's hard to overstate how far apart these machines are in raw computational capacity.
There are approaches to problems that make sense with 10x the capacity that don't make sense on the much smaller node. Critically, those approaches can sometimes save engineering time that would otherwise go into building a more complex system to manage around artificial constraints.
Yes, there are other factors like durability etc. that need to be designed for. But going the other way, dedicated boxes can deliver more consistent performance without worries of noisy neighbors.
benreesman
3 days ago
In 2025, if you need convenience and no red tape, you've got fly.io in the general case, and maybe Vercel or the like if you're on a particular framework (there are some good ones for specific stacks).
If your needs go beyond that? Then you need real computers with real configuration and you have OVH/Hetzner/Latitude who will rent you MONSTER machines for the cost of some cheap-ass surplus 2017 Intel on The Cloud.
And if you just want a blog or whatever? Zillion VPS options.
The traditional cloud is for regulatory/process/corruption-capture extraction in 2025: in terms of machine economics and developer productivity, I've seen fucking zero use case for it. Maybe there's some edge case where a completely unencumbered team is better off with DMV-trip permissions theatre, remnant Intel racked with noisy neighbors at massive markup, and no support recourse.
nine_k
3 days ago
(1) How does fly.io reliability compare to AWS, GCP, or maybe Linode or DO?
(2) What do you do if your large Hetzner server starts to show signs of malfunction? How soon would you be able to replace it, and how easily?
(2a) What do you do when your large Hetzner server just dies? I see that this happens rarely, but what's your contingency plan, if any?
(3) What do you do when your load is highly spiky? Do you reserve bare metal capacity for the biggest peak you expect to serve, because it's so much cheaper than running an elastic serverless architecture of the same capacity anyway?
(4) Considering that your stack still includes many components, how do you manage them, and how expensive is the management overhead? Do you need an extra SRE?
These are not rhetorical questions; I'd love to hear from real practitioners! (E.g. Stack Overflow used to do deep dives into their few-big-servers architecture.)
runako
3 days ago
These are great questions.
A key factor underlying all of this is understanding, from a business/organizational perspective, your actual uptime requirements. Google may aim at 5 nines with the budget to achieve it, but many banks have routine planned downtime. If you don't know your objectives, you will have trouble making the tradeoffs necessary to get there. As a hypothetical, would your business choose 99.999% uptime (26 seconds down on average per month) over 99.99% (4.3 minutes) if that caused infra costs to rise by 50% or more? If you said we can cut our infra costs by 50% by planning a short weekly maintenance window, how would that resonate?
Speaking to a few, in my experience:
2) (Not at Hetzner specifically, but at a dedicated host.) You have backups & recovery plans, and redundancy where it makes sense. You might run your database with a replica. If you are serving Web traffic, maybe you keep a hot spare. Also, you are still allowed to use cloud services where it makes sense to do so: you can back up to S3 and use things like SQS or KMS if you don't want to run them yourself. It's worth noting that you may not get advance notice; I recall our service being impacted by a fire at a datacenter that IIRC was caused by a traffic accident on a nearby highway. The point is you have to design resilience into the system. Fortunately, this is well-trod ground.
It would not be a terrible failover option to have something like an autoscale group at AWS ready to step in if the dedicated cluster goes offline. Keep that cluster scaled to 0 until it's needed. Put the cloud behind your cheap dedicated capacity.
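A minimal sketch of that trigger with boto3, assuming a standby Auto Scaling group that already exists (the group name, capacity, and health URL here are hypothetical):

```python
import boto3
import requests

asg = boto3.client("autoscaling")

def check_and_failover():
    # Probe the dedicated cluster; if it's unreachable, wake the standby
    # Auto Scaling group that normally sits at a desired capacity of 0.
    try:
        requests.get("https://app.example.com/healthz", timeout=5).raise_for_status()
    except requests.RequestException:
        asg.set_desired_capacity(
            AutoScalingGroupName="cloud-standby",  # hypothetical ASG name
            DesiredCapacity=4,                     # sized for baseline load
            HonorCooldown=False,
        )
```

Run the check from a couple of independent vantage points (cron is fine) so the checker itself isn't a single point of failure.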
3) See above. In my case, we over-provisioned because it's cheap to do so. I did not do this at the time, but I would probably look at running a replicated database with a hot standby on another server.
4) It has not been my experience that "modern" cloud deployments require fewer SRE resources. Like water running downhill, cloud projects seek complexity.
shrubble
4 days ago
It's more than that - it's all the latency that you can remove from the equation with your bare-metal server.
No network latency between nodes, less memory-bandwidth latency/contention than there is in VMs, and no separate caching tier needed when you can just tell e.g. Postgres to use gigs of RAM and let Linux's disk caching take care of the rest.
matt-p
4 days ago
The difference between a fairly expensive ($300) RDS instance + EC2 in the same region vs a $90 dedicated server with an NVMe drive and Postgres in a container is absolutely insane.
bspammer
4 days ago
A fair comparison would include the cost of the DBA who will be responsible for backups, updates, monitoring, security and access control. That’s what RDS is actually competing with.
shrubble
4 days ago
Paying someone $2000 to set that up once should result in the costs being recovered in what, 18 months?
If you're running Postgres locally you can turn off the TCP/IP listener entirely (listen_addresses = '') and talk to it over the Unix socket; nothing more to audit there.
SSH-based copying of backups to a remote server is simple.
If it's not accessible via the network, you can stay on whatever version of Postgres you want.
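A minimal sketch of that kind of job, assuming key-based SSH auth and a database named mydb (host and paths are placeholders); run it from cron:

```python
import subprocess
from datetime import datetime, timezone

def backup_over_ssh():
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
    # pg_dump talks to Postgres over the local Unix socket; the dump is
    # streamed straight to the backup host over SSH, so nothing ever
    # listens on TCP.
    dump = subprocess.Popen(
        ["pg_dump", "--format=custom", "mydb"], stdout=subprocess.PIPE
    )
    subprocess.run(
        ["ssh", "backup@backup.example.com",
         f"cat > /backups/mydb-{stamp}.dump"],
        stdin=dump.stdout, check=True,
    )
    dump.stdout.close()
    if dump.wait() != 0:
        raise RuntimeError("pg_dump failed")
```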
I've heard these arguments since AWS launched, and all that time I've been running Postgres (since 2004, actually) and have never encountered all these phantom issues that are claimed to be expensive or extremely difficult.
sahilagarwal
3 days ago
I guess my non-management / non-business side is showing here, but how can it be that much?? I still remember designing a fairly simple cron job that took database backups when I was a junior developer.
It gets even easier now that you have cheap S3: just upload the dump to S3 every day and set the S3 lifecycle (deletion) policy to whatever retention is feasible for you.
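A sketch of both halves with boto3 (bucket name, prefix, and retention are made-up values):

```python
import boto3

s3 = boto3.client("s3")

# One-time setup: the "deletion policy" is an S3 lifecycle rule that
# expires old dumps automatically, e.g. after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-db-backups",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-old-dumps",
            "Filter": {"Prefix": "postgres/"},
            "Status": "Enabled",
            "Expiration": {"Days": 30},
        }]
    },
)

# The daily cron job then just uploads the dump.
s3.upload_file("/tmp/mydb.dump", "my-db-backups", "postgres/mydb-2025-01-01.dump")
```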
alemanek
3 days ago
I am not an expert here but I am currently researching for a planned project.
For backups, including Postgres, I was planning on paying Veeam ~$500 a year for a software license to back up the active node and the Postgres database to S3/R2. The standby node would be getting streaming updates via logical replication.
There are free options as well but I didn’t want to cheap out on the backups.
It looks pretty turnkey. I am a software engineer, not a sysadmin, though. Still just theory as well, as I haven't built it out yet.
nine_k
3 days ago
Taking database backups is relatively simple. What differentiates a good solution is the ease of restoring from a backup. This includes the certainty that the restored state would be a correct point-in-time state from the past, not an amalgamation of several such states.
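One way to get that certainty is to script the restore drill itself and run it as routinely as the backup, e.g. (a sketch; the scratch database and sanity query are placeholders):

```python
import subprocess

def verify_restore(dump_path):
    # Restore the latest dump into a scratch database and run a sanity
    # check; an untested backup is a hope, not a backup.
    subprocess.run(["dropdb", "--if-exists", "restore_check"], check=True)
    subprocess.run(["createdb", "restore_check"], check=True)
    subprocess.run(["pg_restore", "--dbname=restore_check", dump_path], check=True)
    out = subprocess.run(
        ["psql", "-At", "-d", "restore_check", "-c",
         "SELECT count(*) FROM orders"],  # hypothetical table
        check=True, capture_output=True, text=True,
    )
    if int(out.stdout) == 0:
        raise RuntimeError("restored database looks empty")
```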
fragmede
3 days ago
How much were you paid as a jr developer, and how long did it take you to set up? Then round up to mid-level developer, and add in hardware and software costs.
dijit
3 days ago
That's a deflection. The question isn't about a developer's salary; it's about the fundamental difference between a one-time investment and a permanent cost.
Either way: 1 day of a mid-level developer in the majority of the world (basically: anywhere except Zurich, NYC or SF) is between €208 and €291. (Yearly salary of €50-€70k)
A junior developer's time for setup and the cost of hardware is practically a one-off expense. It's a few days of work at most.
The alternative you're advocating for (a recurring SaaS fee) is a permanent rent trap. That money is gone forever, with no asset or investment to show for it. Over a few years, you'll have spent tens of thousands of dollars for nothing. The real cost is not what you pay a developer; it's what you lose by never owning your tools.
fragmede
3 days ago
> The alternative you're advocating for
Not sure where I advocated for that. Could you point it out please?
applied_heat
4 days ago
$2k? That's a $100k project for a medium-size corp.
christophilus
3 days ago
$200 does seem too low. $100k seems waaay too high. That sounds like an AWS talking point.
sysguest
3 days ago
hmm where did you get the numbers?
(what's "medium-size corp" and how did you come up with $100k ?)
Aeolun
3 days ago
I'm assuming he's talking about the corporate team of DBAs that will spend weeks discussing the best way to copy a bunch of SQL files to S3.
vidarh
4 days ago
I do consulting in this space, and we consistently make more money from people who insist on using cloud services, because their setups tend to need far more work.
benterix
3 days ago
You are aware that RDS still needs backups, monitoring set up properly, access defined, secrets management provided, etc., and that updates between major versions are not automatic?
RDS has a value. But for many teams the price paid for this value is ridiculously high when compared to other options.
pdhborges
3 days ago
AWS can do major version upgrades automatically now, with less downtime. I think they do the logical replication dance internally.
yjftsjthsd-h
4 days ago
As long as you also include the Cloud Certified DevOps Engineer™[0] to set up that RDS instance.
[0] A normal sysadmin remains vaguely bemused at their job title and the way it changes every couple years.
mrweasel
3 days ago
It's also interesting that the cloud engineer can apparently be a DBA and a network, storage, and backup engineer all at once, but if you move the same services on-prem, you apparently need specialists for each task.
Sometimes even the certified cloud engineers can't tell you why an RDS behaves the way it does, nor can they really fix it. Sometimes you really do need a DBA, but that applies equally to on-prem and cloud.
I'm a sysadmin, but have been labelled and sold as: Consultant (sounds expensive), DevOps engineer, Cloud Engineer, Operations Expert and right now a Site Reliability Engineer.... I'm a systems administrator.
Aeolun
3 days ago
If you started working in the industry more than about 15 years ago, all the titles sound quaint.
icedchai
3 days ago
I haven't seen a company that hired DBAs in over 15 years. I think the "DevOps" movement sent them packing, along with SysAdmins.
dijit
3 days ago
Sysadmins never left, they just got rebranded.
icedchai
2 days ago
I actually agree with this. I meant that you never see roles with the "system administrator" job title anymore, not that the function actually disappeared. DBAs, on the other hand, I do think have mostly been absorbed into other roles.
data_marsupial
3 days ago
Need to get Platform Engineer for a full house
Cthulhu_
3 days ago
While that's fair, most organizations I've worked at in the past decade have had a dedicated team for managing their cloud setup, which is also responsible for backups, updates, monitoring, security and access control. I don't think they're competing.
sgarland
4 days ago
You don’t need a DBA for any of those, you need someone who can read some docs. It’s not witchcraft.
Aeolun
3 days ago
I'd argue that AWS is witchcraft a lot of the time. They'll have all these services they claim will work for everything, but you'll always find that the one thing you'd expect to be available isn't.
lelanthran
3 days ago
The RDS solution doesn't need a technical person to set it up?
It doesn't need someone who knows how to use the labyrinthine AWS services and console?
whstl
3 days ago
Agree.
These comments sound super absurd to me, because RDS is difficult as hell to set up unless you do it very frequently or already have it in IaC form, since one needs to set up a VPC, subnets, security groups, an internet gateway, etc.
It's not like creating a DynamoDB table, a Lambda, or an S3 bucket, where a non-technical person can learn it in a few hours.
Sure, one might find some random Terraform file online to do this or vibe-code some CloudFormation, but that's not really a fair comparison.
matt-p
4 days ago
Totally. My frustration isn't even price, though; RDS is literally just dog slow.
steveBK123
3 days ago
My firm paid DBAs for RDS as well, so...
zenmac
4 days ago
Yeah, but AWS SREs are the ones making the big bucks! Soooo what can you do? It is nice to see many people here on HN supporting open networks and platforms and making very drastic comments, like encouraging Google engineers to quit their jobs.
I also totally understand why some people, with a family to support and a mortgage to pay, can't just walk away from a job at a FAANG or MAMAA type place.
Looking at your comparison, at this point it just seems like a scam.
jpgvm
4 days ago
Right now the big bucks are in managing massive bare metal GPU clusters.
benterix
3 days ago
Yeah, let's use the opportunity while it lasts.
reactordev
4 days ago
This. Clustering and managing Nvidia at scale is the new hotness demanding half-million dollar salaries.
t_mahmood
3 days ago
I don't get why people are so hell-bent on going to AWS, for the most minor applications, without looking at simpler options!
I am not even within a thousand km of the level of what you are doing, but my client was paying $100/mo for an AWS server, SQS and an S3 bucket, for a small PHP-based web application that uses the Amazon Seller API and the Keepa API for the products he ships. It used MySQL for data storage.
I implemented the whole thing in Python, Django, and PostgreSQL (initially SQLite) and put it on a $25/mo unmanaged VPS.
I have not had any complaints about performance, and it's running continuously: updating product prices and details, processing PDF invoices using OCR, finding missing products in shipments, all while also serving the website. A 4-core server with 6GB RAM is handling it just fine.
The load is not going to be so high to require AWS and friends, for now. It's a small internal app, probably won't even get over 100 users, and if it ever does, it's extremely simple to migrate, because the app is so compact, even though not exactly monolithic.
And still, it probably won't need a $100 AWS server, unless we are scaling up much larger.
jeroenhd
3 days ago
AWS is useful for big business. Automatic multi-region failover and hosted databases may be expensive, but they're a massive pain to manually configure and an easy footgun if you're not used to doing that sort of thing. Plus, with Amazon you already have public toolkits to use those features with all of your services, so you don't need to figure out how to integrate / what open-source system to use to accomplish all of that. Plus, if you go for your own physical server, you need to arrange parts and maintenance windows for any hardware that will eventually fail.
If all you need is "good enough" reliability and basic compute power (which I think is good enough for many businesses, considering AWS isn't exactly outage free either), you're probably better off getting a server or renting one from a cheap cloud host. If you're promising five nines of uptime for some reason, you may want to reconsider.
t_mahmood
3 days ago
> If all you need is "good enough" reliability and basic compute power (which I think is good enough for many businesses, considering AWS isn't exactly outage free either), you're probably better off getting a server or renting one from a cheap cloud host.
This is exactly my point. Sorry if I was not clear in my OP.
We are using the Seller API to get various pieces of product information. While their API library provides the base work for communicating with their endpoint, you have to implement your own system on top of it, and handle the absurd unreliability of their API's rate limiter and the spider web of API callbacks needed to get the information you require.
choeger
3 days ago
How much did that reimplementing cost and when will the savings exceed that cost?
t_mahmood
3 days ago
This cost around $10k, which also includes work outside the reimplementation.
I do not know the actual cost of the original application.
The app that I was developing was for another purpose, and the reimplementation was added later.
The app replaces an existing commercial app that is in use, which is $200+/mo. So maybe 4-5 years for the savings to exceed the cost. They have been using the app for 3 years, I think.
And, maybe I am beating my own drum a little, but I believe my implementation works, and looks, much better than the commercial one or the first implementation.
So, I am really looking forward to this being a success.
Esophagus4
3 days ago
Without understanding the architecture and use case better, at first read, my gut says that isn’t an AWS problem - it sounds like a solutions architecture problem.
There are cheaper ways of building that use case on AWS.
Most AWS sticker shock I’ve seen results from someone who doesn’t really understand cloud trying to build on the cloud. Cost has to be designed in from the start (in addition to security, operational overhead, etc).
In general, I’ve found two types of engineering teams who don’t use the cloud: the mugs and the superstars. And since superstars are few and far between, that means…
dijit
3 days ago
Sounds like we need a specialist.
I guess those promises about needing fewer expensive people never materialised.
tbh, aside from the really anaemic use-cases where everything actually manages to scale to zero and has very low load: I have genuinely never seen an AWS project (outside of free credits of course) that works out cheaper than what came before.
That's TCO from P&Ls, not a "gut feeling". We have a decade of evidence now.
t_mahmood
3 days ago
... you failed at reading comprehension?
My comment was not that using AWS is bad; it has its uses. My comment was about how, in this instance, it was simply not needed. And I even speculated on when it might be needed.
Picking the correct tool for the job is what it means to be an engineer, or a person with common sense. With experience, we can get over childish absolutism about a tool or service and look at the broader picture, unless, of course, we are expecting some kind of monetary gain.
3shv
3 days ago
What are some cheaper and better hosting providers that you can recommend?
benterix
3 days ago
Hetzner.
For most public cloud providers you have to give them your credit card number so they can charge an arbitrary amount.
For Hetzner, instead of a CC#, you give them a scan of your ID (of course you can attach your CC or PayPal too). Personally, I do my payments via bank transfer. I recently paid for the whole of 2025 and 2026 for all my k8s clusters. It gives unimaginable peace of mind compared to AWS/GCP/Azure.
Plus, their cloud instances often spin up much faster than EC2.
drewnick
3 days ago
For bare metal I’ve been using tier.net to get 192 GB RAM, 4TB NVME and 32 cores for $219/mo.
Data centers all over the country and I get to locate under 10ms from my regional audience.
Just a data point if you want some bigger iron than a VM.
t_mahmood
3 days ago
I have used Knownhost previously, it served me really well.
Before that, I used to go for Linode, but I think they've become more pricey?
LamaOfRuin
3 days ago
Linode was bought by Akamai. They immediately raised prices, and they have been, if anything, less reliable.
t_mahmood
3 days ago
Ahh, yes, I remember now! I think it's been almost 8 years? I stopped using them after the buyout.
Too bad, actually, their service was pretty good.
ferngodfather
3 days ago
Hetzner! They do ask for ID though.
mr_toad
3 days ago
Saving $75 a month at what cost in labour?
andersmurphy
3 days ago
You actually save on labour. A VPS is a lot less work than anything involving AWS console.
andersmurphy
4 days ago
100% this. Add an embedded database like SQLite, optimise writes into batches, and you can go really, really far with Hetzner. It's also why I find the "what about overprovisioning" argument silly (once you look outside of AWS you can get an insane cost/perf ratio).
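To make the batching point concrete, a minimal sketch with Python's stdlib sqlite3 (the table is made up): collect rows in memory and flush every N rows or M milliseconds.

```python
import sqlite3

conn = sqlite3.connect("app.db")
conn.execute("PRAGMA journal_mode=WAL")  # readers don't block the writer
conn.execute("CREATE TABLE IF NOT EXISTS events (ts REAL, payload TEXT)")

def flush(batch):
    # One transaction per batch: one fsync for N rows instead of N fsyncs.
    with conn:
        conn.executemany("INSERT INTO events (ts, payload) VALUES (?, ?)", batch)
```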
Also, in my experience, more complex systems tend to have much less reliability/resilience than simple single-node systems. Things rarely fail in isolation.
Demiurge
4 days ago
I think it’s the other way around. I’m a huge fan of Hetzner for small sites with a few users. However, for bigger projects, the cloud seems to offer a complete lack of constraints. For projects that can pay for my time, $200/m or $2000/m in hosting costs is a negligible difference. What’s the development cost difference between AWS CDK / Terraform + GitHub Actions vs. Docker / K8s / Ansible + any CI pipeline? I don’t know; in my experience, I don’t see how “bare metal” saves much engineering time. I also don’t see anything complicated about using an IaC Fargate + RDS template.
Now, if you actually need to decouple your file storage and make it durable and scalable, or need to dynamically create subdomains, or any number of other things… The effort of learning and integrating different dedicated services at the infrastructure level to run all this seems much more constraining.
I’ve been doing this since before the “Cloud,” and in my view, if you have a project that makes money, cloud costs are a worthwhile investment that will be the last thing that constrains your project. If cloud costs feel too constraining for your project, then perhaps it’s more of a hobby than a business—at least in my experience.
Just thinking about maintaining multiple cluster filesystems and disk arrays—it’s just not what I would want to be doing with most companies’ resources or my time. Maybe it’s like the difference between folks who prefer Arch and setting up Emacs just right, versus those happy with a MacBook. If I felt like changing my kernel scheduler was a constraint, I might recommend Arch; but otherwise, I recommend a MacBook. :)
On the flip side, I’ve also tried to turn a startup idea into a profitable project with no budget, where raw throughput was integral to the idea. In that situation, a dedicated server was absolutely the right choice, saving us thousands of dollars. But the idea did not pan out. If we had gotten more traction, I suspect we would have just vertically scaled for a while. But it’s unusual.
runako
4 days ago
> I don't see how "bare metal" saves much engineering time
This is because you are looking only at provisioning/deployment. And you are right -- node size does not impact DevOps all that much.
I am looking at the solution space available to the engineers who write the software that ultimately gets deployed on the nodes. And that solution space is different when the nodes have 10x the capability. Yes, cloud providers have tons of aggregate capability. But designing software to run on a fleet of small machines is very different from accomplishing the same tasks on a single large machine.
It would not be controversial to suggest that targeting code at an Apple Watch or Raspberry Pi imposes constraints on developers that do not exist when targeting desktops. I am saying the same dynamic now applies to targeting cloud providers.
This isn't to say there's a single best solution for everything. But there are tradeoffs that are not always apparent. The art is knowing when it makes sense to pay the Cloud Tax, and whether to go 100% Cloud vs. some proportion of dedicated.
Demiurge
4 days ago
Overall, I agree that most people underestimate the runway that the modern dedicated server can give you.
sevensor
3 days ago
I’ve seen multiple projects founder on the complexity of writing software for the cloud. Moving data from here to there ends up being way harder than anybody expected. Maybe teams with more experience build this into their planning, but from what I’ve seen, if you’re using the cloud, your solution ends up being 95% about getting data where it’s supposed to be and 5% application logic.
Esophagus4
3 days ago
This sounds like a people problem, not a technology problem.
I’ve never had an issue with moving data.
benterix
3 days ago
> I’m a huge fan of Hetzner ... I don’t see how “bare metal” saves much engineering time.
I think you're confusing Hetzner with bare metal. Hetzner has Hetzner Cloud, which is like AWS EC2 but much cheaper. (They also have bare-metal servers, which are even cheaper.) With Hetzner Cloud, you can use Terraform, GitHub Actions and whatever else you mentioned.
Demiurge
3 days ago
Yeah, I do confuse it, because I've been using Hetzner long before they had "cloud".
cnst
3 days ago
> types of solutions engineers even consider
I think the issue is actually the opposite.
With the cloud, the engineers fail to see the actual cost of their inefficient scaled-out code, because someone else (the CFO) pays the bill; the answer to any issue is simply adding more "workers" and more "cloud", since they're basically "free" from the perspective of the employee. (And the more "cloud" something is, like serverless, the more "free", completely inverting the economics of making a profit on the service: when the CFO tells you that your AWS bill is too high, you move everything from EC2 to AWS Lambda, since the salesperson from AWS tells you that serverless is far cheaper, only for the bill to get even higher, for reasons unknown, of course.)
Whom the cloud tax actually constrains are the entrepreneurs and solo-preneurs. If you have to pay $5000/mo to AWS just for the infra, you can only go so long without lots of revenue, and you'd need to have a whopping 5k/mo+ worth of revenue before breaking even. Yet with a $200/mo like at OVH or Hetzner, you can afford to let it grow at negligible cost to yourself, and it can basically start being profitable with the first few users.
Don't believe this? Look at the blog entries by the guy who bought Yahoo!'s Delicious, written before they went bankrupt and were up for sale. He was basically pointing out that the services have roughly the same number of users, and require the same engineering resources, yet one is being operated at a loss, whereas the other one makes a profit (guess which one, and guess why).
* https://en.wikipedia.org/wiki/Delicious_(website)
* https://en.wikipedia.org/wiki/Pinboard_(website)
* https://news.ycombinator.com/from?site=blog.pinboard.in
So, quite literally, the difference between the cloud and renting One Big Server can be the difference between making a loss and going out of business, versus remaining in business and purchasing your underwater competitor for pennies on the dollar.
ldoughty
4 days ago
I agree that AWS EC2 is probably too expensive on the whole. It also doesn't really provide any of the greater benefits of the cloud that come from "someone else's server".
However, to the point of microservices as the article mentions, you probably should look at Lambda (or Fargate, or a mix) unless you can really saturate the capacity of multiple servers.
When we swapped from ECS+EC2 running microservices over to Lambda, our costs dropped sharply. Even serving millions of requests a day, we spend a lot of time idle in between, especially spread across the services.
Additionally, we have had 0 hardware outages in the last 5 years. As an engineer, this has made my QoL significantly better.
jgalt212
3 days ago
> I agree that AWS EC2 is probably too expensive on the whole.
Probably? It's about 5-10X more expensive than equivalent services from Hetzner.
Spooky23
4 days ago
It really depends on what you are doing. But when you factor in the network features, the ability to scale the solution, etc., you get a lot of stuff inside that $200/mo EC2 device. The product is more than the VM.
That said, with a defined workload without a ton of variation or segmentation needs there are lots of ways to deliver a cheaper solution.
troupo
4 days ago
> you get a lot of stuff inside that $200/mo EC2 device. The product is more than the VM
What are you getting, and do you need it?
throwaway7783
3 days ago
Probably not for $200/mo EC2, but AWS/GCP in general
* Centralized logging, log search, log based alerting
* Secrets manager
* Managed kubernetes
* Object store
* Managed load balancers
* Database HA
* Cache solutions
... Can I run all these by myself? Sure. But I'm not in this business. I just want to write software and run that.
And yes, I have needed most of this from day 1 for my startup.
For a personal toy project, or when you reach a certain scale, it may make sense to go the other way.
eska
3 days ago
Now imagine your solution is not on a distributed system and go through that list. Centralized logging? There is nothing to centralize. Secrets management? There are no secrets to be constantly distributed to various machines on a network. Load balancing? In practice, most people for most work don't use it because they've actually outgrown the hardware, but because they have to provision onto shared hardware without exclusivity. Caching? Distributed systems create latency that doesn't need to exist at all, reliability issues that have to be avoided, thundering-herd issues that you would otherwise not have, etc.
So while there are areas where you need to introduce distributed systems, this repeated disparaging comment of “toy hobby projects” makes me distrust your judgement heavily. I have replaced many such installations by actually delivering (grand distributed designs often don’t fully deliver), reducing costs, dramatically improving performance, and most importantly reducing complexity by magnitudes.
bbarnett
3 days ago
Not to mention scaling. Most clients I know have never, ever scaled once. Ever. Or if they do, it's to save money.
One server means you can handle the equivalent of 100+ AWS instances. And if you're into that turf, then having a rack of servers saves even more.
Big corp is pulling back from the cloud for a reason.
throwaway7783
2 days ago
I mentioned this in an earlier comment. It is dumb to be on the cloud at a large enough scale.
viraptor
3 days ago
> Centralized logging? There is nothing to centralize.
It's still useful to have the various services, background jobs, system events, etc. in one indexed place which can also manage retention and alerting. And ideally in a place reachable even if the main service goes down. I've got centralised logging on a small homelab server with a few services on it and it's worth the effort.
> Load balancing? In practice, most people for most work don't use it because they've actually outgrown the hardware, but because they have to provision onto shared hardware without exclusivity.
Depending on how much you lose in case of downtime, you may want at least 2x of hardware for redundancy and that means some kind of fancy routing (whether it's LB, shared IP, or something else)
> Secrets management? There are no secrets to be constantly distributed to various machines on a network.
Typically businesses grow to more than one service. For example I've got a slack webhook in 3 services in a small company and I want to update it in one place. (+ many other credentials)
> Caching? Distributed systems create latency that doesn’t need to exist at all
This doesn't solve the need for caching the results of larger operations. It doesn't matter how much latency you have or not; you still don't want that rarely-changing, 1-second-long query to run on every request. Caching is rarely only about network latency.
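Even on a single box, that case is handled by a tiny in-process TTL cache rather than a distributed cache tier; a sketch (the query function is a placeholder):

```python
import time

_cache = {}

def cached(key, ttl_seconds, compute):
    # Serve the stored result until it's ttl_seconds old, then recompute once.
    value, expires = _cache.get(key, (None, 0.0))
    if time.monotonic() >= expires:
        value = compute()
        _cache[key] = (value, time.monotonic() + ttl_seconds)
    return value

# e.g.: rows = cached("dashboard", 60, run_slow_report_query)
```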
Spooky23
3 days ago
It sounds like you make a living doing stuff that has an incredibly small, ninja-like team, a very low change rate, or is something that nobody really cares about, where things like RPO/RTO, multi-tenancy, logging, etc. don't matter.
That's amazing. I wish I could do the same.
Unfortunately, I cannot run my business on a single server in a cage somewhere, for a multitude of reasons. So I use AWS, a couple of colos and SaaS providers to deliver reliable services to my customers. Note I'm not a dogmatic AWS advocate; I seek out the best value. I can't do what I do in AWS anywhere else without a lot of capital spend on firewalls and storage appliances, as well as the network infrastructure and people required to make those work.
throwaway7783
2 days ago
Exactly. I don't quite understand how people say you just need a box. It certainly is much more performant than a cloud VM, but that is not the only thing there is to running software well. It all adds up bit by bit. It surely seems to be the way to go at some scale (or with no customers who care).
throwaway7783
2 days ago
Maybe I'm dumb. I am not even talking about distributed systems here. I'm talking about a basic high-availability configuration: two web servers, two (or three) DB server instances for HA. I have had paying enterprise customers from day 1, and I don't want them screaming at me about systems going down.
And as soon as you have two of anything, all the above start mattering.
If none of this is actually an issue for you and your customers, I'll say you are very lucky.
doganugurlu
3 days ago
You need database HA and load balancers on day 1?
You must be expecting truly a lot of growth prior to building. Or perhaps insisting on tiny VMs for your loads?
swiftcoder
3 days ago
> Or perhaps insisting on tiny VMs for your loads?
This happens way too often. Early-stage startups that build everything on the AWS free tier (t2.micro only!), and then when the time comes they scale everything horizontally
throwaway7783
2 days ago
I'll repeat what I said above. It's for availability (aka I don't want my customers screaming at me if the machine goes down). And no, scaling out was not our first solution; scaling up was. I have considered going bare metal so many times, but the number of things we would need to build/manage ourselves to function is too much right now.
Hopefully when we can afford to do it, we will.
throwaway7783
2 days ago
HA is for availability. I don't want downtime for my enterprise customers. Are your customers okay with downtime? And as soon as you have more than one node, you need some kind of load balancer in front.
rcxdude
a day ago
In practice, until you're at a certain scale, software bugs are more of a threat to your availability than hardware failures or maintenance downtime, and the cloud does nothing for you there (in fact, the additional complexity is likely to make it worse). Modern hardware is pretty reliable, more so than a given EC2 instance, for example.
runako
3 days ago
> Centralized logging, log search, log based alerting
Do people really use the bare CloudWatch logs as an answer for log search? I find it terrible and pretty much always recommend something like DataDog or Splunk or New Relic.
throwaway7783
2 days ago
We are on GCP. Logs Explorer on GCP is pretty good.
troupo
3 days ago
> For a personal toy project,
which in reality is any project under a few hundred thousand users
throwaway7783
2 days ago
... who are okay with services going down here and there
troupo
2 days ago
Which is the vast majority of services.
throwaway7783
2 days ago
Okay. If the premise is "you don't have to worry about downtimes and only need to serve a few hundred thousand users and no data intensive use cases", then I guess you can do whatever and it'll still be okay.
cedws
3 days ago
I don’t disagree but “cores” is not a good measure of computational power.
christophilus
3 days ago
True, but the cores on a dedicated Hetzner box obliterate the cores on an EC2 machine every time I’ve tested them. So, if anything, it understates the massive performance gap.
andersmurphy
3 days ago
Hetzner also tends to have more modern SSDs with the latest NVMe. Which can make a massive difference for your DB.
Nextgrid
3 days ago
It's less about the modernity of the SSDs and more about a fundamental difference: all persistent storage on AWS is actually networked. It's exposed to you as NVMe, but it's actually on a SAN, and all IO requests go over the network.
You can get actual direct-attached SSDs on EC2 (and I'd expect performance to be on par with Hetzner), but those are ephemeral: the data is lost when the instance is stopped or terminated.
andersmurphy
2 days ago
Wow, that's crazy. I was wondering why the numbers I was seeing on AWS were so much worse; I assumed it was the drive modernity. But the network makes a lot more sense.
Thanks for the insight!
benjiro
3 days ago
> At Hetzner, you can rent a machine with 48 cores and 128GB of RAM for the same money.
The problem that Hetzner and a lot of hardware-providing hosts have is the lack of affordable flexibility.
Hetzner's design is based upon a range of standardized products, which can only be upgraded within a pre-approved range of options (limited to storage/memory).
Upgrades are often a mixed bag of carefully designed "upgrade paths". As you can expect, they are not cheap. Doubling the storage on a base server often increases its price by 50 to 75%. Typical customization will cost you dearly.
This is where AWS wins a lot more. Yes, they are expensive as hell, but you often are not stuck with a base config and a limited upgrade path. The ability to scale beyond what Hetzner can offer is there, and you're not forced to overbuy from the start. Transferring between servers is a few buttons and done. With Hetzner, if you did not overspec from the start, you're going to be doing those fun server migrations.
The ironic part is that buying your own hardware and running it yourself often ends up paying for itself within an 8-12 month period (not counting electricity/internet). And you maintain a lot more flexibility.
* You want to use bifurcation, go for it.
* You want to use consumer 4TB NVMe's for second-layer read storage (which Hetzner refuses to offer, as they limit those to 2TB and only on a few servers), go for it.
* You want a 10Gbit interlink between your servers, go for it. No need to pay a monthly fee! No need to reserve "future space".
* Oh, you want 25Gbit, go for it (Hetzner: not possible).
* You want 50Gbit ...
* You want to chuck in a few LLM capable GPUs without breaking the bank...
It's ironic that it's 2025 and Hetzner is still limited to a 1Gbit connection on its hardware, when just about any consumer-level hardware has had 2.5Gbit by default for years.
Your own hardware gives you the flexibility of AWS and cost savings beyond Hetzner. Maybe it's just my environment, but I see more and more small-to-medium companies going back to their own locally run servers. Not even colocation.
The increase in consumer-level fiber, which used to be expensive or unavailable, has opened the doors for businesses. Most companies do not need insane backbones.
The fact that you can get business 10Gbit fiber for around 100 Euro in some EU countries (of course never the north) is insane. I've even seen some folks combining fiber with Starlink & 5G as backup in case their fiber fails/is out.
As long as you fit within a specific usage case that Hetzner offers, they are cheap. But the moment you step outside that comfort zone... This is one of Hetzner's weaknesses, and where AWS or self-hosting comes back.
bluedino
3 days ago
Almost reminds me of Rackspace back in... 2011.
We had a leased server from them, running VMware, and we had Linux virtual machines for our application.
We ran out of RAM. We only had 16 or 32GB at the time. Hey, can we double this? Sure, but our payment would nearly double. How does that make any sense?
If this were a co-located box we owned, I could buy a pair of $125 chips from Crucial (or $250 Dell chips from CDW) and there we go. But we're expected to pay this much more per month?
Their answer was "you can do more with the server so that's what you're paying for"
Storage was a similar situation: we were still on RAID with spinning drives and we wanted to go SSD, not even NVMe. Wasn't going to happen. And if we went to a new server we'd have to get all new IPs and stuff. Ugh.
And 10Gb...that was a pipe dream. Costs were insane.
We ended up having to decide between two things:
1. Move to a co-lo and buy a couple servers, ala StackExchange. This is what I wanted to do.
2. Tweak the current application stack, and re-write the next version to run on AWS.
What did we end up doing? Some half-assed solution using the existing server for the DB and an NGINX proxy, while running the sites on (very slow) Slicehost instances (which Rackspace had recently acquired and roughly integrated into their network). So we still had downtime issues, slow databases, etc.
radiator
3 days ago
> Doubling the storage on a base server often increases its price by 50 to 75%
For storage, Hetzner does offer Volumes, which you can attach to your VM and you can choose exactly how large you want them to be and are charged separately. But your argument about doubling resources and doubling prices still holds for RAM.
Nextgrid
3 days ago
FYI he's talking about dedicated servers (or "root servers" as they call them).
benjiro
3 days ago
> For storage, Hetzner does offer Volumes, which you can attach to your VM
The argument was about dedicated hardware. But it still holds for VPS.
Have you seen the price of Cloud volumes? An ARM VPS with 40GB is 4.51 Euro (incl. tax); for another 40GB of volume storage, you're paying 2.10 Euro. So my argument still holds, as you're paying almost 50% more just to go from 40GB to 80GB. And that ratio gets worse if you're renting a higher-end VPS and double the storage on it.
Let's be honest: 53.62 Euro for 1TB of SSD storage in 2025 is ridiculous.
Netcup is at 12 Euro/TB for SSD storage (same speed as the VMs, as it's just local storage on the server, not network storage). FYI: an ARM 6-core with 256GB at Netcup is 6.26 Euro.
Hetzner used to be the market leader and pushed others, but you barely see any new products or upgrades from them anymore. I said it before: if Netcup actually invested in a more modern/scalable VPS solution (instead of their 2010-era VPS panels), they would eat a lot of Hetzner's clients.
themafia
4 days ago
On AWS, if you want raw computational capacity, you use Lambda and not EC2. EC2 is for legacy-type workloads and doesn't have nearly the same scaling power and speed that Lambda does.
I have several workloads that just invoke Lambda in parallel. Now I effectively have a 1000-core machine and can blast through large workloads without even thinking about it. I have no VM to maintain or OS image to consider or worry about.
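The fan-out is only a few lines with boto3 (a sketch; the function name and payload shape are hypothetical):

```python
import json
from concurrent.futures import ThreadPoolExecutor

import boto3

lam = boto3.client("lambda")

def run_chunk(chunk):
    resp = lam.invoke(
        FunctionName="process-chunk",        # hypothetical function name
        Payload=json.dumps({"items": chunk}),
    )
    return json.load(resp["Payload"])

def fan_out(chunks):
    # Each invocation gets its own vCPUs; hundreds in flight behave like
    # one very wide machine.
    with ThreadPoolExecutor(max_workers=100) as pool:
        return list(pool.map(run_chunk, chunks))
```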
Which highlights the other difference that you failed to mention: Hetzner charges a "one-time setup" fee to create that VM. That puts a lot of back pressure on infrastructure decisions and removes any scalability you could otherwise enjoy in the cloud.
If you just want to rent a server, then Hetzner is great. If you actually want to run "in the cloud", then Hetzner is a non-starter.
solid_fuel
4 days ago
Strong disagree here. Lambda is significantly more expensive per vCPU-hour and introduces tight restrictions on your workflow and architecture, one of the most significant being maximum runtime duration.
Lambda is a decent choice when you need fast, spiky scaling for a lot of simple, self-contained tasks. It is a bad choice for heavy tasks like transcoding long videos, training a model, data analysis, and other compute-heavy tasks.
themafia
4 days ago
> significantly more expensive per vCPU-hour
It's almost exactly the same price as EC2. What you don't get to control is the mix of vCPU and RAM. Lambda ties those two together. For equivalent EC2 instances the cost difference is astronomically small, on the order of pennies per month.
> like transcoding long videos, [...] data analysis, and other compute-heavy tasks
If you aren't breaking these up into multiple smaller independent segments then I would suggest that you're doing this wrong in the first place.
> training a model
You're going to want more than what a basic EC2 instance affords you in this case. The scaling factors and velocity are far less of a factor.
runako
3 days ago
This is a great example of what I meant when I said that a part of the Cloud Tax is it constrains the solution space available to developers. In an era where one can purchase, off-the-shelf, a 256-core machine with terabytes of RAM, developers are still counting megabytes(!) of file sizes due to the constraints of AWS.
It should be obvious that this is not the best answer for all projects.
jalk
3 days ago
This article (from Nov. 2022) shows that "utilizing Lambda is preferable until Lambda is utilized about 40 to 50 % of the time"
https://medium.com/life-at-apollo-division/compare-the-cost-...
eska
3 days ago
> If you aren't breaking these up into multiple smaller independent segments then I would suggest that you're doing this wrong in the first place.
Care to elaborate?
icedchai
3 days ago
You are expected to work around Lambda limitations because it's the "right way", not because the limitations make things overly complex. /s
icedchai
4 days ago
That's fine, except for all of Lambda's weird limitations: request and response sizes, deployment .zip sizes, max execution time, etc. For anything complicated you'll eventually run into all this stuff. Plus you'll be locked into AWS.
themafia
4 days ago
> request and response sizes
If either of these exceed the limitations of the call, which is 6MB or 256kB depending on call type, then you can just use S3. For large distributed task coordination you're going to be doing this anyways.
> deployment .zip sizes
Layers exist and are powerful.
> max execution time
If your workload depends on long uninterrupted runs of time on single CPUs then you have other problems.
> Plus you'll be locked into AWS.
In the world of serverless your interface to the endpoints and semantics of Lambda are minimal and easily changed.
icedchai
3 days ago
Of course, we can generally work around all these things. The point is it is annoying to do so. It adds friction and further couples you to a proprietary platform.
You're better off using ECS / Fargate for application logic.
twotwotwo
3 days ago
> [Hetzner] charges a "one-time setup" fee to create that VM. That puts a lot of back pressure on infrastructure decisions and removes any scalability you could otherwise enjoy in the cloud.
Hetzner Cloud, then! In the US, $0.53/hr / $333.59/mo for 48 vCPU/192GB RAM/960GB NVMe. Includes 8 TB/mo traffic, when 8 TB egress would cost $720 on EC2; more traffic is $1.20/TB when the first tier of AWS egress is $90/TB. No setup fee. Not that it's EC2 but there's clearly flexibility there.
More generally, if you want AWS, you want AWS; if you want servers you have options.
matt-p
4 days ago
Very few providers charge setup fees; some will provision a server within 90 seconds of an API call.
themafia
4 days ago
Hetzner does on the server the OP was referencing.
Aeolun
3 days ago
If you are scared off by the €80 setup fee on a server that costs €200 a month, it seems like the setup fee did its intended job, no?
ferngodfather
3 days ago
Most providers do for dedicated servers, or make you agree to a fixed term. I don't believe they do the same for VPS / Cloud servers.
benjiro
3 days ago
> I don't believe they do the same for VPS / Cloud servers.
Because it's baked into the price. If you run a VPS for a full month, you get the listed monthly price. But if you run a VPS for a shorter time, the hourly billing price is a lot more expensive.
The ironic part being that you're better off keeping a VPS active until the end of your monthly period (if you've already crossed 2/3 of it) than cancelling early.
I've noticed that few people realize that the hourly price != the monthly price.
matt-p
4 days ago
I don't think that negates the point I was making. Most don't; for example, none of the providers on https://www.serversearcher.com/ seem to charge setup fees.
lachiflippi
3 days ago
Hetzner does not charge any provisioning fees for VMs and never has.