Migrating from AWS to Hetzner

787 points, posted 6 hours ago
by pingoo101010

436 Comments

adamcharnock

4 hours ago

I cannot overstate the performance improvement of deploying onto bare metal. We typically see a doubling of performance, as well as extremely predictable baseline performance.

This is down to several things:

- Latency – having your own local network, rather than sharing some larger datacenter network fabric, gives around an order of magnitude lower latency

- Caches – right-sizing a deployment for the underlying hardware, and so actually allowing a modern CPU to do its job, makes a huge difference

- Disk IO – Dedicated NVMe access is _fast_.

And with it comes a whole bunch of other benefits:

- Auto-scalers become less important, partly because you have 10x the hardware for the same price, partly because everything runs at 2x the speed anyway, and partly because you have a fixed pool of hardware. This makes the whole system more stable and easier to reason about.

- No more sweating the S3 costs. Put a 15TB NVMe drive in each server and run your own MinIO/Garage cluster (alongside your other workloads). We're doing about 20GiB/s sustained on a 10-node cluster, 50k API calls per second (on S3 that is $20-$250 _per second_ in API calls!).

- You get the same bill every month.

- UPDATE: more benefits – cheap fast storage, run huge PostgreSQL instances at minimal cost, less engineering time spent working around hardware limitations and cloud vagaries.

And, if you choose to invest in the above, it all costs 10x less than AWS.

Pitch: If you don't want to do this yourself, then we'll do it for you for half the price of AWS (and we'll be your DevOps team too):

https://lithus.eu

Email: adam@ above domain

torginus

an hour ago

Yup, I hope to god we are moving past the 'everything's fast if you have enough machines' and 'money is not real' era of software development.

I remember the point in my career when I moved from a cranky old .NET company, where we handled millions of users from a single cabinet's worth of beefy servers, to a cloud-based shop where we used every cloud buzzword tech under the sun (but mainly everything was containerized Node microservices).

I shudder thinking back to the eldritch horrors I saw on the cloud billing side, and the funny thing is, we were constantly fighting performance problems.

bombcar

2 minutes ago

My conspiracy theory is that "cloud scaling" was entirely driven by people who grew up watching sites get slashdotted and thought it was the absolute most important thing in the world that you can quickly scale up to infinity billion requests/second.

rightbyte

4 hours ago

What is old is new again.

My employer is so conservative and slow that they are forerunning this Local Cloud Edge Our Basement thing by just not doing anything.

darkwater

11 minutes ago

> What is old is new again.

I think there is a generational part as well. Those of us now deep in our 40s or 50s grew up professionally in a self-hosted world, and some of us are now in decision-making positions, so we don't necessarily have to take the cloud pill anymore :)

Half-joking, half-serious.

radu_floricica

3 hours ago

> What is old is new again.

Over the years I tried occasionally to look into cloud, but it never made sense. A lot of complexity and significantly higher cost, for very low performance and a promise of "scalability". You virtually never need scalability so fast that you don't have time to add another server – and at bare-metal costs, you're usually about a year ahead of the curve anyway.

hibikir

an hour ago

A nimble enough company doesn't need it, but I've had 6 months of lead time to request one extra server in an in-house data center due to sheer organizational failure. The big selling point of the cloud really was that one didn't have to deal with the division lording over the data center, or have any and all access – even logging in – gated by a priesthood who knew less Unix than the programmers.

I've been in multiple cloud migrations, and it was always solving political problems that were completely self-inflicted. The decision was always reasonable if you looked just at the people in the org having to decide between the internal process and the cloud bill. But I have little doubt that if there were any goal alignment between the people managing the servers and those using them, most of those migrations would not have happened.

mgkimsal

21 minutes ago

I've been in projects where they're 'on the cloud' to be 'scalable', but I had to estimate my CPU needs up front for a year to get that in the budget, and there wasn't any defined process for "hey, we're growing more than we assumed – we need a second server, or more space, or faster CPUs, etc." Everything that 'cloud' is supposed to allow for – but... that's not budgeted for; we'll need days of meetings to determine where the money for this 'upgrade' is coming from. Meanwhile our meetings are interrupted by notices from teams that "things are really slow/broken"...

0cf8612b2e1e

9 minutes ago

The management overhead in requesting new cloud resources is now here. Multiple rounds of discussion and TPS reports to spin up new services that could be a one click deploy.

The bureaucracy will always find a way.

AtlasBarfed

30 minutes ago

Yeah, clouds are such a huge improvement over what was basically an industry-standard practice of saying: oh, you want a server? Fill out this 20-page form and we'll get you your server in 6 to 12 months.

But we don't really need one-minute response times from the cloud. So something like Hetzner may be just all right: we'll get it to you within an hour. It's still light years ahead of what we used to have.

And if it improves the entire management and cost side, with bare-metal or near-bare-metal performance on the provider side, then that is all good.

And this doesn't even address the fact that, yeah, AWS has a lot of hidden costs, but a lot of those managed data center outsourcing contracts where you were subjected to those lead times for new servers... really weren't much cheaper than AWS back in the day.

odie5533

2 hours ago

Complexity? I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I cannot imagine it's easier than doing it in AWS, which is only a few clicks, and I don't have to worry about OS upgrades and patches. Or a highly available load balancer with infinite scale.

codegeek

2 hours ago

This is how the cloud companies keep you hooked. I am not against them, of course, but the notion that no one can self-host in production because "it is too complex" is something we have been fed over the last 10-15 years. Deploying a production DB on a dedicated server is not that hard. It is about the fact that people now think that unless they use the cloud, they are amateurs. It is sad.

speleding

2 hours ago

I agree that running servers on-prem does not need to be hard in general, but I disagree when it comes to production databases.

I've done on-prem highly available MySQL for years, and getting the whole master/slave thing to go just right during server upgrades was really challenging. On AWS, upgrading MySQL server ("Aurora") really is just a few clicks. It can even do blue/green deployment for you, where you temporarily get the whole setup replicated and in sync, so you can verify that everything went OK before switching over. Disaster recovery (regular backups off-site & the ability to restore quickly) is also hard to get right if you have to do it yourself.

klooney

18 minutes ago

It's really hard to do blue/green on prem with giant expensive database servers. Maybe if you're super big and you can amortize them over multiple teams, but most shops aren't and can't. The cloud is great.

benjiro

5 minutes ago

> I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks

I have set up AWS Postgres and Redis, and I know it's more than a few clicks. There is simply basic information that you need to link between services; it does not matter whether it's cloud or hardware, you still need to do the same steps, be it from the CLI or a web interface.

And frankly, these days with LLMs, there's no excuse anymore. You can literally ask an LLM to do the steps and explain them to you, and you're off to the races.

> I don't have to worry about OS upgrades and patches

Single command and reboot...

> Or a highly available load balancer with infinite scale.

Unless you're Google, overrated...

You can literally rent a load balancer from places like Hetzner for 10 bucks, and if you're old-fashioned, you can even do DNS balancing.

Or you simply rent a server with 10x the performance of what Amazon gives you (for the same price or less), and you do not need a load balancer. I mean, for 200 bucks you can rent a 48-core/96-thread server at Hetzner... Who needs a load balancer again? You can do millions of requests on a single machine.
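Old-fashioned DNS balancing amounts to publishing several A records for one hostname and letting clients spread across them. A minimal sketch of that round-robin idea in Python – the backend addresses are hypothetical, standing in for the published records:

```python
from itertools import cycle

# Hypothetical backend addresses -- with real DNS balancing these would be
# multiple A records published for the same hostname.
BACKENDS = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]

def round_robin(addresses):
    """Yield addresses in rotation, the way resolvers rotate A records."""
    return cycle(addresses)

rr = round_robin(BACKENDS)
picks = [next(rr) for _ in range(4)]
print(picks)  # wraps back to the first address after the third
```

No health checks, no weighting – which is exactly the trade-off DNS round-robin makes versus a real load balancer.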

AznHisoka

an hour ago

As a self-hosting fan, I can't even fathom how hard it would be to even get started running a Postgres or Redis cluster on AWS.

Like, where do I go? Do I search for Postgres? If so, where? Does the IP of my cluster change? If so, how do I make it static? Also, can non-AWS servers connect to it? No? Then how do I open up the firewall and allow it? And what happens if it uses too much resources? Does it shut down by itself? What if I wanna fine-tune a config parameter? Do I ssh into it? Can I edit it in the UI?

Meanwhile, in all that time spent finding out, I could ssh into a server, code and run a simple bash script to download, compile, and run. Then another script to replicate. And I can check the logs, change any config parameter, restart, etc. No black box to debug if shit hits the fan.

infecto

an hour ago

This smells like "Dropbox is just rsync". No skin in the game – I think there are pros and cons to each – but a Postgres cluster can be as easy as a couple of clicks or an entry in a provisioning script. I don't believe you would be able to architect the same setup with a simple single-server ssh and a simple bash script. Unless you already wrote a bash script that magically provisions the cluster across various machines.

cortesoft

12 minutes ago

Your comment seems much more in the vein of "I already learned how to do it this way, and I would have to learn something new to do it the other way".

Which is of course true, but it is true for all things. Provisioning a cluster in AWS takes a bit of research and learning, but so did learning how to set it up locally. I think most people who know how to do both will agree it is simpler to learn how to use the AWS version than learning how to self host it.

nkozyra

an hour ago

Having lived in both worlds: there are services where, yeah, host it yourself. But having done DBs on-prem/on-metal, dedicated hosting, and cloud, databases are the one thing I'm happy to overpay for.

The things you describe involve a small learning curve, each different for each cloud environment, but then you never have to think about it again. You don't have to worry about downtime (if you set it up right), running a bash script ... literally nothing else has to be done.

Am I overpaying for Postgres compared to the alternatives? Hell yeah. Has it paid off? 100%, would never want to go back.

Volundr

an hour ago

> Do i search for Postgres?

Yes. In your AWS console, right after logging in. And pretty much all of your other setup and config questions are answered by just filling out the web form right there. No sshing to change parameters; they are all available right there.

> And what happens if it uses too much resources?

It can't. You've chosen how much in resources (CPU/memory/disk) to give it. Runaway cloud costs are the bill-by-usage stuff like Redshift, S3, Lambda, etc.

I'm a strong advocate for self (for some value of self) hosting over cloud, but you're making the cloud out to be far more difficult than it is.

pavel_lishin

an hour ago

> As a self hosting fan, i cant even fathom how hard it would be to even get started running a Postgres or redis cluster on AWS. Like, where do I go? Do i search for Postgres? If so where?

Anything you don't know how to do - or haven't even searched for - either sounds incredibly complex, or incredibly simple.

trenchpilgrim

an hour ago

A fun one in the cloud is "when I upgrade to a new version of Postgres, how long is the downtime and what happens to my indexes?"

mschuster91

26 minutes ago

For AWS RDS, no big deal. Bare metal or Docker? Oh now THAT is a world of pain.

Seriously I despise PostgreSQL in particular in how fucking annoying it is to upgrade.

mschuster91

27 minutes ago

Actually... for Postgres specifically, it's less than 5 minutes to do so in AWS and you get replication, disaster recovery and basic monitoring all included.

I hated having to deal with PostgreSQL on bare metal.

To answer your questions should someone ask these as well and wish answers:

> Does the IP of my cluster change? If so how to make it static?

Use the DNS entry that AWS gives you as the "endpoint", done. I think you can pin a stable Elastic IP to RDS as well if you wish to expose your RDS DB to the Internet, although I really have no idea why one would want that, given the potential security issues.

> Also can non-aws servers connect to it? No?

You can expose it to the Internet in the creation web UI. I think the default the assistant uses is to open it to 0.0.0.0/0, but the last time I did that was many years ago, so I hope AWS asks you about what you want these days.

>Then how to open up the firewall and allow it?

If the above does not, create a Security Group, assign the RDS server to that Security Group and create an Ingress rule that either only allows specific CIDRs or a blanket 0.0.0.0/0.

> And what happens if it uses too much resources? Does it shutdown by itself?

It just gets dog slow if your I/O quota is exhausted, and it goes into an error state when the disk is full. Expand your disk quota and the RDS database becomes accessible again.

> What if i wanna fine tune a config parameter? Do I ssh into it? Can i edit it in the UI?

No SSH at all, not even for manually unfucking something; for that you need the assistance of AWS support – but in about six years I never had a database FUBAR itself.

As for config parameters, there's a UI for this called "parameter/option groups"; you can set almost all config parameters there, and you can use these as templates for other servers you need as well.

wahnfrieden

an hour ago

It is not as simple as you describe to set up HA multi-region Postgres

If you don't care about HA, then sure everything becomes easy! Until you have a disaster to recover and realize that maybe you do care about HA. Or until you have an enterprise customer or compliance requirement that needs to understand your DR and continuity plans.

Yugabyte is the closest I've seen to achieving that simplicity with self-hosted multi-region and HA Postgres, and it is still quite a bit more involved than the steps you describe, and definitely more work than paying for their AWS service. (I mention it instead of Aurora because Aurora is proprietary, so there's no self-host process to compare against directly.)

whstl

an hour ago

If you are talking about RDS and ElastiCache, it's definitely NOT a few clicks if you want it secure and production-ready, according to AWS itself in its docs and training.

And before someone says Lightsail: it is not meant for high availability/infinite scale.

binary132

29 minutes ago

If you don’t find AWS complicated you really haven’t used AWS.

lelanthran

an hour ago

> I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks

It's "only a few clicks" after you have spent a significant amount of time learning AWS.

trenchpilgrim

2 hours ago

If you were personally paying the bill, you'd probably choose the self host on cost alone. Deploying a DB with HA and offsite backups is not hard at all.
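For a flavour of how small the moving parts can be: offsite backups are mostly date-stamped naming plus pruning. A minimal sketch under stated assumptions – the `pg_dump`/`rclone` invocations and the `appdb` name are hypothetical placeholders, and this is the rotation logic only, not a full HA setup:

```python
from datetime import date

def backup_name(db: str, day: date) -> str:
    # Date-stamped name, so lexical sort order equals chronological order.
    return f"{db}-{day.isoformat()}.dump"

def prune(names: list[str], keep: int) -> list[str]:
    # Return the backups to delete, keeping the `keep` most recent.
    return sorted(names)[:-keep] if len(names) > keep else []

# Hypothetical usage: dump, ship offsite, prune old copies.
# subprocess.run(["pg_dump", "-Fc", "appdb",
#                 "-f", backup_name("appdb", date.today())], check=True)
# subprocess.run(["rclone", "copy", backup_name("appdb", date.today()),
#                 "offsite:backups/"], check=True)

print(prune(["appdb-2024-01-01.dump", "appdb-2024-01-02.dump",
             "appdb-2024-01-03.dump"], keep=2))  # -> ['appdb-2024-01-01.dump']
```

Cron this daily and the "offsite backups" box is ticked; HA replication is a separate (and admittedly bigger) piece.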

fun444555

2 hours ago

I have done many Postgres deploys on bare metal. The IOPS and storage space saved (ZFS compression, since Postgres's own compression is meh) are huge. I regularly use hosted DBs, but largely for toy DBs in GBs, not TBs.

Anyway, it is not hard, and controlling upgrades saves so much time. Having a client's DB force-upgraded when there is no budget for it sucks.

Anyway, I encourage you to learn/try it when you have the opportunity.

naasking

2 hours ago

> I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks and I don't have to worry about OS upgrades and patches

Last I checked, Stack Overflow and all of the Stack Exchange sites were hosted on a single server. The people who actually need to handle more traffic than that are in the 0.1% category, so I question your implicit assumption that you actually need a Postgres and Redis cluster, or that this represents any kind of typical need.

trenchpilgrim

an hour ago

SO was hosted on a single rack last I checked, not a single box. At the time they had an MS SQL cluster.

Also, databases can easily see a ton of internal traffic. Think internal logistics/operations/analytics. Even a medium size company can have a huge amount of data, such as tracking every item purchased and sold for a retail chain.

kitd

2 hours ago

People are usually the biggest cost in any organisation. If you can run all your systems without the sysadmins & netadmins required to keep it all upright (especially at expensive times like weekends or the run-up to Black Friday/Xmas), you can save yourself a lot more than the extra it'll cost to get a cloud provider to do it all for you.

ecshafer

2 hours ago

Every large organization that is all in on cloud I have worked at has several teams doing cloud work exclusively (CICD, Devops, SRE, etc), but every individual team is spending significant amounts of their time doing cloud development on top of that work.

rcxdude

2 hours ago

This. There's a lot of talk of 'oh you will spend so much time managing your own hardware' when I've found in practice it's much less time than wrangling the cloud infrastructure. (Especially since the alternatives are usually still a hosting provider that mean you don't have to physically touch the hardware at all, though frankly that's often also an overblown amount of time. The building/internet/cooling is what costs money but there's already a wide array of co-location companies set up to provide exactly that)

epistasis

an hour ago

I think you are very right, and to be specific: IAM roles, connecting security groups, terraform plan/apply cycles, running Atlantis through GitHub – all that takes tremendous amounts of time and requires understanding a very large set of technologies on top of the basic networking/security/Postgres knowledge.

ecshafer

2 hours ago

The cost to run data centers for a large company that is past the co-location phase – I am not sure where those calculations come out. But yeah, in my experience, running even a fairly large number of bare-metal *nix servers in colocation facilities is really not that time-consuming.

chatmasta

2 hours ago

I can’t believe this cloud propaganda remains so pervasive. You’re just paying DevOps and “cloud architects” instead.

codegeek

2 hours ago

Exactly. It's sad that we have been brainwashed by the cloud propaganda long enough now. Everyone and their mother thinks that to set up anything in production you need cloud, otherwise it is amateurish. Sad.

Ekaros

16 minutes ago

Wouldn't you want someone watching over cloud infra at those times too? So maybe slightly fewer people, but you still need some of them ready.

mjr00

2 hours ago

Yeah I always just kinda laugh at these comparisons, because it's usually coming from tech people who don't appreciate how much more valuable people's time is than raw opex. It's like saying, you know it's really dumb that we spend $4000 on Macbooks for everyone, we could just make everyone use Linux desktops and save a ton of money.

wredcoll

40 minutes ago

If "cloud" took zero time, then sure.

It actually takes a lot of time.

mjr00

19 minutes ago

"It's actually really easy to set up Postgres with high availability and multi-region backups and pump logs to a central log source (which is also self-hosted)" is more or less equivalent to "it's actually really easy to set up Linux and use it as a desktop"

In fact, I'd wager a lot more people have used Linux than have set up a properly redundant SQL database.

grim_io

an hour ago

What is this?!

You are self-managing expensive dedicated hardware in the form of MacBooks, instead of renting Azure Windows VMs?!

Shame!

lozf

41 minutes ago

Don't be silly – the MacBook Pros are just used to RDP into the Azure Windows VMs ;)

Arch-TK

2 hours ago

What is more likely to fail? The hardware managed by Hetzner or your product?

I'm not saying that you won't experience hardware failures, I am just saying that you also need to remember that if you want your product to keep working over the weekend then you must have someone ready to fix it over the weekend.

grim_io

an hour ago

Cloud providers and even Cloudflare go down regularly. Relax.

fwip

38 minutes ago

Sure - but when AWS goes down, Amazon fixes it, even on the weekends. If you self-host, you need to pay a person to be on call to fix it.

rypskar

16 minutes ago

Not only that. When your self-hosted setup goes down, your customers complain that you are down. When AWS goes down, your customers complain that the internet is down.

CursedSilicon

21 minutes ago

AWS doesn't have to pay people (LOTS OF PEOPLE) to keep things running over the weekends?

And they aren't...just passing those costs on to their customers?

wredcoll

37 minutes ago

I mean, yes, but also I get "3 nines" uptime by running a website on a box connected to my ISP in my house. (It would easily be 4 or 5 nines if I also had a stable power grid...)

There are a lot, a lot of websites where downtime just... doesn't matter. Yes, it adds up eventually, but if you go to Twitter and it's down again, you just come back later.
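For reference, the "nines" translate into concrete downtime budgets, and the arithmetic is a one-liner:

```python
def downtime_per_year_hours(nines: int) -> float:
    """Allowed downtime per year, in hours, at a given number of nines."""
    unavailability = 10 ** -nines        # e.g. 3 nines -> 0.1% downtime
    return unavailability * 365 * 24

for n in (3, 4, 5):
    print(f"{n} nines: {downtime_per_year_hours(n):.2f} h/year")
# 3 nines -> 8.76 h/year, 4 nines -> 0.88, 5 nines -> 0.09
```

So a home box that's down less than about nine hours a year really does hit three nines.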

HPsquared

2 hours ago

That's how they can get away with such seemingly high prices.

exe34

2 hours ago

Except you now have your developers chasing their own tails figuring out how to fit the square peg into the round hole without bankrupting the company. Cloud didn't save time, it just replaced the wheels for the hamsters.

binary132

30 minutes ago

It’s kinda good if your requirements might quadruple or disappear tonight or tomorrow, but you should always have a plan to port to reserved / purchased capacity.

Lalabadie

2 hours ago

I'm a designer with enough front-end knowledge to lead front-end dev when needed.

To someone like me, especially on solo projects, using infra that effectively isolates me from the concerns (and risks) of lower-level devops absolutely makes sense. But I welcome the choice because of my level of competence.

The trap is scaling an org by using that same shortcut until you're bound to it by built-up complexity or a persistent lack of skill/concern in the team. Then you're never really equipped to reevaluate the decision.

f1shy

an hour ago

If everything is properly done, it should be next to trivial to add a server. When I was working on that, we had a written procedure that, when followed strictly, would take less than an hour.

ep103

3 hours ago

The benefit of cloud has always been that it allows the company to trade capex for opex. From an engineering perspective, it trades scalability for complexity, but this is a secondary effect compared to the former tradeoff.

PeterStuer

2 hours ago

"trade capex for opex"

This has nothing to do with cloud. Businesses have forever turned IT expenses from capex to opex. We called this "operating leases".

et1337

2 hours ago

I’ve heard this a lot, but… doesn’t Hetzner do the same?

radiator

2 hours ago

Hetzner is also a cloud. You avoid buying hardware, you rent it instead. You can rent either VMs or dedicated servers, but in both cases you own nothing.

throwaway894345

2 hours ago

If you’re just running some CRUD web service, then you could certainly find significantly cheaper hosting in a data center or similar, but also if that’s the case your hosting bill is probably a very small cost either way (relative to other business expenses).

> You virtually never need scalability so fast that you don't have time to add another server

What do you mean by “time to add another server?” Are you thinking about a minute or two to spin up some on-demand server using an API? Or are you talking about multiple business days to physically procure and install another server?

The former is fine, but I don’t know of any provider that gives me bare metal machines with beefy GPUs in a matter of minutes for low cost.

Aissen

3 hours ago

As an infrastructure engineer (amongst other things), hard disagree here. I realize you might be joking, but a bit of context here: a big chunk of the success of Cloud in more traditional organizations is the agility that comes with it: (almost) no need to ask permission to anyone, ownership of your resources, etc. There is no reason that baremetal shouldn't provide the same customer-oriented service, at least for the low-level IaaS, give-me-a-VM-now needs. I'd even argue this type of self-service (and accounting!) should be done by any team providing internal software services.

rcxdude

2 hours ago

I think this was also only a temporary situation, caused by the IT departments in these organisations being essentially bypassed. Once it became a big important thing, they basically started to take control of it and you get the same problems (in fact potentially more so, because the expense means there's more pressure to cut down resources).

abujazar

3 hours ago

The permissions and ownership part has little to do with the infrastructure – in fact I've often found it more difficult to get permissions and access to resources in cloud-heavy orgs.

ambicapter

2 hours ago

I'm at a startup and I don't have access to the terraform repo :( and console is locked down ofc.

michaelt

3 hours ago

"No need to ask permission" and "You get the same bill every month" kinda work against one another here.

Aissen

2 hours ago

I should have been more precise… Many sub-orgs have budget freedom to do their job, and not having to go through a central authority to get hardware is often a feature. Hence why Cloud works so well in non-regulatory heavy traditional orgs: budget owner can just accept the risks and let the people do the work. My comment was more of a warning to would-be infrastructure people: they absolutely need to be customer-focused, and build automation from the start.

blibble

2 hours ago

don't underestimate the ability of traditional organisations to build that process around cloud

you keep the usual BS to get hardware, plus now it's 10x more expensive and requires 5x the engineering!

kccqzy

2 hours ago

That's a cultural issue. Initially at my workplace, people needed to ask permission to deploy their code. The team approving the deployments got sick of it and built a self-service deployment tool with security controls built in, and now deployment is easy. All that matters is a culture of trusting fellow employees, a culture of automation, and a culture of valuing internal users.

Aissen

2 hours ago

Agreed, that's exactly what I was aiming at. I'm not saying that it's the only advantage of Cloud, but that orgs with a dysfunctional resource-access culture were a fertile ground for cloud deployments.

Basically: some manager gets fed up with weeks/months of delays for bare-metal or VM access -> takes risks and gets cloud services -> successful projects in less time -> gets promoted -> more cloud in the org.

alexchantavy

3 hours ago

> no need to ask permission to anyone, ownership of your resources, etc

In a large enough org that experience doesn’t happen though - you have to go through and understand how the org’s infra-as-code repo works, where to make your change, and get approval for that.

misiek08

3 hours ago

You also need to get budget, a few months earlier, sometimes even legal approval. Then you have security rules, „preferred” services, and the list goes on...

rightbyte

2 hours ago

Well, yeah – I frame it as a joke, but I do mean it.

I don't argue there aren't special cases for using fancy cloud vendors, though. But classical datacentre rentals almost always get you there for less.

Personally I like being able to touch and hear the computers I use.

Damogran6

2 hours ago

As a career security guy, I've lost count of the battles I've lost in the race to the cloud...now it's 'we have to up the budget $250k a year to cover costs' and you just shrug.

The cost for your first on-prem datacenter server is pretty steep...the cost for the second one? Not so much.

marcosdumay

an hour ago

> What is old is new again.

It's not, really. It just happens that when there is a huge bullshit hype out there, people who fall for it regret it and come back to normal after a while.

Better things are still better. And this one was clearly only better for a few use-cases that most people shouldn't care about since the beginning.

kccqzy

2 hours ago

My employer also resisted using cloud compute and sent staff explanations why building our own data centers is a good thing.

rixed

19 minutes ago

I do not disagree, but just for the record, that's not what the article is about. They migrated to Hetzner cloud offering.

If they had migrated to a bare metal solution they would certainly have enjoyed an even larger increase in perf and decrease in costs, but it makes sense that they opted for the cloud offering instead given where they started from.

realitysballs

3 hours ago

Ya, but then you need to pay for a team to maintain the network and continually secure, monitor, update, and patch the server. The salaries of those professionals really only make sense for a certain-sized organization.

I still think small-to-midsized orgs may be better off in the cloud for security/operations cost optimization.

esskay

2 hours ago

You still need those same people even if you're running on a bunch of EC2 and RDS instances, they aren't magically 'safer'.

lnenad

2 hours ago

I mean, by definition, yes they are. RDS is locked down by default. Also, if you're using ECS/Fargate (so not EC2), as the person writing the article does, it's also pretty much locked down outside of your app manifest definitions. And your infra management/cost is minimal compared to running k8s and bare metal.

abenga

2 hours ago

This implies cloud infrastructure experts are cheaper than bare metal Linux/networking/etc experts. Probably in most smaller organizations, you have the people writing the code manage the infra, so it's an "invisible cost", but ime, it's easy to outgrow this and need someone to keep cloud costs in check within a couple of years, assuming you are growing as fast as an average start-up.

adamcharnock

an hour ago

I very much understand this, and that is why we do what we do. Lots of companies feel exactly as you say. I.e. Sure it is cheaper and 'better', but we'll pay for it in salaries and additional incurred risk (what happens if we invest all this time and fail to successfully migrate?)

This is why we decided to bundle engineering time with the infrastructure. We'll maintain the cluster as you say, and with the time left over (the majority) we'll help you with all your other DevOps needs too (CI/CD pipelines, containerising software, deploying HA Valkey, etc). And even after all that, it still costs less than AWS.

Edit: We also take on risk with the migration – our billing cycle doesn't start until we complete the migration. This keeps our incentives aligned.

dorkypunk

an hour ago

Then you have to replace those professionals with even more specialized and expensive professionals in order to be able to deploy anything.

DisabledVeteran

2 hours ago

That used to be the case until recently. As much as neither I nor you want to admit it -- the truth is ChatGPT can handle 99% of what you would pay for "a team to maintain network and continually secure and monitor the server and update/patch." In fact, ChatGPT surpasses them, as it is all-encompassing. Any company can now simply pay for OpenAI's services and save the majority of the money they would have spent on the "salaries of those professionals." BTW, ChatGPT Pro is only $200 a month ... who do you think they would rather pay?

tayo42

2 hours ago

Do you have a link to some proof that ChatGPT is patching servers running databases with no downtime or data loss?

Yiin

2 hours ago

I think the argument is that a dev with some vibe coding can successfully set up servers that are already good enough, for 10x less cost and 95% reliability

rightbyte

2 hours ago

Aren't most vulnerabilities in your own server software or configs anyway?

Thicken2320

3 hours ago

Using the S3 API is like chopping onions, the more you do it, the faster you start crying.

scns

3 hours ago

Less to no crying when you use a sharp knife. Japanese chefs say: no wonder you are crying, you squash them.

Esophagus4

3 hours ago

Haha!

My only “yes, but…” is that this:

> 50k API calls per second (on S3 that is $20-$250 _per second_ on API calls!).

kind of smells like abuse of S3. Without knowing the use case, maybe a different AWS service is a better answer?

Not advocating for AWS, just saying that maybe this is the wrong comparison.

Though I do want to learn about Hetzner.

wredcoll

32 minutes ago

You're (probably) not wrong about the abuse thing, but it sure is nice to just not care about that when you have fixed hardware. I find trying to guess which of the 200 aws services is the cheapest kinda stressful.

wiether

2 hours ago

They conveniently provide no detail about the use case, so it's hard to tell

But, yeah, there's certainly a solution that provides better performance for less, using other settings/services on AWS

adamcharnock

2 hours ago

We're hoping to write a case study down the road that will give more detail. But the short version is that not all parts of the client's organisation have aligned skills/incentives. So sometimes code is deployed that makes, shall we say, 'atypical use' of the resources available.

In those cases, it is great to a) not get a shocking bill, and b) be able to somewhat support this atypical use until it can be remedied.

wiether

an hour ago

Thank you for the reply

I'm honestly quite interested to learn more about the use case that required those 50k API calls!

I've seen a few cases of using S3 for things it was never intended for, but nothing close to this scale

nikodunk

21 minutes ago

If you’re big, invest in this. If you’re small, slap Dokploy/Coolify on it.

lazyfanatic42

an hour ago

haha this reminds me of when I used to manage Solaris system consisting of 2 servers. Sparc T7, 1 box in one state and 1 box in another. No load balancer.

Thousands and thousands of users depending on that hardware.

Extremely robust hardware.

epistasis

an hour ago

What do you recommend for configuration management? I've had a fairly good experience with Ansible, but that was a long time ago... anything new in that space?

dijit

an hour ago

"new", I'm not sure, but I deployed 2,500 physical Windows machines with SaltStack and it worked pretty good.

it also handled some databases and webservers on FreeBSD and Windows, I considered it better than Ansible.

chubot

2 hours ago

Does anyone have experience with say Linode and Digital Ocean performance versus AWS and GCE?

They still use VMs, but as far as I know they have simple reserved instances, not “cloud”-like weather?

Is the performance better and more predictable on large VPSes?

(edit: I guess a big difference is that a VPS can have local NVMe that is persistent, whereas EC2 local disk is ephemeral?)

pton_xd

2 hours ago

I can't speak to Linode but in my experience the Digital Ocean VM performance is quite bad compared to bare metal offerings like Hetzner, OVH, etc. It's basically comparable to AWS, only a bit cheaper.

matt-p

15 minutes ago

It's essentially the same product, but you do get lower disk latency. Best performance is always going to be a dedicated server which in the US seem to start around $80-100/month (just checking on serversearcher.com), DO and so on do provide a "dedicated cpu" product if that's too much.

inapis

2 hours ago

No. DO can be equally noisy, though to be fair I've only tried their regular instances and not their premium AMD/Intel ones.

cess11

2 hours ago

I've left a job because it was impossible to explain this to an ex-Googler on the board who just couldn't stop himself from trying to be a CTO and clownmaker at the company.

The rough part was that we had made hardware investments and spent almost a year setting up the system for HA and immediate (i.e. 'low-hanging fruit') performance tuning and should have turned to architectural and more subtle improvements. This was a huge achievement for a very small team that had neither the use nor the wish to go full clown.

exe34

2 hours ago

I love that you're not just preaching - you're offering the service at a lower cost. (I'm not affiliated and don't claim anything about their ability/reliability).

jnsaff2

3 hours ago

There is a graph database which does disk I/O for database startup, backup, and restore as single-threaded, sequential, 8 KB operations.

On EBS it does at most 200 MB/s of disk I/O, simply because EBS operation latency, even on io2, is about 0.5 ms. The disk itself can go much faster - disk benchmarks easily do multi-GB/s on nodes that have enough EBS throughput.

On instance-local SSD on the same EC2 instance it will happily saturate whatever the instance can do (~2 GB/s in my case).
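The arithmetic behind this is simple: a strictly serial I/O loop is capped at request size divided by per-request latency, regardless of device bandwidth. A rough model (numbers are illustrative; OS readahead coalescing requests into larger effective sizes is why real-world figures land higher than the naive bound):

```python
# Back-of-envelope: a strictly serial I/O loop can never exceed
# request_size / per_request_latency, no matter how fast the device is.
def serial_throughput_mb_s(request_kib: float, latency_ms: float) -> float:
    """Upper bound in MB/s for one-request-at-a-time I/O."""
    return (request_kib / 1024.0) / (latency_ms / 1000.0)

# 8 KiB requests over EBS-like ~0.5 ms round trips: a hard ceiling
# in the tens of MB/s, regardless of the volume's rated bandwidth.
print(f"EBS-ish:  {serial_throughput_mb_s(8, 0.5):.1f} MB/s ceiling")
# The same loop against local-NVMe-like ~0.05 ms latency gets a 10x
# higher ceiling; readahead coalescing raises it further still.
print(f"NVMe-ish: {serial_throughput_mb_s(8, 0.05):.1f} MB/s ceiling")
```

Adding concurrency (or bigger requests) is the only way around the bound, which is exactly what single-threaded startup/backup code can't do.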

anonzzzies

3 hours ago

What graph db is that?

jnsaff2

3 hours ago

neo4j

the_arun

2 hours ago

What is the cost of running Neo4j on aws vs using aws Neptune? Related to disk I/o?

api

an hour ago

How do you deprogram your devs and ops people from the learned helplessness of cloud native ideology?

I've found that it's almost impossible to even hire people who aren't terrified of the idea of self-hosting. This is deeply bizarre for someone who installed Linux from floppy disks in 1994, but most modern devs have fully swallowed the idea that cloud handles things for them that mere mortals cannot handle.

This, in turn, is a big reason why companies use cloud in spite of the insane markup: it's hard to staff for anything else. Cloud has utterly dominated the developer and IT mindset.

CursedSilicon

14 minutes ago

>I've found that it's almost impossible to even hire people who aren't terrified of the idea of self-hosting

Are y'all hiring? [1]

I did 15 months at AWS and consider it the worst career move of my life. I much prefer working with self-hosting where I can actually optimize the entire hardware stack I'm working with. Infrastructure is fun to tinker with. Cloud hosting feels like a miserable black box that you dump your software into and "hope"

[1] https://cursedsilicon.net/resume.pdf

awestroke

41 minutes ago

So you'd rather self-host a database as well? How do you prevent data loss? Do you run a whole database cluster in multiple physical locations with automatic failover? Who will spend time monitoring replication lag? Where do you store backups? Who is responsible for tuning performance settings?

7bit

28 minutes ago

I really don't understand this comment. The cloud doesn't protect you from data loss or provide any of the things you named.

baby_souffle

18 minutes ago

Yes it does? For a fraction of a dollar per hour, AWS will give me a URI that I can connect to. On the other end is a Postgres instance that already has authentication and backups handled for me. It's also backed by a storage layer that is far more robust than anything I can put together in my rented cage with my corporate budget.

theideaofcoffee

22 minutes ago

Hosting a database is no different from self-hosting any other service. This viewpoint is what the cloud hath wrought: an atrophying of the most basic operational skills, as if running these magic services were only achievable by the hyperscalers who told us they are the only ones capable.

The answers to all of your questions are a hard: it depends. What are your engineering objectives? What are your business requirements? Uptime? Performance? Cost constraints and considerations? The cloud doesn't take away the need to answer these questions, it's just that self-hosting actually requires you to know what you are doing versus clicking a button and just hoping for the best.
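To make "it depends" concrete on the backup question: the self-hosted baseline is usually a scheduled dump (or WAL archiving via pgBackRest for PITR) shipped off-box, plus a retention policy. A minimal sketch - the path, database name, and retention count are all illustrative:

```python
import subprocess
from datetime import datetime, timezone
from pathlib import Path

BACKUP_DIR = Path("/var/backups/pg")   # illustrative path
DB_NAME = "appdb"                      # illustrative database name
KEEP = 14                              # nightly dumps to retain

def dumps_to_delete(existing: list, keep: int) -> list:
    """Oldest dump filenames beyond the retention window; relies on a
    sortable UTC timestamp embedded in each filename."""
    return sorted(existing)[:-keep] if len(existing) > keep else []

def nightly_backup() -> None:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    target = BACKUP_DIR / f"{DB_NAME}-{stamp}.dump"
    # Custom-format dump (restore with pg_restore). Ship it off-box
    # afterwards (rsync/rclone) -- a copy on the same host is not a backup.
    subprocess.run(
        ["pg_dump", "--format=custom", "--file", str(target), DB_NAME],
        check=True,
    )
    for name in dumps_to_delete([p.name for p in BACKUP_DIR.glob("*.dump")], KEEP):
        (BACKUP_DIR / name).unlink()
```

The cron entry that runs this also answers "who is responsible": whoever owns the crontab. Real setups add off-site copies and, crucially, scheduled restore drills.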

belter

3 hours ago

> If you don't want to do this yourself, then we'll do it for you for half the price of AWS (and we'll be your DevOps team too

You might not realize but you are actually increasing the business case for AWS :-) Also those hardware savings will be eaten away by two days of your hourly bill. I like to look at my project costs across all verticals...

dlisboa

2 hours ago

> Also those hardware savings will be eaten away by two days of your hourly bill

Doubt it. I've personally seen AWS bills in the tens of thousands, he's probably not that costly for a day.

whstl

an hour ago

I don't think I have joined a startup that pays less than 20k/month to AWS or any cloud in almost a decade.

Biggest recent ones were ~200k and ~100k that we managed to lower to ~80k with a couple months of work (but it went back up again after I left).

I fondly remember lowering our Heroku bill from 5k to 2k back in 2016 after a day of work. Management was ecstatic.

theideaofcoffee

20 minutes ago

Same, but in the hundreds of thousands monthly and growing at steady clip, and AWS extending credits worth -millions-, just to keep them there because their margins are so fat and juicy they can afford that insane markup.

That's where the real value lies. Not paying these usurious amounts.

adamcharnock

3 hours ago

I understand the concern for sure. But we don't bill hourly in that way, as one thing our clients really appreciate is predictable costs. The fixed monthly price already includes engineering time to support your team.

lisperforlife

5 hours ago

I think you can get much farther with dedicated servers. I run a couple of nodes on Hetzner. The performance you get from a dedicated machine, even a 3-year-old one from the server auction, is absolutely bonkers and cannot be compared to VMs. The thing is that most server hardware is focused on high-core-count, low-clock-speed processors that optimize for I/O rather than compute, and it is overprovisioned by all cloud providers. Even the disk I/O is crazy: it takes all sorts of shenanigans to make a drive sitting on a NAS emulate a local disk. Most startups do not need the hyper-virtualized, NAS-backed drive. You can go much farther, and much more cost-effectively, with dedicated server rentals from Hetzner. I would love to know if there are any North American (particularly Canadian) companies that can compete with Hetzner's price and quality of service. I know of OVH but I would love to know others in the same space.

ozim

3 hours ago

As mentioned multiple times in other comments and places people think that doing what Google or FB is doing should be what everyone else is doing.

We are running modest operations on a European VPS provider where I work, and whenever we get a new hire (business or technical, doesn't matter) it is like Groundhog Day - I have to explain — WE ALREADY ARE IN THE CLOUD, NO YOU WILL NOT START A "MIGRATING TO THE CLOUD" PROJECT ON MY WATCH SO YOU CAN PAD YOUR CV AND MOVE TO ANOTHER COMPANY TO RUIN THEIR INFRA — or something along those lines, after asking ChatGPT to make the tone friendlier.

PeterStuer

2 hours ago

The number of times I have seen fresh "architects" come in with an architectural proposal for a 10-user internal LoB app that they got from a Meta or Microsoft world-scale B2C service blueprint ...

kccqzy

2 hours ago

> doing what Google or FB is doing

Google doesn't even deploy most of its own code to run on VMs. Containers yes but not VMs.

dijit

an hour ago

Yeah, the irony being Google runs VMs in Containers but not the other way around.

jwr

5 hours ago

I actually benchmarked this and wrote an article several years back, still very much applicable: https://jan.rychter.com/enblog/cloud-server-cpu-performance-...

kees99

4 hours ago

Did you "preheat" during those tests? It is very common for cloud instances to have "burstable" vCPUs. That is - after boot (or long idle), you get decent performance for first few minutes, then performance gradually tanks to a mere fraction of the initial burst.
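One crude way to check for this (a sketch, not a rigorous benchmark): time a fixed CPU kernel over consecutive windows and watch whether the later windows fall off once burst credits drain:

```python
import time

def ops_in_window(seconds: float) -> int:
    """Iterations of a fixed arithmetic kernel completed in `seconds`."""
    deadline = time.perf_counter() + seconds
    n, x = 0, 1.0
    while time.perf_counter() < deadline:
        x = (x * 1.000001) % 1e9   # cheap work the interpreter can't skip
        n += 1
    return n

def sample(windows: int, seconds: float) -> list:
    """ops/window for consecutive windows; a steep late-window drop
    suggests burst-credit exhaustion or noisy neighbors."""
    return [ops_in_window(seconds) for _ in range(windows)]

# Short demo; for a real check run something like sample(30, 10.0)
# (five minutes) on a freshly booted instance.
counts = sample(5, 0.2)
print("ops per window:", counts)
```

On a dedicated box the windows stay flat; on a burstable vCPU the tail can drop to a fraction of the first window.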

fakwandi_priv

3 hours ago

> The total wall clock time for the build was measured. The smaller the better. I always did one build to prime the caches and discarded the first result.

The article is worth the read.

eahm

5 hours ago

I recently rediscovered this website that might help: https://vpspricetracker.com

Too cool to not share, most of the providers listed there have dedicated servers too.

CaptainOfCoit

5 hours ago

Great website, but what a blunder to display the results as "cards" rather than a good old table so you can scan the results rather than having to actually read it. Makes it really hard to quickly find what you're looking for...

Edit: Ironically, that website doesn't have Hetzner in their index.

dizhn

5 hours ago

That is weird indeed. But I bet you are getting Hetzner results indirectly through resellers :) (Yeah I checked one Frankfurt based datacenter named FS1 - probably for Falkenstein. They might be colo or another datacenter there of course)

aantix

an hour ago

What a great site. Thanks for sharing!

chromehearts

4 hours ago

Amazing website, glad to know that I already have a super great offer! But will definitely share this

63stack

2 hours ago

This is an amazing site

ta12653421

5 hours ago

++1

excellent website, thanks.

codethief

5 hours ago

> I would love to know if they are any north-american (particularly canadian) companies that can compete with price and the quality of service like Hetzner

FWIW, Hetzner has two data centers in the US, in case you're just looking for "Hetzner quality but in the US", not for "American/Canadian companies similar to Hetzner".

CaptainOfCoit

4 hours ago

IIRC, Hetzner's dedicated instances are only available in their German and Finnish data centers, not anywhere else sadly :/

joshstrange

4 hours ago

This is correct, they only offer VPS in the US.

atonse

2 hours ago

But are the VPSs also similarly much better performing than AWS?

ccakes

2 hours ago

latitude.sh do bare metal in the US well

GordonS

an hour ago

Yes, but they are vastly more expensive than Hetzner (looks like pricing starts at just under $200/m for 6 cores).

matt-p

2 hours ago

Yeah, no dedicated servers in the US sadly. I'm not aware of anyone who can quite match Hetzner's pricing in the US (but if someone does I'd love to know!). https://www.serversearcher.com throws up Clouvider and Latitude at good pricing but... not Hetzner levels by any means.

MrPowerGamerBR

5 minutes ago

I haven't checked Hetzner's prices in a while, but OVHcloud has dedicated servers and they do have dedicated servers in the US and in Canada (I've been using their dedicated servers for years already and they are pretty dang good)

wongarsu

4 hours ago

> I would love to know if they are any north-american (particularly canadian) companies that can compete with price and the quality of service like Hetzner.

In a thread two days ago https://ioflood.com/ was recommended as US-based alternative

amelius

4 hours ago

But I'm looking more for "compute flood" ...

deaux

4 hours ago

On a similar note, I'm looking for a "Hetzner, but in APAC, particularly East Asia". I've struggled to find good options for any of JP, TW or KR.

b0ner_t0ner

an hour ago

LayerStack is very fast in APAC:

    https://www.layerstack.com/en/dedicated-cloud

citrin_ru

4 hours ago

VMs are a middle ground between AWS and dedicated hardware. With hardware you need to monitor it, report problems/failures to the provider, and make the necessary configuration changes (add/remove nodes to/from a cluster, etc.). If a team is coming from AWS it may have no experience with monitoring/troubleshooting problems caused by imperfect hardware.

torginus

an hour ago

Virtualization has a crazy overhead - when we moved to metal instances in AWS, we gained like 20-25% performance. I thought that since AWS has the smartest folks in the business and Intel & co. have been at this for decades, it'd be a couple percent overhead at most, but no.

matt-p

3 hours ago

Quite a few options on https://serversearcher.com that sell in US/CA.

Clouvider is available in a lot of US DCs: 4GB RAM / 2 CPU / 80GB NVMe and a 10Gb port for like $6 a month.

CaptainOfCoit

5 hours ago

> I know of OVH but I would love to know others in the same space.

When I've needed dedicated servers in the US I've used Vultr in the past, relatively nice pricing, only missing unmetered bandwidth for it to be my go-to. But all those US-specific cases been others paying for it, so hasn't bothered me, compared to personal/community stuff I host at Hetzner and pay for myself.

rapind

2 hours ago

I've been eyeing Vultr for dedicated metal in Canada (Toronto datacenter). How do they measure up to Hetzner? I'm not looking to get the best possible deal, just better value than EC2 (which costs me a fair amount in egress).

micw

4 hours ago

I have seen Hetzner's cloud block storage be quite slow. It soon became a bottleneck on our TimescaleDB databases. Now we're testing netcup.com's "root servers", which are VPSes with dedicated CPU cores and lots of very fast storage.

nik736

3 hours ago

They limit them to 7,500 IOPS, as stated in their docs. It also doesn't scale with size; the limit applies to every volume of any size.

zakki

5 hours ago

Try www.wowrack.com or www.serverstadium.com. (I work for them).

bakugo

5 hours ago

Be warned though that, when renting dedicated servers, there are certain issues you might have to deal with that usually aren't a factor when renting a VPS.

For example, I got a dedicated server from Hetzner earlier this year with a consumer Ryzen CPU that had unstable SIMD (ZFS checksums would randomly fail, and mprime also reported errors). Opened a ticket about it and they basically told me it wasn't an issue because their diagnostics couldn't detect it.
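For anyone wanting to reproduce this kind of diagnosis: mprime/stress-ng/memtest are the real tools, but the underlying technique is just "run a deterministic workload repeatedly and compare results bit-for-bit". A toy pure-Python illustration of the idea (it won't exercise SIMD units the way mprime does):

```python
import hashlib
import struct

def workload_digest(n: int = 50_000, seed: float = 1.2345) -> str:
    """Deterministic floating-point workload; on healthy hardware every
    run produces the identical SHA-256 digest."""
    x = seed
    out = bytearray()
    for i in range(n):
        x = (x * 1.0000001 + i * 1e-9) % 1.0
        out += struct.pack("<d", x)
    return hashlib.sha256(bytes(out)).hexdigest()

# A single flipped bit anywhere in the compute/store path shows up as a
# digest mismatch between otherwise identical runs.
digests = {workload_digest() for _ in range(3)}
print("stable" if len(digests) == 1 else f"UNSTABLE: {len(digests)} distinct results")
```

A transcript of mismatching digests is also exactly the kind of concrete evidence Hetzner support tends to respond to.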

CaptainOfCoit

5 hours ago

Yeah, their support, for better or worse, is really technical and you need to send all the evidence of any faults to convince them. But when I've had random issues happening, I've sent them all the troubleshooting and evidence I came across, and a couple of hours later they had provisioned a new host for me with the same specs.

And based on our different experiences, the quality of care you receive could differ too :)

bakugo

5 hours ago

> and a couple of hours later they had provisioned a new host for me with the same specs.

To be fair, they probably would've done the same for me if I'd pushed the issue further, but after over a week of trying to diagnose the issue and convince them that it wasn't a problem with the hard drives (they said one of the drives was likely faulty and insisted on replacing it and having me resilver the zpool to see if it fixed the issue - spoiler: it didn't), I just gave up, disabled SIMD in ZFS and moved on.

CaptainOfCoit

5 hours ago

> but after over a week of trying to diagnose the issue and convince them that it wasn't a problem

That sucks big time :( In the most recent case I can recall, I successfully got access, noticed weirdness, gathered data and sent an email, and had a new instance within 2-3 hours.

Overall, based on comments here on HN and otherwhere, the quality and speed of support is really uneven.

vanviegen

4 hours ago

> based on comments here on HN and otherwhere, the quality and speed of support is really uneven.

Can you name one tech company that's scaled past the point where the founders are closely involved with support and still has consistently good tech support? I think this is just really hard to get right, as many customers are not as knowledgeable as they think they are.

CaptainOfCoit

3 hours ago

"Consistently" is hard; people's experiences tend to differ with every company out there, even by what country you're currently in. For example, I've always had quick and reasonable replies from Coinbase support, but I know friends who've had the complete opposite experience with Coinbase, so I won't claim they're consistent. But their replies to me have been consistent at least.

Probably the company most people have had any sort of consistency from would be Stripe I think. Of course, there are cases where they haven't been great, but if you ask me for a company with the best tech support, Stripe comes to mind first.

I'm not sure it's active anymore, but there used to be a somewhat hidden support channel in #stripe@freenode back in the day, where a bunch of Stripe developers hung out and helped users in an unofficial capacity. That channel was a godsend more than once.

ta1243

4 hours ago

For self-hosted / colocated kit, I buy refurbed servers from https://www.etb-tech.com/ because I can spec exactly what I want and see how the cost varies, what the delivery time is, etc.

Years ago Broadberry had a similar thing with Supermicro, but not any more. Now you have to talk to a salesperson about how they can rip you off. And then they don't give you what you specced anyway -- I spec 8x8G sticks of RAM, they provide 2x32G, etc.

shrubble

5 hours ago

I have used wholesaleinternet.net and they are centrally located in the USA.

yread

5 hours ago

Ugh, $235 a month for a 4TB SSD?! You can buy one for that price and have some money left over

michalsustr

5 hours ago

Interserver. But I don’t have personal experience (yet)

yread

5 hours ago

I used GTHost in the US. Performance is not bad but you do end up paying more if you need 1gbit/s link.

hshdhdhehd

5 hours ago

It can affect system design. Just chuck it all on one box! And it will be crazy fast.

lossolo

4 hours ago

I've been using dedicated servers for 20 years. Here's my top list:

Hetzner, OVH, Leaseweb, and Scaleway (EU locations only).

I've used other providers as well, but I won't mention them because they were either too small or had issues.

tonyhart7

5 hours ago

Yeah, this is what they do in "high performance" servers - they just use gaming CPUs

pwmtr

5 hours ago

We’ve been seeing the same trend. Lots of teams moving to Hetzner for the price/performance, but then realizing they have to rebuild all the Postgres ops pieces (backups, failover, monitoring, etc.).

We ended up building a managed Postgres that runs directly on Hetzner. Same setup, but with HA, backups, and PITR handled for you. It’s open-source, runs close to the metal, and avoids the egress/I/O gotchas you get on AWS.

If anyone’s curious, here are some notes about our take [1], [2]. Always happy to talk about it if you have any questions.

[1] https://www.ubicloud.com/blog/difference-between-running-pos... [2] https://www.ubicloud.com/use-cases/postgresql

normie3000

4 hours ago

This is one key draw to Big Cloud and especially PaaS and managed SQL for me (and dev teams I advise).

Not having an ops background I am nervous about:

* database backup+restore

* applying security patches on time (at OS and runtime levels)

* other security issues, like making sure access to prod machines is restricted correctly, access is logged, ports are locked down, and abnormal access patterns are detected

* DoS and similar protections are not my responsibility

It feels like picking a popular cloud provider gives a lot of cover for these things - sometimes technically, and otherwise at least politically...

DanielHB

4 hours ago

There must be SaaS services offering managed databases on different providers, like you buy the servers they put the software and host backups for you. Anyone got any tips?

swiftcoder

3 hours ago

to be fair, AWS' database restore support is generally only a small part of the picture - the only option available is to spin up an entirely new DB cluster from the backup, so if your data recovery strategy isn't "roll back all data to before the incident", you have to build out all your own functionality for merging the backup and live data...

matt-p

3 hours ago

I think the "strategy" for most people is to do it manually, or make the decision to just revert wholesale to a particular time.

swiftcoder

3 hours ago

Yeah, and that default strategy tends to become very, very painful the first time you encounter non-trivial database corruption.

For example, one of my employers routinely tested DB restore by wiping an entire table in stage, and then having the on call restore from backup. This is trivial because you know it happened recently, you have low traffic in this instance, and you can cleanly copy over the missing table.

But the last actual production DB incident they had was a subtle data corruption bug that went unnoticed for several weeks - at which point restoring meant a painful merge of tens of thousands of records, involving several related tables.

matt-p

3 hours ago

Yeah, but automating a solution for all possible "one off subtle data corruption bugs" is a lot of energy and effort to be honest.

swiftcoder

2 hours ago

For sure. It's more about having a pipeline for pulling data from multiple sources - rather than spin up a whole new DB cluster, you usually want to pull the data into new tables in your existing DB, so that you can run queries across old & new data simultaneously
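The reconciliation step itself can be sketched as plain record merging - assuming records keyed by id and a known set of corrupted keys (finding those keys, and untangling the related tables, is the genuinely hard part):

```python
def merge_restore(live: dict, backup: dict, corrupted: set):
    """Repair `live` in a copy: corrupted records are rolled back to their
    backup versions; records written since the backup are kept untouched;
    corrupted records with no clean backup copy are flagged for review."""
    repaired = dict(live)
    needs_review = []
    for key in corrupted:
        if key in backup:
            repaired[key] = backup[key]
        else:
            needs_review.append(key)   # created after the backup was taken
    return repaired, sorted(needs_review)

# e.g. rows 1 and 3 corrupted; only row 1 exists in the weeks-old backup
live = {1: "a-corrupt", 2: "b", 3: "c-new"}
backup = {1: "a", 2: "b"}
repaired, review = merge_restore(live, backup, {1, 3})
print(repaired, review)
```

In practice you'd do this in SQL against the backup pulled into side tables, with foreign keys making each per-record decision a cascade rather than a single assignment.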

ozim

4 hours ago

Applying security patches on time is not much of a problem. Patches you need to apply ASAP are rare, and a DB engine should never be publicly accessible anyway; most of the time the exploit is not disclosed publicly, and PoC code for a patched RCE is not available on the day of the patch release.

Most of the time you are good if you follow version updates for major releases as they come: do regression testing, then put them on prod on your own schedule.

Most problems come from not updating at all and sitting on 2- or 3-year-old versions, because that's what automated scanners will be looking for, and after that much time it's much more likely that someone has written and shared exploit code.

ksajadi

4 hours ago

I can attest to that. At Cloud 66 a lot of customers tell us that while the PaaS experience on Hetzner is great, they benefit from our managed DBs the most.

gizzlon

34 minutes ago

What's the "the PaaS experience on Hetzner" ? Link?

baobun

4 hours ago

In the adjacent category of self-managed omakase postgres: https://www.elephant-shed.io/

slig

39 minutes ago

Also, Pigsty [1]. Feels too bloated for my taste, but I'd love to hear any experience from fellow HNers.

[1] https://pigsty.io/

bdcravens

4 hours ago

While I'm sure it's a great project, a few issues in the README made me wonder how well it's kept up to date. Around half of the links in the list of dependencies are either out of date or just plain don't work, and it references Vagrant with no mention of Docker.

baobun

3 hours ago

It's indeed undermaintained, so it's not plug-and-play with automated pulls for production. Still, it's a solid base to build from when setting up on VMs or dedicated servers, and I've yet to find something better short of DIYing everything.

andybak

4 hours ago

I love how few comments on this and similar posts give much context along with their advice. Are you hosting a church newsletter in your spare time or a resource intensive web app with millions of paying enterprise customers and a dedicated dev ops team in 3 continents?

Any advice on price / performance / availability is meaningless unless you explain where you're coming from. The reason we see people overcomplicating everything to do with the web is that they follow advice from people with radically different requirements.

cube00

an hour ago

> The reason we see people overcomplicating everything to do with the web is that they follow advice from people with radically different requirements.

Or they've had cloud account managers sneaking into the C-suite's lunchtime meetings.

Other comments in this thread say they get directives to use AWS from the top.

Strangely, that directive often comes with AWS's own architects embedded in your team, and even more strangely, they seem to recommend the most expensive serverless options available.

What they don't tell you is that you'll be rebuilding and redeploying your containerised app daily with new Docker OS base images to keep up with the security scanners, just like patching an OS on a bare metal server.

Hasz

an hour ago

Different requirements, different skillsets, different costs, different challenges. AWS is only superficially the same product as Hetzner - coming from someone who has used both quite a bit.

DarkNova6

3 hours ago

Tech industry in a nutshell

casparvitch

2 hours ago

IDK mate, my personal pastebin needs to run on bare metal or it can't keep up

js4ever

an hour ago

We've helped quite a few teams move from AWS to Hetzner (and Netcup) lately, and I think the biggest surprise for people isn't the cost or the raw performance, it’s how simple things become when you remove 15 layers of managed abstractions.

You stop worrying about S3 vs EFS vs FSx, or Lambda cold starts, or EBS burst credits. You just deploy a Docker stack on a fast NVMe box and it flies. The trade-off is you need a bit more DevOps discipline: monitoring, backups, patching, etc. But that's the kind of stuff that's easy to automate and doesn't really change week to week.
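As an example of how small the monitoring piece can start (a sketch, not our product - the URL, thresholds, and scheduling are all illustrative):

```python
from urllib.request import urlopen
from urllib.error import URLError

def probe(url: str, timeout: float = 5.0) -> bool:
    """True if the endpoint answers 2xx within the timeout."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (URLError, OSError):
        return False

def should_alert(history: list, fail_threshold: int = 3) -> bool:
    """Page only after N consecutive failures, so a single network blip
    doesn't wake anyone up."""
    return len(history) >= fail_threshold and not any(history[-fail_threshold:])

# Run from cron every minute: append probe("https://example.com/health")
# to a persisted history list, and page when should_alert() flips True.
```

From there you grow into Prometheus/Grafana as needed, but the discipline itself is the same loop.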

At Elestio we leaned into that simplicity: we provide fully managed open-source stacks for nearly 400 pieces of software and also cover CI/CD (from Git push to production) on any provider, including Hetzner.

More info here if you're curious: https://elest.io

(Disclosure: I work at Elestio, where we run managed open-source services on any cloud provider including your own infra.)

jwr

5 hours ago

I've been running my SaaS on Hetzner servers for over 10 years now. Dedicated hardware, clusters in DE and FI, managed through ansible. I use vpncloud to set up a private VPN between the servers (excellent software, btw).

My hosting bill is a fraction of what people pay at AWS or other similar providers, and my servers are much faster. This lets me use a simpler architecture and fewer servers.

When I need to scale, I can always add servers. The only difference is that with physical servers you don't scale up/down on demand within minutes, you have to plan for hours/days. But that's perfectly fine.

I use a distributed database (RethinkDB, switching to FoundationDB) for fault tolerance.

withinboredom

3 hours ago

Similar setup to me (including RethinkDB). Why choose FoundationDB? RethinkDB is still maintained, with features added occasionally (I'm on the RethinkDB Slack and maintain an async PHP driver). It's just one guy though, working on it part time.

jwr

26 minutes ago

RethinkDB is somewhat maintained, and while it is a very good database and works quite well, it is not future-proof. But the bigger reason is that I need better performance, and by now (after 10 years) I know my data access patterns well, so I can make really good use of FoundationDB.

The reason for FoundationDB specifically is mostly correctness, it is pretty much the only distributed database out there that gives you strict serializability and delivers on that promise. Performance is #2 on the list.

boobsbr

3 hours ago

Nice to see someone still using RethinkDB.

da02

3 hours ago

You use vpncloud to connect across different Hetzner data centers (DE + FI)? I thought/assumed Hetzner provided services to do this at little-to-no cost.

GordonS

an hour ago

Not the GP, but I also use Hetzner; I use Tailscale to connect securely across different Hetzner regions (and indeed other VPS providers).

Hetzner does provide free Private Networks, but they only work within a single region - I'm not aware of them providing anything (yet) to securely connect between regions.

jwr

an hour ago

No, I use vpncloud for a local (within a datacenter) VPN. This lets me move more configuration into ansible (out of the provider's web interfaces), avoid additional fees, and have the same setup usable for any hosting provider, including virtual clouds. Very flexible.

999900000999

37 minutes ago

Long long ago, at the start of my career, I was at a great company. We were using a Postgres version not supported by RDS, so I had to manually set up Postgres over and over again on EC2 instances. This was before Docker was reliable/standard.

I wasted hours on this, and the moment RDS started to support the Postgres version we needed, everything became much easier.

I still remember staying up till 3:00 a.m. installing postgres, repeatedly.

While this article is nice, they only save a few hundred dollars a month. If a single engineer has to spend even an hour a month maintaining this, it's probably going to be a wash.

And that's assuming everything goes right, the moment something goes wrong you can easily wipe out a year saving in a single day ( if not an hour depending on your use case).

This is probably best for situations where your time just isn't worth a whole lot. For example let's say you have a hobbyist project, and for some reason you need a very large capacity server.

This can easily cost hundreds of dollars a month on AWS, and since it's coming out of your own pocket it might be worth it to spend that extra time on bare metal.

But, at a certain point you're going to think how much is my time really worth. For example, and forgive me for mixing up terms and situations, ghost blog is about $10 a month via their hosted solution. You can probably run multiple ghost blogs on a single Hetzner instance.

But, and maybe it was just my luck, eventually it's just going to stop working. Do you feel like spending two or three hours fixing something over just spending the $20 a month to host your two blogs ?

CaptainOfCoit

5 hours ago

Best feature of (some of) the dedicated servers Hetzner offers is the unmetered bandwidth. I'm hosting a couple of image-heavy websites (mostly modding related) and since moving to Hetzner I sleep much better knowing I'll pay the same price every single month, and have been for the ~3 years I've been a happy Hetzner customer.

breadislove

5 hours ago

Hetzner is really great until you try to scale with them. We started building our service on top of Hetzner and had a couple hundred VMs running, and during peak times we had to scale to over 1000 VMs. And here a couple of problems started: you pretty often get IPs that are blacklisted, so if you try to connect to services hosted by Google or AWS (like S3), you can't reach them. Also, at one point there were no VMs available anymore in our region, which caused a lot of issues.

But in general if you don't need to scale crazy Hetzner is amazing, we still have a lot of stuff running on Hetzner but fan out to other services when we need to scale.

jakewins

5 hours ago

> Also at one point there were no VMs available anymore in our region, which caused a lot of issues.

I'm not sure this is a difference from other clouds; at least a few years ago this was a weekly or even daily problem in GCP. My experience is that if you request hundreds of VMs rapidly during peak hours, all the clouds struggle.

GordonS

43 minutes ago

I don't use Azure much anymore, but I used to see this problem regularly on Azure too, especially in the more "niche" regions like UK South.

antonvs

3 hours ago

We launch 30k+ VMs a day on GCP, regularly launching hundreds at a time when scheduled jobs are running. That’s one of the most stable aspects of our operation - in the last 5 years I’ve never seen GCP “struggle” with that except during major outages.

At the scale of providers like AWS and even the smaller GCP, “hundreds of VMs” is not a large amount.

Macha

2 hours ago

If you’re deploying something like 100 m5.4xlarge in us-east-1, sure, AWS’s capacity seems infinite. Once you get into high memory instances, GPU instances, less popular regions etc, it drops off.

Now maybe after the AI demand and waves of purchases of systems appropriate for that things have improved, but it definitely wasn’t the case at the large scale employer I worked at in 2023 (my current employer is much smaller, so doesn’t have those needs, so I can’t comment)

jamesblonde

2 hours ago

The blocking of services on Hetzner and Scaleway by Microsoft is well known -

https://www.linkedin.com/posts/jeroen-jacobs-8209391_somethi...

I didn't know AWS and GCP also did it. Not surprised.

The problem is that European regulators do nothing about such anti-competitive dirty tricks. The big clouds hide behind "lots of spam coming from them", which is not true.

lossyalgo

an hour ago

First comment on that post claims that according to Mimecast, 37% of EU-based spam originates from Hetzner and Digital Ocean. People have been asking for 3 days for a link to the source (I can't find it either).

On the other hand, someone linked a report from last year[0]:

> 72% of BEC attacks in Q2 2024 used free webmail domains; within those, 72.4% used Gmail. Roughly ~52% of all BEC messages were sent from Gmail accounts that quarter.

[0] https://docs.apwg.org/reports/apwg_trends_report_q2_2024.pdf

jwr

5 hours ago

Note that we might be talking about two different things here: some of us use physical servers from Hetzner, which are crazy fast, and a great value. And some of us prefer virtual servers, which (IMHO) are not that revolutionary, even though still much less expensive than the competition.

GordonS

40 minutes ago

I've run into the IP deny-list problem too, but for Windows VMs: you spin them up, only to realise that you can't get Windows Updates, can't reach the PowerShell Gallery, etc.

And just deleting the VM and starting again is going to give you the exact same IP!

I ended up having to buy a dozen or so IPs until I found one that wasn't blocked, and then I could delete all the blocked ones.

CaptainOfCoit

5 hours ago

Worth noting that this seems to be about Hetzner's cloud product, not the dedicated servers. The cloud product is relatively new, and most people who move to Hetzner do so because of the dedicated instances, not to use the cloud.

drcongo

5 hours ago

Hetzner's cloud offering is probably a decade old by now - I've been a very happy customer for 8 years.

watermelon0

5 hours ago

Hetzner was founded in '97, so cloud offering could technically still be considered relatively new. :D

CaptainOfCoit

5 hours ago

You're right! I seem to have mixed it with some other dedi provider that added "cloud" recently. Thanks for the correction!

My point of people moving to Hetzner for the dedicated instances rather than the cloud still remains though, at least in my bubble.

drcongo

5 hours ago

No problem (I'm just glad you didn't read it as snark)! I mean, even 8 years is relatively new compared to their dedicated box offering so technically you were still correct.

V__

2 hours ago

This sounds really intriguing, and I am really curious. What kind of service do you run where you need 100s of VMs? Was there a reason for not going dedicated? Looking at their offering, their biggest VM is 48 CPUs, 192 GB RAM, 960 GB SSD. I can't even imagine using that much. Again, I'm really curious.

breadislove

an hour ago

We have extremely processing-heavy jobs where users upload large collections of files (audio, PDFs, videos, etc.) and expect fast processing. It's just that we need to fan out sometimes, since a lot of our users are sensitive to processing times.

matt-p

2 hours ago

I think they're great, but it's unfortunate they don't have more locations, which would at least enable you to spin up VMs in different locations during a shortage. If you rely on them, it might be wise to have a second cloud provider that you can use in a pinch; there are many options.

jgalt212

4 hours ago

1000 VMs?

So you have approx 1MM concurrent customers? That's a big number. You should definitely be able to get preferred pricing from AWS at that scale.

breadislove

3 hours ago

We have extremely processing-heavy jobs where users upload large collections of files (PDFs, audio, videos, etc.) and expect fast processing.

FBISurveillance

3 hours ago

We scaled to ~1100 bare metal servers with them and it worked perfectly.

atonse

3 hours ago

Username checks out.

croes

5 hours ago

Blacklisted by whom?

Hikikomori

5 hours ago

AWS at least maintains IP lists of bots, active exploiters, DDoS attackers, etc. that you can use to filter/rate-limit traffic in WAF. It's not so much AWS that blocks you but customers that decide to use these lists.

croes

4 hours ago

So AWS could list some IPs of competitors, just enough to make them look unreliable.

drcongo

5 hours ago

Ironic, given how often the attacks I spend time fending off are coming from AWS.

IshKebab

4 hours ago

Nice IPs you've got there Hetzner, shame if...

jerf

2 hours ago

Saving $400/month covers about 3-5 hours of engineering time per month. In a year, call it 30-50 hours. Did this project take more than 30-50 person-hours?

(The obvious argument about how it might pay off more in the future are dependent on the startup surviving long enough for that future to arrive.)

bearjaws

an hour ago

I feel like this is left out of the story too often - people tend to compare the most optimistic "self-hosted", usually just one or two servers at best, to a less than ideal cloud installation.

My parent company (Healthcare) uses all on prem solutions, has 3 data centers and 10 sys admins just for the data centers. You still need DevOps too.

I don't know how much it would cost to migrate their infra to AWS, but ~ $1.3M (salary) in annual spend buys you a ton of reserved compute on AWS.

$1.3M is 6000 CPU cores, 10TiB of RAM 24/7 with 100TB of storage.

I know for a fact, due to redundancy, that they have nowhere near that, AND they have to pay for Avamar, VMware, etc. (~$500k).

There's no way it's cheaper than AWS, not even close.

So sure, someone's self-hosted phpBB forum doesn't need to be on AWS, but I challenge someone to run a 99.99% uptime infra significantly cheaper than the cloud.
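The comparison above can be sketched as a toy back-of-envelope calculator. Every number here is a hypothetical placeholder for illustration, not an actual AWS or vendor quote:

```python
# Toy TCO comparison; all prices and totals are hypothetical placeholders.
HOURS_PER_YEAR = 24 * 365

def annual_cloud_cost(vcpus: int, price_per_vcpu_hour: float) -> float:
    """Cost of keeping `vcpus` of reserved compute running all year."""
    return vcpus * price_per_vcpu_hour * HOURS_PER_YEAR

def annual_onprem_cost(salaries: float, licenses: float, hardware_amortized: float) -> float:
    """Staff + licensing (e.g. Avamar/VMware) + hardware spread over its lifetime."""
    return salaries + licenses + hardware_amortized

cloud = annual_cloud_cost(6000, 0.025)  # 6000 vCPUs at a placeholder $0.025/vCPU-hour
onprem = annual_onprem_cost(1_300_000, 500_000, 400_000)
print(f"cloud ~${cloud:,.0f}/yr vs on-prem ~${onprem:,.0f}/yr")
```

The point of writing it down is that the answer flips entirely depending on which line items you remember to include on each side.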

jerf

an hour ago

I didn't try to hit that because that's harder to call, especially at a small startup. If what is probably "the guy doing all this" happens to be more comfortable with k8s than the AWS stack you can end up winning by going with a nominally more complicated k8s stack that doesn't force you to spend dozens of hours learning more new things and instead just using what you already know. For a small startup those training costs are proportionally huge compared to a more established larger going concern already making money. Startups should generally always go with whatever their engineers already know unless there is a damned good reason not to. (And "I just wanted to learn it" is not a good reason for the startup.)

But monetarily, even for a startup, $400/month savings is something you shouldn't be pouring the equivalent of $5000 (or more, just picking a reasonable concrete number to anchor the point) into. You really need to solve a $400/month problem by putting your time into something, anything that will promote revenue growth sooner and faster rather than optimizing that particular cost.

mbesto

43 minutes ago

Exactly this. I know people don't like to use this term (because it comes from traditional IT), but this is effectively known as TCO (total cost of ownership). The whole "bare metal" versus the well known hyperscalers debate often misses this with a hand-wavy "just get better devops people and its cheaper".

czhu12

an hour ago

We discovered a similar cost saving, and I ended up building an internal PaaS that I later open sourced; it works well on Hetzner.

The biggest downside to Hetzner-only is that it's really annoying to wrangle shell scripts and GitHub Actions to drive all the automation to deploy code.

The Portainer team recently started sponsoring the project, so I've been able to dedicate a lot more time to it, close to full time.

https://canine.sh

sytse

34 minutes ago

The cost improvements are great. If you miss the automation that AWS does for database servers consider using something like https://www.ubicloud.com/ that is great for PostgreSQL servers. On bare metal these typically also support 5x the number of IOPS without paying through the nose.

1a527dd5

3 hours ago

Love it!

We are unfortunately moving away from self-hosted bare metal. I disagree with the transition to AWS. But it's been made several global layers above me.

It's funny our previous AWS spend was $800 per month and has been for almost 6 years.

We've just migrated some of our things to AWS and the spend is around $4,500 per month.

I've been told the company doesn't care until our monthly bill is in excess of five figures.

None of this makes sense to me.

The only thing that makes sense is that our parent company is _huge_ and we have some really awesome TAMs, and our entire AWS spend is probably in the region of a few million a month, so it really is pennies behind the sofa as far as the global org is concerned.

Terretta

3 hours ago

There are many other costs besides that AWS bill. Naming two it's hard to put a number on, but get discussed at board room or senior exec level:

- client confidence

- labor pool

aunty_helen

2 hours ago

And to add to that second one, ability to bring in a third party contractor to reduce headcount when needed.

cube00

an hour ago

> None of this makes sense to me.

OpEx good, CapEx bad.

Sammi

3 hours ago

I read so many stories like this, and every time I think of the "your margins are my opportunity" quote, and think there must be so many inefficient enterprises that are ripe for disruption by a small, efficient team.

CuriouslyC

5 hours ago

I use Hetzner for this reason, but there are caveats. They're great but their uptime isn't as good as AWS and they don't have great region coverage. I strongly advise people to pair them with Cloudflare. Use Hetzner for your core with K8s, and use R2/D1/KV with Container Durable Objects to add edge coprocessing. I also like to shard customer data to individual DOs, this takes a ton of scaling pressure off your data layer, while being more secure/private.

geenat

5 hours ago

AWS has certainly had some pretty public-facing downtime ;) I'd say it's been roughly the same in my experience; the only way to avoid it IMHO is multi-region.

CaptainOfCoit

5 hours ago

I do this too. Hetzner dedicated servers for the "core" and data-storage basically, and thin/tiny edge-nodes hosted at OVH across the globe as my homebrew CDN.

BoredPositron

5 hours ago

That's exactly how we do it, though we have Gcore in the mix for GPU compute.

likium

4 hours ago

If customer data is considered edge, then what’s core?

CuriouslyC

3 hours ago

Everything that's shared between customers, internal system state, and customer metadata. I use Postgres with FDWs + Steampipe + Debezium to integrate everything; it's more like a control plane than a database. This model lets you go web scale with one decently sized database and a read replica: since you're only hitting PG for fairly static shared data, Cloudflare Hyperdrive gives insane performance.

sergioisidoro

5 hours ago

I really liked Hetzner but I got burned by one issue. I had some personal projects running there and the payment method failed. The automated emails also got lost among all the spam and notifications I receive, and when I noticed the problem they had already wiped all my data without possibility of recovery.

It was a wake up moment for me about keeping billing in shape, but also made me understand that a cloud provider is as good as their support and communications when things go south. Like an automated SMS would be great before you destroy my entire work. But because they are so cheap, they probably can't do that for every 100$/month account.

I've had similar issues with AWS, but they will have much friendlier grace periods.

dotancohen

4 hours ago

  > It was a wake up moment for me about keeping billing in shape
It should be a wake up moment about keeping backups as well.

sergioisidoro

2 hours ago

Yep. And importantly - backups on different cloud providers, with different payment methods.

roflmaostc

4 hours ago

Sorry to hear that.

But if you do not pay and you do not check your e-mails, it's basically your fault. Who is using SMS these days even?

oefrha

3 hours ago

I had payment issues with Hetzner too, that was back in 2018, haven’t used them since. At least back then, and at least for me, they were unlike any other provider I’ve used which would send you plenty of warnings if they fail to bill you. The very first email I got from them that smelt of trouble was “Cancellation of Contract”, at which point my account was closed and I could only pay by international bank wire. (Yes I just checked all my correspondence with them to make sure I’m not smearing them.) Amusingly they did send payment warning after account closure. Why not before? No effing clue. That was some crazy shit.

sergioisidoro

4 hours ago

Yes, absolutely my fault. But these problems happen. Credit cards expire, people change companies or go on leaves, off boarding processes are not always perfect, spam filters exist.

Add to that the declining experience of email with so much marketing and trash landing in the inbox (and sometimes Gmail categorizing important emails as "Updates")

That's why grace periods for these situations are important.

Who uses SMS? This might be a cultural difference, but in Europe it's still used a lot. And would you be OK if your utility company cut your electricity with just an email warning? Or being summoned to court by email?

account42

3 hours ago

> Add to that the declining experience of email with so much marketing and trash landing in the inbox (and sometimes Gmail categorizing important emails as "Updates")

This is also something under your control - you don't have to use Gmail as your email provider for important accounts and you can whitelist the domains of those service providers if you don't rely on a subpar email service.

amelius

4 hours ago

How long after shutting you down did they delete your data?

That period should definitely be longer than a few days.

debazel

4 hours ago

Hetzner will almost immediately nuke your data if you miss a payment and often outright ban you and your business from ever using them again.

Hetzner is great for cheap personal sites but I would never use them for any serious business use-cases. Other than failed payments, Hetzner also has very strict content policies and they use user reports to find offenders. This means that if just a few users report your website, everything is deleted and you're banned with zero warning or support, whether the reports are actually true or not. (This also means you can never use Hetzner for anything that has user uploaded content, it doesn't matter if you actively remove offending material because if it ever reaches their servers you're already SOL.)

patapong

3 hours ago

Hmm this sounds scary, even though I've had very positive experience with them. Any alternatives with similarly priced offerings that do not face this issue?

amelius

3 hours ago

That sounds really bad.

matdehaast

4 hours ago

I've had billing issues, and they let me resolve them a couple of weeks later.

whstl

3 hours ago

After being immersed in cloud-native hell for a few years, I'll say it:

This setup is probably also easier to reason about and easier to make secure than the messy garbage pushed by Amazon and other cloud providers.

People see Cloud providers with rose-colored glasses, but even something like RDS requires VPCs, subnets, route tables, security groups, Internet/NAT gateways, lots of IAM roles, and CloudWatch to be usable. And to make it properly secure (meaning: not just sharing the main DB password with the team) you need way more as well, and it's hard to orchestrate, it's not just an option in a CloudFormation script.

Sure securing a server is hard too, but people 1. actually share this info and 2. don't have illusions about it.

Terretta

3 hours ago

> This setup is probably also easier to reason about and easier to make secure than the messy garbage pushed by Amazon and other cloud providers.

Ability to do anything doesn't mean do everything.

It's straightforward to be simple on AWS, but if you have trouble denying yourself, consider Lightsail to start: https://aws.amazon.com/lightsail/

spinningslate

5 hours ago

Related: Michael Kennedy moved TalkPython [0] hosting to Hetzner in 2024. There's a blog post about the move here [1] and a follow-up after Hetzner changed some pricing policy [2].

He's also just released a book on hosting production-scale Python apps [3]. I haven't read it yet, though I'd assume this gets covered there in more detail too.

--

[0] https://talkpython.fm/

[1] https://talkpython.fm/blog/posts/we-have-moved-to-hetzner/

[2] https://talkpython.fm/blog/posts/update-on-hetzner-changes-p...

[3] https://talkpython.fm/books/python-in-production

mikeckennedy

3 minutes ago

Thanks for the shoutout @spinningslate. :)

rixed

23 minutes ago

The title should have been "...from AWS and DigitalOcean".

If it were only from AWS, they would probably also have mentioned a drastic reduction in API complexity.

jurschreuder

17 minutes ago

Woooowww we literally today switched Object Storage prod from AWS S3 to Hetzner.

Now #1 on HN. Destiny.

LunaSea

5 hours ago

This is also what we did at my company.

We kept most smaller-scale, stateless services in AWS but migrated databases and high-scale / high-performance services to bare metal servers.

Backups are stored in S3 so we still benefit from their availability.

Performance is much higher thanks to physically attached SSDs and DDR5 RAM.

Costs are drastically lower for much larger server sizes, which means we are not getting stressed about eventually needing to scale up our RDS / EC2 spend.

jasonthorsness

an hour ago

With AWS you are paying for the ancillary things: a pretty good network, a reliable portal, reliable management APIs, nearly perfectly commoditized products (one VM like any other, though in practice I observe minor exceptions to this all the time, predominantly worse connectivity to other services or disk). But the performance of bare metal is good (AWS offers bare metal too, but at an insane price point because they only buy enormous servers).

time4tea

an hour ago

Hetzner are great.

Cloud was a reaction to overlong procurement timelines in company-managed DCs. This is still a thing; it still takes half a year to get a server into a DC!!

However, probably 99% of use cases don't need servers in your own DC; they work just fine on a rented server.

One thing though: a rented server can still have hardware failures, and they need to be fixed, so deployment plans need to take that into account; Fargate will do that for you.

redbell

an hour ago

Nice writeup; I like it and would definitely bookmark it for future reference.

One weak take in the article, however, that felt not quite right to me is the cost-saving part. With "1/4 of the price" I was expecting to see an AWS bill in the range of $10k/month or more, but it turned out to be just around ~$550, for a total saving of about $420/month.

With the above said, it really makes me question whether it's worth the hassle of migration, because one of the main reasons to move away from AWS is to save on cost.

Finally, let me conclude with this comment from /r/programminghumor:

    You're not a real engineer until you've accidentally sponsored Amazon's quarterly earnings

lkrubner

34 minutes ago

They went from $559.36 to $132 a month on Hetzner, and they seem happy about the performance. This matches my own experience as well, I have been stunned regarding Hetzner and how cheap it can be.

dwrowe

an hour ago

In my experience, there has been an interesting ebb and flow on the use of 'dedicated' hardware and auto-scaling power of AWS/cloud instances. When first starting out, super cost-conscious, you're getting ideal performance. As soon as you start experiencing interesting traffic patterns, auto-scaling makes sense. Then, your bill grows to such a point as it makes economical sense to pull back to more powerful dedicated services, then lean a little into auto-scaling, etc. A pendulum of underlying services and people to support it. A balancing act of finding efficiencies during wild growth periods, and reaching some sense of stability.

groestl

an hour ago

Deploying to bare-metal Hetzner for years, very happy with the service. Smart-hands work is fast and reliable. Once you've got your stack worked out (which admittedly takes a while), maintaining bare metal is not much different from maintaining cloud, apart from swapping disks sometimes. And the cost is so low in comparison, it's ridiculous.

dielll

2 hours ago

The only problem with Hetzner is that they don't seem to accept accounts created from African countries. I tried creating an account with them twice from Kenya, and in both instances my account got blocked 5 minutes after creation with zero explanation. I tried reaching out to support and got zero reply.

I get it's their business and they can do as they please with it, however, maybe tell me before I create an account that you don't accept accounts from my continent

esskay

2 hours ago

Hetzner is very cautious about who they accept as a customer these days, as the downside of being a low-cost hosting provider is that you get a ton of signups from people looking to use it for seedboxes and other illegal activities.

It sucks for legitimate customers, but you can sometimes plead your case directly as long as you are willing to provide ID and such; ultimately though, like you say, it's their business.

mythz

3 hours ago

We moved most of our apps off AWS to Hetzner years ago by switching to SQLite/Litestream > Cloudflare R2 replication to avoid needing a managed RDBMS, and saved a bunch of $$$ [1].
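For reference, that kind of SQLite-to-R2 replication is just a small Litestream config file. A sketch (bucket name, account ID, and database path are placeholders; the access keys come from Litestream's environment variables):

```yaml
# /etc/litestream.yml - sketch; bucket, account id, and db path are
# placeholders. Credentials come from LITESTREAM_ACCESS_KEY_ID and
# LITESTREAM_SECRET_ACCESS_KEY in the environment.
dbs:
  - path: /var/lib/myapp/app.db
    replicas:
      - type: s3
        bucket: myapp-replicas
        path: app.db
        endpoint: https://<account-id>.r2.cloudflarestorage.com
        region: auto
```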

Although for our latest app we've switched to using local PostgreSQL (i.e. app/RDBMS on the same server) with R2 backups, for its better feature set. The cost is the same, as we only pay for the 1x Hetzner VM, and Cloudflare R2 storage is pretty cheap.

[1] https://docs.servicestack.net/ormlite/litestream

liampulles

2 hours ago

There are footguns to be found with a self-operated k8s cluster, and definitely (costly) ones with cloud-native database orchestration. Not to mention all the risks that come with migrating to "new things" in general. I know that when I moved away from those things to managed AWS versions, I could definitely see the value of a cloud solution.

But that cost difference is huge...

It is an interesting tradeoff to consider, I think (I'm not criticizing Hetzner or AWS or any team's decision, provided they've thought the tradeoffs through).

natnat

3 hours ago

Are there decent US based alternatives to Hetzner? I'd like to have my servers located in the US for a variety of reasons, but most of the alternatives I've seen to Hetzner seem to be pretty fly-by-night shops.

NoiseBert69

3 hours ago

Hetzner has 2 DCs in the US with solid peerings and transit.

CodesInChaos

3 hours ago

Which do not offer Hetzner's biggest selling point: Cheap dedicated servers

ants_everywhere

3 hours ago

The public cloud is priced as if availability, reliability, durability, and latency matter to your business. If they don't, you can get far with just a single machine.

A great deal of the work in cloud engineering is ensuring the abstractions meet the service guarantees. Similarly you can make a car much cheaper if you don't need to guarantee the driver will survive a collision. The cost of providing a safety guarantee is much higher than providing a hand-wavy "good enough" feeling.

If your business isn't critical then "good enough" vibes may be all you need, and you can save some money.

sealeck

3 hours ago

You do know you can have high availability without using cloud providers? E.g. you run a second server in a different datacenter as a standby that can take over, etc.
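As a concrete sketch: for Postgres, a warm standby in a second datacenter is mostly standard streaming replication. Hostnames and the replication user below are hypothetical; this assumes Postgres 12+:

```ini
# On the standby, after seeding its data directory with:
#   pg_basebackup -h primary.dc1.internal -U replicator \
#       -D /var/lib/postgresql/data -R -X stream
# The -R flag writes this into postgresql.auto.conf:
primary_conninfo = 'host=primary.dc1.internal port=5432 user=replicator'

# It also creates an empty standby.signal file in the data directory,
# which keeps the server in read-only standby mode. To fail over,
# promote the standby with: pg_ctl promote
```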

ants_everywhere

an hour ago

I mean the (virtual) machine itself has these guarantees. You can set the entire rack on fire and your VM will continue to operate or else you're owed compensation for the SLA violation.

You can add redundant machines with a failover. You then need to calculate how likely the failover is to fail, how likely the machines are to fail, etc. How likely is the switch to fail. You need engineers with pager rotations to give 24 hour coverage, etc.

What I'm saying is that the cloud providers give you strong guarantees and this is reflected in their pricing. The guarantees apply to every service you consume because with independent failures, the probability of not failing is multiplicative. If you want to build a reliable system out of N components then you need to have bounds on the reliability of each of the components.

Your business may not need this. Your side project almost certainly doesn't. But the money you save isn't free, it's compensation for taking on the increased risk of downtime, losing customer data, etc.

I would be interested to see a comparison of the costs of running a service on Hetzner with the same reliability guarantees as a corresponding cloud service. On the one hand we expect some cloud service markup for convenience. On the other hand they have economies of scale. So it's not obvious to me which is cheaper.

Terretta

2 hours ago

just, and, and, and ...

IF you need it, soon you wish the lego blocks pulled IAM all the way through and worked with a common API

alberth

3 hours ago

There's a value curve for infrastructure, I'll use an analogy...

  Low Cost                                           High Cost
  ==============================================================
  FARM     WHOLESALER     GROCERY     RESTAURANT     DOORDASH    
  BUILD    CO-LOCATION    HETZNER     AWS            VERCEL           
While it's not a perfect analogy, in principle it holds true.

As such, it should come as no surprise that eating at a restaurant every day is going to be way more expensive.

whstl

3 hours ago

AWS is more of a high-scale cook-your-own-pizza restaurant where you can't see the bill until the end, and you often have to mop the floor and wash the latrines yourself too. And washing the latrine costs money too, of course, but you don't want to get salmonella, right?

alberth

an hour ago

That's where my analogy isn't perfect.

There's different tiers of restaurants.

There are the luxury premium restaurants (Michelin-starred, like AWS), but there are also local diners that arguably have phenomenal food too (maybe someone like DigitalOcean/Linode).

Terretta

2 hours ago

Exactly.

I hadn't seen your comment when I wrote this, below: https://news.ycombinator.com/item?id=45616366

I love your farm-to-table grid: it works for everyone, not just HN commenters. And putting DOORDASH on the right is truer from a cost perspective than the metaphor I'd used.

For HN, I'd compared to a pricing grid (DIY, Get Started, Pro, Team, Enterprise) with the bottom line that if YAGNI, don't choose it.

Your grid emphasizes my other point, it's about your own labor.

kachapopopow

5 hours ago

This is still pretty bad; with colocation you can get costs down to 1/100th with good deals at datacenters, especially ones that are struggling to attract customers. Most of your bill is power, so if you rack efficiency-optimized servers you can have a lot of compute for very cheap.

In terms of networking many offer no-headache solutions with some kind of transit blend.

<rant>I recently had to switch away from Hetzner due to random dhclient failures causing connectivity loss once IPs expired, and a complete failure of the load balancer - it stopped forwarding traffic for around 6 hours. The worst part is that there was no acknowledgement from Hetzner about any of these issues, so at some point I was going insane trying to find the problem when in the end it was Hetzner. (US VA region)</rant>

9cb14c1ec0

3 hours ago

Cogent just offered me this colo deal:

Full Rack = $100/month* with $500 install, Power (20A) = $350/month with $500 install, DIA (1Gbps) = $300/month

Total = $750/month plus $1,000 Install on 12 month term
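Amortizing the one-time install fees over the 12-month term gives the effective first-year run rate (numbers taken from the quote above):

```python
# Cogent colo quote from above: full rack + 20A power + 1 Gbps DIA
monthly = 100 + 350 + 300   # recurring monthly charges
install = 500 + 500         # one-time install fees (rack + power)
term_months = 12

effective_monthly = monthly + install / term_months
print(f"${effective_monthly:,.2f}/month")  # $833.33/month over the first year
```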

tom1337

4 hours ago

But when you are Colocating you have higher upfront costs as you need to acquire hardware and also need to have somebody nearby the datacenter for hardware swaps in case of a failure, no?

scjon

3 hours ago

There are higher upfront costs, but typically we find that we break even on the cost of hardware in 12 months or less. In my experience, colos will have techs available for hardware swaps / remote-hands troubleshooting if needed. It's not free, but it solves that problem. I think it really just depends on your company's needs and skillset. For our company it makes sense to colocate. We are a VoIP service provider, so we also have multiple IP transit providers and our own /22 subnet. We use BGP to change / pull routes quickly when there are outages with an ISP or cloud provider. I know AWS supports a setup like that, but you're relying on them for announcing route changes.

kachapopopow

2 hours ago

Using decommissioned hardware saves you 90% of the costs, and you usually colocate where you live or just have one of your tech friends help out :)

Most datacenters do offer remote hands, which is a bit pricey, but since they're only needed in emergencies, in a redundant setup it's just not required.

hyperionplays

4 hours ago

There are tonnes of companies out there with smart remote hands in the major cities who can respond in under an hour to an outage at your chosen DC.

Refurb servers will still blast AWS, and spares are easy to source.

I know HE.net does a rack for like $500/mo intro price and that comes with a 1G internet feed as well.

dboreham

3 hours ago

You need to buy the hardware. However, you don't really need a dude on-hand to swap stuff on a daily basis, unless you're trying to host backblaze. The approach we take (with our data center 1000 miles away) is to provision excess machines. So if we need 6 machines we'll provision, say 8. Failure modes are always assumed to be "the whole machine" -- so a machine is either in service or not. Over time (years) one or two machines might fail in one way or another. Every couple of years we mount a rescue mission to repair/replace the bad machines, do some upgrades etc. We have redundant switches and routers and would make a special trip to replace one of those if there were a failure. The entire deployment has a "scaled to zero" cloud hot standby in place for the eventuality that the whole setup gets nuked somehow.

geenat

5 hours ago

Yup. It's very good for the ecosystem for AWS to have good competition.

Amazon gets far too greedy- particularly bad when you need egress.

Also, an "Amazon core" is like 1/8th of a physical CPU core.

vidarh

5 hours ago

My favorite Jeff Bezos quote is one that applies very much to AWS: "your margin is my opportunity".

Clearly, when Amazon realised the enormous potential in AWS, they scrapped that principle. But the idea behind it - that an organisation used to fat margins will not be able to adapt in the face of a competitor built from the ground up to live off razor-thin margins - still applies.

AWS is ripe for the picking. They "can't" drop prices much, because their big competitors have similar margins, and a price war with them would devastate the earnings of all of them no matter how much extra market share they were to win.

The challenge is the enormous mindshare they have, and how many people are emotionally invested even in believing AWS is actually cost effective.

master_crab

5 hours ago

"your margin is my opportunity"

Yup, that phrase was running through my head as I skimmed the comments.

To that, an interesting observation I've made is that the frequency of their service price cuts has dropped in the past several years. And instances of price increases have started to trickle in (like the public IP cost).

If core compute and network keep getting cheaper faster than inflation, and they never drop their prices (or drop them relatively less), the margins are growing.

hylaride

2 hours ago

The worst aspect of AWS is that once you get to a certain size, you can negotiate bulk agreements, especially for things like bandwidth. At a previous job, we cut our bill down by quite a bit this way, but it was annoying to have to schmooze with sales people.

vidarh

an hour ago

Great that you're pointing this out, as it's also something a lot of organisations are entirely unaware of, in my experience.

If you're paying more than a few hundred k/year and are still paying list prices, you might as well set fire to money (it's worth starting to negotiate below that too; success rates will vary greatly).

CaptainOfCoit

5 hours ago

Using a dedicated server for the first time after using VPSes or similar since learning programming and infrastructure is like a whole new world. Suddenly you realise the application had been running in molasses all along, and the whole idea of "We need 10 VPS instances" seems so stupid...

heavyset_go

an hour ago

The cloud makes sense when you have someone else's money to burn, and don't care about being held hostage with lock-in if you aren't careful.

sharpfuryz

3 hours ago

AWS/GCP/Azure cloud turns the audit beast into a house cat: one IAM rule, one log stream, one firewall, etc. Otherwise, you need to fill out a lot of documents to prove that your bare metal is safe to host, for example, cardholder data.

ripped_britches

2 hours ago

Who could have guessed that servicing compliance requirements at scale would create such a good business model

liendolucas

4 hours ago

What is an actual, solid reason to choose or stay on AWS these days?

The topic of paying hefty amounts of money to AWS when other options are available has been discussed many times before.

My view of AWS is that you have bazillions of services you might never use but still need to learn about, you are tied to a company across the Atlantic that can basically shut you down at any time for whatever reason, and finally there's the cost.

negendev

an hour ago

Shhhh don't talk too loud about Hetzner!

bzmrgonz

an hour ago

OP: Can you expand a little on Kustomize? I think it could use a little more real estate in your article!

eyk19

5 hours ago

We've experienced something similar: for compute-heavy rendering tasks, AWS just wasn't good enough. EC2 machines with the same spec perform much worse than Hetzner machines

CaptainOfCoit

5 hours ago

> EC2 machines with the same spec perform much worse than Hetzner machines

Yeah, even when you move to "EC2 Dedicated Instances" you end up sharing the hardware with other instances, unless you go for "EC2 Dedicated Hosts", and even then the performance seems worse than other providers.

Not sure how they managed to do so for even the dedicated stuff, would require some dedicated effort.

Qasaur

4 hours ago

Hetzner is great for dedicated servers, but for those of us who need smaller-scale secure/confidential VMs I'm afraid that there isn't really any other choice than hyperscalers.

Does anyone know if there is a VM vendor that sits somewhere in between a dedicated server host like Hetzner in terms of performance + cost-effectiveness and AWS/GCP in terms of security?

Basically TPM/vTPM + AMD SEV/SEV-SNP + UEFI Secure Boot support. I've scoured the internet and can't seem to find anyone who provides virtualised trusted computing other than AWS/GCP. Hetzner does not provide a TPM for their VMs, they do not mention any data-in-use encryption, and they explicitly state that they do not support UEFI secure boot - all of these are critical requirements for high-assurance use cases.

Tepix

2 hours ago

Have you looked at colocation?

dboreham

3 hours ago

Interested to hear more about your use case and threat model, if you are willing to share. I ask because although I've looked into (and done some prototyping) with secure cloud hosting, I/we came to the conclusion that there's no current technology that is "actually secure" and so abandoned the approach. Curious if things have improved now, or if you're operating in some security theater context where it's ok.

Qasaur

3 hours ago

The basic principle is to ensure that any machine/workload which joins the network (and processes customer data, in this case extremely sensitive PII) has a cryptographically verified chain of trust from boot to the application-layer to guarantee workload integrity.

NixOS is used for declarative and, more importantly, deterministic OS state and runtime environment, layered with dm-verity to prevent tampering with the Nix store. The root partition, aside from whatever is explicitly configured in the Nix store, is wiped on every reboot. The ephemerality prevents persistence of any potential attacker, and the state of the machine is completely identical to whatever you have configured in your NixOS configuration, which is great for auditability. This OS image + boot loader is signed with organisation-private keys, and deployed to machines preloaded with UEFI keys to guarantee boot integrity and prevent firmware-level attacks (UEFI secure boot).

At this point you need to trust the cloud provider to not tamper with the UEFI keys or otherwise compromise memory confidentiality through a malicious or insecure hypervisor, unless the provider supports memory encryption through something like AMD SEV-SNP. The processor provides an AMD-signed attestation that is provided to the guest OS that states "Yes, this guest is running in a trusted execution environment, and here are the TPM measurements for the boot" and you can use this attestation to determine whether or not the machine should join your network and that it is running the firmware, kernel, and initramfs that you expect AND on hardware that you expect.

I think I'll put together a write-up on this architecture once I launch the service. There is no such thing as perfect security, of course, but I think this security architecture prevents many classes of attacks. Bootkits and firmware-level attacks are exceedingly difficult or even impossible with this model, combine this with an ephemeral root filesystem and any attacker would be effectively unable to gain persistence in the system.
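The measured-boot part of that chain of trust is easy to illustrate. Below is a toy sketch of TPM-style PCR extension, the hash chain that an attestation quote ultimately signs; it's illustrative only (not SEV-SNP-specific), and the component names are invented:

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """TPM-style PCR extend: the register becomes a hash chain of every
    measurement so far, in order. You can only add to it, never rewind it."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measure_boot(components: list[bytes]) -> bytes:
    pcr = b"\x00" * 32  # PCRs start zeroed at power-on
    for c in components:
        pcr = extend(pcr, c)
    return pcr

# The verifier precomputes the value expected for known-good firmware,
# kernel, and initramfs, then compares it to the (signed, attested) quote.
golden = measure_boot([b"firmware-v1", b"kernel-6.6", b"initramfs-abc"])
tampered = measure_boot([b"firmware-v1", b"kernel-evil", b"initramfs-abc"])

assert golden != tampered  # any tampering anywhere changes the final value
```

In the real architecture described above, "compare to the expected value" is the gate for whether a machine is allowed to join the network.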

manawyrm

3 hours ago

+1, if your threat model is actually this severe, use physical hardware with physical interlocks and physical security mechanisms.

Software/virtualization is just helpless against such a threat model.

webprofusion

4 hours ago

Context is important for stuff like this; I've served 350M requests (with db interaction) for $49 via Cloudflare before, but it all depends on what you're trying to do.

Abstracted infrastructure like Kubernetes is expensive by default, so design has an impact.

ukd1

2 hours ago

My last co ran hundreds of servers with Hetzner for a semi-stateless workload. With AWS, the pricing, let alone the performance, wouldn't have been viable. At some point we also used Heroku for the application (more recently EKS); the combo drove folks nuts as it was "weird".

stego-tech

3 hours ago

Really digging these post-mortems on major public cloud migrations of late, either to smaller providers like Hetzner or privately-owned solutions in data centers. It gives me more ammo when an organization tasks me with saving them money, by showing that these are, in fact, perfectly viable options whose tradeoffs may be worthwhile for specific organizational needs.

tmdetect

4 hours ago

+1 to running services on physical servers, OVH in my case. I'm really enjoying CI pushing to servers and having a managed database provided by a 3rd party like MongoDB Atlas.

cyberpunk

4 hours ago

Isn’t there quite a significant latency problem if you’re going across the internet for db instead of say, the same switch?

baobun

4 hours ago

No experience with Mongo Atlas but other managed DB providers will IME be transparent about where they host and you can often request resources in an appropriate DC, sometimes even the same. Businesses providing this in Hetzner, OVH etc too. If you plan accordingly you can eat your cake and have it too.

mustaphah

3 hours ago

You've saved $426/mo but inherited a $10k/mo full-time DevOps job.

NDizzle

3 hours ago

Do you think Devops is not required when you use cloud providers or something? Of course it is...

mustaphah

2 hours ago

I meant +1 DevOps engineer dedicated to managing the added operational complexity.

NDizzle

2 hours ago

Why would it be +1? The Devops duties that were performed on AWS are no longer being performed... wouldn't it simply shift to the new stack?

futurecat

4 hours ago

Questions for people who migrate off-cloud:

1. How many nodes do you have?

2. Did you install anything to monitor your node(s) and the app deployed on these nodes? If so, which software?

GordonS

21 minutes ago

1. Around 30, a mix of both x64 and ARM. But planning on switching heavy workloads to physical machines at some point, which would take total node count down to around 16.

2. OpenTelemetry Collector installed on all nodes, sending data to a self-hosted OpenObserve instance. UI is a little clunky, but it's been an invaluable tool, and it handles everything in one place - logs, traces, metrics, alerts.

CaptainOfCoit

4 hours ago

1. In total, maybe 10-15, but managing a lot of it for others, my own stuff is hosted across two.

2. Yes, TLDR: Prometheus + Grafana + AlertManager + ELK. I think it's a fairly common setup.

e12e

5 hours ago

Very interesting and detailed article!

I'd love to hear more about how you use terraform and helm together.

Currently our major friction in ops is using tofu (terraform) to manage K8s resources. Avoiding YAML is great - but with both terraform and K8s maintaining state, deploying helm from terraform feels fragile; and, vice versa, depending on helm directly in a mostly-terraform setup also feels fragile.

baobun

4 hours ago

Not OP but I've lived through this too and my conclusion from that is that if you're doing tofu/terraform you're better off not introducing helm at all. Just tf the k8s.

krowek

4 hours ago

I tried my best to start using Hetzner, but they wouldn't let me.

I got my account validation rejected despite having everything in order. I tried 3 times, and they wouldn't give me a reason why it was rejected.

I think it's better that way, I wouldn't like to get the surprise my account was terminated at some point after that.

moomoo11

23 minutes ago

Cloud was never about performance. It was about offloading the costs of admin work like backups.

nibab

3 hours ago

It was never about price or performance. Price and performance may be things you care about as a hobbyist, but as a business you have a lot of other considerations.

abujazar

3 hours ago

In my case, availability is crucial as well, and AWS simply doesn't offer good enough SLAs.

eric_khun

5 hours ago

AWS won't raise the limits on our new account (we're stuck at 1GB RAM in Lightsail after 2 months, even though we need to launch this month).

Looking at Hetzner or Vultr as alternatives. A few folks mentioned to me that Infomaniak has great service and uptime, but I haven't heard much about them otherwise.

Anyone used Infomaniak in production? How do they compare to Hetzner/Vultr?

Terretta

2 hours ago

I haven't checked recently, but previously a Lightsail account was a full AWS account. Tie in Route 53, an app or API gateway, and some instances.

That said, for your use case, you might want the predictability and guarantee of having no "noisy neighbors" on an instance. While most VM providers don't offer that (you have to go to fully dedicated machine), AWS does, so keep that in mind as well.

For BYOL (bring your own hosting labor), Vultr is a lesser known but great choice.

CaptainOfCoit

5 hours ago

Just curious, what are you building/launching that requires more than 1GB of RAM at launch? 1GB is a lot of memory for most use cases, guessing something involving graphics or maybe simulations? In those cases, dedicated instances with proper hardware will give you enormous performance benefits, FYI.

Both Vultr and Hetzner are solid options, I'd go for Hetzner if I know the users are around Europe or close to it, and I want to run tiny CDN-like nodes myself across the globe. Also, Hetzner if you don't wanna worry about bandwidth costs. Otherwise go for Vultr, they have a lot more locations.

Macha

2 hours ago

A Wordpress install that makes it to the top of HN can use 1GB of RAM

eric_khun

4 hours ago

Appreciate the advice! Launching a 2D game generator with an editor, and expecting people to share the games. Not multiplayer yet.

The Lightsail instance sometimes just hangs and we have to reboot it when people perform simple actions like logging in or querying the API (we have a simple express / nextjs app).

slyall

4 hours ago

Are you using Lightsail rather than normal EC2 and other AWS services?

Just wondering if your limits just apply to lightsail or normal stuff too.

CaptainOfCoit

5 hours ago

Less biased view of "Hetzner on HN": https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... (skipping all comments from this submission)

In the end, Hetzner is a provider of "cheap but not 100% uptime" infrastructure, probably why it's so cheap in the first place.

As every other provider, if you want 100% uptime (or getting close to it), you really need at least N+1 instances of everything, as every hosting provider end up fucking something up, sooner or later.
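To put numbers on the N+1 point: under the (optimistic) simplifying assumption that failures are independent, the availability of n redundant instances is 1 - (1 - a)^n:

```python
def combined_availability(a: float, n: int) -> float:
    """Availability of n redundant instances, each with availability a,
    assuming independent failures (ignores correlated/provider-wide outages)."""
    return 1 - (1 - a) ** n

# A single 99%-available host is down ~7.3 hours a month; with one
# spare (N+1) the pair is down together far less often.
print(round(combined_availability(0.99, 1), 6))  # 0.99
print(round(combined_availability(0.99, 2), 6))  # 0.9999
```

Real outages at a single provider are often correlated, which is exactly why redundancy across hosts (and ideally locations) matters more than any one host's uptime.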

esskay

2 hours ago

You make it sound like they are somehow less reliable, or that you get more downtime - neither of which is true. You've got just as much chance of having downtime there as with any other provider.

croes

5 hours ago

Can you name a provider with 100% uptime? Or is it 100%¹?

CaptainOfCoit

5 hours ago

No, which is why I wrote "As every other provider, if you want 100% uptime ..."

master_crab

5 hours ago

No one provides 100% uptime for core compute. That’s their point. Also, good luck extracting anything out of those companies that offer 99.99% and don’t meet it.

Sure, they'll throw you some service credits. But it'll always be orders of magnitude less than the cost of the disruption to you.
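For a sense of what those SLA figures actually permit, here's the allowed-downtime arithmetic for a 30-day month:

```python
def downtime_minutes_per_month(sla_percent: float) -> float:
    """Minutes of downtime a given SLA allows in a 30-day month."""
    minutes_in_month = 30 * 24 * 60  # 43,200
    return (1 - sla_percent / 100) * minutes_in_month

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% -> {downtime_minutes_per_month(sla):.1f} min/month")
# 99.0%  -> 432.0 min/month (~7.2 hours)
# 99.9%  -> 43.2 min/month
# 99.99% -> 4.3 min/month
```

Even a "four nines" SLA permits several minutes of outage a month before any credits are owed, and the credits are capped at a fraction of the bill, not the business impact.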

drchaim

3 hours ago

I dream of the day when I can have my servers at home with solar power and batteries. I still have a ways to go, but it will come.

Tepix

2 hours ago

For many projects, this is already possible. 50MBit/s of upstream bandwidth is not that uncommon if you have something better than DSL and that's probably the bottleneck.

vivzkestrel

2 hours ago

I'll immediately do the switch, but please tell me: how do I use CDK on Hetzner?

warrenmiller

an hour ago

It's also very easy to run Windows on their Linux cloud VPSes... if you need to run Windows.

naiv

5 hours ago

Just yesterday they released a new 'Shared regular performance' offering https://www.hetzner.com/cloud/

littlecranky67

5 hours ago

Thanks for that link. It seems with that introduction, they also lowered prices on the dedicated-core on their vservers - at least I was paying 15€/month and now they seem to offer it for 12€/month. I will try to see if shared performance is an option for the future.

faizshah

4 hours ago

Can anyone recommend a good cloud for GPU instances?

I was trying to find a good one for 30B quants, but there are so many now and the pricing is all over the place.

dboreham

3 hours ago

What kind of GPU are you looking for?

DoctorOW

5 hours ago

I use Hetzner for personal projects and love it, the one thing stopping me from pushing it up the chain at work, is that we're based out of the US exclusively and AZs are pretty sparse.

Havoc

5 hours ago

Makes sense. If you don't need the redundancy, certification/legals, or big cloud's hundreds of other integrated lego blocks, then big-cloud VPS prices are just a rip-off.

mentalgear

2 hours ago

No mention of SST.dev running on Hetzner?

Havoc

5 hours ago

Their Storage Box offerings are great too. Think of it like a big FTP drive, except it supports lots of transfer protocols.

createaccount99

2 hours ago

Frankly, managing the db myself seems like a horrible idea and a lot of work. But perhaps I've been indoctrinated by the big cloud.

lunias

3 hours ago

The world is healing.

matesz

3 hours ago

Second hand hardware for the win

oneplane

2 hours ago

Gee, another "we did not need cloud, so by not using cloud, we stopped spending on something we did not need"-story. Duh. The real story is why someone who doesn't need cloud services starts using them anyway.

If you need it, use it, if you don't need it, don't use it. It's not the big revelation people seem to think it is.

drcongo

5 hours ago

Hetzner's ARM servers are the best kept secret in tech. Unbelievably capable and mindbogglingly cheap.

GordonS

36 minutes ago

They really are great, I just wish they'd make them available in the US regions too.

dizhn

4 hours ago

Have you encountered any software that wasn't compatible?

drcongo

3 hours ago

Only a couple of times, but nothing that I use on production servers, only things that were very much hobby projects. For the sort of things we build an 8 core Hetzner ARM server outperforms an 8 core Digital Ocean x86 server by 10-20% for about a tenth of the cost.

tinyspacewizard

2 hours ago

> Why two cloud providers? Initially we used only DigitalOcean, but a data intensive SaaS like tap needs a lot of cloud resources and AWS have a generous $1,000 credit package for self-funded startups.

So some Kubernetes experts migrated to AWS for $1k in credits. This is madness. That's weeks of migration work to save the equivalent of a day of contracting.

wahnfrieden

an hour ago

Is anyone using Yugabyte self host? It looks like maybe the best solution these days for HA Postgres on bare metal or VPS. (And very friendly open source license.)

JCM9

4 hours ago

As competition heats up, the relative "enshittification" of AWS is real.

There just isn’t a compelling story to go “all in on AWS” anymore. For anything beyond raw storage and compute the experience elsewhere is consistently better, faster, cheaper.

It seems AWS leadership got caught up trying to have an answer for every possible computing use case and broadly ended up with a bloated mess of expensive below-bar products. The recent panicked flood of meh AI slop products as AWS tries to make up for its big miss on AI is one such example.

Would like to see AWS just focus on doing core infrastructure and doing it well. Others are simply better at everything that then layers on top of that.

Pxtl

an hour ago

I've always thought these services seemed overpriced, like users were subsidizing a ferocious amount of R&D and speculative expansion off their fees. I mean, it's just hosting webservices - this feels like it should be a commodity at this point.

jedisct1

5 hours ago

Of course.

A dedicated server or VPS from OVH, Hetzner, Scaleway, etc., or even Docker containers on Koyeb, will give you way more bang for your buck.

Call me a dinosaur, but I’ve never used any of the big cloud providers like AWS. They’re super expensive, and it’s hard to know what you’ll actually end up paying at the end of the month.

objektif

2 hours ago

Startup founders and employees: are you really paying insane bills to AWS? How? We're paying peanuts compared to other expenses.

ripped_britches

2 hours ago

Probably depends on what your business is and the LTV of your customers

objektif

25 minutes ago

Maybe, but I'm not sure. How many Figma-type startups are out there paying $500k a day, as opposed to traditional SaaS / OpenAI wrappers? I just don't see it.

tstrimple

an hour ago

In a lot of cases it's because they don't know how to build using cloud services effectively so they just spin up a lot of VMs because that's the only tool they know how to use. Running VMs 24/7 is just about the most expensive thing you can do in the "cloud". But doing anything else is "too complicated".

iamleppert

2 hours ago

$500 a month for two environments with 4 CPUs and 8 GB of memory is diabolical. The only thing more expensive and with worse performance than AWS is Azure.

jokethrowaway

5 hours ago

Having deployed servers well before AWS was a thing, AWS always felt incredibly overpriced.

The only benefit you get is reliability; temporary network issues on AWS are not a thing.

On DigitalOcean they are fairly bad (I lose thousands of requests almost every month and get pennies in credit back when I complain - while the users who churn cost way more), and on Hetzner I've heard mixed reviews.

Some people complains, some say it's extremely reliable.

I'm looking forward to try Hetzner out!

CaptainOfCoit

5 hours ago

> Having deployed servers well before AWS was a thing, AWS always felt incredibly overpriced.

Yeah, I remember when AWS first appeared, and the value proposition was basically "It's expensive, but you can press a button and a minute later you have a new instance, so we can scale really quickly". Companies that know more or less what workload they'll have during a week don't really get any benefits, just more expensive monthly bills.

But somewhere along the line, people started thinking it was easier to use AWS than the alternatives, and I even heard people saying it's cheaper...

vidarh

4 hours ago

You'll even see people in a lot of the HN threads on the subject refusing to believe AWS is expensive, even in the face of a lot of us with expensive (EDIT: meant "extensive", but will leave it there as the typo is also apt...) experience running systems on both AWS and alternatives.

The biggest innovation AWS delivered was to convince engineers they are cheap, while wresting control of provisioning away from the people with actual visibility into the costs.

piokoch

5 hours ago

Well, from what I see, the authors exchanged an AWS-managed Kubernetes cluster for a self-hosted Kubernetes cluster on Talos Linux. The question is whether the $449.50/month previously paid to AWS will cover the additional work needed for self-hosting.

vidarh

4 hours ago

As someone who does consulting in this space, and has for a decade: clients usually end up needing more help to run their AWS setup than to self-host.

mystifyingpoi

5 hours ago

All the effort that previously wasn't required for operating EKS, but now is required to operate self-hosted Kubernetes, will be pushed to existing engineers as a bit of extra work, with no extra pay.

In the best case scenario. In the worst, some cluster f-up will eat 10x that in engineering time.

dboreham

3 hours ago

Quick note that while I hear nothing but good things about Hetzner, the approach is generic -- applies to the many other bare metal "SmallCo" providers. For example we use Hivelocity and have been very happy. Perhaps also worth mentioning that you can literally buy computers and rent space to plug them in and hook up to the internet in a place we call a data center. That's even cheaper when you know your workload long term and have access to capital.

YetAnotherNick

4 hours ago

You are doing calculations all wrong if you think saving $500/month is 75% of your cost.

Also, the first three lines of the new stack are a surefire way to get PTSD. You shouldn't manage the database in your cluster unless you really know the internals of the tools you are using. Once you get off AWS, you really start to see the value of things like documentation.

silexia

4 hours ago

At my home, I have PUD fiber internet with Starlink as the backup WAN. I had two non-mission-critical servers on AWS, and I set up two old laptops locally to run them instead. Saving $1,000 a month now. I am looking at my $4,000 a month of mission-critical servers next.

dboreham

3 hours ago

I began hosting at home years ago (because the business grew unexpectedly and the test server had initially been in my home). I wouldn't recommend it. Better to rent colocation space. The problem, apart from having to provision reliable internet, is that you also need reliable power and cooling. And it gets noisy.

ed_mercer

5 hours ago

Great! Now if you go full homelab, you can get 1/30th of the price ;)

whalesalad

2 hours ago

The struggle is real. A lot of people think cloud lock-in is due to using cloud-specific services like SQS... but it's the data. Try to exfil 300TB out of S3 without paying an enormous transfer cost =(

I want to move our infra out of AWS but at the end of the day we have too much data there and it is a non starter.
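A rough illustration of that egress bill, using AWS's published internet data-transfer tiers (list prices change over time, so treat the per-GB rates below as approximate):

```python
# Approximate AWS internet egress tiers (USD per GB) - illustrative only.
TIERS = [
    (10_240, 0.09),        # first 10 TB
    (40_960, 0.085),       # next 40 TB
    (102_400, 0.07),       # next 100 TB
    (float("inf"), 0.05),  # beyond 150 TB
]

def egress_cost_usd(gb: float) -> float:
    """Tiered egress cost: fill each tier in order until the volume runs out."""
    cost, remaining = 0.0, gb
    for tier_gb, price in TIERS:
        used = min(remaining, tier_gb)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

print(f"${egress_cost_usd(300 * 1024):,.0f}")  # 300 TB -> roughly $19,000
```

So a one-time migration of 300 TB runs to five figures at list prices before you've moved a single service, which is the lock-in being described.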

Terretta

3 hours ago

AWS and DigitalOcean*: $559.36, Hetzner: $132.96

Perspective: this difference is one hour of US fintech engineer time a month. If you have to self-build a single thing on Hetzner that you get "built in" on AWS, are you ahead?

If this is your price range, and you're spending time thinking about how to save that $400/month (three Starbucks a day) instead of drive revenue or deliver client joy, you likely shouldn't be on AWS in the first place.

AWS is for when you need the most battle tested business continuity through automations driving distributed resilience, or if you have external requirements for security built into all infra, identity and access controls built into all infra at all layers, compliance and governance controls across all infra at all layers, interop with others using AWS (private links, direct connects, sure, but also permission-based data sharing instead of data movement, etc.). If your plans have those in your future, you should start on AWS and learn as you grow so you never have a "digital transformation" in your future.

Whether you're building a SaaS for others or a platform for yourself, “Enterprise” means more than just SSO tax and a call us button. There can be real requirements that you are not going to be able to meet reasonably without AWS's foundational building blocks that have this built in at the lego brick level. Combine that with "cost of delay" to your product and "opportunity cost" for your engineering (devs, SREs, users spending time doing undifferentiated heavy lifting) and those lego blocks can quickly turn out less expensive. Any blog comparing pricing not mentioning these things means someone didn't align their infra with their business model and engineering patterns.

Put another way, think of the enterprise column in the longest pricing grid you've ever seen – the AWS blocks have everything on the right-most column built in. If you don't want those, don't pick that column. Google and Azure are in the Team column second from right. Digital Ocean, CloudFlare, the Pro column third from right. Various Heroku-likes in the Getting Started column at the left, and SuperMicro and Hetzner in the Self-Host column, as in, you're buying or leasing the hardware either way, it's just whose smart hands you're using. ALL of these have their place, with the Getting Started and Pro columns serving most folks on HN, Team best for most SMB, and Enterprise best for actual enterprise but also Pro and Team that need to serve enterprise or intend to grow to that.

Note that if you don't yet need an enterprise column on your own pricing grid, K8s on whoever is a great way to Get Started and go Pro yourself while learning things needed for continuous delivery and system resilience engineering. Those same patterns can then be shifted onto the Team and Enterprise column offerings from the big three (Google, Azure, AWS).

Here's my TL;DR blog post distilling all this:

If YAGNI, don't choose it.

wltr

2 hours ago

I’d like to point out that exaggerations like $430 an hour (that isn’t an average salary) or three Starbucks a day being something everyone casually buys weaken your point.

As for the rest of your comment: personally, I see it more as a pitch to use AWS than a conversation about whether everyone really needs that enterprise tier. Me, I’d prefer to control as much of my infra as possible, rather than offloading it to others for an insane price tag.

axus

5 hours ago

I'm going to be that guy and ask which service is the cheapest for AI to bring up new infrastructure and deploy to it?

OutOfHere

5 hours ago

The part they never tell you is that Hetzner carries a substantially higher risk of unfair account termination without warning. If you are okay with your account being terminated like that, with zero notice or reason, then Hetzner is cheap.

PeterStuer

5 hours ago

Do you have a substantiated source for the "12x the unfair risk of account termination without warning"? I tried looking for it, but all I could find were unsubstantiated grapevine posts ("I heard they ...") with lots of people stating the opposite.

vidarh

4 hours ago

They're cheap, so I'd expect they get a substantially higher proportion of users who might think their account termination is unfair, but that were actually flouting the rules, so I wouldn't be surprised if there were a higher proportion of claims of unfair account termination... I've recommended Hetzner to people since 2008-2009, and know lots of people who use it, and I've never heard first-hand accounts of termination of any kind. But anecdotes vs. data and all that.

OutOfHere

4 hours ago

If you haven't read first-hand reports, then you haven't read all that much. If you search on Reddit, you may see many reports. It happened to me personally after I started using their CPU at nearly 100% for about two months. That's a report for you. They're cheap because they don't actually want people using what they're paying for. This is a theme that I have seen with German services.

Speaking of their rules, those are a bit insane too. Speaking of "flouting rules", any prospective user should think about whether it's okay for a cloud vendor to keep spying on which processes the user is running, even without a court order; it is not okay.

If you keep moving the goalpost, then you will understand nothing. You might as well be an employee of Hetzner.

vidarh

4 hours ago

I've maxed out lots of CPUs at Hetzner over many years, and across multiple companies, and had clients do the same, so I find your claim to be dubious unless you're talking about shared CPU cloud instances in which case I wouldn't be surprised but also wouldn't consider it unfair.

So let me revise that to say I haven't seen any reports I can 1) verify are first hand, and 2) know accurately reflect an actual unfair termination. That is also why I don't bother going around reading accounts on Reddit.

PeterStuer

4 hours ago

Just curious: Was this a colo, dedicated server, managed server or VPS? And since you mention "CPU at nearly 100% for about two months", was this potentially crypto mining?

OutOfHere

4 hours ago

Edited. Not only have I read numerous reports of it both on this site and on Reddit, but it personally happened to me around 2022. The number of such reports that I have read is easily 12x that of AWS. In contrast, AWS or any other cloud never did anything like this to me.

This is a risk if the CPU or another resource is used at close to 100% for a couple of months. Hetzner likes customers that pay for what they don't use.

gdulli

2 hours ago

Reddit anecdotes are not a good way to sell a given argument lol.

buyucu

5 hours ago

aws and azure are massively overpriced. there is no reason to use them.

nodesocket

5 hours ago

Saving $426/mo for a business seems like a waste of time and resources. The excessively frugal developer complex. How many hours did it take to do the migration?

fauigerzigerk

4 hours ago

Self-funded startups need to be frugal. And self-funded startups serving the not-for-profit sector in Europe need to be extra frugal.

The hours they put into not wasting money on AWS today could pay off many times if it makes their SaaS economically viable for their target audience.

Propelloni

4 hours ago

A quick background check on digitalsociety.coop reveals that 5000 US$/year was a significant sum for them in 2024 and that opportunity costs were probably negligible. Not everybody has money to burn.

gdulli

2 hours ago

As the business grows so will that inefficiency and migrating now is much less work than migrating later.

CaptainOfCoit

5 hours ago

> Saving $426/mo for a business seems like a waste of time and resources

How come? The baseline for that comparison will also stay static, regardless of how many TPS or whatever is going on, meanwhile the AWS price they're comparing to would only increase the more people use whatever they deploy.

rs_rs_rs_rs_rs

5 hours ago

You got me with the title and I was curious at first, but then I got to the part where it shows the bill and realized this is just a toy project.

CaptainOfCoit

5 hours ago

You can tell if a project is a toy or not based on the bill? How about actually looking at what they do? https://digitalsociety.coop/

It's literally an agency doing professional development for others, among other services. Clearly not "toys".

HN dismissals are going down in quality, at least they used to be well researched some years ago. Now people just spew out the first thing that comes up in their mind, and zero validation before hitting that "reply" button.

endymion-light

5 hours ago

It's really dismissive and frankly quite ignorant to have an attitude that just because a product doesn't have a massive AWS bill it's a toy project.

It's a rotten attitude, and judging a project's worth by an AWS bill is a very poor comparator. I could spin up a massive AWS bill doing some pointless machine learning workloads; is that suddenly a valid project in your eyes?

rs_rs_rs_rs_rs

4 hours ago

>I could spin up a massive aws bill doing some pointless machine learning workloads, is that suddenly a valid project in your eyes?

Can you spin it up on an AWS competitor for a fraction of the cost? Absolutely, I would be interested in reading about it!

endymion-light

3 hours ago

I will do, but my latest ML model is attempting to create ley lines of different McDonald's across the country; I don't think it's worthy of being considered a product.

cisophrene

5 hours ago

Dedicated servers on a host like Hetzner or OVH surely beat any virtualization-based cloud offering on price and performance. The tradeoff is availability. It's a great choice for entities optimizing on cost, but not a great choice if your business cannot tolerate downtime.

A good example is the big Lichess outage from last year [1]. Lichess is a non-profit that must also serve a huge user base. Given their financials, they have to go the cheap dedicated-server route (they host on OVH). They publish a spreadsheet somewhere with every resource they use to run the service, and last year I had fun calculating how much it would cost them if they were using a hyperscaler cloud offering instead. I don't remember exactly, but it was 5 or 6x the price they currently pay OVH.

The downside is that when you have an outage, your stuff is tied to physical servers and can't easily be migrated, whereas a cloud provider can easily move your workload around. In the case of the Lichess outage, some network device they had no control over went bad, and Lichess was down until OVH could fix it, which took many hours.

So, yes you get a great deal, but for a lot of businesses, uptime is more important than cost optimization and the physicality of dedicated servers is actually a serious liability.

[1]: https://lichess.org/@/Lichess/blog/post-mortem-of-our-longes...

CaptainOfCoit

5 hours ago

> It's a great choice for entities that are optimizing on cost, but not a great choice if your business cannot tolerate downtime.

Even hosting double of everything when you're doing dedicated servers will let you have cheaper monthly bills, compared to the same performance/$ you could get with AWS or whatever.

But Hetzner does seem a bit worse than other providers in that they have random failures in their own infrastructure, so you do need to take care if you wanna avoid downtime. I'm guessing that's how they can keep the prices so low.

> is that when you have an outage, your stuff is tied to physical servers and they can't easily be migrated

I think that's a problem in your design/architecture, if you don't have backups that live outside the actual servers you wanna migrate away from, or at least replicate the data to some network drive you can easily attach to a new instance in an instant.

yomismoaqui

5 hours ago

You can have reliability with physical servers.

When you pay 1/4 for 3X the performance you can duplicate your servers and then be paying 1/2 for 3X the performance.

I find it baffling that people forget about how things were done before the cloud.
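The cost arithmetic in that comment can be sketched with made-up numbers (all figures here are hypothetical, not from the comment):

```python
# Hypothetical numbers: bare metal at ~1/4 the price of cloud for ~3x the performance.
cloud_monthly = 1000.0                # illustrative cloud bill for baseline capacity
metal_monthly = cloud_monthly / 4     # one bare-metal deployment of the same workload
metal_perf_factor = 3.0               # roughly 3x the throughput of the cloud setup

# Duplicate every bare-metal server for redundancy:
redundant_metal_monthly = 2 * metal_monthly

# Half the cloud bill, still ~3x the performance:
print(redundant_metal_monthly)        # 500.0
print(redundant_metal_monthly / cloud_monthly)  # 0.5
```

The point being that full duplication of cheap hardware can still undercut a single non-redundant cloud deployment.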

CodesInChaos

3 hours ago

Hetzner only has one Datacenter/AZ per region. So you either risk a single region failure taking you down, or you lose performance from transferring data to another location.

abujazar

2 hours ago

These physics are exactly the same with AWS et al.

PeterStuer

5 hours ago

"5 or 6x the price they currently pay OVH"

So they could have had 100% redundant systems at OVH and still be under half the cost of a traditional "cloud" provider?

I would look at architecture and operations first. Their "main" node went down, and they had no way to quickly bring another instance of it online on a fresh OVH machine (typically provisioned in a few minutes, assuming they had no hot standby). If the same happened to their "main" VM at a "hyperscaler", I would guess they would have been up the same creek. It is not the difference between 120 and 600 seconds to provision a new machine that caused their 10 hours of downtime.

wolfi1

5 hours ago

is it really redundant when you host at the same provider?

CaptainOfCoit

5 hours ago

If you're doing VPSes, then maybe, as long as they're not on the same node. If it's dedicated servers, then probably.

But I think "redundancy" is more of a spectrum than a binary thing. You can be more or less redundant, even within the same VPS if you'd like, but that would of course be less redundant than hosting things across multiple data centers.

vidarh

4 hours ago

And it's cheap enough that you can have replicated setup across two different providers and still be cheaper than one expensive cloud provider.

While AWS is probably towards the safer end if you want to put all your eggs in one basket, people are still putting all their eggs in one basket if they have everything at AWS as well...

PeterStuer

4 hours ago

But that question remains the same whether you are renting bare metal or VMs. You can rent OVH servers located at different datacentres all over the globe, and their Cloud SLA has higher uptime guarantees than AWS (what that is worth depends on the value you place on an SLA ofc.)

namibj

4 hours ago

So host one on OVH and one on Hetzner?

jwr

5 hours ago

> when you have an outage, your stuff is tied to physical servers and they can't easily be migrated

I don't see how that follows? Could you please explain?

I run my stuff on Hetzner physical servers. It's deployed/managed through ansible. I can deploy the same configuration on another Hetzner cluster (say, in a different country, which I actually do use for my staging cluster). I can also terraform a fully virtual cloud configuration and run the same ansible setup on that. Given that user data gets backed up regularly across locations, I don't see the problem you are describing?
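A minimal sketch of what that kind of setup can look like: one playbook, with one inventory per cluster/location (hostnames and group names below are hypothetical, not from the comment):

```yaml
# inventories/production.yml -- one static inventory per Hetzner location;
# the same playbook (e.g. site.yml) runs unchanged against any of them.
all:
  children:
    app:
      hosts:
        app1.example.com:
        app2.example.com:
    db:
      hosts:
        db1.example.com:
  vars:
    env_name: production
```

Running `ansible-playbook -i inventories/staging.yml site.yml` versus `-i inventories/production.yml site.yml` then applies the identical configuration to a different cluster, and a Terraform-provisioned cloud environment can be slotted in the same way via a dynamic inventory.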

abujazar

2 hours ago

My experience is exactly the opposite. None of the cloud vendors are actually resilient; every single one of them has had major global outages. And when it happens, you've got no influence on how fast it gets fixed. The only way to build a truly resilient infrastructure with cloud vendors is mirroring across vendors. But it happens to be easier to mirror a private cloud between e.g. Hetzner and OVH than to maintain parallel setups in Azure and AWS.

LaurensBER

2 hours ago

This is a very good point but even with dedicated servers it's doable to build a resilient HA architecture.

OVH offers a managed kubernetes solution which for a team experienced with Kubernetes and/or already using containers would be a fairly straightforward way to get a solid HA setup up and running. Kubernetes has its downsides and complexity but in general it does handle hardware failures very well.

lossolo

5 hours ago

> The tradeoff is availability.

This is a myth, created so cloud providers can sell more, and so those who overpay can feel better. I've been using dedicated servers since 2005, so for 20 years across different providers. I have machines at these providers with 1000-1300 days of uptime.

debian3

4 hours ago

Same here, I've been running dedicated servers with OVH since 2009, and if anything bare-metal servers are more stable than before. I just replaced a set of servers from 2018; I didn't have any hardware problems during their 8 years of working under significant load. During that time I had 2 or 3 power outages and a few more network outages. Usually problems come in a cluster. I had a few years with nothing to report: 100% uptime. Dedicated is nice, but I guess it scares people. Hetzner uses lower hardware quality than OVH on some of their offerings, so your experience may vary. One of the most important things is to check that your server uses datacenter SSDs/HDDs with ECC RAM; it saves you a lot of problems.

petit_robert

an hour ago

> I have machines at these providers with 1000-1300 days of uptime

You did not say what system you use on them, but don't you need to reboot them to apply kernel upgrades, for instance?

lossolo

17 minutes ago

Most of them run Debian (some have Windows VMs running on those Debian machines), while a minority use Ubuntu. I reboot them once every few years when I upgrade the OS, kernel, or migrate to newer machine types.

I run most of the workloads in containers, but there are also some VMs (mostly Windows), and some workloads use Firecracker microVMs in containers. A small number of machines are rebooted more often because they occasionally need new kernel features and their workloads aren't VM-friendly, so they run on bare metal.

dizhn

4 hours ago

In fairness they might have been inaccessible during that time. :)