adamcharnock
4 hours ago
I cannot overstate the performance improvement of deploying onto bare metal. We typically see a doubling of performance, as well as extremely predictable baseline performance.
This is down to several things:
- Latency - having your own local network, rather than sharing some larger datacenter network fabric, gives around an order of magnitude lower latency
- Caches – right-sizing a deployment for the underlying hardware, and so actually allowing a modern CPU to do its job, makes a huge difference
- Disk IO – Dedicated NVMe access is _fast_.
And with it comes a whole bunch of other benefits:
- Auto-scalers become less important, partly because you have 10x the hardware for the same price, partly because everything runs at 2x the speed anyway, and partly because you have a fixed pool of hardware. This makes the whole system more stable and easier to reason about.
- No more sweating the S3 costs. Put a 15TB NVMe drive in each server and run your own MinIO/Garage cluster (alongside your other workloads). We're doing about 20GiB/s sustained on a 10 node cluster, 50k API calls per second (at published S3 request pricing that is roughly $0.02-$0.25 _per second_, i.e. $50k-$650k _per month_, in API charges alone; see the back-of-envelope sketch below).
- You get the same bill every month.
- UPDATE: more benefits - cheap fast storage, run huge PostgreSQL instances at minimal cost, less engineering time spent working around hardware limitations and cloud vagaries.
And, if you choose to invest in the above, it all costs 10x less than AWS.
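For the skeptical, here is that API-call math as a back-of-envelope sketch, assuming published S3 Standard request pricing in us-east-1 (your region/tier may differ):

```python
# Back-of-envelope S3 request costs at a sustained 50k calls/s.
# Assumed us-east-1 S3 Standard rates (check current pricing):
#   GET: $0.0004 per 1,000 requests; PUT: $0.005 per 1,000 requests
RATE = 50_000  # API calls per second
SECONDS_PER_MONTH = 60 * 60 * 24 * 30

for verb, price_per_1k in (("GET", 0.0004), ("PUT", 0.005)):
    per_second = RATE / 1_000 * price_per_1k
    per_month = per_second * SECONDS_PER_MONTH
    print(f"{verb}: ${per_second:.2f}/s, ~${per_month:,.0f}/month")

# GET: $0.02/s, ~$51,840/month
# PUT: $0.25/s, ~$648,000/month
```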
Pitch: If you don't want to do this yourself, then we'll do it for you for half the price of AWS (and we'll be your DevOps team too):
Email: adam@ above domain
torginus
an hour ago
Yup, I hope to god we are moving past the 'everything's fast if you have enough machines' and 'money is not real' era of software development.
I remember the point in my career when I moved from a cranky old .NET company, where we handled millions of users from a single cabinet's worth of beefy servers, to a cloud-based shop where we used every cloud buzzword tech under the sun (but mainly everything was containerized node microservices).
I shudder thinking back to the eldritch horrors I saw on the cloud billing side, and the funny thing is, we were constantly fighting performance problems.
bombcar
2 minutes ago
My conspiracy theory is that "cloud scaling" was entirely driven by people who grew up watching sites get slashdotted and thought it was the absolute most important thing in the world that you can quickly scale up to infinity billion requests/second.
rightbyte
4 hours ago
What is old is new again.
My employer is so conservative and slow that they are forerunning this Local Cloud Edge Our Basement thing by just not doing anything.
darkwater
11 minutes ago
> What is old is new again.
I think there is a generational part as well. Those of us who are now deep in our 40s or 50s grew up professionally in a self-hosted world, and some of us are now in decision-making positions, so we don't necessarily have to take the cloud pill anymore :)
Half-joking, half-serious.
radu_floricica
3 hours ago
> What is old is new again.
Over the years I tried occasionally to look into cloud, but it never made sense. A lot of complexity and significantly higher cost, for very low performance and a promise of "scalability". You virtually never need scalability so fast that you don't have time to add another server - and at baremetal costs, you're usually about a year ahead of the curve anyways.
hibikir
an hour ago
A nimble enough company doesn't need it, but I've had 6 months of lead time to request one extra server in an in-house data center due to sheer organizational failure. The big selling point of the cloud really was that one didn't have to deal with the division lording over the data center, or have any and all access, even just to log in, gated by a priesthood who knew less Unix than the programmers.
I've been in multiple cloud migrations, and it was always solving political problems that were completely self-inflicted. The decision was always reasonable if you looked just at the people in the org having to decide between the internal process and the cloud bill. But I have little doubt that if there had been any goal alignment between the people managing the servers and those using them, most of those migrations would not have happened.
mgkimsal
21 minutes ago
I've been in projects where they're 'on the cloud' to be 'scalable', but I had to estimate my CPU needs up front for a year to get that in the budget, and there wasn't any defined process for "hey, we're growing more than we assumed - we need a second server - or more space - or faster CPUs - etc". Everything that 'cloud' is supposed to allow for - but ... that's not budgeted for - we'll need to have days of meetings to determine where the money for this 'upgrade' is coming from. But our meetings are interrupted by notices from teams that "things are really slow/broken"...
0cf8612b2e1e
9 minutes ago
The management overhead in requesting new cloud resources is now here. Multiple rounds of discussion and TPS reports to spin up new services that could be a one-click deploy.
The bureaucracy will always find a way.
AtlasBarfed
30 minutes ago
Yeah, clouds are such a huge improvement over what was basically an industry-standard practice of: oh, you want a server? Fill out this 20-page form and we'll get you your server in 6 to 12 months.
But we don't really need one-minute response times from the cloud. So something like Hetzner may be just fine: we'll get it to you within an hour. That's still light years ahead of what we used to have.
And if the management and cost side comes with bare metal (or close to bare metal) performance on the provider side, then that is all good.
And this doesn't even address the fact that yeah, AWS has a lot of hidden costs, but a lot of those managed data center outsourcing contracts where you were subjected to those lead times for new servers... really weren't much cheaper than AWS back in the day.
odie5533
2 hours ago
Complexity? I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks and I don't have to worry about OS upgrades and patches. Or a highly available load balancer with infinite scale.
codegeek
2 hours ago
This is how the cloud companies keep you hooked. I am not against them of course, but the notion that no one can self-host in production because "it is too complex" is something that we have been fed over the last 10-15 years. Deploying a production db on a dedicated server is not that hard. But people now think that unless they do cloud, they are amateurs. It is sad.
speleding
2 hours ago
I agree that running servers onprem does not need to be hard in general, but I disagree when it comes to doing production databases.
I've done onprem highly available MySQL for years, and getting the whole master/slave thing to go just right during server upgrades was really challenging. On AWS upgrading MySQL server ("Aurora") is really just a few clicks. It can even do blue/green deployment for you, where you temporarily get the whole setup replicated and in sync so you can verify that everything went OK before switching over. Disaster recovery (regular backups to off site & ability to restore quickly) is also hard to get right if you have to do it yourself.
fisf
26 minutes ago
If you are running k8s on prem, the "easy" way is to use a mature operator that takes care of all of that.
https://github.com/percona/percona-xtradb-cluster-operator, https://github.com/mariadb-operator/mariadb-operator, or CNPG for Postgres needs. They all work reasonably well and cover all the basics (HA, replication, backups, recovery, etc).
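To give a flavour of how little config that takes, a minimal sketch of creating a CNPG cluster via the Kubernetes Python client (names, namespace and the backup bucket are placeholders; assumes the CNPG operator is already installed, and omits credentials config):

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() in-cluster

# Minimal CloudNativePG Cluster: 3 instances gives HA with automated
# failover; barmanObjectStore enables continuous backups (S3
# credentials config omitted here for brevity).
cluster = {
    "apiVersion": "postgresql.cnpg.io/v1",
    "kind": "Cluster",
    "metadata": {"name": "app-db", "namespace": "databases"},
    "spec": {
        "instances": 3,
        "storage": {"size": "100Gi"},
        "backup": {
            "barmanObjectStore": {
                "destinationPath": "s3://backups/app-db",  # placeholder bucket
                "endpointURL": "https://minio.internal:9000",
            }
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="postgresql.cnpg.io", version="v1",
    namespace="databases", plural="clusters", body=cluster,
)
```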
klooney
18 minutes ago
It's really hard to do blue/green on prem with giant expensive database servers. Maybe if you're super big and you can amortize them over multiple teams, but most shops aren't and can't. The cloud is great.
benjiro
5 minutes ago
> I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks
I have set up AWS Postgres and Redis, and I know it's more than a few clicks. There is simply basic information that you need to link between services, and it does not matter whether it's cloud or hardware: you still need to do the same steps, be it from the CLI or a web interface.
And frankly, these days with LLMs, there's no excuse anymore. You can literally ask an LLM to do the steps and explain them to you, and you're off to the races.
> I don't have to worry about OS upgrades and patches
Single command and reboot...
> Or a highly available load balancer with infinite scale.
Unless you're Google, overrated...
You can literally rent a load balancer from places like Hetzner for 10 bucks, and if you're old-fashioned, you can even do DNS balancing.
Or you simply rent a server with 10x the performance of what Amazon gives you (for the same price or less), and you do not need a load balancer. I mean, for 200 bucks you rent a 48-core/96-thread server at Hetzner... Who needs a load balancer again? You will do millions of requests on a single machine.
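(For reference, old-fashioned DNS balancing just means publishing several A records for one hostname; a quick sketch of what a client sees, with example.com standing in for your domain:)

```python
import socket

# Round-robin DNS: one hostname, several A records; clients spread
# load by picking among the returned addresses.
infos = socket.getaddrinfo("example.com", 443, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in infos})
print(addresses)  # more than one entry means DNS-level balancing
```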
AznHisoka
an hour ago
As a self-hosting fan, I can't even fathom how hard it would be to even get started running a Postgres or Redis cluster on AWS.
Like, where do I go? Do I search for Postgres? If so, where? Does the IP of my cluster change? If so, how do I make it static? Also, can non-AWS servers connect to it? No? Then how do I open up the firewall and allow it? And what happens if it uses too many resources? Does it shut down by itself? What if I wanna fine-tune a config parameter? Do I ssh into it? Can I edit it in the UI?
Meanwhile, in all that time spent finding out, I could ssh into a server, code and run a simple bash script to download, compile, run. Then another script to replicate. And I can check the logs, change any config parameter, restart, etc. No black box to debug if shit hits the fan.
infecto
an hour ago
This smells like “Dropbox is just rsync”. No skin in the game; I think there are pros and cons to each, but a Postgres cluster can be as easy as a couple of clicks or an entry in a provisioning script. I don't believe you would be able to architect the same setup with a simple single-server ssh and a simple bash script. Unless you already wrote a bash script that magically provisions the cluster across various machines.
cortesoft
12 minutes ago
Your comment seems much more in the vein of "I already learned how to do it this way, and I would have to learn something new to do it the other way".
Which is of course true, but it is true for all things. Provisioning a cluster in AWS takes a bit of research and learning, but so did learning how to set it up locally. I think most people who know how to do both will agree it is simpler to learn how to use the AWS version than learning how to self host it.
nkozyra
an hour ago
Having lived in both worlds, there are services wherein, yeah, host it yourself. But having done DB on-prem/on-metal, dedicated hosting, and cloud, databases are the one thing I'm happy to overpay for.
The things you describe involve a small learning curve, different for each cloud environment, but then you never have to think about it again. You don't have to worry about downtime (if you set it up right) or running a bash script... literally nothing else has to be done.
Am I overpaying for Postgres compared to the alternatives? Hell yeah. Has it paid off? 100%, would never want to go back.
Volundr
an hour ago
> Do i search for Postgres?
Yes. In your AWS console right after logging in. And pretty much all of your other setup and config questions are answered by just filling out the web form right there. No sshing to change parameters; they are all available right there.
> And what happens if it uses too much resources?
It can't. You've chosen how many resources (CPU/memory/disk) to give it. Runaway cloud costs come from bill-by-usage stuff like Redshift, S3, Lambda, etc.
I'm a strong advocate for self (for some value of self) hosting over cloud, but you're making cloud out to be far more difficult than it is.
pavel_lishin
an hour ago
> As a self hosting fan, i cant even fathom how hard it would be to even get started running a Postgres or redis cluster on AWS. Like, where do I go? Do i search for Postgres? If so where?
Anything you don't know how to do - or haven't even searched for - either sounds incredibly complex, or incredibly simple.
trenchpilgrim
an hour ago
A fun one in the cloud is "when I upgrade to a new version of Postgres, how long is the downtime and what happens to my indexes?"
mschuster91
26 minutes ago
For AWS RDS, no big deal. Bare metal or Docker? Oh now THAT is a world of pain.
Seriously, I despise PostgreSQL in particular for how fucking annoying it is to upgrade.
mschuster91
27 minutes ago
Actually... for Postgres specifically, it's less than 5 minutes to do so in AWS and you get replication, disaster recovery and basic monitoring all included.
I hated having to deal with PostgreSQL on bare metal.
To answer your questions should someone ask these as well and wish answers:
> Does the IP of my cluster change? If so how to make it static?
Use the DNS entry that AWS gives you as the "endpoint", done. I think you can pin a stable Elastic IP to RDS as well if you wish to expose your RDS DB to the Internet, although I have really no idea why one would want that given the potential security issues.
> Also can non-aws servers connect to it? No?
You can expose it to the Internet in the creation web UI. I think the default the assistant uses is to open it to 0.0.0.0/0, but the last time I did that was many years ago, so I hope that AWS asks you about what you want these days.
>Then how to open up the firewall and allow it?
If the above does not, create a Security Group, assign the RDS server to that Security Group and create an Ingress rule that either only allows specific CIDRs or a blanket 0.0.0.0/0.
> And what happens if it uses too much resources? Does it shutdown by itself?
It just gets dog slow if your I/O quota is exhausted, and it goes into an error state when the disk is full. Expand your disk quota and the RDS database becomes accessible again.
> What if i wanna fine tune a config parameter? Do I ssh into it? Can i edit it in the UI?
No SSH at all, not even for manually unfucking something; for that you need the assistance of AWS support - but in about six years I never had a database FUBAR itself.
As for config parameters, there's a UI for this called "parameter/option groups"; you can set almost all config parameters there, and you can use these as templates for other servers you need as well.
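And if you'd rather script it than click, a minimal boto3 sketch of the same flow (region, identifiers, CIDR and instance class are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")
rds = boto3.client("rds", region_name="eu-central-1")

# Security Group with ingress restricted to an internal CIDR
# (use 0.0.0.0/0 only if you really mean "open to the Internet").
sg = ec2.create_security_group(
    GroupName="app-db-sg", Description="Postgres access", VpcId="vpc-123456"
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 5432, "ToPort": 5432,
        "IpRanges": [{"CidrIp": "10.0.0.0/16"}],
    }],
)

rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    Engine="postgres",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="app",
    MasterUserPassword="change-me",  # use Secrets Manager in practice
    VpcSecurityGroupIds=[sg["GroupId"]],
    PubliclyAccessible=False,
    MultiAZ=True,
)

# Block until available, then read the stable DNS endpoint
# (this is the "endpoint" you point your app at, not an IP).
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="app-db")
desc = rds.describe_db_instances(DBInstanceIdentifier="app-db")
print(desc["DBInstances"][0]["Endpoint"]["Address"])
```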
wahnfrieden
an hour ago
It is not as simple as you describe to set up HA multi-region Postgres
If you don't care about HA, then sure everything becomes easy! Until you have a disaster to recover and realize that maybe you do care about HA. Or until you have an enterprise customer or compliance requirement that needs to understand your DR and continuity plans.
Yugabyte is the closest I’ve seen to achieving that simplicity with self-hosted multi-region and HA Postgres, and it is still quite a bit more involved than the steps you describe, and definitely more work than paying for their AWS service. (I mention it instead of Aurora because there's no self-host process to compare directly there, as Aurora is proprietary.)
whstl
an hour ago
If you are talking about RDS and ElastiCache, it’s definitely NOT a few clicks if you want it secure and production-ready, according to AWS itself in their docs and training.
And before someone says Lightsail: it's not meant for high availability/infinite scale.
binary132
29 minutes ago
If you don’t find AWS complicated you really haven’t used AWS.
lelanthran
an hour ago
> I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks
It's "only a few clicks" after you have spent a signficant amount of time learning AWS.
trenchpilgrim
2 hours ago
If you were personally paying the bill, you'd probably choose self-hosting on cost alone. Deploying a DB with HA and offsite backups is not hard at all.
fun444555
2 hours ago
I have done many Postgres deploys on bare metal. The IOPS and storage space saved (ZFS compression, because Postgres's own is meh) are huge. I have regularly used hosted DBs, but largely for toy DBs in GBs, not TBs.
Anyway, it is not hard, and controlling upgrades saves so much time. Having a client's db force-upgraded when there is no budget for it sucks.
Anyway, I encourage you to learn/try it when you have the opportunity.
naasking
2 hours ago
> I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks and I don't have to worry about OS upgrades and patches
Last I checked, Stack Overflow and all of the Stack Exchange sites were hosted on a single server. The people who actually need to handle more traffic than that are in the 0.1% category, so I question your implicit assumption that you actually need a Postgres and Redis cluster, or that this represents any kind of typical need.
trenchpilgrim
an hour ago
SO was hosted on a single rack last I checked, not a single box. At the time they had an MS SQL cluster.
Also, databases can easily see a ton of internal traffic. Think internal logistics/operations/analytics. Even a medium size company can have a huge amount of data, such as tracking every item purchased and sold for a retail chain.
kitd
2 hours ago
People are usually the biggest cost in any organisation. If you can run all your systems without the sysadmins & netadmins required to keep it all upright (especially at expensive times like weekends or the run-up to Black Friday/Xmas), you can save yourself a lot more than the extra it'll cost to get a cloud provider to do it all for you.
ecshafer
2 hours ago
Every large organization that is all in on cloud that I have worked at has several teams doing cloud work exclusively (CI/CD, DevOps, SRE, etc), but every individual team is spending significant amounts of its time doing cloud development on top of that work.
rcxdude
2 hours ago
This. There's a lot of talk of 'oh, you will spend so much time managing your own hardware' when I've found in practice it's much less time than wrangling the cloud infrastructure. (Especially since the alternatives are usually still a hosting provider, which means you don't have to physically touch the hardware at all; though frankly that's often also an overblown amount of time. The building/internet/cooling is what costs money, but there's already a wide array of co-location companies set up to provide exactly that.)
epistasis
an hour ago
I think you are very right, and to be specific: IAM roles, connecting security groups, terraform plan/apply cycles, running Atlantis through GitHub; all that takes tremendous amounts of time and requires understanding a very large set of technologies on top of the basic networking/security/Postgres knowledge.
ecshafer
2 hours ago
As for the cost to run data centers for a large company that is past the co-location phase, I am not sure where those calculations come out. But yeah, in my experience, running even a fairly large number of bare metal *nix servers in colocation facilities is really not that time consuming.
chatmasta
2 hours ago
I can’t believe this cloud propaganda remains so pervasive. You’re just paying DevOps and “cloud architects” instead.
codegeek
2 hours ago
Exactly. It's sad that we have been brainwashed by the cloud propaganda long enough now. Everyone and their mother thinks that to set up anything in production you need cloud, otherwise it is amateurish. Sad.
Ekaros
16 minutes ago
Wouldn't you want someone watching over cloud infra at those times too? So maybe slightly fewer people, but you still need some people ready.
mjr00
2 hours ago
Yeah I always just kinda laugh at these comparisons, because it's usually coming from tech people who don't appreciate how much more valuable people's time is than raw opex. It's like saying, you know it's really dumb that we spend $4000 on Macbooks for everyone, we could just make everyone use Linux desktops and save a ton of money.
wredcoll
40 minutes ago
If "cloud" took zero time, then sure.
It actually takes a lot of time.
mjr00
19 minutes ago
"It's actually really easy to set up Postgres with high availability and multi-region backups and pump logs to a central log source (which is also self-hosted)" is more or less equivalent to "it's actually really easy to set up Linux and use it as a desktop"
In fact I'd wager a lot more people have used Linux than set up a proper redundant SQL database
grim_io
an hour ago
What is this?!
You are self-managing expensive dedicated hardware in the form of MacBooks, instead of renting Azure Windows VMs?!
Shame!
lozf
41 minutes ago
Don't be silly - the MacBook Pros are just used to RDP to the Azure Windows VMs ;)
Arch-TK
2 hours ago
What is more likely to fail? The hardware managed by Hetzner or your product?
I'm not saying that you won't experience hardware failures, I am just saying that you also need to remember that if you want your product to keep working over the weekend then you must have someone ready to fix it over the weekend.
grim_io
an hour ago
Cloud providers and even cloudflare go down regularly. Relax.
fwip
38 minutes ago
Sure - but when AWS goes down, Amazon fixes it, even on the weekends. If you self-host, you need to pay a person to be on call to fix it.
rypskar
16 minutes ago
Not only that. When your self-hosted setup goes down, your customers complain that you are down. When AWS goes down, your customers complain that the internet is down.
CursedSilicon
21 minutes ago
AWS doesn't have to pay people (LOTS OF PEOPLE) to keep things running over the weekends?
And they aren't...just passing those costs on to their customers?
wredcoll
37 minutes ago
I mean, yes, but also I get "3 nines" uptime by running a website on a box connected to my isp in my house. (it would easily be 4 or 5 nines if I also had a stable power grid...)
There's a lot, a lot of websites where downtime just... doesn't matter. Yes, it adds up eventually, but if you go to Twitter and it's down again you just come back later.
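(For concreteness, what those nines actually allow per year:)

```python
# Allowed downtime per year for N nines of availability.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in (3, 4, 5):
    availability = 1 - 10 ** -nines
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{nines} nines ({availability:.3%}): {downtime_min:7.1f} min/year")

# 3 nines (99.900%):   526.0 min/year (~8.8 hours)
# 4 nines (99.990%):    52.6 min/year
# 5 nines (99.999%):     5.3 min/year
```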
HPsquared
2 hours ago
That's how they can get away with such seemingly high prices.
exe34
2 hours ago
Except you now have your developers chasing their own tails figuring out how to insert the square peg into the round hole without bankrupting the company. Cloud didn't save time, it just replaced the wheels for the hamsters.
binary132
30 minutes ago
It’s kinda good if your requirements might quadruple or disappear tonight or tomorrow, but you should always have a plan to port to reserved / purchased capacity.
Lalabadie
2 hours ago
I'm a designer with enough front-end knowledge to lead front-end dev when needed.
To someone like me, especially on solo projects, using infra that effectively isolates me from the concerns (and risks) of lower-level devops absolutely makes sense. But I welcome the choice because of my level of competence.
The trap is scaling an org by using that same shortcut until you're bound to it by built-up complexity or a persistent lack of skill/concern in the team. Then you're never really equipped to reevaluate the decision.
f1shy
an hour ago
If everything is properly done, it should be next to trivial to add a server. When I was working on that, we had a written procedure which, when followed strictly, would take less than an hour.
ep103
3 hours ago
The benefit of cloud has always been that it allows the company to trade capex for opex. From an engineering perspective, it trades scalability for complexity, but this is a secondary effect compared to the former tradeoff.
PeterStuer
2 hours ago
"trade capex for opex"
This has nothing to do with cloud. Businesses have forever turned IT expenses from capex to opex. We called this "operating leases".
et1337
2 hours ago
I’ve heard this a lot, but… doesn’t Hetzner do the same?
radiator
2 hours ago
Hetzner is also a cloud. You avoid buying hardware, you rent it instead. You can rent either VMs or dedicated servers, but in both cases you own nothing.
throwaway894345
2 hours ago
If you’re just running some CRUD web service, then you could certainly find significantly cheaper hosting in a data center or similar, but also if that’s the case your hosting bill is probably a very small cost either way (relative to other business expenses).
> You virtually never need scalability so fast that you don't have time to add another server
What do you mean by “time to add another server?” Are you thinking about a minute or two to spin up some on-demand server using an API? Or are you talking about multiple business days to physically procure and install another server?
The former is fine, but I don’t know of any provider that gives me bare metal machines with beefy GPUs in a matter of minutes for low cost.
Aissen
3 hours ago
As an infrastructure engineer (amongst other things), hard disagree here. I realize you might be joking, but a bit of context here: a big chunk of the success of Cloud in more traditional organizations is the agility that comes with it: (almost) no need to ask permission to anyone, ownership of your resources, etc. There is no reason that baremetal shouldn't provide the same customer-oriented service, at least for the low-level IaaS, give-me-a-VM-now needs. I'd even argue this type of self-service (and accounting!) should be done by any team providing internal software services.
rcxdude
2 hours ago
I think also this was only a temporary situation caused by the IT departments in these organisations being essentially bypassed. Once it became a big important thing, they basically started to take control of it and you get the same problems (in fact potentially more so, because the expense means there's more pressure to cut down resources).
abujazar
3 hours ago
The permissions and ownership part has little to do with the infrastructure – in fact I've often found it more difficult to get permissions and access to resources in cloud-heavy orgs.
ambicapter
2 hours ago
I'm at a startup and I don't have access to the terraform repo :( and console is locked down ofc.
michaelt
3 hours ago
"No need to ask permission" and "You get the same bill every month" kinda work against one another here.
Aissen
2 hours ago
I should have been more precise… Many sub-orgs have budget freedom to do their job, and not having to go through a central authority to get hardware is often a feature. Hence why Cloud works so well in non-regulatory heavy traditional orgs: budget owner can just accept the risks and let the people do the work. My comment was more of a warning to would-be infrastructure people: they absolutely need to be customer-focused, and build automation from the start.
blibble
2 hours ago
don't underestimate the ability of traditional organisations to build that process around cloud
you keep the usual BS to get hardware, plus now it's 10x more expensive and requires 5x the engineering!
kccqzy
2 hours ago
That's a cultural issue. Initially at my workplace people needed to ask permission to deploy their code. The team approving the deployments got sick of it and built a self-service deployment tool with security controls built in, and now deployment is easy. All it takes is a culture of trusting other fellow employees, a culture of automating, and a culture of valuing internal users.
Aissen
2 hours ago
Agreed, that's exactly what I was aiming at. I'm not saying that it's the only advantage of Cloud, but that orgs with a dysfunctional resource-access culture were a fertile ground for cloud deployments.
Basically: some manager gets fed up with weeks/months of delays for baremetal or VM access -> takes risks and gets cloud services -> successful projects in less time -> gets promoted -> more cloud in the org.
alexchantavy
3 hours ago
> no need to ask permission to anyone, ownership of your resources, etc
In a large enough org that experience doesn’t happen though - you have to go through and understand how the org’s infra-as-code repo works, where to make your change, and get approval for that.
misiek08
3 hours ago
You also need to get budget, a few months earlier, sometimes even legal approval. Then you have security rules, "preferred" services, and the list goes on...
rightbyte
2 hours ago
Well, yeah: I frame it as a joke but I do mean it.
I don't argue there aren't special cases for using fancy cloud vendors, though. But classical datacentre rentals get you almost always there for less.
Personally I like being able to touch and hear the computers I use.
infogulch
3 hours ago
Like https://oxide.computer/ ?
Damogran6
2 hours ago
As a career security guy, I've lost count of the battles I've lost in the race to the cloud...now it's 'we have to up the budget $250k a year to cover costs' and you just shrug.
The cost for your first on-prem datacenter server is pretty steep...the cost for the second one? Not so much.
marcosdumay
an hour ago
> What is old is new again.
It's not really. It just happens that when there is a huge bullshit hype out there, the people who fall for it regret it and come back to normal after a while.
Better things are still better. And this one was clearly only better for a few use-cases that most people shouldn't care about since the beginning.
kccqzy
2 hours ago
My employer also resisted using cloud compute and sent staff explanations why building our own data centers is a good thing.
HPsquared
3 hours ago
"Do nothing, Win"
rixed
19 minutes ago
I do not disagree, but just for the record, that's not what the article is about. They migrated to Hetzner's cloud offering.
If they had migrated to a bare metal solution they would certainly have enjoyed an even larger increase in perf and decrease in costs, but it makes sense that they opted for the cloud offering instead given where they started from.
realitysballs
3 hours ago
Ya, but then you need to pay for a team to maintain the network and continually secure, monitor, and update/patch the server. The salaries of those professionals really only make sense for a certain sized organization.
I still think small-to-midsized orgs may be better off in cloud for security/operations cost optimization.
esskay
2 hours ago
You still need those same people even if you're running on a bunch of EC2 and RDS instances, they aren't magically 'safer'.
lnenad
2 hours ago
I mean, by definition, yes they are. RDS is locked down by default. Also, if you're using ECS/Fargate (so not EC2) as the person writing the article does, it's also pretty much locked down outside of your app manifest definitions. And your infra management/cost is minimal compared to running k8s on bare metal.
abenga
2 hours ago
This implies cloud infrastructure experts are cheaper than bare metal Linux/networking/etc experts. Probably in most smaller organizations, you have the people writing the code manage the infra, so it's an "invisible cost", but ime, it's easy to outgrow this and need someone to keep cloud costs in check within a couple of years, assuming you are growing as fast as an average start-up.
adamcharnock
an hour ago
I very much understand this, and that is why we do what we do. Lots of companies feel exactly as you say. I.e. Sure it is cheaper and 'better', but we'll pay for it in salaries and additional incurred risk (what happens if we invest all this time and fail to successfully migrate?)
This is why we decided to bundle engineering time with the infrastructure. We'll maintain the cluster as you say, and with the time left over (the majority) we'll help you with all your other DevOps needs too (CI/CD pipelines, containerising software, deploying HA Valkey, etc). And even after all that, it still costs less than AWS.
Edit: We also take on risk with the migration – our billing cycle doesn't start until we complete the migration. This keeps our incentives aligned.
dorkypunk
an hour ago
Then you have to replace those professionals with even more specialized and expensive professionals in order to be able to deploy anything.
DisabledVeteran
2 hours ago
That used to be the case until recently. As much as neither I nor you want to admit it, the truth is ChatGPT can handle 99% of what you would pay for "a team to maintain network and continually secure and monitor the server and update/patch." In fact, ChatGPT surpasses them, as it is all-encompassing. Any company can now simply pay for OpenAI's services and save the majority of the money they would have spent on the "salaries of those professionals." BTW, ChatGPT Pro is only $200 a month... who do you think they would rather pay?
rightbyte
2 hours ago
Aren't most vulnerabilities in your own server software or configs anyway?
Thicken2320
3 hours ago
Using the S3 API is like chopping onions, the more you do it, the faster you start crying.
scns
3 hours ago
Less to no crying when you use a sharp knife. Japanese chefs say: no wonder you are crying, you squash them.
Esophagus4
3 hours ago
Haha!
My only “yes, but…” is that this:
> 50k API calls per second (at published S3 request pricing that is roughly $0.02-$0.25 _per second_, i.e. $50k-$650k _per month_, in API charges alone)
kind of smells like abuse of S3. Without knowing the use case, maybe a different AWS service is a better answer?
Not advocating for AWS, just saying that maybe this is the wrong comparison.
Though I do want to learn about Hetzner.
wredcoll
32 minutes ago
You're (probably) not wrong about the abuse thing, but it sure is nice to just not care about that when you have fixed hardware. I find trying to guess which of the 200 AWS services is the cheapest kinda stressful.
wiether
2 hours ago
They conveniently provide no detail about the use case, so it's hard to tell.
But, yeah, there's certainly a solution to provide better performance for cheaper, using other settings/services on AWS.
adamcharnock
2 hours ago
We're hoping to write a case study down the road that will give more detail. But the short version is that not all parts of the client's organisation have aligned skills/incentives. So sometimes code is deployed that makes, shall we say, 'atypical use' of the resources available.
In those cases, it is great to a) not get a shocking bill, and b) be able to somewhat support this atypical use until it can be remedied.
wiether
an hour ago
Thank you for the reply
I'm honestly quite interested to learn more about the use case that required those 50k API calls!
I've seen a few cases of using S3 for things it was never intended for, but nothing close to this scale
nikodunk
21 minutes ago
If you’re big, invest in this. If you’re small, slap Dokploy/Coolify on it.
lazyfanatic42
an hour ago
Haha, this reminds me of when I used to manage a Solaris system consisting of 2 servers: SPARC T7, 1 box in one state and 1 box in another. No load balancer.
Thousands and thousands of users depending on that hardware.
Extremely robust hardware.
epistasis
an hour ago
What do you recommend for configuration management? I've had a fairly good experience with Ansible, but that was a long time ago... anything new in that space?
dijit
an hour ago
"new", I'm not sure, but I deployed 2,500 physical Windows machines with SaltStack and it worked pretty good.
it also handled some databases and webservers on FreeBSD and Windows, I considered it better than Ansible.
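(On "new": one newer, Python-native option in that space is pyinfra; a minimal sketch, with package and service names purely illustrative:)

```python
# deploy.py -- run with: pyinfra inventory.py deploy.py
from pyinfra.operations import apt, systemd

apt.packages(
    name="Install nginx",
    packages=["nginx"],
    update=True,  # apt-get update before installing
)

systemd.service(
    name="Enable and start nginx",
    service="nginx",
    running=True,
    enabled=True,
)
```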
chubot
2 hours ago
Does anyone have experience with say Linode and Digital Ocean performance versus AWS and GCE?
They still use VMs, but as far as I know they have simple reserved instances, not “cloud”-like weather?
Is the performance better and more predictable on large VPSes?
(edit: I guess a big difference is that a VPS can have local NVMe that is persistent, whereas EC2 local disk is ephemeral?)
pton_xd
2 hours ago
I can't speak to Linode but in my experience the Digital Ocean VM performance is quite bad compared to bare metal offerings like Hetzner, OVH, etc. It's basically comparable to AWS, only a bit cheaper.
matt-p
15 minutes ago
It's essentially the same product, but you do get lower disk latency. Best performance is always going to be a dedicated server, which in the US seems to start around $80-100/month (just checking on serversearcher.com); DO and so on do provide a "dedicated CPU" product if that's too much.
inapis
2 hours ago
No. DO can be equally noisy, but I've only tried their regular instances, not their premium AMD/Intel ones.
cess11
2 hours ago
I've left a job because it was impossible to explain this to an ex-Googler on the board who just couldn't stop himself from trying to be a CTO and clownmaker at the company.
The rough part was that we had made hardware investments and spent almost a year setting up the system for HA and immediate (i.e. 'low-hanging fruit') performance tuning and should have turned to architectural and more subtle improvements. This was a huge achievement for a very small team that had neither the use nor the wish to go full clown.
exe34
2 hours ago
I love that you're not just preaching - you're offering the service at a lower cost. (I'm not affiliated and don't claim anything about their ability/reliability).
jnsaff2
3 hours ago
There is a graph database which does disk IO for database startup, backup and restore as single threaded, sequential, 8kb operations.
On EBS it does at most 200MB/s of disk IO, just because EBS operation latency even on io2 is about 0.5 ms. The disk itself can go much faster: disk benchmarks can easily do multi-GB/s on nodes that have enough EBS throughput.
On instance local SSD on the same EC2 instance it will happily saturate the whatever instance can do (~2GB/s in my case).
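The arithmetic behind that is worth spelling out: with synchronous single-threaded IO, each request must finish before the next is issued, so latency, not bandwidth, sets the ceiling. A sketch using the figures above (any gap up to the observed throughput presumably comes from request merging/readahead):

```python
# throughput = queue_depth * block_size / latency
BLOCK = 8 * 1024        # 8 KiB requests
EBS_LATENCY = 0.5e-3    # ~0.5 ms per op on io2 (from the comment above)

qd1_ebs = BLOCK / EBS_LATENCY            # bytes/s at queue depth 1
print(f"EBS, queue depth 1: {qd1_ebs / 1e6:.0f} MB/s")  # ~16 MB/s

# To reach ~2 GB/s you need parallelism the single-threaded
# startup/backup path simply doesn't have:
print(f"queue depth needed for 2 GB/s: ~{2e9 / qd1_ebs:.0f}")  # ~122
```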
anonzzzies
3 hours ago
What graph db is that?
api
an hour ago
How do you deprogram your devs and ops people from the learned helplessness of cloud native ideology?
I've found that it's almost impossible to even hire people who aren't terrified of the idea of self-hosting. This is deeply bizarre for someone who installed Linux from floppy disks in 1994, but most modern devs have fully swallowed the idea that cloud handles things for them that mere mortals cannot handle.
This, in turn, is a big reason why companies use cloud in spite of the insane markup: it's hard to staff for anything else. Cloud has utterly dominated the developer and IT mindset.
CursedSilicon
14 minutes ago
>I've found that it's almost impossible to even hire people who aren't terrified of the idea of self-hosting
Are y'all hiring? [1]
I did 15 months at AWS and consider it the worst career move of my life. I much prefer working with self-hosting where I can actually optimize the entire hardware stack I'm working with. Infrastructure is fun to tinker with. Cloud hosting feels like a miserable black box that you dump your software into and "hope"
awestroke
41 minutes ago
So you'd rather self-host a database as well? How do you prevent data loss? Do you run a whole database cluster in multiple physical locations with automatic failover? Who will spend time monitoring replication lag? Where do you store backups? Who is responsible for tuning performance settings?
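(To make "monitoring replication lag" concrete: the self-hosted version means owning a check like this yourself, a sketch assuming PostgreSQL 10+ and psycopg2, with a hypothetical DSN:)

```python
import psycopg2

# Run on the primary: one row per standby, lag measured by the server.
conn = psycopg2.connect("host=db-primary.internal dbname=postgres user=monitor")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT application_name, state, sync_state, replay_lag
        FROM pg_stat_replication
    """)
    for name, state, sync_state, lag in cur.fetchall():
        print(f"{name}: {state}/{sync_state}, replay lag: {lag}")
        # alerting when `lag` crosses a threshold (and paging someone
        # at 3am when it does) is also on you
```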
7bit
28 minutes ago
I really don't understand this comment. The cloud doesn't protect you from data loss or provide any of the things you named.
baby_souffle
18 minutes ago
Yes it does? For a fraction of a dollar per hour, AWS will give me a URI that I can connect to. On the other end is a Postgres instance that already has authentication and backups handled for me. It's also backed by a storage layer that is far more robust than anything I can put together in my rented cage with my corporate budget.
theideaofcoffee
22 minutes ago
Hosting a database is no different from self-hosting any other service. This viewpoint is what the cloud hath wrought: an atrophying of the most basic operational skills, as if running these services were magic, achievable only by the hyperscalers who said they are the only ones capable.
The answers to all of your questions are a hard: it depends. What are your engineering objectives? What are your business requirements? Uptime? Performance? Cost constraints and considerations? The cloud doesn't take away the need to answer these questions, it's just that self-hosting actually requires you to know what you are doing versus clicking a button and just hoping for the best.
belter
3 hours ago
> If you don't want to do this yourself, then we'll do it for you for half the price of AWS (and we'll be your DevOps team too
You might not realize it, but you are actually strengthening the business case for AWS :-) Also, those hardware savings will be eaten away by two days of your hourly bill. I like to look at my project costs across all verticals...
dlisboa
2 hours ago
> Also those hardware savings will be eaten away by two days of your hourly bill
Doubt it. I've personally seen AWS bills in the tens of thousands; he's probably not that costly for a day.
whstl
an hour ago
I don't think I have joined a startup that pays less than 20k/month to AWS or any cloud in almost a decade.
Biggest recent ones were ~200k and ~100k that we managed to lower to ~80k with a couple months of work (but it went back up again after I left).
I fondly remember lowering our Heroku bill from 5k to 2k back in 2016 after a day of work. Management was ecstatic.
theideaofcoffee
20 minutes ago
Same, but in the hundreds of thousands monthly and growing at a steady clip, with AWS extending credits worth -millions- just to keep them there, because their margins are so fat and juicy they can afford that insane markup.
That's where the real value lies. Not paying these usurious amounts.
adamcharnock
3 hours ago
I understand the concern for sure. But we don't bill hourly in that way, as one thing our clients really appreciate is predictable costs. The fixed monthly price already includes engineering time to support your team.