mfrye0
5 hours ago
I've been keeping an eye on this space for a while, waiting for it to mature a bit further. A number of startups have popped up around this - apart from Temporal and DBOS, Hatchet.run looked interesting.
I've been using BullMQ for a while with distributed workers across K8s and have hacked together what I need, but a lightweight DAG of some sort on Postgres would be great.
I took a brief look at your docs. What would you say is the main difference between yours and some of the other options? Just the simplicity of it being a single SQL file and an SDK wrapper? Sorry if the docs answer this already - trying to take a quick look between work.
the_mitsuhiko
4 hours ago
> I took a brief look at your docs. What would you say is the main difference between yours and some of the other options? Just the simplicity of it being a single SQL file and an SDK wrapper? Sorry if the docs answer this already - trying to take a quick look between work.
It's really just trying to be as simple as possible. I was motivated to do the simplest thing I could come up with after I didn't find the other solutions to be something I wanted to build on.
I'm sure they are great, but I want to leave the window open to having people self-host what we are building / enable us to deploy a cellular architecture later, and so I want to stick to a manageable number of services for as long as I can. Postgres is a known quantity in my stack, and the only Postgres-only solution was DBOS, which unfortunately did not look ready for prime time yet when I tried it. That said, I've noticed that DBOS is making quite a bit of progress, so I'm somewhat confident it will eventually get there.
jedberg
2 hours ago
Could you provide some more specifics as to why DBOS isn’t “ready for prime time”? Would love to know what you think is missing!
FWIW DBOS is already in production at multiple Fortune 500 companies.
biasafe_belm
an hour ago
I'd love to hear both of your thoughts! I'm considering durable execution and DBOS in particular and was pretty happy to see Armin's shot at this.
I'm building/architecting a system that will have to manage many business-critical operations on various schedules. Some will be daily, some bi-weekly, some quarterly, etc. Mostly batch operations and ETL, but they can't fail. I have already designed a semblance of a persistent workflow in that any data ingestion and transformation is split into atomic operations whose results are persisted to blob storage and indexed in a database for cataloguing. This means that, for example, network requests can be "replayed", and data transformation can be resumed at any intermediate step. But this is enforced at the design stage, not at runtime as other solutions do it.
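To make that concrete, here's roughly the shape of the pattern - just an illustrative sketch with made-up names (atomic_step, the local blob dir, the SQLite catalog), not our actual code:

    # Sketch: persist each step's result to "blob storage" and index it in a
    # catalog DB, so re-running a pipeline replays finished steps from storage.
    import functools
    import hashlib
    import json
    import sqlite3
    from pathlib import Path

    BLOB_DIR = Path("blobs")                  # stand-in for real blob storage
    CATALOG = sqlite3.connect("catalog.db")   # stand-in for the catalog database
    CATALOG.execute(
        "CREATE TABLE IF NOT EXISTS steps (step_key TEXT PRIMARY KEY, blob_path TEXT)"
    )

    def atomic_step(name):
        """Make a step resumable: if it already ran with these inputs,
        return the stored result instead of recomputing it."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                # Key the step by its name and (JSON-serialized) inputs.
                digest = hashlib.sha256(
                    json.dumps([args, kwargs], sort_keys=True, default=str).encode()
                ).hexdigest()[:16]
                step_key = f"{name}-{digest}"
                row = CATALOG.execute(
                    "SELECT blob_path FROM steps WHERE step_key = ?", (step_key,)
                ).fetchone()
                if row:                                   # already done: replay from blob
                    return json.loads(Path(row[0]).read_text())
                result = fn(*args, **kwargs)              # run the step for real
                BLOB_DIR.mkdir(exist_ok=True)
                blob_path = BLOB_DIR / f"{step_key}.json"
                blob_path.write_text(json.dumps(result))  # persist the result
                CATALOG.execute(
                    "INSERT INTO steps (step_key, blob_path) VALUES (?, ?)",
                    (step_key, str(blob_path)),
                )
                CATALOG.commit()
                return result
            return wrapper
        return decorator

    @atomic_step("fetch_rates")
    def fetch_rates(day):
        # The network request goes here; its response is persisted,
        # so a failed downstream step doesn't force a re-fetch.
        return {"day": day, "rates": {"EUR": 1.08}}

The discipline is all at design time: every step has to be written this way, which is exactly the part a durable execution runtime would enforce for you.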
My system also needs to be easily auditable and written in Python. There are many, many ways to build this (especially if you include cloud offerings) but, like Armin, I'm trying to find the simplest architecture possible so our very small team can focus on building and not maintaining.