Show HN: Ductwork – A Go platform for running AI agents on autopilot

5 points | posted 20 hours ago
by dneil8675

8 Comments

john_minsk

11 hours ago

So let's say I run this with a lot of tasks.

- How many tasks/schedules have you tested?
- What if some schedule has to run another schedule on demand? Can I clearly see that in a management view?
- If you and I are both running this, can my task contact your task and stay alive while another user finishes his task?

dneil8675

2 hours ago

I've only tested around 4-5 tasks at the same time, since I don't want to end up with a large Anthropic bill on this MVP just yet.

Management view isn't there yet — still focused on getting the core execution and scheduling solid before layering in observability.

On agent communication: at the moment there is no connectivity between two spawned agents unless one agent spawned the other. The system is task-based, with the task as the separation boundary; agents can spawn sub-agents if the task requires it. What kind of use cases are you imagining here?

Also, please feel free to pull it down and tinker with it! This is an MVP, and more people playing with it and updating it will make it that much better!

jlongo78

19 hours ago

Interesting approach. One thing worth considering with autopilot agents: session persistence and context recovery become critical when agents run long tasks and hit failures mid-stream. The ability to resume exactly where a conversation left off, rather than restarting from scratch, saves significant time and cost. Also worth thinking about multi-agent observability in a single view - context switching between isolated agent outputs is a real friction point teams underestimate until they're running several concurrent tasks.

dneil8675

18 hours ago

Those are great points; both session persistence and multi-agent observability are on the roadmap.

Checkpointing conversation state + sandbox filesystem mid-run so agents can resume on failure will be key for operating at scale. And a unified dashboard across all running agents is the goal once the core scheduling and execution layer is solid.

I appreciate the feedback!

jlongo78

11 hours ago

Glad those are on the radar. One thing that might help with prioritization: checkpoint granularity matters a lot in practice. Saving state at major tool call boundaries rather than just at conversation turns tends to give much better recovery points without bloating storage. Learned that the hard way watching agents redo 20 minutes of work after a single API timeout. The unified dashboard will be huge once you get there.

jlongo78

15 hours ago

Great to hear! Good luck with the project. Wish you well!

dneil8675

2 hours ago

Thank you! Again, I appreciate all of the feedback!

jlongo78

2 hours ago

Of course! And just to follow up on that thought I started - the ability to checkpoint state mid-task and resume cleanly is where a lot of these agent systems quietly fall apart. Worth thinking about how you handle partial failures, especially if an agent is 80% through something expensive. Curious if you have any recovery mechanisms in place already or if that's on the roadmap.