Building a coding agent in Swift from scratch

83 points, posted 20 hours ago
by vanyaland

19 Comments

mark_l_watson

17 hours ago

I think this is a good learning project, based on a long perusal of the GitHub repo. One suggestion: don’t call the CLI component of the project ‘claude’ - that seems like asking for legal takedown problems.

vanyaland

15 hours ago

Good point, I'll rename the binary. Thanks for actually going through the repo.

scuff3d

18 minutes ago

I'm reading the first of the blog posts. I've never actually seen any Swift code before, but looking at the package definition I'm struck by how much it looks like Zig. I've never heard Andrew Kelley call Swift out as an influence, but it seems some Swift DNA made it into Zig.

Also, brave calling it swift-claude-code given Anthropic's behavior.

dostick

4 hours ago

It’s not quite clear which part this project is: there’s no single “Claude Code” program. There’s a TUI/GUI app, a harness, prompts, and an LLM. So is this the harness part?

vanyaland

4 minutes ago

It's the harness/orchestration layer — the part that runs the agent loop, dispatches tool calls, and manages context.
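
The core of such a harness can be sketched as a loop that feeds history to the model, executes any tool calls it requests, and appends the results until the model produces a final answer. This is a minimal illustrative sketch, not the project's actual API; all type and function names are assumptions.

```swift
// Minimal agent-loop sketch (illustrative types, not the project's actual API).
enum ModelReply {
    case text(String)                                // final answer
    case toolCall(name: String, arguments: String)   // model wants a tool run
}

struct Agent {
    var history: [String] = []

    // Hypothetical stand-ins for the model API call and tool dispatch.
    let callModel: ([String]) -> ModelReply
    let runTool: (String, String) -> String

    mutating func run(prompt: String) -> String {
        history.append("user: \(prompt)")
        while true {
            switch callModel(history) {
            case .text(let answer):
                history.append("assistant: \(answer)")
                return answer  // no more tool calls: we're done
            case .toolCall(let name, let args):
                // Execute the tool and feed the result back into context.
                let result = runTool(name, args)
                history.append("tool(\(name)): \(result)")
            }
        }
    }
}
```

The loop terminates only when the model replies with plain text, which is why context management (discussed elsewhere in this thread) matters: every tool round trip grows `history`.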

maxbeech

16 hours ago

The interesting design tension I ran into building in this space is context management for longer sessions. The model accumulates tool call history that degrades output quality well before you hit the hard context limit: you start seeing "let me check that again" loops and increasingly hedged tool selection.

A few things that helped: (1) summarizing completed sub-task outputs into a compact working-memory block that replaces the full tool call history, (2) being aggressive about dropping intermediate file read results once the relevant information has been extracted, and (3) structuring the initial system prompt so the model has a clear mental model of what "done" looks like before it starts exploring.

The Swift angle is actually a nice fit: the structured concurrency model maps well to the agent loop, and the strong type system makes tool schema definition less error-prone than JSON string wrangling in most other languages.
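
Technique (2) above, dropping bulky tool outputs once they have served their purpose, might look something like this. All names here are illustrative, not from the project:

```swift
// Sketch of pruning bulky tool outputs from history (illustrative names).
struct Message {
    let role: String    // "user", "assistant", or "tool"
    var content: String
    var prunable: Bool  // large outputs we can drop once consumed
}

// Replace old, prunable tool outputs with a short placeholder, keeping
// the most recent `keepLast` intact so the model can still see them.
func pruneToolResults(_ history: [Message], keepLast: Int = 2) -> [Message] {
    let prunableIndices = history.indices.filter { history[$0].prunable }
    let toPrune = Set(prunableIndices.dropLast(keepLast))
    return history.indices.map { i in
        var msg = history[i]
        if toPrune.contains(i) {
            msg.content = "[tool output elided]"
        }
        return msg
    }
}
```

Keeping the message itself (with a placeholder body) rather than deleting it preserves the conversational shape, so the model still sees that a tool was called, just not its full output.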

dostick

an hour ago

So that’s what it is! I was wondering why, even after reducing context and summarising, it still makes mistakes and forgets the steering. And I couldn’t find an explanation for why it starts ignoring instructions when the context isn’t full at all. How did you find that tool call history is what degrades it? Isn’t this the biggest problem there is, and not just a “design tension”?

vanyaland

15 hours ago

Yeah, this is basically what I ran into too. I actually wrote about this in Stage 6 (https://ivanmagda.dev/posts/s06-context-compaction/). I went with your option (1): once the history crosses a token threshold, the agent asks the model to summarize everything so far, then swaps the full history for that summary. That keeps the context window clean, though you do lose the ability to go back and reference exact earlier tool outputs.

The hard part was picking when to trigger it. Too early and you're throwing away useful context. Too late and the model's already struggling. I ended up just using a simple token count — nothing clever, but it works.
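
The threshold-triggered swap described above can be sketched roughly like this. The tokenizer and the summarizer call are stubbed out, and the names are illustrative rather than taken from the post:

```swift
// Sketch of threshold-triggered context compaction (illustrative, not the project's code).
struct ContextManager {
    var history: [String] = []
    let tokenLimit: Int

    // Crude stand-in for a real tokenizer: roughly 4 characters per token.
    func estimatedTokens() -> Int {
        history.reduce(0) { $0 + $1.count / 4 }
    }

    // `summarize` stands in for asking the model to condense the history.
    mutating func compactIfNeeded(summarize: ([String]) -> String) {
        guard estimatedTokens() > tokenLimit else { return }
        let summary = summarize(history)
        // Swap the full history for a single summary message.
        history = ["summary: \(summary)"]
    }
}
```

The design choice here matches the trade-off described above: a simple count decides when to compact, and once the swap happens the exact earlier tool outputs are gone for good.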

And yeah, the Swift angle was genuinely fun. Defining tool schemas as Codable structs that auto-generate JSON schemas, and getting compiler errors instead of runtime API failures, is a huge win.
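
One way that pattern can look, as a sketch rather than the post's actual code: define a tool's arguments as a Codable struct, so a malformed tool call from the model fails loudly at decode time instead of silently producing bad values. All names here are assumptions.

```swift
import Foundation

// Sketch of a typed tool definition (illustrative, not the project's API).
struct ReadFileArgs: Codable {
    let path: String
    let maxBytes: Int?  // optional fields decode as nil when absent
}

struct Tool<Args: Codable> {
    let name: String
    let handler: (Args) -> String

    // Decode the model's JSON arguments into the typed struct, then dispatch.
    // A bad payload throws here instead of reaching the handler.
    func invoke(json: Data) throws -> String {
        let args = try JSONDecoder().decode(Args.self, from: json)
        return handler(args)
    }
}

let readFile = Tool<ReadFileArgs>(name: "read_file") { args in
    // A real implementation would read from disk; stubbed here.
    "read \(args.path) (limit: \(args.maxBytes ?? -1))"
}
```

Compared with pulling fields out of a `[String: Any]` dictionary, the compiler now checks that every handler matches its argument type.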

faangguyindia

7 hours ago

I built my agent in Python, since the agent is a CLI.

I used Python + rich, but window resizing wrecks the UI layout.

That isn't an issue with Node.js-based tools.

brumbelow

14 hours ago

This is a cool idea. The stage-by-stage build makes the failure modes legible: first the loop, then tool dispatch, then persistence, then subagents/skills/compaction. A nice reminder that most of the magic is in state management and control flow.

steve_adams_86

12 hours ago

I wouldn't say most of the magic is there, but I do think a lot of the progress we've seen in the last few years has been external to the models, and people sometimes miss that. For example, Claude Code has improved by leaps and bounds because the tooling has improved so much, from what I can see. But the underlying model is still what makes this relatively simple tooling so useful.

vanyaland

11 hours ago

Agreed. That's the core hypothesis behind this learning project — model is the magic, and the agent loop is just a thin, transparent wrapper around it. The goal of building it stage-by-stage was to prove you don't need a massive, complex framework to get good agentic behavior.

nhubbard

17 hours ago

How practical would it be to drop in Apple Intelligence, once it's using Gemini as its core, for a 100% local AI agent in a box?

NitpickLawyer

16 hours ago

IIUC Gemini will run in Apple's cloud infra, not on device. The only "gemini" local model is really old by today's standards, and is not that smart for local inference (newer open source models are better).

nhubbard

16 hours ago

That's what I figured. Some day it will be possible. Until then, LM Studio or Ollama are the only potential hookups.

I've got some ideas inspired by this project. It's promising.

lm2s

16 hours ago

Interesting, I'm also building one in Swift :D Seems like a good learning experience.

podlp

7 hours ago

I’m also working on agents in Swift with the AFM; just having it already installed locally is a huge selling point. I think narrowly-focused agents with good tooling and architecture could accomplish quite a bit, with tradeoffs in speed and cost. But I’m working under the assumption that local models (like frontier models) will only get better with time.

zingar

9 hours ago

What is the appeal of Swift for this project? Is it just what you know?