I'm not actually reading the jsonl files. Agents Observe just uses hooks and sends all hook data to the server (which runs as a Docker container by default).
Basic flow:
1. The plugin registers hooks that call a dump-pipe script, which sends hook event data to the API server
2. The server parses events and stores them in SQLite by session and agent ID - mostly just stores data, minimal processing
3. The dashboard UI uses WebSockets to get real-time events from the server
4. The UI does most of the heavy lifting: parsing events, grouping by agent/sub-agent, extracting tool calls to dynamically build filters, etc.
It took a lot of iterations to keep things simple and performant.
You can easily modify the app/client UI code to fully customize the dashboard. The app/server API is intentionally unopinionated about how events get rendered - a deliberate choice so that support for other agents' events can be added soon.
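As a rough illustration of the client-side heavy lifting, grouping by agent and deriving tool filters can look something like this (field names like `agentId` and `toolName` are assumptions - the server just relays whatever shape the hooks sent):

```javascript
// Sketch of dashboard-side event grouping: bucket events per agent
// and collect the distinct tool names to drive a filter dropdown.
function groupEvents(events) {
  const byAgent = new Map();
  const toolFilters = new Set();
  for (const ev of events) {
    const key = ev.agentId || "main";              // sub-agents get their own bucket
    if (!byAgent.has(key)) byAgent.set(key, []);
    byAgent.get(key).push(ev);
    if (ev.toolName) toolFilters.add(ev.toolName); // filters emerge from the data itself
  }
  return { byAgent, toolFilters: [...toolFilters].sort() };
}

module.exports = { groupEvents };
```

Because the filters are derived from the events rather than hardcoded, new tools or agent types show up in the UI without server changes.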
The hooks approach seems much cleaner for real-time. Did you run into any issues with the blocking hooks degrading performance before you switched to background?
Sort of. It wasn't really noticeable until I did a deliberate performance audit, and then I noticed the speed improvements.
Node has a 30-50ms cold-start overhead. Then there's overhead in the hook script to read local config files, make an HTTP request to the server, and check for callbacks. In practice, this added up to about 50-60ms per hook.
The background hook shim reduces latency to around 3-5ms (roughly a 10x improvement). The difference was noticeable when using agent teams with 5+ sub-agents running in parallel.
But the real speedup came from disabling all the other plugins I had been collecting. They pile up fast, and it's easy to forget what's installed globally.
I've also started periodically asking Claude to analyze its prompts to look for conflicts. It's shockingly common for plugins and skills to end up with contradictory instructions. Opus works around them just fine, but it's unnecessary overhead on every turn.