gertlabs
9 hours ago
We're still adding samples, but some early takeaways from benchmarking on https://gertlabs.com:
Contrary to what the model card emphasizes, its one-shot performance is more impressive than its agentic abilities. On both axes, GLM 5.1 is competitive with frontier models.
But keeping in mind that this is an open-source model operating near the frontier, it's nothing short of incredible.
I suspect two issues are keeping the model from fully realizing its potential in agentic harnesses:

- Context rot (already a common complaint). We are still working on a metric to robustly test and visualize this on the site.

- The model was most likely overtrained on standardized toolsets and benchmarks, and isn't as adaptive when using arbitrary tooling in our custom harness simulations.

We've decided to commit to measuring intelligence as the ability to use custom, changing tools, rather than proficiency with the specific tools a model was trained on (while still always providing a way to run local bash and other common tools); a rough sketch of the idea is below. There are arguments to be made for either approach, but the former is more indicative of general intelligence. Regardless, it's a subtle difference, and GLM 5.1 still performs well with tooling in our environments.
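To make "changing tools" concrete, here's a minimal sketch of the idea. Everything in it is illustrative, not our actual harness code: the tool names, parameter names, and schema shape are hypothetical. The capability stays fixed across runs while the surface details are re-randomized, so memorized toolsets don't help:

    import random

    # Hypothetical sketch: the underlying operation is the same every
    # run, but the tool's name and parameter name are re-randomized,
    # so the model has to read the schema instead of pattern-matching
    # on a toolset it saw in training.

    def make_search_tool(rng: random.Random) -> dict:
        name = rng.choice(["find_docs", "lookup", "query_corpus"])
        arg = rng.choice(["text", "q", "needle"])
        return {
            "name": name,
            "description": "Full-text search over the task corpus.",
            "parameters": {
                "type": "object",
                "properties": {arg: {"type": "string"}},
                "required": [arg],
            },
        }

    rng = random.Random(42)  # fixed seed so a given run is reproducible
    tools = [make_search_tool(rng)]
    print(tools)  # these schemas get passed to the model under test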
Crazy week for open-source AI. Gemma 4 has shown that large model density is nowhere near optimized. Moats are shrinking.
If there are other representations of model performance you'd like to see on the site, I'm actively reading your feedback and ideas.
DeathArrow
8 hours ago
It would be nice if you could test the model with different harnesses: Z.ai's own Z Code, Claude Code, Open Code, Pi, Cursor, etc.
My impression is that the choice of harness matters a lot.
gertlabs
7 hours ago
Interesting idea. The metric I'd intuitively expect of a smarter model is low variance across harnesses. But if a large sample of models statistically outperformed with a certain harness, that would indeed be a valuable signal for a developer.
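As a toy sketch of what that cross-harness metric could look like (the model names, harness names, and scores below are all made-up placeholders, not real benchmark data):

    from statistics import mean, pstdev

    # Placeholder scores: rows are models, columns are harnesses.
    scores = {
        "model_a": {"harness_1": 0.81, "harness_2": 0.79, "harness_3": 0.80},
        "model_b": {"harness_1": 0.85, "harness_2": 0.62, "harness_3": 0.71},
    }

    for model, by_harness in scores.items():
        vals = list(by_harness.values())
        # A low stdev across harnesses would suggest the model adapts
        # to whatever harness it's in, rather than depending on the one
        # it happened to be tuned for.
        print(f"{model}: mean={mean(vals):.2f} stdev={pstdev(vals):.2f}")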