Aurornis
2 hours ago
Although I'm interested in both topics (KV compression and attempts to stream MoE models from storage) this is at least the 10th vibecoded project on this topic I've seen today alone across HN, Twitter, and some subreddits I visit.
At least this one gave credit to the upstream projects which it used as a reference.
The llama.cpp project is also getting a wave of vibecoded PRs that are very clearly produced by pointing Claude at the repo and the original paper and having it generate something.
Almost none of these attempts contain information that really matters, like actual benchmark tests with different KV quantization levels (not just perplexity or KLD).
zozbot234
18 minutes ago
The performance gain in the recent Flash-MoE implementations seemingly comes mostly from coalescing the data for each single MoE layer-expert into a single sequential extent that can be read efficiently from SSD. If so, this will actually require some changes to the underlying GGUF format; though the GGUF standard explicitly provides for specifying different data layouts, so the additions are arguably minor.
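(A minimal sketch of the layout difference being described, not the actual Flash-MoE or GGUF code; the interleaved-vs-coalesced file layouts and all names here are my own illustration. Loading one expert from an interleaved layout takes many scattered reads; from a coalesced layout it is a single sequential read.)

```python
# Hypothetical example: E experts per MoE layer, each expert made of
# T tensors of TENSOR_BYTES bytes. Layout A interleaves tensors across
# experts (t0e0, t0e1, ..., t1e0, ...); Layout B coalesces each expert's
# tensors into one contiguous extent.
E, T, TENSOR_BYTES = 4, 3, 8

def write_interleaved(path):
    with open(path, "wb") as f:
        for t in range(T):
            for e in range(E):
                f.write(bytes([e]) * TENSOR_BYTES)

def write_coalesced(path):
    with open(path, "wb") as f:
        for e in range(E):
            for t in range(T):
                f.write(bytes([e]) * TENSOR_BYTES)

def read_expert_interleaved(path, e):
    # T separate seek+read pairs: scattered I/O, slow on SSD
    chunks = []
    with open(path, "rb") as f:
        for t in range(T):
            f.seek((t * E + e) * TENSOR_BYTES)
            chunks.append(f.read(TENSOR_BYTES))
    return b"".join(chunks)

def read_expert_coalesced(path, e):
    # one seek, one sequential read of the whole expert
    with open(path, "rb") as f:
        f.seek(e * T * TENSOR_BYTES)
        return f.read(T * TENSOR_BYTES)
```

Both reads return the same bytes; only the access pattern differs, which is what the coalesced layout buys you on storage where sequential reads are much faster than random ones.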
_zoltan_
2 hours ago
"vibe coded" is NOT the bad thing you think it is.
Going from paper to implementation from scratch in half an hour or so is great.
mjr00
2 hours ago
> "vibe coded" is NOT the bad thing you think it is.
It's not inherently bad in the same way that a first draft of a novel is not inherently bad.
But if someone asked me to read their novel and it was a first draft that they themselves had clearly not bothered reading or editing, I'd tell them to fuck off.
sumeno
an hour ago
At least in the novel example the author had the decency to write what they're asking you to read.
These are more like sending someone an LMGTFY link they didn't ask for and expecting them to read all the results. Just a complete lack of awareness and respect for the maintainers
simonw
an hour ago
Sure, but the problem is when you take that half hour of work and share it with other people without making clear how much effort has gone into it.
Software is valuable if it has been tested and exercised properly by other people. I don't care if you vibe coded it provided you then put the real work in to verify that it actually works correctly - and then include the proof that you've done that when you start widely sharing it with the world.
Right now it's impossible to tell which of these projects implementing the paper are worth spending time with.
kristjansson
19 minutes ago
> without making clear how much effort has gone into it
I'm increasingly convinced this is the critical context for sharing LLM outputs with other people. The robots can inflate any old thought into dozens of pages of docs, thousands of lines of MR. That might be great! But it completely severs the connection between the form of a work and the author's assessment/investment/attachment/belief in it. That's something one's audience might like to know!
dalemhurley
17 minutes ago
Isn't the point of an MVP to be an MVP?
The OP put together a POC and shared it, showing novel concepts used together. They are not some large R&D lab.
The purist testing being asked for contradicts the Show HN guidelines.
Aurornis
an hour ago
> Going from paper to implementation from scratch in half an hour or so is great.
This repo isn’t showing that at all. Scroll to the bottom of the README and you’ll see the other project it was based on. It’s a translation of other people’s work.
There have been dozens or perhaps hundreds of vibecoded TurboQuant examples posted around the usual forums in the past few days. This one doesn’t even include anything helpful like benchmarks or tests. It’s just some proof of concept code that doesn’t even work if you try to run it.
My problem with this specific type of vibe coded project is that it’s initially presented as something more novel or polished in order to get more upvotes, karma, likes, or pad a resume. Then you read it and discover they just pointed Claude at some other projects and told it to produce something similar, then posted it as their own work.
brokencode
2 hours ago
That’s a starting spot, but how about some testing and benchmarks?
Where’s the value added if the person just tells Claude to do it and then submits a PR?
The maintainers may as well vibe code it themselves if that’s all the work the would-be contributor is going to put into it.
yieldcrv
2 hours ago
if it works it works
we live in a wholly unoptimized world because the available resources have been so high, while the benefits of optimizing have been so low. that has flipped now and there are tons of low hanging fruit to optimize.
I agree that benchmarks would be great, but that's only relevant to this one topic, not to the overall agentic coded pull request concept itself
jmalicki
2 hours ago
It's relevant in that it's an example that people are doing the easy part - the coding - and skipping the hard part - the benchmarking and proving it works and provides value.
A PR without evidence that it works, or any indication of the benefits the new feature would bring, is kind of worthless.
pqtyw
an hour ago
It might work, but what's the point in sharing it if anyone can do the same in 30 minutes with minimal effort?
sumeno
an hour ago
> if it works it works
If it works in one case that doesn't mean it works consistently or well in the general case
I've made lots of things with Claude Code that just work... until I do things in a slightly different order and the whole thing explodes
pqtyw
an hour ago
What if there's nothing valuable it contributes, though? I.e. if it's not implementing a novel paper, then the only value is whatever you personally learn from it.
sroussey
2 hours ago
The authors of the project have CC as well, so doing this is just eating their time.