Show HN posts p/month more than doubled in the last year

34 points | posted 12 days ago
by theraven

51 Comments

jacquesm

12 days ago

One of the reasons is that there are a lot of adverts masquerading as Show HN.

hsuduebc2

12 days ago

Some subreddits became polluted the same way. Pretty annoying.

footy

12 days ago

There's a subreddit for software people with adhd and it's terrible, every second post is about a New and Exciting Tool to manage adhd. Most of them are vibe coded, all of them come from randos I wouldn't trust with what I had for breakfast, let alone a whole life management system.

I check in every few weeks and I don't understand how anyone can use that subreddit more frequently.

hsuduebc2

12 days ago

Yeah, pretty much. I actually always thought these were bots.

y0eswddl

12 days ago

that's eventually what drove me away from adhd_programmers as well, though I'm sorry they still allow AI apps at all, since it was bad even before LLMs blew up

captn3m0

12 days ago

> Show HN is for something you've made that other people can play with. HN users can try it out, give you feedback, and ask questions in the thread.

This is an interesting post, but not a Show HN.

theraven

12 days ago

I didn’t submit it that way; I believe it got auto-massaged into that format because of the ‘Show HN’ prefix.

rco8786

12 days ago

The Show HN prefix is submitting it that way.

> To post [to Show HN], submit a story whose title begins with "Show HN".

wumms

12 days ago

Related: "Data on AI-related Show HN posts"

Original title: "Data on AI-related Show HN posts: More than 1 in 5 Show HN posts are now AI-related, but get less than half the votes or comments."

6 months ago, 155 comments https://news.ycombinator.com/item?id=44463249

galkk

12 days ago

Laid-off people have more time on their hands, while on LLM-powered steroids?

etothepii

12 days ago

Do you have any numbers on how many get at least some upvotes? What about a chart of upvotes on Show HN posts?

I assume the vast, vast majority never get any upvotes.

rfarley04

12 days ago

I have a little bit of data on that from my post last summer. It's pretty easy to query the data: ryanfarley.co/ai-show-hn-data/
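
For example, something along these lines against the public Algolia HN search API should give you monthly Show HN counts (rough, untested sketch):

    async function showHnCount(year: number, month: number): Promise<number> {
      // Count Show HN submissions created in a given month via the public
      // Algolia HN search API. "month" is 1-based.
      const start = Date.UTC(year, month - 1, 1) / 1000;
      const end = Date.UTC(year, month, 1) / 1000; // first second of the next month
      const params = new URLSearchParams({
        tags: "show_hn",
        numericFilters: `created_at_i>=${start},created_at_i<${end}`,
        hitsPerPage: "0", // only the total count is needed
      });
      const res = await fetch(`https://hn.algolia.com/api/v1/search_by_date?${params}`);
      const data = (await res.json()) as { nbHits: number };
      return data.nbHits;
    }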

wheybags

12 days ago

Would be nice to see some qualitative analysis to know if it's just slop or actually more interesting projects. Not sure how to do that, though. I think just looking at votes wouldn't work: I would guess more posts cause lower average visibility per post, which should cause upvotes to slump naturally regardless of quality.

Edit: maybe you could:

- remove outliers (anything that made the front page)

- normalise vote count by expected time in the first 20 posts of shownew, based on the posting rate at the time (rough sketch below)
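
Back-of-envelope, assuming a roughly constant posting rate within each window, the normalisation could look something like:

    // A submission stays among the first 20 posts of /shownew until 20 newer
    // Show HNs arrive, so its expected window of visibility shrinks as the
    // posting rate rises. Normalise votes by that window.
    function normalisedScore(votes: number, postsPerHour: number): number {
      const hoursVisible = 20 / postsPerHour; // expected hours spent in the first 20 posts
      return votes / hoursVisible;            // votes per hour of visibility
    }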

Normal_gaussian

12 days ago

A weighted sampling method is probably best: segment by time period and vote count or vote rate, then evaluate by hand. This could be done in a couple of hours and gives a higher degree of confidence than any automated analysis.

carimura

12 days ago

sentiment analysis of comments?

alberto-m

12 days ago

I think I read another stat some days ago: that the average rating of “Show HN” posts is going down. So the pessimistic take is that people feel the bar for presenting their product in a “Show HN” is getting lower.

(edit: struck) <strike>Is it deliberate that this post appears as “Show HN” itself? I hope not to be too negative, but to qualify as such I would expect much more than a page with two graphs.</strike>

theraven

12 days ago

No, it wasn’t deliberate for it to be a Show HN itself; it seems to have been auto-updated based on the prefix. I’ve tried updating it back.

alberto-m

12 days ago

Thanks for the reply. I'd strike that part of my comment if I could. Consider it taken back.

jacquesm

12 days ago

You can, click 'edit'. Up to one hour after posting.

alberto-m

12 days ago

I can edit the post and delete that part, but that would make OP's reply seem out of context, and HN markup does not support strikethrough. I put some HTML tags there anyway; the sense should be clear.

jacquesm

12 days ago

Just mark your edit 'edit:'. I often have the same happen where I write a comment and then think 'oh, that's no good, I need to improve on it' and then in the meantime multiple people will comment and hopefully quote the original. That way at least you can see what they replied to rather than the version that I'm finally happy with. I've suggested HN increase the default size of the reply box but so far no takers on that.

peteforde

12 days ago

I suspect that this will drive the folks who insist LLM productivity gains are the real hallucinations truly bonkers.

anonymous908213

12 days ago

No, the fact that Show HN is spammed with LLM-generated garbage is what drives me bonkers. The Show HNs are in fact living proof of how illusory LLM productivity gains are, because we are overwhelmed with trivial proofs of concept that have no merit, not even the merit of a human having put effort into creating something neat, rather than with actually interesting software anybody would try or discuss.

CuriouslyC

12 days ago

Counterpoint: You won't admit anything generated with LLMs is good? I don't see any evidence of your fairness in your comment, so why should I consider you any differently than the angry dude at the bar complaining over his drinks about how things were in his day?

anonymous908213

12 days ago

> You won't admit anything generated with LLMs is good?

Nowhere in my comment did I say this, so this is quite a non-sequitur you've based the following personal attack upon. Regardless of whether it's possible to use LLMs to generate good things, the vast majority of things generated with them are not good, and if the good things exist, they are being drowned out in a sea of spam, increasingly difficult to discover along with the good human-generated content.

I have to say, I would characterise both your comment and the original comment I replied to as being considerably more "unfair" than mine. The first comment was clearly written in such a way to get a rise out of people. Your reply is directly insinuating that I'm out-of-touch and ranting at clouds.

sepositus

12 days ago

This is a valid observation. I wonder though if people who have been coding for decades, but choose to use AI assistance, would fall under the same AI slop category. It’s an interesting dilemma because the overwhelming amount of content getting posted just ends up breeding a ton of negative feelings towards any amount of AI usage.

jacquesm

12 days ago

It will if you let it. The number of times the AI has come up with 'I can write you 'x', 'y' or 'z' in a heartbeat, just say the word' is remarkable, and I keep having to steer it back to being a repository of knowledge rather than an overeager, very junior co-worker who can't help wanting to show off their skills.

It's very tiresome. Like an idiot/savant, they're an idiot most of the time and every 10th try you go 'oh, but that's neat and clever'.

7777777phil

12 days ago

I feel like HN is actually quite divided about that. A couple of days ago I started a survey, which I plan to run monthly, to see how the community feels about "LLM productivity etc". I now have ~250 answers and need a few more to make it significant, but as of now it looks like >90% report productivity gains from AI tools. Happy if you participate, it only takes a minute: https://agentic-coding-survey.pages.dev/

anonymous908213

12 days ago

Note that self-reported productivity gains are a completely unreliable and unscientific metric. One study[1], small in scope but a noteworthy data point, found that LLMs reduced participants' productivity by ~20%, yet even after the fact the participants felt their productivity had increased by ~20% on average. This study is surely not the end-all be-all, and you could find ways to criticise it, say it doesn't apply, or argue the developers should have seen gains for whatever reason, but the point is that people cannot accurately judge their own productivity by vibes alone.

[1] https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

7777777phil

12 days ago

If you look at the survey, it's not only about productivity; it's also about usage, model choice, etc. But I agree with you that self-reported productivity gains are to be taken with a grain of salt. Then again, what else would you propose? The goal is to not rely only on benchmarks for model performance but to develop some kind of TIOBE Index for LLMs.

direwolf20

12 days ago

The ever-present rebuttal to all LLM failure anecdotes: you're using the wrong model, you're prompting it wrong, etc. All failures are always the user's fault. It couldn't possibly be that the tool is bad.

peteforde

12 days ago

Of course, your logic could equally be applied to the opposite position.

Quite a few of us are tired of being told that we're imagining it when we do, multiple times in an evening, what used to take weeks.

anonymous908213

11 days ago

If it generated something that saved you weeks, I think it's almost certainly because it was used for something you have absolutely zero domain understanding of and would have had to study from scratch. And I, at least, repeatedly do note that LLMs lower the barrier to entry for making proofs of concept. But the problem is that (1) people treat that instant gratification as a form of productivity that can replace software engineers, when at most it can make something extremely rough that suits one individual's very specific use case, where you mostly work around the plentiful bugs by knowing where the landmines are and not doing the thing that trips them; and (2) people spam these low-effort proofs of concept, which have no value to other people because they are so rough and so hard to extend beyond one person's use case, and this drowns out the content people actually put effort into.

LLMs, when used like this, do not increase productivity on making software worth sharing with other people. While they can knock out the proof-of-concept, they cannot build it into something valuable to anyone but the prompter, and by short-circuiting the learning process, you do not learn the skills necessary to build upon the domain yourself, meaning you still have to spend weeks learning those skills if you actually want to build something meaningful. At least this is true for everything I have observed out of the vibe-coding bubble thus far, and my own extensive experiences trying to discover the 10x boost I am told exists. I am open to being shown something genuinely great that an LLM generated in an evening if you wish to share evidence to the contrary.

There is also the question of the provenance of the code, of course. Could you have saved those weeks by simply using a library? Is the LLM saving you weeks by writing the library ""from scratch"", in actuality regurgitating code from an existing library one prompt at a time? If the LLM's productivity gain is that it normalized copying and pasting open-source code wholesale while calling it your own, I don't think that's the great advancement for humanity it is portrayed as.

peteforde

10 days ago

I find your persistent, willful bullheadedness on this topic to be exhausting. I'd say delusional, but I don't know you and you're anonymous so I'm probably arguing with an LLM in someone's sick social experiment.

A few weeks ago I brought up a new IPS display panel that I've had custom made for my next product. It's a variant of the ST7789. I gave Opus 4.5 the registers and it produced wrapper functions that I could pass to LVGL in a few minutes, requiring three prompts.

This is just one of countless examples where I've basically stopped using libraries for anything that isn't LVGL, TinyUSB, compression or cryptography. The purpose-built wrappers Opus can make are much smaller, often a bit faster, and perhaps most significantly not encumbered with the mental model of another developer's assumptions about how people should use their library. Instead of a kitchen-sink API, I/we/it created concise functions that map 1:1 to what I need them to do.

I happen to believe that you're foolish for endlessly repeating the same blather about "vibe coding" instead of celebrating how amazing the thing you yourself mentioned, lowering the barrier to entry for domains that are outside people's immediate skillset, actually is, and the incredible impact it has on project trajectory, motivation and skill-stacking for future projects.

Your [projected] assumption that everyone using these tools learns nothing from seeing how problems can be solved is painfully narrow-minded, especially given that anyone with a shred of intellectual curiosity quickly finds that they can get up to speed on topics that previously seemed anywhere from daunting to impossible. Yes, I really do believe that you have to expend effort to not experience this.

During the last few weeks I've built a series of increasingly sophisticated multi-stage audio amplifier circuits after literal decades of being quietly intimidated by audio circuits, all because I have the ability to endlessly pepper ChatGPT with questions. I've gone from not understanding at all to fully grasping the purpose and function of every node to a degree that I could probably start to make my own hybrids. I don't know if you do electronics, but the disposition of most audio electronics types does not lend itself to hours of questions about op-amps.

Where do we agree? I strongly agree that people are wasting our time when they post low-effort slop. I think that easy access to LLMs shines a mirror on the awkward lack of creativity and good, original ideas that too many people clearly [don't] have. And my own hot take is that I think Claude Code is unserious. I don't think it's responsible, or even particularly compelling, to make never looking at the code a goal to get excited about.

I've used Cursor to build a 550k+ LoC FreeRTOS embedded app over the past six months that spans 45 distinct components which communicate via a custom message bus and event queue, juggling streams from USB, UART, half a dozen sensors, and a high-speed SPI display. It is well-tested, fully specified, and the product of about 700 distinct feature implementation plan -> chat -> debug loops. It is downright obnoxious reading the stuff you declare when you're clearly either doing it wrong or, well, a confirmation of the dead internet theory.

I honestly don't know which is worse.

illegalbyte2

12 days ago

Really like the UX of that survey. Super easy to fill out. Is it just a custom web form, or did you use a library?

7777777phil

12 days ago

Yes, exactly. It's a standalone Cloudflare Pages site with some custom HTML/CSS that writes to D1 (Cloudflare's SQL DB) for results and rate limits; that's it. I looked at so many survey tools but none offered what I was looking for (simple single-page form, no email, no signup, no tracking), so I built this (with Claude). Thanks for the feedback!
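
The write path is basically one Pages Function that inserts the submitted answers into a D1 table, roughly like this (sketch only; the table and column names are illustrative, not my actual schema):

    // Cloudflare Pages Function that stores one survey response in D1.
    interface Env {
      DB: D1Database; // D1 binding configured in the Pages project settings
    }

    export const onRequestPost: PagesFunction<Env> = async ({ request, env }) => {
      const body = (await request.json()) as { answers: unknown };
      await env.DB
        .prepare("INSERT INTO responses (answers, created_at) VALUES (?, ?)")
        .bind(JSON.stringify(body.answers), Date.now())
        .run();
      return new Response("ok");
    };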

Tade0

12 days ago

They are, but in the sense of net productivity gains.

Responsible people who use their knowledge to review LLM-generated code will produce more - up to their maximum rate of taking responsibility.

Irresponsible people will just smear shit all over the codebase.

The jury is still out on the net effect, and the agents' level of sophistication is a secondary factor.

fhennig

12 days ago

IMO a productivity gain of about x2 seems about right!

Semaphor

12 days ago

/r/selfhosted also got tons of new submissions, all unmaintainable AI slop. Now that they are only allowed on Fridays, it calmed down again. But I guess folks who insist on AI superiority think that’s a productivity gain.

CuriouslyC

12 days ago

The people spamming built bad stuff because they don't know any better. They would have built zero software without AI, so to the extent that anyone built anything working at all, it's basically an infinite productivity increase for those people.

jacquesm

12 days ago

AI productivity gains are not found in the slop bucket with projects tossed off after five prompts and zero intention of keeping them alive for the longer run.

seinvak

12 days ago

Probably the same thing is happening with websites being built and apps being published.

captn3m0

12 days ago

A better metric would be how many Show HN posts are reaching the front page.

voidUpdate

12 days ago

Isn't it posts/month? What's the p for?