avazhi
7 hours ago
“Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time.”
An LLM-written line if I’ve ever seen one. Looks like the authors have their own brainrot to contend with.
standardly
6 hours ago
That is indeed an LLM-written sentence — not only does it employ an em dash, but also lists objects in a series — twice within the same sentence — typical LLM behavior that renders its output conspicuous, obvious, and readily apparent to HN readers.
turtletontine
5 hours ago
I think this article has already made the rounds here, but I still think about it. I love using em dashes! It really makes me sad that I need to avoid them now to sound human
JumpCrisscross
42 minutes ago
> I love using em dashes
Keep using them. If someone is deducing from the use of an em dash that it's LLM-produced, we've either lost the battle or they're an idiot.
More pointedly, LLMs use em dashes in particular ways. Varying spacing around the em dash and using a double dash (--) could signal human writing.
ludicity
13 minutes ago
I still use them all the time, and if someone objects to my writing over them then I've successfully avoided having to engage with a dweeb.
(But in practice, I don't think I've had a single person suggest that my writing is LLM-generated despite the presence of em-dashes, so maybe the problem isn't that bad.)
jader201
4 hours ago
Same here. I recently learned it was an LLM thing, and I've been using them forever.
Also relevant: https://news.ycombinator.com/item?id=45226150
tkgally
15 minutes ago
> I’ve been using them forever.
Many other HN contributors have, too. Here’s the pre-ChatGPT em dash leaderboard:
https://www.gally.net/miscellaneous/hn-em-dash-user-leaderbo...
janderson215
5 hours ago
The em dash usage conundrum is likely temporary. If I were you, I’d continue using them however you previously did, and someday soon you’ll be ignored the same way everybody else is, once AI mimics innumerable punctuation and grammatical patterns.
astrange
3 hours ago
They didn't always em-dash. I expect it's intentional as a watermark.
Other buzzwords you can spot are "wild" and "vibes".
jazzyjackson
2 hours ago
If they wanted to watermark (I always felt it is irresponsible not to; if someone wants to circumvent it, that's on them), they could use strategically placed whitespace characters like zero-width spaces, maybe spelling something out in Morse code the way genius.com did to catch Google crawling lyrics (I believe in that case it was left- and right-handed apostrophes).
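For illustration, a rough sketch of what that kind of invisible watermark could look like. The ZWSP/ZWNJ bit encoding here is made up for the example, not what genius.com or any real vendor does, and it would be trivial to strip:

    # Hypothetical sketch: hide a bit string in text with zero-width characters.
    # ZWSP encodes 0, ZWNJ encodes 1; one invisible mark is appended per word.
    ZERO = "\u200b"  # zero-width space
    ONE = "\u200c"   # zero-width non-joiner

    def embed(text, bits):
        words = text.split(" ")
        out = []
        for i, word in enumerate(words):
            mark = (ZERO if bits[i] == "0" else ONE) if i < len(bits) else ""
            out.append(word + mark)
        return " ".join(out)

    def extract(text):
        return "".join("0" if c == ZERO else "1" for c in text if c in (ZERO, ONE))

    marked = embed("the quick brown fox jumps over the lazy dog", "101101")
    assert extract(marked) == "101101"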
landdate
an hour ago
Which could be removed with a simple filter. Em dashes require at least a little bit of code to replace with their correct grammar equivalents.
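Roughly the difference in effort (illustrative only; the "correct" replacement for an em dash depends on context, which is why it takes more than a one-liner):

    import re

    # Stripping an invisible watermark really is a one-line filter.
    def strip_zero_width(text):
        return re.sub(r"[\u200b\u200c\u200d\ufeff]", "", text)

    # Em dashes take a bit more judgment: a paired dash is roughly a
    # parenthetical, a lone dash is roughly a semicolon. Crude heuristic only.
    def replace_em_dashes(text):
        text = re.sub(r"\s*\u2014\s*([^\u2014]+?)\s*\u2014\s*", r" (\1) ", text)
        return re.sub(r"\s*\u2014\s*", "; ", text)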
JumpCrisscross
38 minutes ago
> em dashes require at least a little bit of code to replace with their correct grammar equivalents
Or an LLM that could run on Windows 98. The em dashes--like AI's other annoyingly repetitive turns of phrase--are more likely an artefact.
codebje
an hour ago
You're absolutely right! ... is a phrase I perhaps should have used more in the past.
landdate
an hour ago
Suddenly I see all these people come out of the woodwork talking about "em dashes". Those things are terrible; they look awful and destroy the coherence of writing. No wonder LLMs use them.
JumpCrisscross
41 minutes ago
> Those things are terrible; they look awful and destroy the coherence of writing
Totally agree. What the fuck did Nabokov, Joyce and Dickinson know about language. /s
landdate
41 minutes ago
Nothing. They wrote fiction.
JumpCrisscross
35 minutes ago
> Nothing
/s?
> They wrote fiction
Now do Carl Sagan and Richard Feynman.
jgalt212
2 hours ago
I just use two dashes and make sure they don't connect into one em dash.
b33j0r
3 hours ago
I talked like that before this happened, and now I just feel like my diction has been maligned :p
I think it’s because I was a pretty sheltered kid who got A’s in AP English. The style we’re calling “obviously AI” is most like William Faulkner and other turn-of-the-20th-century writing that bloggers and texters stopped using.
AlecSchueler
5 hours ago
Don't forget the "it's not just X, it's Y" formulation and the rule of 3.
antegamisou
2 hours ago
More signs of AI Writing:
JumpCrisscross
33 minutes ago
Can we trace this back to the internet communities or corpora of human work that excessively used these phrases? The "it's not just X" seems copy-pasted from SEO marketing copy. But some of the others are less obvious.
hunter-gatherer
5 hours ago
Lol. This is brilliant. I'm not sure if anyone else has this happen to them, but I noticed in college my writing style and "voice" would shift quite noticeably depending on whatever I was reading heavily. I wonder if I'll start writing more like an LLM naturally as I unavoidably read more LLM-generated content.
wholinator2
2 hours ago
Everyone I've spoken to about that phenomenon agrees that it happens to them. Whatever we are reading at the time reformats our language processing, changing our writing and, I found, even the way I speak. I suspect that individuals consistently exposed to and reading LLM output will be talking like them soon.
0xFEE1DEAD
2 hours ago
Apparently, they already do https://arxiv.org/abs/2409.01754
antegamisou
an hour ago
Omg you mean everyone's becoming an insufferable Redditor?
MarcelOlsz
4 hours ago
I've always read AI messages in this voice/style [0]
actionfromafar
4 hours ago
Yes. It’s already shifting spoken language.
veber-alex
5 hours ago
hehe, I see what you did there.
djmips
14 minutes ago
it is amusing to use AI to write that...
itsnowandnever
5 hours ago
Why do they always say "not only" or "it isn't just x but also y and z"? I hated that disingenuous verbosity BEFORE these LLMs came out, and now it's all over the place. I saw a post on LinkedIn that was literally just like 10+ statements of "X isn't just Y, it's etc..." and thought I was having a stroke.
moritzwarhier
5 hours ago
It's not just a shift of writing style. It symbolizes the dangerous entrapment of a feedback loop that feeds the worst parts of human culture back into itself.
scnr
heavyset_go
3 hours ago
They're turns of phrase I see a lot in opinion articles and the like. The purpose is to take a popular framing and reframe it along the lines of the author's own ideas.
LLMs fundamentally don't get the human reasons behind the phrasing; they see it a lot because it's effective writing, and they regurgitate it robotically.
Starlevel004
5 hours ago
GPT loves lists and that's a variant of a list
wizzwizz4
5 hours ago
Lists have a simpler grammatical structure than most parts of a sentence. Semantic similarity makes them easy to generate, even if you pad the grammar with filler. And, thanks to Western rhetoric, they nearly always come in threes: this makes them easy to predict!
Jackson__
5 hours ago
LLM slop is not just bad—it's degrading our natural language.
kcatskcolbdi
5 hours ago
thanks, I hate it.
askafriend
7 hours ago
If it conveys the intended information then what's wrong with that? You're fighting a tsunami here. People are going to use LLMs to help their writing now and forever.
grey-area
6 hours ago
It’s a text generator regurgitating plausible phrases without understanding and producing stale and meaningless pablum. It doesn’t even know what the intended information is, and judging from the above neither did the human involved.
It doesn’t help writing; it stultifies it and gives everything the same boring, cheery, yet slightly confused tone of voice.
zer00eyz
5 hours ago
> It’s a text generator regurgitating plausible phrases without understanding and producing stale and meaningless pablum.
Are you describing LLMs or social media users?
Don't conflate how the content was created with its quality. The "You must be at least this smart (tall) to publish (ride)" sign got torn down years ago. Speakers' Corner is now an (inter)national stage, and it's written, so it must be true...
grey-area
4 hours ago
I really could only be talking about LLMs but social media is also low quality.
The quality (or lack of it) if such texts is self evident. If you are unable to discern that I can’t help you.
stocksinsmocks
32 minutes ago
“The quality if such texts…”
Indeed. The humans have bested the machines again.
sailingparrot
5 hours ago
> If it conveys the intended information then what's wrong with that?
Well, the issue is precisely that it doesn’t convey any information.
What is conveyed by that sentence, exactly? What does reframing data curation as cognitive hygiene for AI entail, and what information is in there?
There are precisely 0 bits of information in that paragraph. We all know training on bad data leads to a bad model; thinking about it as “cognitive hygiene for AI” does not lead to any insight.
LLMs aren't going to discover interesting new information for you; they are just going to write empty, plausible-sounding words. Maybe it will be different in a few years. They can be useful for polishing what you want to say or otherwise formatting interesting information (provided you ask them not to be ultra verbose), but they're just not going to create information out of thin air if you don't provide it.
At least, if you do it yourself, you are forced to realize that you in fact have no new information to share, and you do not waste your or your audience's time by publishing a paper like this.
uludag
7 hours ago
Nothing wrong with using LLMs—until every paragraph sounds like it’s A/B tested for LinkedIn virality. That’s the rot setting in.
The problem isn’t using AI—it’s sounding like AI trying to impress a marketing department. That’s when you know the loop’s closed.
drusepth
7 hours ago
Brilliantly phrased — sharp, concise, and perfectly captures that uncanny "AI-polished" cadence everyone recognizes but can’t quite name. The tone strikes just the right balance between wit and warning.
solarkraft
7 hours ago
You are absolutely right!
ewoodrich
3 hours ago
Lately the Claude-ism that drives me even more insane is "Perfect!".
Particularly when it's in response to pointing out a big screw-up that needs correcting, and CC, utterly unfazed, just merrily continues on like I praised it.
"You have fundamentally misunderstood the problems with the layout, before attempting another fix, think deeply and re-read the example text in the PLAN.md line by line and compare with each line in the generated output to identify the out of order items in the list."
"Perfect!...."
glenstein
6 hours ago
One thing I don't understand: there was (appropriately) a news cycle about sycophancy in responses, which was real and happening to an excessive degree. It was claimed to be nerfed, but it seems as strong as ever in GPT5, and it ignores my custom instructions to pare it back.
anjel
4 hours ago
"Any Compliments about my queries cause me anguish and other potent negative emotions."
stavros
6 hours ago
The problem is that writing isn't only judged on whether it conveys the intended information or not. It's also judged on whether it does that well, plus other aesthetic criteria. There is such a thing as "good writing", distinct from "it mentioned all the things it needed to mention".
avazhi
7 hours ago
If you can’t understand the irony inherent in getting an LLM to write about LLM brainrot, itself an analog for human brainrot that arises from the habitual non-use of the human brain, then I’m not sure what to tell you.
Whether it’s a tsunami and whether most people will do it has no relevance to my expectation that researchers of LLMs and brainrot shouldn’t outsource their own thinking and creativity to an LLM in a paper that itself implies that using LLMs causes brainrot.
nemonemo
7 hours ago
What you are obsessing over is the writer's style, not its substance. How sure are you that they outsourced the thinking to LLMs? Do you assume LLMs produce junk-level content, which contributes to human brain rot? What if their content is of higher quality, like with the game of Go? Wouldn't you rather study their writing?
avazhi
7 hours ago
Writing is thinking, so they necessarily outsourced their thinking to an LLM. As far as the quality of the writing goes, that’s a separate question, but we are nowhere close to LLMs being better, more creative, and more interesting writers than even just decent human writers. But if we were, it wouldn’t change the perversion inherent in using an LLM here.
afavour
an hour ago
> What you are obsessing over is the writer's style, not its substance
They aren’t. Those are boring stylistic tics that suggest the writer did not write the sentence.
Writing is both a process and an output. It’s a way of processing your thoughts and forming an argument. When you don’t do any of that and get an AI to create the output without the process, it’s obvious.
jazzyjackson
2 hours ago
Writing reflects a person's train of thought. I am interested in what people think. What a robot thinks is of no value to me.
moritzwarhier
7 hours ago
What information is conveyed by this sentence?
Seems like none to me.
Angostura
4 hours ago
it’s not really clear whether it conveys an “intended meaning” because it’s not clear whether the meaning - whatever it is - is really something the authors intended.
binary132
7 hours ago
The brainrot apologists have arrived
askafriend
7 hours ago
Why shouldn't the author use LLMs to assist their writing?
The issue is how tools are used, not that they are used at all.
grey-area
6 hours ago
Because they produce text like this.
dwaltrip
3 hours ago
The paragraph in question is a very poor use of the tool.
xanderlewis
4 hours ago
Is it really so painful to just think for yourself? For one sentence?
The answer to your question is that it rids the writer of their unique voice and replaces it with disingenuous slop.
Also, it's not a 'tool' if it does the entire job. A spellchecker is a tool; a pencil is a tool. A machine that writes for you (which is what happened here) is not a tool. It's a substitute.
There seem to be many falling for the fallacy of 'it's here to stay so you can't be unhappy about its use'.
SkyBelow
4 hours ago
Assist without replacing.
If you were to pass your writing to it and have it provide criticism, pointing out places where you should consider changes, and even providing some examples of those changes that you can selectively choose to include when they keep the intended tone and implications, then I don't see the issue.
When you have it rewrite the entire piece and you paste that for someone else to use, then it becomes an issue. Potentially, as I think the context matters. The more a piece of writing is meant to be from you, the more of an issue I see. Having an AI write or rewrite a birthday greeting or get-well wishes seems worse than having it write up your weekly TPS report. As a simple metric, I judge based on how bad I would feel if what I'm writing were being summarized by another AI or automatically fed into a similar system.
In a text post like this, where I expect others are reading my own words, I wouldn't use an AI to rewrite what I'm posting.
As you say, it is in how the tool is used. Is it used to assist your thoughts and improve your thinking, or to replace them? That isn't really a binary classification, but more a continuum, and the more it gets to the negative half, the more you will see others taking issue with it.
AlecSchueler
5 hours ago
Style is important in writing. It always has been.
dwaltrip
3 hours ago
Because it sounds like shit? Taste matters, especially in the age of generative AI.
And it doesn’t convey information that well, to be honest.
mtillman
3 hours ago
I recently saw someone on HN comment about LLMs using “training” in quotes but no quotes for thinking or reasoning.
Making my (totally rad fwiw) Fiero look like a Ferrari does not make it a Ferrari.
snickerbockers
3 hours ago
I like to call it tuning; it's more accurate to the way they "learn" by adjusting coefficients, and there's no proven similarity between any existing AI and human cognition.
Sometimes I wonder if any second-order control system would qualify as "AI" under the extremely vague definition of the term.
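The "adjusting coefficients" part, in miniature; this is just ordinary gradient descent on a toy linear model, nothing specific to transformers:

    # Toy version of "tuning": nudge coefficients to shrink the error on one example.
    # Squared-error gradient descent on y ~ w*x + b; LLM training is this, scaled up enormously.
    w, b, lr = 0.0, 0.0, 0.1
    x, y = 2.0, 5.0                   # a single training example
    for _ in range(200):
        err = (w * x + b) - y         # prediction error
        w -= lr * err * x             # adjust the coefficients along the gradient
        b -= lr * err
    print(round(w, 2), round(b, 2))   # fits the point: 2.0 1.0, and 2.0*2 + 1.0 == 5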
mvdtnz
3 hours ago
What is actually up with the "it's not just X, it's Y" cliche from LLMs? Supposedly these things are trained on all of the text on the internet, yet this is not a phrasing I read pretty much anywhere, ever, outside of LLM content. Where are they getting this from?