Antibabelic
19 days ago
I found the page Wikipedia:Signs of AI Writing[1] very interesting and informative. It goes into a lot more detail than the typical "em-dashes" heuristic.
[1]: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
jcattle
19 days ago
An interesting observation from that page:
"Thus the highly specific "inventor of the first train-coupling device" might become "a revolutionary titan of industry." It is like shouting louder and louder that a portrait shows a uniquely important person, while the portrait itself is fading from a sharp photograph into a blurry, generic sketch. The subject becomes simultaneously less specific and more exaggerated."
embedding-shape
19 days ago
I think that's a general guideline to identify "propaganda", regardless of the source. I've seen people in person write such statements with their own hands/fingers, and I know many people who speak like that (shockingly, most of them are in management).
Lots of those points seem to get at the same idea, which feels like a good balance: it's the language itself that is problematic, not how the text came to be, so it makes sense to target the language directly.
Hopefully those guidelines make all text on Wikipedia better, not just the LLM-produced parts, because they seem like generally good guidelines even outside the context of LLMs.
Antibabelic
19 days ago
Wikipedia already has very detailed guidelines on how text on Wikipedia should look, which address many of these problems.[1] For example, take a look at its advice on "puffery"[2]:
"Peacock example:
Bob Dylan is the defining figure of the 1960s counterculture and a brilliant songwriter.
Just the facts:
Dylan was included in Time's 100: The Most Important People of the Century, in which he was called "master poet, caustic social critic and intrepid, guiding spirit of the counterculture generation". By the mid-1970s, his songs had been covered by hundreds of other artists."
[1]: https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style
[2]: https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Word...
embedding-shape
19 days ago
Right, but unless you have a specific page about "this is how to treat AI text", people will (if they haven't already) bombard you with "This text is so obviously AI-written, do something". With a specific page to answer those complaints, you can just link it instead of the general "here's how text on Wikipedia should look" guidelines. Being more specific sometimes helps people understand better :)
foobarchu
18 days ago
A good place to see this pre-2022 (the AI epoch) is articles on lesser-known bands from the late 2000s, when Wikipedia was becoming more popular. Quite a few of them turn out to be copy-pasted promo text. I know this because I did webdev work for that industry, and when I look those bands up on Wikipedia I recognize the text as copy I personally had to paste into a bio page 20 years ago. Since the bands aren't well known, nobody reports it (I admit I'm too lazy to, myself).
The real tell on those is weirdly time-specific claims that are now wildly outdated ("currently touring with XYZ").
mrweasel
19 days ago
To me that suggests we're mistaken in mixing fiction and non-fiction in AI training data. A phrase like "a revolutionary titan of industry" makes sense if you were reading a novel, where something like 90% of the book describes the people, locations, objects and circumstances. The author of a novel would want to use exaggeration and more colourful words to underscore a uniquely important person, but "this week in trains" would probably de-emphasize the person and focus on the train-coupler.
lacunary
19 days ago
Fiction is part of our shared language and culture. We communicate by making analogies, and our stories, especially our old ones, provide a rich basis to draw upon. Neither a person nor an LLM can be a fluent user of human language without spending time learning from both fiction and non-fiction.
andrepd
19 days ago
Outstanding. Praise Wikipedia, despite any shortcomings. Wow, isn't it such a breath of fresh air in the world of 2026?
robertjwebb
19 days ago
The funny thing is that this also appears in bad human writing. We would be better off if vague statements like this were eliminated altogether, or replaced with less fantastical but verifiable ones. If that means nothing of the article is left, then we have killed two birds with one stone.
nottorp
19 days ago
What do you think the LLMs were trained on? 90% of everything is crap, and they trained on everything.
eurekin
19 days ago
That actually puts into words what I couldn't, though I felt the same. Spectacular quote.
jcattle
19 days ago
I'm thinking quite a bit about this at the moment in the context of foundation models and their inherent (?) regression to the mean.
Recently there has been a big push into geospatial foundation models (e.g. Google AlphaEarth, IBM Terramind, Clay).
These take in vast amounts of satellite data and, with the usual autoencoder architecture, try to build embedding spaces that contain meaningful semantic features.
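To make that concrete, here is a toy sketch of the generic idea (nothing like the actual AlphaEarth, Terramind or Clay architectures; band count, patch size and layer widths are all invented): compress multispectral patches through a bottleneck, reconstruct them, and hope the bottleneck embedding captures useful semantics.

    # Toy sketch, not any real model's architecture: encode a multispectral
    # patch into a small embedding, reconstruct it, and train on the
    # reconstruction error so the bottleneck learns general features.
    import torch
    import torch.nn as nn

    class TinyGeoAutoencoder(nn.Module):
        def __init__(self, bands=12, embed_dim=64):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(bands, 32, 3, stride=2, padding=1),   # 64x64 -> 32x32
                nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1),      # 32x32 -> 16x16
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(64, embed_dim),                       # the embedding
            )
            self.decoder = nn.Sequential(
                nn.Linear(embed_dim, 64 * 16 * 16),
                nn.Unflatten(1, (64, 16, 16)),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),     # 16 -> 32
                nn.ReLU(),
                nn.ConvTranspose2d(32, bands, 4, stride=2, padding=1),  # 32 -> 64
            )

        def forward(self, x):
            z = self.encoder(x)
            return self.decoder(z), z

    patches = torch.randn(8, 12, 64, 64)   # a batch of Sentinel-2-like patches
    recon, embeddings = TinyGeoAutoencoder()(patches)
    loss = nn.functional.mse_loss(recon, patches)  # rare classes barely move this loss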
The issue at the moment is that in the benchmark suites (https://github.com/VMarsocci/pangaea-bench), only a few of these foundation models have recently started to surpass the basic U-Net in some of the tasks.
There's also an observation by one of the authors of Major-TOM, a dataset that likewise provides satellite input data for training such models, that the usual scaling behaviour does not seem to hold for geospatial foundation models: more data does not seem to result in better models.
My (completely unsupported) theory on why that is: unlike writing or coding, in satellite data you are often looking for the needle in the haystack. You are not after the thing that has been done thousands of times before and is proven to work. Segmenting out forests and water? Sure, easy. These models have seen millions of examples of forests and water. But most often we are interested in things that are much, much rarer: flooding, wildfires, earthquakes, landslides, destroyed buildings, new airstrips in the Amazon, and so on. As I see it, the currently used frameworks do not support that very well.
But I'd be curious how others see this, who might be more knowledgeable in the area.
bspammer
19 days ago
That sounds like Flanderization to me https://en.wikipedia.org/wiki/Flanderization
From my experience with LLMs that's a great observation.
Amorymeltzer
19 days ago
I particularly like (what I assume is) the subtle nod to Ted Chiang's "ChatGPT Is a Blurry JPEG of the Web" in there.
<https://www.newyorker.com/tech/annals-of-technology/chatgpt-...>
smusamashah
19 days ago
This is so detailed, and everyone who is sick of reading generated text should read it.
I had a bad experience at a shitty airport and went to Google Maps to leave a bad review, only to find that its rating was 4.7 from many thousands of people. Knowing that the airport is run by a corrupt government, I started reading those super-positive reviews and the same reviewers' older reviews. People who could barely manage a few coherent sentences of English are now writing multiple paragraphs about the history and vital importance of that airport in the region.
Reading the first section, "Undue emphasis on significance", those fake reviews are all I can think of.
cjlm
19 days ago
Turned this into a ruleset[0] for vale.sh[1]
[0]: https://ammil.industries/signs-of-ai-writing-a-vale-ruleset/
[1]: https://vale.sh/
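For a rough sense of what rules like that boil down to, here is a hypothetical approximation in Python (the actual ruleset linked above is written in Vale's own YAML rule format, and the phrase list here is illustrative, not taken from it):

    # Hypothetical, minimal approximation of what a Vale "existence" rule does:
    # flag listed phrases wherever they appear in prose.
    import re

    AI_TELLS = [
        r"game-?changer",
        r"stands as a testament",
        r"rich cultural heritage",
        r"plays a vital role",
    ]

    def flag_ai_tells(text: str) -> list[tuple[str, int]]:
        """Return (pattern, line_number) pairs for every match in the text."""
        hits = []
        for lineno, line in enumerate(text.splitlines(), start=1):
            for pattern in AI_TELLS:
                if re.search(pattern, line, re.IGNORECASE):
                    hits.append((pattern, lineno))
        return hits

    print(flag_ai_tells("The airport stands as a testament to progress."))

Vale does essentially this as a linter pass over prose files, driven by YAML rule definitions instead of a hard-coded list.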
eddyg
19 days ago
It's also very useful for improving your own writing, to help avoid these kinds of issues.
harrisoned
19 days ago
This is very good, but I'm surprised the term "game-changer" is not mentioned there. From my observations, it is used a lot in LLM-generated text.
paradite
19 days ago
Ironically, this is a goldmine for AI labs and AI-writer startups to use for RL and fine-tuning.
zipy124
19 days ago
That's not quite how it works, though. It is possible, for example, that fine-tuning a model to avoid the styles described in the article causes the LLM to stop functioning as well as it otherwise would. It might just be an artefact of the architecture itself that, to be effective, it has to follow these patterns. If it were as easy as providing data that the LLM would then 'encode' as a rule, we would be advancing much faster than we currently are.
einrealist
19 days ago
In the case of those big 'foundation models': Fine-tune for whom and how? I doubt it is possible to fine-tune things like this in a way that satisfies all audiences and training set instances. Much of this is probably due to the training set itself containing a lot of propaganda (advertising) or just bad style.
paradite
19 days ago
I'm pretty sure Mistral is doing fine-tuning for their enterprise clients. OpenAI and Anthropic probably aren't?
I'm thinking more about startups doing the fine-tuning.
kingstnap
19 days ago
Seems more like the kind of thing you would build prompts from.
I can totally see someone taking that page, throwing it into whatever bot, and going "Make up a comprehensive style guide that does the opposite of whatever is mentioned here".
eddyg
19 days ago