atourgates
3 months ago
I've been using ChatGPT fairly regularly for about a year. Mostly as an editor/brainstorming-partner/copy-reviewer.
Lots of things have changed in that year, but the things that haven't are:
* So, so many em-dashes. All over the place. (I've tried various ways to get it to stop; none of them has worked long-term.)
* Random emojis.
* Affirmations at the start of messages. ("That's a great idea!") With a brief pause when 5 launched. But it's back and worse than ever now.
* Weird adjectives it gets stuck on like "deep experience".
* Randomly bolded words.
Honestly, it's kind of helpful because it makes it really easy to recognize content that people have copied and pasted out of ChatGPT. But apart from that, it's wild to me that a $500bn company hasn't managed to fix those persistent challenges over the course of a year.
estimator7292
3 months ago
Ah, you've hit a classic problem with <SUBJECT> :smile_with_sweat_drop:. Your intuition is right-- but let me clarify some subtleties...
totetsu
3 months ago
Yeah, that’s a really insightful point, and you’ve kind of hit the nail on the head…
WASDx
3 months ago
You can customize it to get rid of all that. I set it to the "Robot" personality and a custom instruction to "No fluff and politeness. Be short and get straight to the point. Don't overuse bold font for emphasis."
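If you're on the API instead of the chat UI, the rough equivalent of that custom instruction is a system message. A minimal sketch (the wording and model name are just examples, not a guaranteed fix):

```python
# Sketch: replicating the chat UI's custom instruction as an API system message.
# The instruction text and model name below are illustrative assumptions.

SYSTEM_PROMPT = (
    "No fluff and politeness. Be short and get straight to the point. "
    "Don't overuse bold font for emphasis."
)

def build_messages(user_text: str) -> list[dict]:
    """Prepend the style instruction so it conditions every reply."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

# With the official client (requires `pip install openai` and OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o",
#     messages=build_messages("Summarize this article: ..."),
# )
```

Same caveat as the UI setting: it helps, but the model drifts back over long conversations.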
Forgeties79
3 months ago
> Affirmations at the start of messages. ("That's a great idea!") With a brief pause when 5 launched. But it's back and worse than ever now.
What a great point! I also can’t stand it. I get it’s basically a meme to point it out - even South Park has mocked it - but I just cannot stand it.
In all seriousness it’s so annoying. It is a tool, not my friend, and considering we are already coming from a place of skepticism with many of the responses, buttering me up does not do anything but make me even more skeptical and trust it less. I don’t want to be told how smart I am or how much a machine “empathizes” with my problem. I want it to give me a solution that I can easily verify, that’s it.
Stop wasting my tokens and time with fake friendship!
SoftTalker
3 months ago
Drives me nuts too. All the stuff like "OK, let me do..." or "I agree..." Stop talking like a person.
I want the Star Trek experience. The computer just says "Working" and then gives you the answer without any chit-chat. And it doesn't refer to itself as if it's a person.
What we have now is HAL 9000 before it went insane.
cyanydeez
3 months ago
Guys. It's basically because, among all the well-researched data, the amount of garbage is vastly greater.
If AI wants to be useful (it isn't at the moment), real people need to cull all the banalities that Facebook, Reddit & forums have generated.
Because what you're noticing is things we typically elide over in discussions with actual humans.
Forgeties79
3 months ago
It is far more polite than any social media platform or forum I’ve ever seen lol
vintermann
3 months ago
Yeah, and earlier incarnations too... I remember AI dungeon used to cuss people out and even "leave the chat" when people acted annoying.
diamond559
3 months ago
HAL was completely competent, until it wasn't... This is like HAL 0.9 beta mode.
layer8
3 months ago
Setting ChatGPT personality to “Robot” pretty much does that for me.
lazide
3 months ago
Meanwhile, 90% of the population is asking it to write love letters for their bf’s/gf’s
Forgeties79
3 months ago
Man it is truly difficult to overstate all the behavioral health issues that have been emerging.
Ferret7446
3 months ago
These are just symptoms and not the cause.
Forgeties79
3 months ago
This comes across as an unnecessary oversimplification in service of handwaving away a valid concern about AI and its already-observed, expanding impact on our society. At the very least you should explain what you mean exactly.
Alcoholism can also be a symptom of a larger issue. Should we not at least discuss alcohol’s effects and what access looks like when deciding the solution?
shagie
3 months ago
A modern Cyrano de Bergerac.
furyofantares
3 months ago
> Stop wasting my tokens and time with fake friendship!
They could hide it so that it doesn't annoy you, but I think it's not a waste of tokens. It's there so the tokens that follow are more likely to align with what you asked for. It's harder for it to then say "This is a lot of work, we'll just do a placeholder for now" or give otherwise "lazy" responses, or to continue saying a wrong thing that you've corrected it about.
I bet it also probably makes it more likely to gaslight you when you're asking something it's just not capable of, though.
antoniojtorres
3 months ago
The emoji thing is so bad. You can see it all over GitHub docs and other long-form docs. All section headers will have emojis and so on. Strange.
thraxil
3 months ago
Obviously nothing solid to back this up, but I kind of feel like I was seeing emojis all over GitHub READMEs on JS projects for quite a while before AI picked it up. I feel like it may have been something that bled over from Twitch streaming communities.
photonthug
3 months ago
Agree, this stuff was trending up very fast before AI.
Could be my own changing perspective, but what I think is interesting is how the signal it sends keeps changing. At first, emoji-heavy was actually kind of positive: maybe the project doesn't need a webpage, but you took some time and interest in your README.md. Then it was negative: having emojis became a strong indicator that the whole README was going to be very low information density, more emotive than referential[1] (which is fine for bloggery but not for technical writing).
Now there's no signal, but you also can't say it's exactly neutral. Emojis in docs will alienate some readers, maybe due to association with commercial stuff and marketing where it's pretty normalized. But skipping emojis alienates other readers, who might be smart and serious, but nevertheless are the type that would prefer WATCHME.youtube instead of README.md. There's probably something about all this that's related to "costly signaling"[2].
[1] https://en.wikipedia.org/wiki/Jakobson%27s_functions_of_lang... [2] https://en.wikipedia.org/wiki/Costly_signaling_theory_in_evo...
quintu5
3 months ago
There’s a pattern to emoji use in docs, especially when combined with one or more other common LLM-generated documentation patterns, that makes it plainly obvious that you’re about to read slop.
Even when I create the first draft of a project’s README with an LLM, part of the final pass is removing those slop-associated patterns to clarify to the reader that they’re not reading unfiltered LLM output.
quietbritishjim
3 months ago
Yeah and this explains why you see it in LLMs in the first place. They had to learn it from somewhere.
vintermann
3 months ago
The name of HuggingFace is a reminder that it was a thing long before the current crop of LLMs.
anbotero
3 months ago
It drives me crazy. It happens with Claude models too. I even created an instruction to avoid them in a CLAUDE.md, and the miserable thing from time to time still does it.
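For reference, the kind of rule I mean looks something like this in CLAUDE.md (wording is illustrative; it reduces the emojis but clearly doesn't eliminate them):

```markdown
## Style rules

- Never use emojis: not in prose, headings, commit messages, or code comments.
- Do not decorate README section headers with emoji.
```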
Why?!
teeray
3 months ago
You can take my em-dashes from my cold, dead hands—I use them all the time.
merelysounds
3 months ago
On iOS in particular, the longer dash variants are easy to access by long-pressing the hyphen key.
Anecdotally, I use them less often these days, because of the association with AI.
Terretta
3 months ago
On a macOS or iPadOS keyboard, Option+hyphen and Option+Shift+hyphen give en and em dashes respectively.
bakugo
3 months ago
Don't forget the classic: "It's not just X—it's Y."
rogerkirkness
3 months ago
This is the main thing that immediately tells me something is AI. This form of reasoning was much less common before ChatGPT.
topaz0
3 months ago
I don't think this is true. The LLMs use this construction noticeably more frequently than normal people do, and I too feel the annoyance when they do, but if you look around I think you'll find it's pretty common in many registers of natural human English.
vintermann
3 months ago
And each of us has patterns. I bet if you read a million of my posts, you would be annoyed with my writing idiosyncrasies too.
topaz0
3 months ago
Yes, this is absolutely part of it, and I think an underappreciated harm of LLMs is the homogeneity. Even to the extent that their writing style is adequate, it is homogeneous in a way that quickly becomes grating when you encounter LLM-generated text several times a day. That said, I think it's fair to judge LLM writing style not to be adequate for most purposes, partly because a decent human writer does a better job of consciously keeping their prose interesting by varying their wording and so forth.
topaz0
3 months ago
Not sure what the downvotes are for -- it's trivial to find examples of this construction from before 2023, or even decades ago. I'm not disagreeing that LLMs overuse this construction (tbh it was already something of a "writing smell" for me before LLMs started doing it, because it's often a sign of a weakly motivated argument).
razodactyl
3 months ago
Absolutely this. I feel like I'm having an immune response to my own language. These patterns irk me in a weird way. Lack of variance is jarring perhaps? Everyone sounding more robotic than usual? Mode-collapse of normal language.
Starlevel004
3 months ago
It sounds like LinkedIn speak which most people have a natural immune reaction to.
pimeys
3 months ago
Or... How can you detect the usage of Claude models in a writeup? Look for the word comprehensive, especially if it's used multiple times throughout the article.
razodactyl
3 months ago
"Enhanced"
joegibbs
3 months ago
I notice this less with GPT-5 and GPT-5-Codex, but it has a new problem: it'll write a sentence that mostly makes sense but has one or two strange word choices that nobody would use in that situation. It tends to use a lot of very dense jargon that makes it hard to read, spitting out references to various algorithms and concepts in places where it doesn't actually make sense for them to be. Also, yesterday Codex refused a task from me because it would be too much work, which I thought was pretty ridiculous - it wasn't actually that much work, a couple hundred lines max.
nomel
3 months ago
> refused a task from me because it would be too much work
Was this after many iterations? Try letting it get some "sleep". Hear me out...
I haven't used Codex, so maybe not relevant, but with Claude I always notice a slow degradation in quality, refusals, and "<implementation here>" placeholders with iterations within the same context window. One time, after making a mistake, it apologized and said something like "that's what I get for writing code at 2am". Statistically, this makes sense: long conversations between developers would go into the night, and they get tired, their code gets sparser and crappier.
So, I told it "Ok, let's get some sleep and do this tomorrow.", then the very next message (since the LLM has no concept of time), "Good morning! Let's do this!" and bam, output a completely functional, giant, block of code.
Human behavior is deeeeep in the statistics.
hnuser123456
3 months ago
That's hilarious.
BiteCode_dev
3 months ago
I think it's the default behavior, because it's cheaper and faster to produce than the real answer.
I assume the beginning of the answer is given to a cheaper, faster model, so that the slower, more expensive one can have time to think.
It keeps the conversation lively and natural for most people.
Would be interesting to test whether it's true, by disabling it with a system prompt and measuring whether the time to the first word gets slower.
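A rough sketch of that measurement, with the actual API call stubbed out by a fake stream so the timing logic itself is runnable (with a real client you'd compare the latency across two system prompts, one default and one forbidding the conversational preamble):

```python
import time
from typing import Iterable, Tuple

def time_to_first_token(stream: Iterable[str]) -> Tuple[float, str]:
    """Return seconds until the first chunk arrives, plus the full text."""
    start = time.perf_counter()
    first = None
    parts = []
    for chunk in stream:
        if first is None:
            first = time.perf_counter() - start
        parts.append(chunk)
    return (first if first is not None else float("inf"), "".join(parts))

def fake_stream(delay: float, chunks: list):
    """Simulate a model that 'thinks' for `delay` seconds before streaming."""
    time.sleep(delay)
    yield from chunks

latency, text = time_to_first_token(fake_stream(0.05, ["Working.", " Done."]))
# Swap `fake_stream` for a real streaming response to run the experiment.
```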
psyclobe
3 months ago
I was able to get it to briefly change that initial "You're right!" by telling it to say something else, like "Yarr Matey". That stuck for a while.
giancarlostoro
3 months ago
I don't use ChatGPT very often (though Perplexity has it), but I find that going all-caps and sounding really angry helps them fix things.
layer8
3 months ago
It’s a pity that em-dashes are being much more shunned due to their LLM association than emojis.
gowld
3 months ago
> Honestly, it's kind of helpful because it makes it really easy to recognize content that people have copied and pasted out of ChatGPT
Maybe it's intentional, like the "shiny" tone applied to "photorealistic" images of real people.
neoCrimeLabs
3 months ago
I am reasonably sure affirmations are a feature, not a bug. No matter how much I might disagree.
illuminator83
3 months ago
Also pretty sure it is a feature, because the general population wants to have pleasant interactions with their ChatGPT, and OpenAI's user-feedback research will have told them this helps. I know some non-developer-type people who mostly talk to ChatGPT about stuff like
- how to cope with the sadness of losing their cat
- ranting about the annoying habits of their friends
- finding all the nice places to eat in a city
etc.
They do not want that "robot" personality and they are the majority.
neoCrimeLabs
3 months ago
Agreed on all points.
I also recall reading a while back that it's also a dopamine trigger. If you make people feel better using your app, they keep coming back for another fix. At least until they realize the hollow nature of the affirmations and start getting negative feelings about it. Such a fine line.
koakuma-chan
3 months ago
ChatGPT is made for normies—they love sweatdrop emojis. I recommend https://ai.dev