low_tech_love
8 hours ago
The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die. It's not so much that I think people have used AI, but that I know they have with a high degree of certainty, and that certainty is converging to 100%, simply because there is no way it won't. If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out. And the growing consensus is "why shouldn't you?", there is no escape from that.
Now, I'm not going to criticize anyone that does it, like I said, you have to, that's it. But what I had never noticed until now is that knowing that a human being was behind the written words (however flawed they can be, and hopefully are) is crucial for me. This has completely destroyed my interest in reading any new things. I guess I'm lucky that we have produced so much writing in the past century or so and I'll never run out of stuff to read, but it's still depressing, to be honest.
Roark66
5 hours ago
>The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die
Do you think AI has changed that in any way? I remember the sea of excrement overtaking genuine human-written content on the Internet around the mid 2010s. It was around that time that Google stopped pretending to be a search company and focused on its primary business of advertising.
Before, at least they were trying to downrank all the crap "word aggregators". After, they stopped caring at all.
AI actually gives us better tools for ranking pages. Detection of AI-generated content is not that bad.
So why hasn't "a new Google" emerged? Simple: Google's monopolistic practices made the barrier to entry huge. First, 99% of the content people want to search for is behind a login wall (Facebook, Instagram, Twitter, YouTube). Second, almost all CDNs now implement "verify you are human" checks by default. Third, no one links to other sites. Ever! These three things mean a new Google is essentially impossible. Even DuckDuckGo has thrown in the towel and subscribed to Bing's results.
It has nothing to do with AI, and everything to do with Google. In fact AI might give us the tools to better fight Google.
TheOtherHobbes
3 hours ago
Google didn't change it, it embodied it. The problem isn't AI, it's the pervasive culture of PR and advertising which appeared in the 50s and eventually consumed its host.
Western industrial culture was based on substance - getting real shit done. There was always a lot of scammery around it, but the bedrock goal was to make physical things happen - build things, invent things, deliver things, innovate.
PR and ad culture was there to support that. The goal was to change values and behaviours to get people to Buy More Stuff. OK.
Then around the time the Internet arrived, industry was off-shored, and the culture started to become one of appearance and performance, not of substance and action.
SEO, adtech, social media, web framework soup, management fads - they're all about impression management and popularity games, not about underlying fundamentals.
This is very obvious on social media in the arts. The qualification for a creative career used to be substantial talent and ability. Now there are thousands of people making careers out of performing the lifestyle of being a creative person. Their ability to do the basics - draw, write, compose - is very limited. Worse, they lack the ability to imagine anything fresh or original - which is where the real substance is in art.
Worse than that, they don't know what they don't know, because they've been trained to be superficial in a superficial culture.
It's just as bad in engineering, where it has become more important to create the illusion of work being done, than to do the work. (Looking at you, Boeing. And also Agile...)
You literally make more money doing this. A lot more.
So AI isn't really a tool for creating substance. It's a tool for automating impression management. You can create the impression of getting a lot of work done. Or the impression of a well-written cover letter. Or of a genre novel, techno track, whatever.
AI might one day be a tool for creating substance. But at the moment it's reflecting and enabling a Potemkin busy-culture of recycled facades and appearances that has almost nothing real behind it.
Unfortunately it's quite good at that.
But the problem is the culture, not the technology. And it's been a problem for a long time.
techdmn
2 hours ago
Thank you, you've stated this all very clearly. I've been thinking about this in terms of "doing work", where you care about the results, and "performing work", where you care about how you are evaluated. I know someone who works in a lab and pointed out that some of the equipment being used was out of spec and under-serviced to the point that it was essentially a random number generator. Caring about this is "doing work". However, pointing it out made that person the enemy of the greater cohort that was "performing work". The results were not important to them; their metrics on units of work completed were. I see this pattern frequently. And it's hard to say those "performing work" are wrong. "Performing" is rewarded, "doing" is punished - perhaps right to the top, as many companies are engaged in a public performance designed to affect the short-term stock price.
rjbwork
an hour ago
Yeah. It's like our entire society has been turned into a Goodhart's Law based simulacrum of a productive society.
I mean, here it's late morning and I'm commenting on hacker news. And getting paid for it.
1dom
2 hours ago
I like this take on modern tech motivations.
The thing that I struggle with is that I agree with it, but I also get a lot of value from using AI to make me more productive - to me, it feels like it lets me focus on producing substance and action, freeing me from having to do some tedious things in some tedious ways. Without getting into the debate about whether it's productive overall, there are certain tasks at which it feels irrefutably fast and effective (e.g. writing tests).
I do agree with the missing substance with modern generative AI: everyone notices when it's producing things in that uncanny valley, and if no human is there to edit that, it makes people uncomfortable.
The only way I can reconcile the almost existential discomfort of AI against my actual day-to-day generally-positive experience with AI is to accept that AI in itself isn't the problem. Ultimately, it is an info tool, and human nature makes people spam garbage for clicks with it.
People will do the equivalent of spam garbage for clicks with any new modern thing, unfortunately.
Getting the most out of a society's latest information has probably always been a cat-and-mouse game of trying to find the areas where the spam-garbage-for-clicks people haven't yet outnumbered the use-AI-to-facilitate-substance people - like here, hopefully.
skydhash
15 minutes ago
Just one nitpick. The thing about tests is that they're repetitive enough to be automated (in a deterministic way) or abstracted into a framework. You don't need an AI to generate them.
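The nitpick above can be sketched as a table-driven test, where the repetition lives in data rather than in generated prose. This is a minimal illustration; `slugify` and its cases are hypothetical, invented here purely for the example.

```python
def slugify(title: str) -> str:
    """Toy function under test (hypothetical)."""
    return title.strip().lower().replace(" ", "-")

# The repetition lives in a deterministic table; adding coverage
# means adding a row, not writing (or generating) a new test body.
CASES = [
    ("Hello World", "hello-world"),
    ("  padded  ", "padded"),
    ("already-slugged", "already-slugged"),
]

def test_slugify():
    for given, expected in CASES:
        assert slugify(given) == expected, (given, expected)

test_slugify()
```

Most test frameworks offer the same idea built in (e.g. parametrized test cases), which is the "abstracted into a framework" half of the point.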
deephoneybear
15 minutes ago
Echoing other comments in gratitude for this very clear articulation of feelings I share, but have not manifested so well. Just wanted to add two connected opinions that round out this view.
1) This consuming of the host is only possible because the host has grown so strong; that is, the modern global industrial economy is extraordinarily efficient. The doing-stuff side of the equation is truly amazing and getting better (some real work gets done, whether by accident or by those who have not succumbed to PR and ad culture), and even this drop of "real work" produces enough material wealth to support (at least a lot of) humanity. We really do live in a post-scarcity world from a production perspective; we just have profound distribution and allocation problems.
2) Radical wealth inequality profoundly exacerbates the problem of PR and ad culture. If everyone has some wealth, doing things that help many people live more comfortably is a great way to become wealthy. But if very few people have wealth, then running a venture-capital FOMO hustle on the wealthy is anyone's best ROI. Radical wealth inequality eventually breaks all the good aspects of capitalist/market economies.
rich_sasha
5 hours ago
Some great grand ancestor of mine was a civil servant, a great achievement given his peasant background. The single skill that enabled it was the knowledge of calligraphy. He went to school and wrote nicely and that was sufficient.
The flip side was, calligraphy was sufficient evidence both of his education, to whoever hired him, and of a document's official nature, to its recipient. Calligraphy itself of course didn't make him efficient or smart or fair.
That's long gone of course, but we had similar heuristics. I am reminded of the Reddit story about an AI-generated mushroom atlas that had factual errors and led to someone getting poisoned. We can no longer assume that a book is legit simply because it looks legit. The story of course is from Reddit, so probably untrue, but it doesn't matter - it totally could be true.
LLMs are fantastic at breaking our heuristics as to what is and isn't legit, but not as good at being right.
matwood
5 hours ago
> We can no longer assume that a book is legit simply because it looks legit.
The problem is that this has been an issue for a long time. My first interactions with the internet in the 90s came along with the warning "don't automatically trust what you read on the internet".
I was speaking to a librarian the other day who teaches incoming freshmen how to use LLMs. What was shocking to me is that the librarian said a majority of the kids trust what the computer says by default. Not just LLMs, but generally whatever they read. That's such a huge shift from my generation. Maybe LLM education will shift people back toward skepticism - unlikely, but I can hope.
honzabe
2 hours ago
> I was speaking to a librarian the other day who teaches incoming freshmen how to use LLMs. What was shocking to me is that the librarian said a majority of the kids trust what the computer says by default. Not just LLMs, but generally what they read. That's such a huge shift from my generation.
I think that previous generations were not any different. For most people, trusting is the default mode, and you need to learn to distrust a source. I know many people who still have not learned that about the internet in general. These are often older people. They believe insane things just because there exists a nice-looking website claiming that thing.
mrweasel
3 hours ago
One of the issues today is the volume of content produced, and the fact that journalism and professional writing are dying. LLMs produce large amounts of "good enough" quality at a profit.
In the 90s we could reasonably trust that the major news sites and corporate websites were accurate, while random forums required a bit more critical reading. Today even formerly trusted sites may be using LLMs to generate content, along with automatic translations.
I wouldn't necessarily put the blame on LLMs; they just make it easier. The trolls and spammers were always there; now they just have a more powerful tool. The commercial sites now have a tool they don't understand, which they apply liberally because it reduces cost, or their staff use it to get out of work, keep up with deadlines, or cover up incompetence. So, not the fault of the LLMs, but their use is worsening existing trends.
llm_trw
4 hours ago
>That's long gone of course, but we had similar heuristics.
To quote someone about this:
>>All that is solid melts into air, all that is holy is profaned, and man is at last compelled to face with sober senses his real conditions of life.
A book looking legit, a paper being peer reviewed, an expert saying something, none of those things were _ever_ good heuristics. It's just that it was the done thing. Now we have to face the fact that our heuristics are obviously broken and we have to start thinking about every topic.
To quote someone else about this:
>>Most people would rather die than think.
Which explains neatly the politics of the last 10 years.
hprotagonist
4 hours ago
> To quote someone about this: >>All that is solid melts into air, all that is holy is profaned, and man is at last compelled to face with sober senses his real conditions of life.
So, same as it ever was?
Smoke, nothing but smoke. [That’s what the Quester says.] There’s nothing to anything—it’s all smoke. What’s there to show for a lifetime of work, a lifetime of working your fingers to the bone? One generation goes its way, the next one arrives, but nothing changes—it’s business as usual for old planet earth. The sun comes up and the sun goes down, then does it again, and again—the same old round. The wind blows south, the wind blows north. Around and around and around it blows, blowing this way, then that—the whirling, erratic wind. All the rivers flow into the sea, but the sea never fills up. The rivers keep flowing to the same old place, and then start all over and do it again. Everything’s boring, utterly boring— no one can find any meaning in it. Boring to the eye, boring to the ear. What was will be again, what happened will happen again. There’s nothing new on this earth. Year after year it’s the same old thing. Does someone call out, “Hey, this is new”? Don’t get excited—it’s the same old story. Nobody remembers what happened yesterday. And the things that will happen tomorrow? Nobody’ll remember them either. Don’t count on being remembered.
c. 450BC
wwweston
2 hours ago
Could be my KJV upbringing talking, but personally I think there's an informative quality to calling it "vanity" over smoke.
And there are more reasons not to simply equate the modern challenges of image and media with the ancient grappling with impermanence. Tech may only truly change the human condition rarely, but it frequently magnifies some aspect of it, sometimes so much that the quantitative change becomes a qualitative one.
And in this case, what we're talking about isn't just impermanence and mortality and meaning as the preacher/quester is. We'd be lucky if it's business as usual for old planet earth, but we've managed to magnify our ability to impact our environment with tech to the point where winds, rivers, seas, and other things may well change drastically. And as for "smoke", it's one thing if we're dust in the wind, but when we're dust we can trust, that enables continuity and cooperation. There's always been reasons for distrust, but with media scale, the liabilities are magnified, and now we've automated some of them.
The realities of human nature that are the seeds of the human condition are old. But some of the technical and social machinery we have made to magnify things is new, and we can and will see new problems.
hprotagonist
2 hours ago
'הבל (hevel)' has the primary sense of vapor, or mist -- a transient thing, not a meaningless or purposeless one.
llm_trw
3 hours ago
One is a complaint that everything is constantly changing, the other that nothing ever changes. I don't think you could misunderstand what either is trying to say harder if you tried.
hprotagonist
3 hours ago
"everything is constantly changing!" is the thing that never changes.
llm_trw
3 hours ago
You sound like a poorly trained gpt2 model.
failbuffer
2 hours ago
Heuristics don't have to be perfect to be useful, so long as they improve the efficacy of our attention. Once that breaks down, society must follow, because thinking about every topic is intractable.
ziml77
3 hours ago
The mushroom thing is almost certainly true. There are tons of trash AI-generated foraging books being published on Amazon. Atomic Shrimp has a video on it.
sevensor
4 hours ago
> Some great grand ancestor of mine was a civil servant, a great achievement given his peasant background. The single skill that enabled it was the knowledge of calligraphy. He went to school and wrote nicely and that was sufficient.
Similar story! Family lore has it that he was from a farming family of modest means, but he was hired to write insurance policies because of his beautiful handwriting, and this was a big step up in the world.
newswasboring
4 hours ago
> The story of course is from reddit, so probably untrue, but it doesn't matter - it totally could be true.
What?! Someone just made something up and then got mad at it. This is especially weird when you even acknowledge it's a made-up story. If we start evaluating new things like this, nothing will ever progress.
bad_user
5 hours ago
You're attributing too much to Google.
Bots are now blocked because they've been abusive. When you host content on the internet, it's not fun to have bots bring your server down or inflate your bandwidth price. Google's bot is actually quite well-behaved. The other problem has been the recent trend in AI, and I can understand blockers being put in place, since AI is essentially plagiarizing content without attribution. But I'd blame OpenAI more at this point.
I also don't think you can blame Google for the centralization behind closed gardens. Or for why people no longer link to other websites. That's ridiculous.
And you should be attributing them the fact that the web is still alive.
dennis_jeeves2
4 hours ago
>I remember the sea of excrement overtaking genuine human written content on the Internet around mid 2010s.
Things have not changed much, really. This has been true since the dawn of mankind (and womankind, from mankind's rib, of course), even before writing was invented, in the form of gossip.
The internet/AI now carries on the torch of our ancestral inner calling, lol.
ninetyninenine
4 hours ago
> I remember the sea of excrement overtaking genuine human written content on the Internet around mid 2010s.
I mean the AI is trained and modeled on this excrement. It makes sense. As much as people think AI content is raw garbage… they don’t realize that they are staring into a mirror.
elnasca2
8 hours ago
What fascinates me about your comment is that you are expressing that you trusted what you read before. For me, LLMs don't change anything. I already questioned the information before and continue to do so.
Why do you think that you could trust what you read before? Is it now harder for you to distinguish false information, and if so, why?
nicce
7 hours ago
In the past, you had to put a lot of effort into producing a text that seemed high quality, especially when you knew nothing about the subject. By the look of the text and the usage of words, you could tell how professional the writer was, and you had some confidence that the writer knew something about the subject. Now that is completely gone. There is no easy filter anymore.
While the professional looking text could have been already wrong, the likelihood was smaller, since you usually needed to know something at least in order to write convincing text.
ookdatnog
7 hours ago
Writing a text of decent quality used to constitute proof of work. This is now no longer the case, and we haven't adapted to this assumption becoming invalid.
For example, when applying to a job, your cover letter used to count as proof of work. The contents are less important than the fact that you put some amount of effort in it, enough to prove that you care about this specific vacancy. Now this basic assumption has evaporated, and job searching has become a meaningless two-way spam war, where having your AI-generated application selected from hundreds or thousands of other AI-generated applications is little more than a lottery.
bitexploder
7 hours ago
This. I am very picky about how I use ML still, but it is unsurpassed as a virtual editor. It can clean up grammar and rephrase things in a very light way, and it gives my prose the polish I want. The thing is, I am a very decent writer. I wrote professionally for 18 years as part of my job, delivering high-quality reports as my work product. So it really helps that I know exactly what "good" looks like by my standards. ML can clean things up so much faster than I can, and I am confident my writing is still organic: it can fix up small issues, find mistakes, etc. very quickly. A word change here or there, some punctuation - that is normal editing. It is genuinely good at light rephrasing as well, if you have some idea of the intent you want.
When it becomes obvious, though, is when people let the LLM do the writing for them. The job search bit is definitely rough. Referrals, references, and actual accomplishments may become even more important.
gtirloni
6 hours ago
As usual, LLMs are an excellent tool when you already have a decent understanding of the field you want to use them in. That is not the case for people posting on social media or creating their first programs. That's where the dullness and noise come from.
The noise floor has been raised 100x by LLMs. It was already bad before, but they have accelerated the trend.
So, yes, we should never have been trusting anything online, but before LLMs we could rely on our brains to quickly identify the bad. Nowadays it's exhausting. Maybe we need an LLM trained on spotting LLMs.
This month, I, with decades of experience, used Claude Dev as an experiment to create a small automation tool. After countless manual fixes it finally worked, and I was happy. Until I gave the whole thing a decent look again and realized what a piece of garbage I had created. It's exhausting to be on the lookout for these situations. I prefer to think things through myself; it's a more rewarding experience with better end results anyway.
danielbln
2 hours ago
Not to sound too dismissive, but there is a distinct learning curve when it comes to using models like Claude for code assist. Not just the intuition when the model goes off the rails, but also what to provide it in the context, how and what to ask for etc. Trying it once and dismissing it is maybe not the best experimental setup.
I've been using Zed recently with its LLM integration to assist me in my development, and it's been absolutely wonderful, but one must control tightly what to present to the model, what to ask for, and how.
gtirloni
2 hours ago
It's not my first time using LLMs and you're assuming too much.
iszomer
5 hours ago
LLMs are a great onramp to filling in knowledge that may have been lost to age or updated to its modern classification. For example, I didn't know Hokkien and Hakka are distinct linguistic branches within the Sino-Tibetan language family, which warrants more (personal) research into the subject. And all this time, without the internet, we often just colloquially called it Taiwanese.
aguaviva
4 hours ago
How is this considered "lost" knowledge when there are (large) Wikipedia pages about those languages (which is of course what the LLM is cribbing from)?
"Human-curated encyclopedias are a great onramp to filling in knowledge gaps" - that I can go with.
nicce
4 hours ago
It is lost in the sense that you had no idea such a possibility existed and wouldn't have known to search for it in the first place, while I believe that in this case the LLM brought it up as a side note.
aguaviva
2 hours ago
Such fortuitous stumblings happen all the time without LLMs (and in regular libraries, for those brave enough to use them). It's just the natural byproduct of doing any kind of research.
skydhash
9 minutes ago
Most of my knowledge comes from physical encyclopedias and a downloaded Wikipedia text dump (the internet was not readily available). You search for one thing and just explore by clicking.
dotnet00
4 hours ago
Yeah, this is how I use it too. I tend to be a very dry writer, which isn't unusual in science, but lately I've taken to writing, then asking an LLM to suggest improvements.
I know not to trust it to be as precise as good research papers need to be, so I don't take its output verbatim; it usually helps me reorder points or use different transitions which make the material much more enjoyable to read. I also find it useful for coming up with an opening sentence from which to start writing a section.
bitexploder
2 hours ago
Active voice is difficult in technical and scientific writing for sure :)
rasulkireev
38 minutes ago
Great opportunity to get ahead of all the lazy people who use AI for a cover letter. Do a video! Sure, AI will be able to do that soon, but then we (not lazy people, who care) will come up with something even more personal!
roenxi
7 hours ago
> While the professional looking text could have been already wrong, the likelihood was smaller...
I don't criticise you for it, because that strategy is both rational and popular. But you never checked the accuracy of your information before so you have no way of telling if it has gotten more or less accurate with the advent of AI. You were testing for whether someone of high social intelligence wanted you to believe what they said rather than if what they said was true.
SoftTalker
an hour ago
In the past, with a printed book or journal article, it was safe to assume that an editor had been involved, to some degree or another challenging claimed facts, and the publisher also had an interest in maintaining their reputation by not publishing poorly researched or outright false information. You would also have reviewers reading and reacting to the book in many cases.
All of that is gone now. You have LLMs spitting their excrement directly onto the web without so much as a human giving it a once-over.
dietr1ch
7 hours ago
I guess the complaint is about losing this proxy to gain some assurance for little cost. We humans are great at figuring out the least amount of work that's good enough.
Now we'll need to be fully diligent, which means more work, and also there'll be way more things to review.
wlesieutre
4 hours ago
There’s not enough time in the day to go on a full bore research project about every sentence I read, so it’s not physically possible to be “fully diligent.”
The best we can hope for is prioritizing which things are worth checking. But even that gets harder because you go looking for sources and now those are increasingly likely to be LLM spam.
roenxi
7 hours ago
I'd argue people clearly don't care about the truth at all - they care about being part of a group and that is where it ends. It shows up in things like critical thinking being a difficult skill acquired slowly vs social proof which humans just do by reflex. Makes a lot of sense, if there are 10 of us and 1 of you it doesn't matter how smartypants you may be when the mob forms.
AI does indeed threaten people's ability to identify whether they are reading work by a high status human and what the group consensus is - and that is a real problem for most people. But it has no bearing on how correct information was in the past vs will be in the future. Groups are smart but they get a lot of stuff wrong in strategic ways (it is almost a truism that no group ever identifies itself or its pursuit of its own interests as the problem).
Jensson
7 hours ago
> I'd argue people clearly don't care about the truth at all
Plenty of people care about the truth in order to get advantages over the ignorant. Beliefs aren't just about fitting in a group, they are about getting advantages and making your life better, if you know the truth you can make much better decisions than those who are ignorant.
Similarly plenty of people try to hide the truth in order to keep people ignorant so they can be exploited.
rendall
6 hours ago
> if you know the truth you can make much better decisions than those who are ignorant
There are some fallacious hidden assumptions there. One is that "knowing the truth" equates to better life outcomes. I'd argue that history shows, more often than not, that what one knows to be true had best align with the prevailing consensus if comfort, prosperity, and peace are one's goals, even if that consensus is flat-out wrong. The list is long of lone geniuses who challenged the consensus and suffered. Galileo, Turing, Einstein, Mendel, van Gogh, Darwin, Lovelace, Boltzmann, Gödel, Faraday, Kant, Poe, Thoreau, Bohr, Tesla, Kepler, Copernicus, et al. all suffered isolation and marginalization to some degree during their lifetimes, some unrecognized until after their deaths, many living in poverty, many actively tormented. I can't see how Turing, for instance, had a better life than the ignorant who persecuted him, despite his excellent grasp of truth.
Jensson
6 hours ago
You are thinking too big. Most of the time the truth is whether a piece of food is spoiled or not, etc., and that greatly affects your quality of life. Companies would love to keep you ignorant here so they can sell you literal shit, so there are powerful forces wanting to keep you ignorant, and today those forces have far stronger tools than ever before.
roenxi
6 hours ago
Socrates is also a big name. Never forget.
danmaz74
3 hours ago
You're implying that there is an absolute Truth and that people only need to do [what?] to check if something is True. But that's not True. We only have models of how reality works, and every model is wrong - but some are useful.
When dealing with almost everything you do day by day, you have to rely on the credibility of the source of your information. Otherwise how could you know that the can of tuna you're going to eat is actually tuna and not some poisonous fish? How do you know that you should do what your doctor told you? Etc. etc.
svieira
an hour ago
> You're implying that there is an absolute Truth and that people only need to do [what?] to check if something is True. But that's not True. We only have models of how reality works, and every model is wrong - but some are useful.
But isn't your third sentence True?
quietbritishjim
7 hours ago
How do you "check the accuracy of your information" if all the other reliable-sounding sources could also be AI generated junk? If it's something in computing, like whether something compiles, you can sometimes literally check for yourself, but most things you read about are not like that.
glenstein
6 hours ago
>But you never checked the accuracy of your information before so
They didn't say that and that's not a fair or warranted extrapolation.
They're talking about a heuristic that we all use, as a shorthand proxy that doesn't replace but can help steer the initial navigation in the selection of reliable sources, which can be complemented with fact checking (see the steelmanning I did there?). I don't think someone using that heuristic can be interpreted as tantamount to completely ignoring facts, which is a ridiculous extrapolation.
I also think it misrepresents the lay of the land, which is that in the universe of nonfiction writing, there is not a fire hose of facts and falsehoods indistinguishable in tone. I think there is in fact a reasonably high correlation between a discernible impersonal, professional tone and credible information - which, again (since this seems to be a difficult sticking point), doesn't mean that tone substitutes for the facts, which still need to be verified.
The idea that information and misinformation are tonally indistinguishable is, in my experience, only something believed by post-truth "do your own research" people who think there are equally valid facts in all directions.
There's not, for instance, a Science Daily of equally sciency sounding misinformation. There's not a second different IPCC that publishes a report with thousands of citations which are all wrong, etc. Misinformation is out there but it's not symmetrical, and understanding that it's not symmetrical is an important aspect of information literacy.
This is important because it goes to their point, which is that something has changed with the advent of LLMs. That symmetry may be coming, and it's precisely the fact that it wasn't there before that is pivotal.
cutemonster
7 hours ago
Interesting points! It doesn't sound impossible to have an AI that's wrong less often than the average human author (if the AI's training data was well curated).
I suppose a related problem is that we can't know if the human who posted the article, actually agrees with it themselves.
(Or if they clicked "Generate" and don't actually care, or even have different opinions)
jackthetab
6 hours ago
> While the professional looking text could have been already wrong, the likelihood was smaller, since you usually needed to know something at least in order to write convincing text.
https://en.wikipedia.org/wiki/Michael_Crichton#Gell-Mann_amn...
mewpmewp2
5 hours ago
Although there were already tons of "technical influencers" before this who excelled at writing, but didn't know deeply what they were writing about.
They come across as superficially smart, but really they regurgitate without deep understanding.
factormeta
6 hours ago
>In the past, you had to put a lot of effort to produce a text which seemed to be high quality, especially when you knew nothing about the subject. By the look of text and the usage of the words, you could tell how professional the writer was and you had some confidence that the writer knew something about the subject. Now, that is completely removed. There is no easy filter anymore.
That is pretty much true for other media too, such as audio and video. Before digital stuff became mainstream, pics were developed in the darkroom and film was actually cut with scissors. A lot of effort was put into producing the final product. AI has really commoditized many brain-related tasks. We must realize the fragile nature of digital tech and still learn how to do these things ourselves.
gizmo
7 hours ago
I think you overestimate the value of things looking professional. The overwhelming majority of books published every year are trash, despite all the effort that went into research, writing, and editing them. Most news is trash. Most of what humanity produces just isn't any good. A top expert in his field can leave a typo-riddled comment in a hurry that contains more valuable information than a shelf of books written on the subject by lesser minds.
AIs are good at writing professional looking text because it's a low bar to clear. It doesn't require much intelligence or expertise.
herval
6 hours ago
> AIs are good at writing professional looking text because it's a low bar to clear. It doesn't require much intelligence or expertise.
AIs are getting good at precisely imitating your voice with a single sample as reference, or generating original music, or creating video with all sorts of impossible physics and special effects. By your rationale, nothing “requires much intelligence or expertise”, which is patently false (even for text writing)
gizmo
5 hours ago
My point is that writing a good book is vastly more difficult than writing a mediocre book. The distance between incoherent babble and a mediocre book is smaller than the distance between a mediocre book and a great book. Most people can write professional looking text just by putting in a little bit of effort.
bitexploder
7 hours ago
I think you underestimate how high that bar is, but I will grant that it isn’t that high. It can be a form of sophistry all of its own. Still, it is a difficult skill to write clearly, simply, and without a lot of extravagant words.
mewpmewp2
5 hours ago
Although presently at least it's still quite obvious when something is written by AI.
chilli_axe
4 hours ago
it's obvious when text has been produced by chatGPT with the default prompt - but there's probably loads of text on the internet which doesn't follow AI's usual prose style that blends in well.
ImHereToVote
7 hours ago
So content produced by think tanks was credible by default, since think tanks are usually very well funded. Interesting perspective.
diggan
7 hours ago
> By the look of text and the usage of the words, you could tell how professional the writer was and you had some confidence that the writer knew something about the subject
How did you know this unless you also had the same or more knowledge than the author?
It would seem to me we are as clueless now as before about how to judge how skilled a writer is without already possessing that very skill ourselves.
ffsm8
7 hours ago
Trust has no bearing on what they said.
Reading was a form of connecting with someone. Their opinions are bound to be flawed, everyone's are - but they're still the thoughts and words of a person.
This is no longer the case. Thus, the human factor is gone, and that diminishes the experience for some of us, me included.
farleykr
7 hours ago
This is exactly what’s at stake. I heard an artist say one time that he’d rather listen to Bob Dylan miss a note than listen to a song that had all the imperfections engineered out of it.
herval
6 hours ago
The flipside of that is that the most popular artists of all time (e.g. Taylor Swift) autotune to perfection, and yet more and more people love them
kombookcha
6 hours ago
If you ask a Swiftie what they love about Taylor Swift, I guarantee they will not say "the autotune is flawless".
They're not connecting with the relative correctness of each note, but feeling a human, creative connection with an artist expressing herself.
herval
6 hours ago
They're "creatively connecting" to an autotuned version of a human, not to a "flawed Bob Dylan"
kombookcha
6 hours ago
They're not connecting to the autotune, but to the artist. People have a lot of opinions about Taylor Swift's music but "not being personal enough" is definitely not a common one.
If you wanna advocate for unplugged music being more gratifying, I don't disagree, but acting like the autotune is what people are getting out of Taylor Swift songs is goofy.
soco
5 hours ago
I have no idea about Taylor Swift so I'll ask in general: can't we have a human showing an autotuned personality? Like, you are what you are in private, but in interviews you focus on things suggested by your AI counselor, your lyrics are fine-tuned by AI, all this to show a more marketable personality. Maybe that's the autotune we should worry about. Again, nothing new (looking at you, Village People), but nowadays the potential powered by AI is many orders of magnitude higher. You could say yes, only until the fans catch wind of it, true, but by that time the next figure shows up, and so on. Not sure where this arms escalation can lead us. Because acceptance levels are also shifting, so what we reject today as unacceptable lies could be fine tomorrow; look already at the AI influencers doing a decent job while overtly fake.
oceanplexian
4 hours ago
I’m convinced it’s already being done, or at least played with. Lots of public figures only speak through a teleprompter. It would be easy to put a fine tuned LLM on the other side of that teleprompter where even unscripted questions can be met with scripted answers.
herval
5 hours ago
you're missing the point by a few miles
Frost1x
6 hours ago
I think the key thing here is equating trust and truth. I trust my dog, a lot, more than most humans frankly. She has some of my highest levels of trust attainable, yet I don’t exactly equate her actions with truth. She often barks when there’s no one at the door or at false threats she doesn’t know aren’t real threats and so on. But I trust she believes it 100% and thinks she’s helping me 100%.
What I think OP was saying, and I agree with, is that connection: that no matter what was said, or how flawed it was, or what motive someone had, I trusted there was a human producing the words. I could guess and reason the other factors away. Now I don't always know if that is the case.
If you’ve ever played a multiplayer game, most of the enjoyable experience for me is playing other humans. We’ve had good game AIs in many domains for years, sometimes difficult to distinguish from humans, but I always lost interest if I didn’t know I was in fact playing and connecting with another human. If it’s just some automated system, I could do that at any hour of the day as much as I want, but it lacks the human connection element, the flaws, the emotion. If you can reproduce that then maybe it would be enjoyable, but that sort of substance has meaning to many.
It’s interesting to see a calculator quickly spit out correct complex arithmetic but when you see a human do it, it’s more impressive or at least interesting, because you know the natural capability is lower and that they’re flawed just like you are.
sevensor
4 hours ago
For me, the problem has gone from “figure out the author’s agenda” to “figure out whether this is a meaningful text at all,” because gibberish now looks a whole lot more like meaning than it used to.
pxoe
4 hours ago
This has been a problem on the internet for the past decade if not more anyway, with all of the seo nonsense. If anything, maybe it's going to be ever so slightly more readable.
solidninja
3 hours ago
There's a quantity argument to be made here - before, it used to be hard to generate large amounts of plausible but incorrect text. Now it easy. Similar to surveillance before/after smartphones + the internet - you had to have a person following you vs just soaking up all the data on the backbone.
a99c43f2d565504
7 hours ago
Perhaps "trust" was a bit misplaced here, but I think we can all agree on the idea: before LLMs, there was intelligence behind text, and now there's not. The I in LLM stands for intelligence, as one blog put it. Maybe the text never was true, but at least it made sense given some agenda. And as others have pointed out, the usual text-style and vocabulary signs that could have been used to identify expertise or an agenda are gone.
danielmarkbruce
16 minutes ago
Those signs are largely bs. It's a textual version of charisma.
thesz
7 hours ago
Propaganda works by repeating the same thing in different forms. Now it is easier to produce different forms of the same thing, hence, more propaganda. Also, it is much easier to influence whatever people write by influencing the tool they use to write.
Imagine that AI tools sway generated sentences to be slightly closer, in summarisation space, to the phrase "eat dirt" or anything. What would happen?
ImHereToVote
7 hours ago
Hopefully people will exercise more judgement now that every Tom, Dick, and Harry scam artist can output elaborate prose.
galactus
5 hours ago
I think it is a totally different threat. Excluding adversarial behavior, humans usually produce information with a quality level that is homogeneous (from homogeneously sloppy to homogeneously rigorous).
AI, otoh, can produce texts that are quite accurate globally with some totally random hallucinations here and there. That makes it much harder to identify.
baq
8 hours ago
scale makes all the difference. society without trust falls apart. it's good if some people doubt some things, but if everyone necessarily must doubt everything, it's anarchy.
vouaobrasil
6 hours ago
Perhaps that anarchy is the exact thing we need to convince everyone to revolt against big tech firms like Google and OpenAI and take them down by mob rule.
dangitman
7 hours ago
Is our society built on trust? I don't generally trust most of what's distributed as news, for instance. Virtually every newsroom in america is undermined by basic conflicts of interest. This has been true since long before I was born, although perhaps the death of local news has accelerated this phenomenon. Mostly I just "trust" that most people don't want to hurt me (even if this trust is violated any time I bike along side cars for long enough)
I don't think that LLMs will change much, frankly, it's just gonna be more obvious when they didn't hire a human to do the writing.
low_tech_love
2 hours ago
It’s nothing to do with trust in terms of being true or false, but whatever I read before, I felt like, well, it can be good or bad, I can judge it, but whatever it is, somebody wrote it. It’s their work. Now when I read something I just have absolutely no idea whether the person wrote it, what percentage of it they wrote, or how much they even had to think before publishing it. Anyone can simply publish a perfectly well-written piece of text about any topic whatsoever, and I just can’t wrap my head around why, but it feels like a complete waste of time to read anything. Like… it’s all just garbage, I don’t know.
everdrive
7 hours ago
How do you like questioning much more of it, much more frequently, from many more sources? And mistrusting it in new ways. AI and regular people are not wrong in the same ways, nor for the same reasons, and now you must track this too, increasingly.
danielmarkbruce
17 minutes ago
The following appears to be true:
If one spends a lot of years reading a lot of stuff, they come to this conclusion, that most of it cannot be trusted. But it takes lots of years and lots of material to see it.
If they don't, they don't.
rsynnott
6 hours ago
There are topics on which you should be somewhat suspicious of anything you read, but also many topics where it is simply improbable that anyone would spend time maliciously coming up with a lie. However, they may well have spicy autocomplete imagine something for them. An example from a few days ago: https://news.ycombinator.com/item?id=41645282
voidmain0001
7 hours ago
I read the original comment not as a lament of not being able to trust the content, rather, they are lamenting the fact that AI/LLM generated content has no more thought or effort put into it than a cheap microwave dinner purchased from Walmart. Yes, it fills the gut with calories but it lacks taste.
On second thought, perhaps AI/LLM generated content is better illustrated with it being like eating the regurgitated sludge called cud. Nothing new, but it fills the gut.
akudha
5 hours ago
There were news reports that Russia spent less than a million dollars on a massive propaganda campaign targeting U.S elections and the American population in general.
Do you think that would have been possible before the internet, before AI?
Bad actors, poorly written/sourced information, sensationalism etc have always existed. It is nothing new. What is new is the scale, speed and cost of making and spreading poor quality stuff now.
All one needs today is a laptop and an internet connection and a few hours, they can wreak havoc. In the past, you'd need TV or newspapers to spread bad (and good) stuff - they were expensive, time consuming to produce and had limited reach.
kloop
3 minutes ago
There are lots of organizations with $1M and a desire to influence the population
This can only be done with a sentiment that was, at least partially, already there. And may very well happen naturally eventually
heresie-dabord
5 hours ago
> you trusted what you read before. For me, LLMs don't change anything. I already questioned the information before and continue to do so. [...] Why do you think that you could trust what you read before?
A human communicator is, in a sense, testifying when communicating. Humans have skin in the social game.
We try to educate people, we do want people to be well-informed and to think critically about what they read and hear. In the marketplace of information, we tend very strongly to trust non-delusional, non-hallucinating members of society. Human society is a social-confidence network.
In social media, where there is a cloak of anonymity (or obscurity), people may behave very badly. But they are usually full of excuses when the cloak is torn away; they are usually remarkably contrite before a judge.
A human communicator can face social, legal, and economic consequences for false testimony. Humans in a corporation, and the corporation itself, may be held accountable. They may allocate large sums of money to their defence, but reputation has value and their defence is not without social cost and monetary cost.
It is literally less effort at every scale to consult a trusted and trustworthy source of information.
It is literally more effort at every scale to feed oneself untrustworthy communication.
kombookcha
7 hours ago
Debunking bullshit inherently takes more effort than generating bullshit, so the human factor is normally your big force multiplier. Does this person seem trustworthy? What else have they done, who have they worked with, what hidden motivations or biases might they have, are their vibes /off/ to your acute social monkey senses?
However with AI anyone can generate absurd torrential flows of bullshit at a rate where, with your finite human time and energy, the only winning move is to reject out of hand any piece of media that you can sniff out as AI. It's a solution that's imperfect, but workable, when you're swimming through a sea of slop.
ontouchstart
6 hours ago
Debugging is harder than writing code. Once the code has passed the linter, compiler, and tests, the remaining bugs tend to be more subtly logical and require more effort and intelligence.
We are all becoming QA of this super automated world.
bitexploder
6 hours ago
Maybe the debunking AIs can match the bullshit generating AIs, and we will have balance in the force. Everyone is focused on the generative AIs, it seems.
eesmith
7 hours ago
The negation of 'I cannot trust' is not 'I could always trust' but rather 'I could sometimes trust'.
Nor is trust meant to mean something is absolute and unquestionable. I may trust someone, but with enough evidence I can withdraw trust.
escape_goat
3 hours ago
There was a degree of proof of work involved. Text took human effort to create, and this roughly constrained the quantity and quality of misinforming text to the number of humans with motive to expend sufficient effort to misinform. Now superficially indistinguishable text can be created by an investment in flops, which are fungible. This means that the constraint on the amount of misinforming text instead scales with whatever money is resourced to the task of generating misinforming text. If misinforming text can generate value for someone that can be translated back into money, the generation of misinforming text can be scaled to saturation and full extraction of that value.
tuyguntn
7 hours ago
> For me, LLMs don't change anything. I already questioned the information before and continue to do so.
I also did, but LLMs increased the volume of content, which forces my brain to first try to identify whether content is generated by LLMs. That consumes a lot of energy and makes my brain even less focused, because now its primary goal is skimming quickly to identify, instead of absorbing first and then analyzing the info.
desdenova
6 hours ago
The web being polluted only makes me ignore more of it.
You already know some of the more trustworthy sources of information, you don't need to read a random blog which will require a lot more effort to verify.
Even here on hackernews, I ignore like 90% of the spam people post. A lot of posts here are extremely low effort blogs adding zero value to anything, and I don't even want to think whether someone wasted their own time writing that or used some LLM, it's worthless in both cases.
croes
7 hours ago
The proportion changed because it's now easier and faster
tempfile
7 hours ago
> I already questioned the information before and continue to do so.
You might question new information, but you certainly do not actually verify it. So all you can hope to do is sense-checking - if something doesn't sound plausible, you assume it isn't true.
This depends on having two things: having trustworthy sources at all, and being able to relatively easily distinguish between junk info and real thorough research. AI is a very easy way for previously-trustworthy sources to sneak in utter disinformation without necessarily changing tone much. That makes it much easier for the info to sneak past your senses than previously.
desdenova
6 hours ago
Exactly. The web before LLMs was mostly low effort SEO spam written by low-wage people in marketing agencies.
Now it's mostly zero effort LLM-generated SEO spam, and the low-wage workers lost their jobs.
vouaobrasil
6 hours ago
The difference is that now we'll have even more zero-effort SEO spam because AI is a force multiplier for that. Much more.
crazygringo
12 minutes ago
Counterpoint: I don't think anything has changed much at all.
I trust everything in the NY Times to the same degree I did before AI, which is to say to a significant degree (they rarely outright lie -- they generally don't say person X said Y if that person didn't) but far from entirely (they often omit entire, major, important perspectives from articles).
Are reporters using ChatGPT to quickly look up facts? Are they using it to brainstorm different article ledes, or column ideas? Or to polish clunky sentences? I couldn't care less, as long as they're still manually verifying the facts and evaluating the final prose according to the same standards, where I see no evidence of change or falling standards.
I've certainly come across entire ChatGPT-written websites full of e.g. Python tutorials that you quickly realize are hallucinated garbage, but that's also not really any different from previous blogspam regurgitated by low-cost workers in different countries who can barely write in grammatical English, but who were still human beings.
Whether someone uses AI to help them write or not is irrelevant to their trustworthiness. What matters is the quality control that comes afterwards. Even when you write an article yourself, your first draft is often terrible. Writing is an iterative process where evaluating and editing what you've written is often more important than the writing itself. Often, not a single sentence from your first draft will make it into the final version.
Complaining that an author used AI as a tool during writing is like complaining that a farmer used a tractor growing their crops instead of a hoe and shovel. What matters is the quality of the output, which humans are still evaluating as much as ever before.
akudha
5 hours ago
I was listening to an interview a few months ago (I forget the name). He is a prolific reader/writer and has a huge following. He mentioned that he only reads books that are at least 50 years old, so pre-70s. That sounds like a good idea now.
Even ignoring AI, if you look at the movies and books that come out these days, their quality is significantly lower on average than 30-40 years ago. Maybe people's attention spans and taste are to blame, or maybe people just don't have the money/time/patience to consume quality work... I do not know.
One thing I know for sure - there is enough high quality material written before AI, before article spinners, before MFA sites etc. We would need multiple lifetimes to even scratch the surface of that body of work. We can ignore mostly everything that is published these days and we won't be missing much
eloisant
2 hours ago
I'd say it's probably survivor's bias. Bad books from the pre 70s are probably forgotten and no longer printed.
Old books that we're still printing and still talking about have stood the test of time. It doesn't mean there are no great recent books.
onion2k
7 hours ago
> The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die.
What AI is going to teach people is that they don't actually need to trust half as many things as they thought they did, but that they do need to verify what's left.
This has always been the case. We've just been deferring to 'trusted organizations' a lot recently, without actually looking to see whether they still warrant our trust as they change over time.
layer8
6 hours ago
How can you verify most of anything if you can’t trust any writing (or photographs, audio, and video, for that matter)?
Frost1x
6 hours ago
Independent verification is always good; however, it is not always possible or practical. At complex levels of life we have to just trust that the underlying processes work, usually until something fails.
I don’t go double checking civil engineers work (nor could I) for every bridge I drive over. I don’t check inspection records to make sure it was recent and proper actions were taken. I trust that enough people involved know what they’re doing with good enough intent that I can take my 20 second trip over it in my car without batting an eye.
If I had to verify everything, I’m not sure how I’d get across many bridges on a daily basis. Or use any major infrastructure in general where my life might be at risk. And those are cases where it’s very important to be done right, if it’s some accounting form or generated video on the internet… I have even less time to be concerned from a practical standpoint. Having the skills to do it should I want or need to are good and everyone should have these but we’re at a point in society we really have to outsource trust in a lot of cases.
This is true everywhere, even in science, which these days many people just trust in ways akin to faith in some cases, and I don’t see any way around that. The key being that all the information should exist to be able to independently verify something, but from a practical standpoint it’s rarely viable.
nils-m-holm
7 hours ago
> It's not so much that I think people have used AI, but that I know they have with a high degree of certainty, and this certainty is converging to 100%, simply because there is no way it will not. If you write regularly and you're not using AI, you simply cannot keep up with the competition.
I am writing regularly and I will never use AI. In fact I am working on a 400+ page book right now and it does not contain a single character that I have not come up with and typed myself. Something like pride in craftsmanship does exist.
nyarlathotep_
2 hours ago
In b4 all the botslop shills tell you you're gonna get "left behind" if you don't pollute your output with GPT'd copypasta.
low_tech_love
2 hours ago
Amazing! Do you feel any pressure from your environment? And are you self-funded? I am also thinking about starting my first book.
nils-m-holm
an hour ago
What I write is pretty niche anyway (compilers, LISP, buddhism, advaita), so I do not think AI will cause much trouble. Google ranking small websites into oblivion, though, I do notice that!
smitelli
5 hours ago
I'm right there with you. I write short and medium form articles for my personal site (link in bio, follow it or don't, the world keeps spinning either way). I will never use AI as part of this craft. If that hampers my output, or puts me at a disadvantage compared to the competition, or changes the opinion others have of me, I really don't care.
vouaobrasil
6 hours ago
Nice. I will definitely consider your book over other books. I'm not interested in reading AI-assisted works.
noobermin
4 hours ago
When you're writing, how are you "missing out" if you're not using chatgpt??? I don't even understand how this can be unless what you're writing is already unnecessary such that you shouldn't need to write it in the first place.
jwells89
3 hours ago
I don’t get it either. Writing is not something I need that level of assistance with, and I would even say that using LLMs to write defeats some significant portion of the point of writing — by using LLMs to write for me I feel that I’m no longer expressing myself in the purest sense, because the words are not mine and do not exhibit any of my personality, tendencies, etc. Even if I were to train an LLM on my style, it’d only be a temporal facsimile of middling quality, because peoples’ styles evolve (sometimes quite rapidly) and there’s no way to work around all the corner cases that never got trained for.
As you say, if the subject is worth being written about, there should be no issue and writing will come naturally. If it’s a struggle, maybe one should step back and figure out why that is.
There may some argument for speed, because writing quality prose does take time, but then the question becomes a matter of quantity vs. quality. Do you want to write high quality pieces that people want to read at a slower pace or churn out endless volumes of low-substance grey goo “content”?
dotnet00
3 hours ago
LLMs are surprisingly capable editors/brainstorming tools. So, you're missing out in that you're being less efficient in editing.
Like, you can write a bunch of text, then ask an LLM to improve it with minimal changes. Then, you read through its output and pick out the improvements you like.
jayd16
2 hours ago
But that's the problem. Unique, quirky mannerisms get polished out. Flaws are smoothed over, edges over-sharpened.
I'm personally not as gloomy about it as the parent comments but I fear it's a trend that pushes towards a samey, mass-produced style in all writing.
Eventually there will be a counter culture and backlash to it and then equilibrium in quality content but it's probably here to stay for anything where cost is a major factor.
dotnet00
2 hours ago
Yeah, I suppose that would be an issue for creative writing. My focus is mostly on scientific writing, where such mannerisms should be less relevant than precision, so I didn't consider that aspect of other kinds of writing.
slashdave
2 hours ago
Am I the only one who doesn't even like automatic grammar checkers, because they contribute to a single, uniformly bland style of writing? LLMs are just going to make this worse.
tourmalinetaco
3 hours ago
Sure, but Grammarly and similar have existed far before the LLM boom.
dotnet00
2 hours ago
That's a fair point, I only very recently found that LLMs could actually be useful for editing, and hadn't really thought much of using tools for that kind of thing previously.
flir
7 hours ago
I've been using it in my personal writing (combination of GPT and Claude). I ask the AI to write something, maybe several times, and I edit it until I'm happy with it. I've always known I'm a better editor than I am an author, and the AI text gives me somewhere to start.
So there's a human in the loop who is prepared to vouch for those sentences. They're not 100% human-written, but they are 100% human-approved. I haven't just connected my blog to a Markov chain firehose and walked away.
Am I still adding to the AI smog? idk. I imagine that, at a bare minimum, its way of organising text bleeds through no matter how much editing I do.
vladstudio
7 hours ago
you wrote this comment completely on your own, right? Without any AI involved. And I read your comment feeling confident that it's truly 100% yours. I think this reader's confidence is what the OP is talking about.
flir
7 hours ago
I did. I write for myself mostly so I'm not so worried about one reader's trust - I guess I'm more worried that I might be contributing to the dead internet theory by generating AI-polluted text for the next generation of AIs to train on.
At the moment I'm using it for local history research. I feed it all the text I can find on an event (mostly newspaper articles and other primary sources, occasionally quotes from secondary sources) and I prompt with something like "Summarize this document in a concise and direct style. Focus on the main points and key details. Maintain a neutral, objective voice." Then I hack at it until I'm happy (mostly I cut stuff). Analysis, I do the other way around: I write the first draft, then ask the AI to polish. Then I go back and forth a few times until I'm happy with that paragraph.
I'm not going anywhere with this really, I'm just musing out loud. Am I contributing to a tragedy of the commons by writing about 18th century enclosures? Because that would be ironic.
ontouchstart
6 hours ago
If you write for yourself, whether you use generated text or not, (I am using the text completion on my phone typing this message), the only thing that matters is how it affects you.
Reading and writing are mental processes (with or without advanced technology) that shape our collective mind.
edavison1
3 hours ago
>If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out.
A very HN-centric view of the world. From my perch in journalism and publishing, elite writers absolutely loathe AI and almost uniformly agree it sucks. So to my mind the most 'competitive' spheres in writing do not use AI at all.
DrillShopper
3 hours ago
It doesn't matter how elite you think you are if the newspaper, magazine, or publishing company you write for can make more money from hiring people at a fraction of your cost and having them use AI to match or eclipse your professional output.
At some point the competition will be less about "does this look like the most skilled human writer wrote this?" and more about "did the AI guided by a human for a fraction of the cost of a skilled human writer output something acceptably good for people to read it between giant ads on our website / watch the TTS video on YouTube and sit through the ads and sponsors?", and I'm sorry to say, skilled human writers are at a distinct disadvantage here because they have professional standards and self respect.
edavison1
an hour ago
So is the argument here that the New Yorker can make more money from AI slop writing overseen by low-wage overseas workers? Isn't that obviously not the case?
Anyway I think I've misunderstood the context in which we're using the word 'competition' here. My response was about attitudes toward AI from writers at the tip-top of the industry rather than profit maxxing/high-volume content farm type places.
easterncalculus
2 hours ago
Exactly. Also, if the past few years is any indication, at the very least tech journalists in general tend to love to use what they hate.
goatlover
an hour ago
So you're saying major media companies are going to outsource their writing to people overseas using LLMs? There is more to journalism than the writing. There's also the investigative part where journalists go and talk to people, look into old records, etc.
edavison1
an hour ago
This has become such a talking point of mine when I'm inevitably forced to explain why LLMs can't come for my job (yet). People seem baffled by the idea that reporting collects novel information about the world which hasn't been indexed/ingested at any point because it didn't exist before I did the interview or whatever it is.
fennecfoxy
3 hours ago
Yes, but what really matters is what and how the general public, aka the consumers want to consume.
I can bang on about older games being better all day long but it doesn't stop Fortnite from being popular, and somewhat rightly so, I suppose.
jayd16
2 hours ago
Sure but no one gets to avoid all but the most elite content. I think they're bemoaning the quality of pulp.
beefnugs
43 minutes ago
Just add more swearing and off-color jokes to everything you do and say. If there is one thing we know for sure, it's that the corporate AIs will never allow dirty jokes.
(it will get into the dark places like spam though, which seems dumb since they know how to make meth instead, spend time on that you wankers)
walthamstow
8 hours ago
I've even grown to enjoy spelling and grammar mistakes - at least I know a human wrote it.
ipaio
7 hours ago
You can prompt/train the AI to add a couple of random minor errors. They're trained from human text after all, they can pretend to be as human as you like.
eleveriven
6 hours ago
Making it feel like there's no reliable way to discern what's truly human
vouaobrasil
6 hours ago
There is. Be vehemently against AI, put 100% AI free in your work. The more consistent you are against AI, the more likely people will believe you. Write articles slamming AI. Personally, I am 100% against AI and I state that loud and clear on my blogs and YouTube channel. I HATE AI.
jaredsohn
5 hours ago
Hate to tell you but there is nothing stopping people using AI from doing the same thing.
vouaobrasil
5 hours ago
AI cannot build up a sufficient level of trust, especially if you are known in person by others who will vouch for you. That web of trust is hard to break with AI. And I am one of those.
danielbln
2 hours ago
Are you including transformer-based translation models like Google Translate or DeepL in your categorical AI rejection?
vouaobrasil
2 hours ago
Yeah.
vasco
7 hours ago
The funny thing is that the things it refuses to say are "wrong-speech" type stuff, so the only things you can be more sure of nowadays are conspiracy theories and other nasty stuff. The nastier the more likely it's human written, which is a bit ironic.
matteoraso
7 hours ago
No, you can finetune locally hosted LLMs to be nasty.
slashdave
2 hours ago
Maybe the future of creative writing is fine tuning your own unique form of nastiness
Jensson
7 hours ago
> The nastier the more likely it's human written, which is a bit ironic.
This is like everything else: machine-produced work has a flawlessness along some dimension that humans tend to lack.
Applejinx
7 hours ago
Barring simple typos, human mistakes are erroneous intention from a single source. You can't simply write human vagaries off as 'error' because they're glimpses into a picture of intention that is perhaps misguided.
I'm listening to a slightly wonky early James Brown instrumental right now, and there's certainly a lot more error than you'd get in sequenced computer music (or indeed generated music) but the force with which humans wrest the wonkiness toward an idea of groove is palpable. Same with Zeppelin's 'Communication Breakdown' (I'm doing a groove analysis project, ok?).
I can't program the AI to have intention, nor can you. If you do, hello Skynet, and it's time you started thinking about how to be nice to it, or else :)
Gigachad
7 hours ago
There was a meme along the lines of people will start including slurs in their messages to prove it wasn’t AI generated.
jay_kyburz
7 hours ago
A few months ago, I tried to get Gemini to help me write some criticism of something. I can't even remember what it was, but I wanted to clearly say something was wrong and bad.
Gemini just could not do it. It kept trying to avoid being explicitly negative. It wanted me to instead focus on the positive. I think it eventually just told me no, and that it would not do it.
Gigachad
7 hours ago
Yeah all the current tools have this particular brand of corporate speech that’s pretty easy to pick up on. Overly verbose, overly polite, very vague, non assertive, and non opinionated.
stahorn
5 hours ago
Next big thing: AI that writes as British football hooligans talk about the referee after a match where their team lost?
dijit
7 hours ago
I mean, it's not a meme..
I included a few more "private" words than I should have, and I even tried narrating things to prove I wasn't an AI.
https://blog.dijit.sh/gcp-the-only-good-cloud/
Not sure what else I should do, but it's pretty clear that it's not AI written (mostly because it's incoherent) even without grammar mistakes.
bloak
6 hours ago
I liked the "New to AWS / Experienced at AWS" cartoon.
redandblack
an hour ago
yesss. my thought too. All the variations of English should not be lost.
I enjoyed all the Belter dialogue in The Expanse.
1aleksa
7 hours ago
Whenever somebody misspells my name, I know it's legit haha
sseagull
6 hours ago
Way back when we had a landline and would get telemarketers, it was always a sign when the caller couldn’t pronounce our last name. It’s not even that uncommon a name, either
fzzzy
7 hours ago
Guess what? Now the computers will learn to do that so they can more convincingly pass a turing test.
faragon
7 hours ago
People could prompt for authenticity, adding subtle mistakes, etc. I hope that AI as a whole will help people write better, if they read back the text. It's a bit like the movie "The Substance": a "better" version of ourselves.
oneshtein
7 hours ago
> Write a response to this comment, make spelling and grammar mistakes.
yeah well sumtimes spellling and grammer erors just make thing hard two read. like i no wat u mean bout wanting two kno its a reel person, but i think cleear communication is still importint! ;)
hyggetrold
an hour ago
> The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die.
This has nearly always been true. "Manufacturing consent" is way older than any digital technology.
unshavedyak
an hour ago
Agreed. I also suspect we've grown to rely on the crutch of trust far too much. Faulty writing has existed for ages but now suddenly because the computer is the thing making it up we have an issue with it.
I guess it depends on scope. I'm imagining scientific or educational contexts, i.e. things we probably shouldn't have relied on blogs to facilitate, yet we did. For looking up some random "how do I build a widget?", yeah, AI will probably make it worse. For now. Then it'll massively improve to the point that it's not even worth asking how to build the widget.
The larger "scientific or education" is what i'm concerned about, and i think we'll need a new paradigm to validate. We've been getting attacked on this front for 12+ years, AI is only bringing this to light imo.
Trust will have to be earned and verified in this word-soup world. I just hope we find a way.
hyggetrold
an hour ago
IMHO AI tools will (or at least should!) fundamentally change the way the education system works. AI tools are - from a certain point of view - really just a scaled version of the expertise we can now put at our fingertips. Paradoxically, the more AI can do the "grunt work", the more we need folks to be educated on the higher-level constructs on which they are operating.
Some of the bigger issues you're raising I think have less to do with technology and more to do with how our economic system is currently structured. AI will be a tremendous accelerant, but are we sure we know where we're going?
jcd748
3 hours ago
Life is short and I like creating things. AI is not part of how I write, or code, or make pixel art, or compose. It's very important to me that whatever I make represents some sort of creative impulse or want, and is reflective of me as a person and my life and experiences to that point.
If other people want to hit enter, watch as reams of text are generated, and then slap their name on it, I can't stop them. But deep inside they know their creative lives are shallow and I'll never know the same.
onemoresoop
2 hours ago
> If other people want to hit enter, watch as reams of text are generated, and then slap their name on it,
The problem is this kind of content is flooding the internet. Before you know it becomes extremely hard to find non AI generated content...
jcd748
2 hours ago
I think we agree. I hate it, and I can't stop it, but also I definitely won't participate in it.
low_tech_love
2 hours ago
That’s super cool, and I hope you are right and I am wrong, and that artists/creators like you will still have a place in the future. My fear is that your work turns into some kind of artisanal fringe activity that is only accessible to 1% of people, like Ming vases or whatever.
CuriouslyC
30 minutes ago
A lot of writers using AI use it to create outlines of a chapter or scene then flesh it out by hand.
bryanrasmussen
7 hours ago
>If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out. And the growing consensus is "why shouldn't you?", there is no escape from that.
Are you sure you don't mean if you write regularly in one particular subclass of writing - like technical writing, documentation etc.? Do you think novel writing, poetry, film reviews etc. cannot keep up in the same way?
t-3
7 hours ago
I'm absolutely positive that the vast majority of fiction is or soon will be written by LLMs. Will it be high-quality? Will it be loved and remembered by generations to come? Probably not. Will it make money? Probably more than before on average, as the author's effort is reduced to writing outlines and prompts and editing the generated-in-seconds output, rather than spending months or years doing the writing themselves.
lokimedes
7 hours ago
I get two associations from your comment: one about how AI, being mainly used to interpolate within a corpus of prior knowledge, seems like entropy in a thermodynamic sense. The other, how this is like the Tower of Babel, but where distrust is sown by sameness rather than difference. In fact, relying on AI for coding and writing feels more like channeling demonic suggestions than anything else. No wonder we are becoming skeptical.
t43562
8 hours ago
It empowers people to create mountains of shit that they cannot distinguish from shit - so they are happy.
_heimdall
3 hours ago
> Now, I'm not going to criticize anyone that does it, like I said, you have to, that's it.
Why do you say people have to do it?
People absolutely can choose not to use LLMs and to instead write their own words and thoughts, just like developers can simply refuse to build LLM tools, whether it's because they have safety concerns or because they simply see "AI" in its current state as a doomed marketing play that is not worth wasting time and resources on. There will always be side effects to making those decisions, but it's well within everyone's right to make them.
DrillShopper
3 hours ago
> Why do you say people have to do it?
Gotta eat, yo
goatlover
35 minutes ago
Somehow people made enough to eat before LLMs became all the rage a couple years ago. I suspect people are still making enough to eat without having to use LLMs.
fennecfoxy
3 hours ago
Why does a human being behind any words change anything at all? Trust should be based on established facts/research and not species.
bloak
2 hours ago
A lot of communication isn't about "established facts/research"; it's about someone's experience. For example, if a human writes about their experience of using a product, perhaps a drug, or writes what they think about a book or a film, then I might be interested in reading that. When they write using their own words I get some insight into how they think and what sort of person they are. I have very little interest in reading an AI-generated text with similar "content".
goatlover
37 minutes ago
An LLM isn't even a species. I prefer communicating with other humans, unless I choose to interact with an LLM. But then I know that it's a text generator and not a person, even when I ask it to act like a person. The difference matters to most humans.
BeFlatXIII
an hour ago
> If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out.
Only if you're competing on volume.
vouaobrasil
6 hours ago
> If you write regularly and you're not using AI, you simply cannot keep up with the competition.
Wrong. I am a professional writer and I never use AI. I hate AI.
ChrisMarshallNY
7 hours ago
I don't use AI in my own blogging, but then, I don't particularly care whether or not someone reads my stuff (the ones that do, seem to like it).
I have used it, from time to time, to help polish stuff like marketing fluff for the App Store, but I'd never use it verbatim. I generally use it to polish a paragraph or sentence.
But AI hasn't suddenly injected untrustworthy prose into the world. We've been doing that, for hundreds of years.
notarobot123
2 hours ago
I have my reservations about AI but it's hard not to notice that LLMs are effectively a Gutenberg level event in the history of written communication. They mark a fundamental shift in our capacity to produce persuasive text.
The ability to speak the same language or understand cultural norms are no longer barriers to publishing pretty much anything. You don't have to understand a topic or the jargon of any given domain. You don't have to learn the expected style or conventions an author might normally use in that context. You just have to know how to write a good prompt.
There's bound to be a significant increase in the quantity as well as the quality of untrustworthy published text because of these new capacities to produce it. It's not the phenomenon but the scale of production that changes the game here.
layer8
6 hours ago
> marketing fluff for the App Store
If it’s fluff, why do you put it there? As an App Store user, I’m not interested in reading marketing fluff.
ChrisMarshallNY
5 hours ago
Because it’s required?
I’ve released over 20 apps, over the years, and have learned to add some basic stuff to each app.
Truth be told, it was really sort of a self-deprecating joke.
I’m not a marketer, so I don’t have the training to write the kind of stuff users expect on the Store, and could use all the help I can get.
Over the years, I’ve learned that owning my limitations can be even more important than knowing my strengths.
layer8
4 hours ago
My point was that as a user I expect substance, not fluff. Some app descriptions actually provide that, but many don’t.
ChrisMarshallNY
2 hours ago
Well, you can always check out my stuff, and see what you think. Easy to find.
osigurdson
4 hours ago
AI expansion: take a few bullet points and have ChatGPT expand it into several pages of text
AI compression: take pages of text and use ChatGPT to compress into a few bullet points
We need to stop being impressed with long documents.
fennecfoxy
2 hours ago
The foundations of our education systems are based on rote memorisation so I'd probably start there.
dijit
7 hours ago
Agreed, I feel like there's an inherent nobility in putting effort into something. If I took the time to write a book and have it proof-read and edited and so on: perhaps it's actually worth my time.
Lowering the bar to write books is "good" but increases the noise to signal ratio.
I'm not 100% certain how to give another proof-of-work, but what I've started doing is narrating my blog posts - though AI voices are getting better too.. :\
vasco
7 hours ago
> Agreed, I feel like there's an inherent nobility in putting effort into something. If I took the time to write a book and have it proof-read and edited and so on: perhaps it's actually worth my time.
Said the scribe upon hearing about the printing press.
dijit
7 hours ago
I'm not certain what statement you're implying, but yes, accessibility of bookwriting has definitely decreased the quality of books.
Even technical books like Hardcore Java: https://www.oreilly.com/library/view/hardcore-java/059600568... are god-awful, and even further away from the seminal texts on computer science that came before.
It does feel like authorship was once heralded in higher esteem than it deserves today.
Seems like people agree: https://www.reddit.com/r/books/comments/18cvy9e/rant_bestsel...
neta1337
6 hours ago
Why do you have to use it? I don’t get it. If you write your own book, you don’t compete with anyone. If anyone finished The Winds of Winter for G.R.R. Martin using AI, nobody would bat an eye, obviously, as we have already experienced how bad a soulless story is when it drifts too far away from what the author had built in his mind.
ks2048
7 hours ago
> If you write regularly and you're not using AI, you simply cannot keep up with the competition.
Is that true today? I guess it depends what kind of writing you are talking about, but I wouldn't think most successful writers today - from novelists to tech bloggers - rely that much on AI, but I don't know. Five years from now, could be a different story.
theshackleford
3 hours ago
Yes, it's true today, depending on what your writing is the foundation of.
It doesn't matter that my writing is more considered, more accurate, and of higher quality when my coworkers are all openly using AI to perform five times the work I am and producing outcomes that are "good enough", because good enough is quite enough for a larger majority than many likely realise.
amelius
3 hours ago
Funny thing is that people will also ask AI to __read__ stuff for them and summarize it.
So everything an AI writes will eventually be nothing more than some kind of internal representation.
munksbeer
6 hours ago
> but it's still depressing, to be honest.
Cheer up. Things usually get better, we just don't notice it because we're so consumed with extrapolating the negatives. Humans are funny like that.
vouaobrasil
6 hours ago
I actually disagree with that. People are so busy hoping things will get better, and creating little bubbles for themselves to hide away from what human beings as a whole are doing, that they don't realize things are getting worse. Technology constantly makes things worse. Cheering up is a good self-help strategy but not a good strategy if you want to contribute to making the world actually a better place.
munksbeer
3 hours ago
>Technology constantly makes things worse.
And it also makes things a lot better. Overall we lead better lives than people just 50 years ago, never mind centuries.
vouaobrasil
2 hours ago
No way. Life 50 years ago was better for MANY. Maybe that would be true for 200. But 50 years ago was the 70s. There were far fewer people, and the world was not starting to suffer from climate change. Tell your statement to any climate refugee, and ask them whether they'd like to live now or back then.
AND, we had fewer computers and life was not so hectic. YES, some things have gotten better, but on average? It's arguable.
vundercind
an hour ago
It’s fairly common for (at least) specific things to get worse and then never improve again.
wengo314
7 hours ago
i think the problem started when quantity became more important than quality.
you could totally compete on quality merit, but nowadays the volume of output (and frequency) is what is prioritized.
limit499karma
4 hours ago
I'll take your statement that your conclusions are based on a 'depressed mind' at face value, since it is so self-defeating and places so little faith in Human abilities. Your assumption that a person driven to write will "with a high degree of certainty" also mix up their work with a machine assistant can only be informed by your own self-assessment (after all, how could you possibly know the mindset of every creative human out there?)
My optimistic and enthusiastic view of AI's role in Human development is that it will create selection pressures that will release the dormant psychological abilities of the species. Undoubtedly, wide-spread appearance of Psi abilities will be featured in this adjustment of the human super-organism to technologies of its own making.
Machines can't do Psi.
yusufaytas
7 hours ago
I totally understand your frustration. We started writing our book long before (2022) AI became mainstream, and when we finally published it in May 2024, all we hear now is people asking if it's just AI-generated content. It's sad to see how quickly the conversation shifts away from the human touch in writing.
eleveriven
7 hours ago
I can imagine how disheartening that must be
FrustratedMonky
38 minutes ago
Maybe this will push people back to reading old paper books?
There could be resurgence in reading the classics, on paper, since we know they are not AI.
user
7 hours ago
sandworm101
7 hours ago
>> cannot trust anything that has been written in the past 2 years or so and up until the day that I die.
You never should have. Large amounts of work, even stuff by major authors, is ghostwritten. I was talking to someone about Taylor Swift recently. They thought that she wrote all her songs. I commented that one cannot really know that, that the entertainment industry is very good at generating seemingly "authentic" product at a rapid pace. My colleague looked at me like I had just killed a small animal. The idea that TS was "genuine" was a cornerstone of their fandom, and my suggestion had attacked that love. If you love music or film, don't dig too deep. It is all a factory. That AI is now part of that factory doesn't change much for me.
Maybe my opinion would change if I saw something AI-generated with even a hint of artistic relevance. I've seen cool pictures and passable prose, but nothing so far with actual meaning, nothing worthy of my time.
WalterBright
5 hours ago
Watch the movie "The Wrecking Crew", about how a group of studio musicians in the 1960s were responsible for the albums of quite a few diverse "bands". Many bands then had to learn to play their own songs so they could go on tour.
nyarlathotep_
2 hours ago
> You never should have. Large amounts of work, even stuff by major authors, is ghostwritten.
I'm reminded of 'Under The Silver Lake' with this reference. Strange film, but that plotline stuck with me.
greenie_beans
4 hours ago
i know a lot of writers who don't use ai. in fact, i can't think of any writers who use it, except a few literary fiction writers.
working theory: writers have taste and LLM writing style doesn't match the typical taste of a published writer.
InDubioProRubio
5 hours ago
Just don't be average and you're fine.
tim333
6 hours ago
I'm not sure it's always that hard to tell the AI stuff from the non-AI. Comments on HN and on Twitter from people you follow are pretty much non-AI; the same goes for people on YouTube, where you can see the actual human talking.
On the other hand, there's a lot on YouTube, for example, that is obviously AI - weird writing and speaking style - and I'll only watch those if I'm really interested in the subject matter and there aren't alternatives.
Maybe people will gravitate more to the stuff like PaulG or Elon Musk on twitter or HN and less to blog style content?
jshdhehe
6 hours ago
AI only helps writing insofar as checking/suggesting edits. Most people can write better than AI (more engaging). AI can't tell a human story or draw on real tacit experience.
So it is like saying my champagne bottle can't keep up with the tap water.
eleveriven
7 hours ago
Maybe, over time, there will also be a renewed appreciation for authenticity
paganel
7 hours ago
You kind of notice the stuff written with AI, it has a certain something that makes it detectable. Granted, stuff like the Reuters press reports might have already been written by AI, but I think that in that case it doesn’t really matter.
williamcotton
7 hours ago
Well we’re going to need some system of PKI that is tied to real identities. You can keep being anonymous if you want but I would prefer not and prefer to not interact with the anonymous, just like how I don’t want to interact with people wearing ski masks.
flir
6 hours ago
I doubt that's possible. I can always lend my identity to an AI.
The best you can hope for is not "a human wrote this text", it's "a human vouched for this text".
nottorp
7 hours ago
Why are you posting on this forum where the user's identity isn't verified by anyone then? :)
But the real problem is that having the poster's identity verified is no proof that their output is not coming straight from a LLM.
williamcotton
7 hours ago
I don’t really have a choice about interacting with the anonymous at this point.
It certainly will affect the reputation of people that are consistently publishing untruths.
nottorp
6 hours ago
> It certainly will affect the reputation of people that are consistently publishing untruths.
Oh? I thought there are a lot of very well identified people making a living from publishing untruths right now on all social media. How would PKI help, when they're already making it very clear who they are?
wickedsight
6 hours ago
With a friend, I created a website about a race track in the past two years. I definitely used AI to speed up some of writing. One thing I used it for was a track guide, describing every corner and how to drive it. It was surprisingly accurate, most of the time. The other times though, it would drive the track backwards, completely hallucinate the instructions or link corners that are in different parts of the track.
I spent a lot of time analyzing the track myself and fixed everything to the point that experienced drivers agreed with my description. If I hadn't done that, most visitors would probably still accept our guide as the truth, because they wouldn't know any better.
We know that not everyone cares about whether what they put on the internet is correct and AI allows those people to create content at an unprecedented pace. I fully agree with your sentiment.
uhtred
3 hours ago
To be honest I got sick of most new movies, TV shows, music even before AI so I will continue to consume media from pre 2010 until the day I die and will hope I don't get through it all.
Something happened around 2010 and it all got shit. I think everyone becoming massively online made global cultural output reduce in quality to meet the interests of most people and most people have terrible taste.
FrankyHollywood
7 hours ago
I have never read more bullshit in my life than during the corona pandemic, all written by humans. So you should never trust something you read; always question the source and its reasoning.
At the same time I use copilot on a daily basis, both for coding as well as the normal chat.
It is not perfect, but I'm at a point I trust AI more than the average human. And why shouldn't I? LLMs ingest and combine more knowledge than any human can ever do. An LLM is not a human brain but it's actually performing really well.
avereveard
8 hours ago
why do you trust things now? unless you recognize the author and have a chain of trust from that author's production to the content you're consuming, there already was no way to establish trust.
layer8
6 hours ago
For one, I trust authors more who are not too lazy to start sentences with upper case.
EGreg
5 hours ago
I have been predicting this from 2016
And I also predict that many responses to you will say “it was always that way, nothing changed”.
datavirtue
6 hours ago
It's either good or it isn't. It either tracks or it doesn't. No need to befuddle your thoughts over some perceived slight.
verisimi
6 hours ago
You're lucky. I consider it a possibility that older works (even ancient writings) are retrojected into the historical record.
farts_mckensy
3 hours ago
>But what I had never noticed until now is that knowing that a human being was behind the written words (however flawed they can be, and hopefully are) is crucial for me.
Everyone is going to have to get over that very soon, or they're going to start sounding like those old puritanical freaks who thought Elvis thrusting his hips around was the work of the devil.
goatlover
31 minutes ago
Those two things don't sound at all similar. We don't have to get over wanting to communicate with humans online.
advael
7 hours ago
In trying to write a book, it makes little sense to try to "compete" on speed or volume of output. There were already vast disparities in that among people who write, and people whose aim was to express themselves or contribute something of importance to people's lives, or the body of creative work in the world, have little reason to value quantity over quality. Probably if there's a significant correlation with volume of output, it's in earnings, and that seems both somewhat tenuous and like something that's addressable by changes in incentives, which seem necessary for a lot of things. Computers being able to do dumb stuff at massive scale should be viewed as finding vulnerabilities in the metrics this allows it to become trivial to game, and it's baffling whenever people say "Well clearly we're going to keep all our metrics the same and this will ruin everything." Of course, in cases where we are doing that, we should stop (For example, we should probably act to significantly curb price and wage discrimination, though that's more like a return to form of previous regulatory standards)
As a creator of any kind, I think that simply relying on LLMs to expand your output via straightforward uses of widely available tools is inevitably going to lead to regression to the mean in terms of creativity. I'm open to the idea, however, that there could be more creative uses of the things that some people will bother to do. Feedback loops they can create that somehow don't stifle their own creativity in favor of mimicking a statistical model, ways of incorporating their own ingredients into these food processors of information. I don't see a ton of finished work that seems to do this, but I see hints that some people are thinking this way, and they might come up with some cool stuff. It's a relatively newly adopted technology, and computer-generated art of various kinds usually separates into "efficiency" (which reads as low quality) in mimicking existing forms, and new forms which are uniquely possible with the new technology. I think plenty of people are just going to keep writing without significant input from LLMs, because while writer's block is a famous ailment, many writers are not primarily limited by their speed in producing more words. Like if you count comments on various sites and discussions with other people, I write thousands of words unassisted most days
This kind of gets to the crux of why these things are useful in some contexts, but really not up to what's being claimed about them. The most compelling use cases I've seen boil down to fitting some information into a format that's more contextually appropriate. That can be great for highly structured formatting requirements and for situations already subject to high protocol of some kind, so long as some error is tolerated, and for things where conveying your ideas with high fidelity, emphasizing your own narrative voice or nuanced thoughts on a subject, or standing behind the factual claims made by the piece are not as important. As much as their more strident proponents want to claim that humans learn things by aggregating and remixing them in the same sense these models do, that reads, at best, like the same wishful thinking about technology that led people to believe brains should work like clockwork or transistors at various other points in time. More often it seems to be trotted out as the kind of bad-faith analogy tech lawyers use when trying to claim that the use of [exciting new computer thing] means something they are doing can't be a crime
So basically, I think rumors of the death of hand-written prose are, at least at present, greatly exaggerated, though I share the concern that it's going to be much harder to filter out spam from the genuine article; what this will really ruin is most automated search techniques. The comparison to "low-background steel" seems apt, but analogies about how "people don't handwash their clothes as much anymore" don't really apply to things like books
dustingetz
7 hours ago
> If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out.
What? No! Content volume only matters in stupid contests like VC app marketing grifts or political disinformation ops, where the content isn't even meant to be read; it's an excuse for a headline. I personally write all my startup's marketing content, the quality is exquisite, and because of this our brand is becoming a juggernaut
ozim
7 hours ago
What kind of silliness is this?
AI-generated crap is one thing. But human-generated crap is out there too: just because a human wrote something doesn't make it good.
I had a friend who thought that if it is written in a book, it must be true. Well, NO!
There was exactly the same sentiment about stuff on the internet, and it's still the same sentiment about Wikipedia: "it is just some kids writing bs, get a paper book or a real encyclopedia to look stuff up".
I'm not defending gen AI, but you still have to build useful proxy measures for what to read and what not to. It has always taken effort, and nothing is going to substitute for critical thinking and for putting in the work to separate the wheat from the chaff.
shprd
7 hours ago
No one claimed humans are perfect. But gen AI is a force multiplier for every problem we already had to deal with. It's just a completely different scale. Your brain is about to be DDOSed by junk content.
Of course, gen AI is just a tool that can be used for good or bad, but spam, targeted misinformation campaigns, and garbage content in general are the areas that will be amplified most, because producing them became so low-effort and the producers don't care about review, double-checking, etc. They can completely automate their process toward whatever goal they have in mind. So where sensible humans enjoy 10x productivity, these spam farms will be enjoying 10000x scale.
So I don't think downplaying it and acting like nothing changed is the brightest idea. I hope you see now how this is a completely different game, one that's already here but that we aren't prepared for yet, certainly not with the traditional tools we have.
flir
6 hours ago
> Your brain is about to be DDOSed by junk content.
It's not the best analogy because there's already more junk out there than can fit through the limited bandwidth available to my brain, and yet I'm still (vaguely) functional.
So how do I avoid the junk now? Rough and ready trust metrics, I guess. Which of those will still work when the spam's 10x more human?
I think the recommendations of friends will still work, and we'll increasingly retreat to walled gardens where obvious spammers (of both the digital and human variety) can be booted out. I'm still on facebook, but I'm only interested in a few well-moderated groups. The main timeline is dead to me. Those moderators are my content curators for facebook content.
ozim
6 hours ago
That is something I agree with.
One cannot be DDOSed with junk when one is not actively trying to stuff as much junk as possible into one's head.
shprd
6 hours ago
> One cannot be DDOSed with junk when not actively trying to stuff as much junk into ones head.
The junk gets thrown at you in mass volume, at low cost, without your permission. What are you going to do? Keep dodging it? Waste your time evaluating every piece of information you come across?
If one of the results on the first page of a search deviates from the others, it's easy to notice. But if all of them agree, they become the truth. Of course your first thought is to say the search engines are shit, or some other off-hand remark, but this example is just to illustrate how volume alone can change things. The medium doesn't matter; these things can come in many forms: book reviews, posts on social media, ads, false product descriptions on Amazon, etc.
Of course, these things exist today, but the scale is different and the customization is different. It's like the difference between firearms and drones. If you think it's the same old game and you can defend against the new threat using your old arsenal, I admire your confidence, but you're in for a surprise.
shprd
6 hours ago
So you're basically sheltering yourself and seeking human-curated content? Good for you; I follow a similar strategy. But how do you propose we apply this solution for the masses in today's digital age? Or are you just saying "to each their own"?
Sadly, you seem not to be looking further than your own nose. We are not talking about just you and me here. Less tech-literate people are the ones at a disadvantage, and they need protection the most.
flir
4 hours ago
> How do you propose we apply this solution for the masses in today's digital age?
The social media algorithms are the content curators for the technically illiterate.
Ok, they suck and they're actively user-hostile, but they sucked before AI. Maybe (maybe!) AI's the straw that breaks the camel's back, and people leave those algorithm-curated spaces in droves. I hope that, one way and another, they'll drift back towards human-curated spaces. Maybe without even realizing it.
dns_snek
7 hours ago
> nothing is going substitute critical thinking and putting in effort to separate wheat from the chaff.
The problem is that wheat:chaff ratio used to be 1:100, and soon it's going to become 1:100 million. I think you're severely underestimating the amount of effort it's going to take to find real information in the sea of AI generated content.
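To put that ratio shift in concrete terms, here's a hedged back-of-the-envelope sketch (the two ratios are the ones claimed above; modelling each item as independently genuine is my simplifying assumption, not anything measured):

```python
def expected_items_checked(genuine_ratio):
    """Expected number of items you inspect before finding one genuine
    item, assuming each item is independently genuine with probability
    genuine_ratio (mean of a geometric distribution, 1/p)."""
    return 1 / genuine_ratio

# Wheat:chaff of 1:100 -> roughly 1 genuine item in 101
before = expected_items_checked(1 / 101)
# Wheat:chaff of 1:100 million -> roughly 1 in 100_000_001
after = expected_items_checked(1 / 100_000_001)

print(round(before))  # 101 items checked per genuine find
print(round(after))   # 100000001 items checked per genuine find
```

Under this toy model the per-item effort is unchanged; only the haystack grows, by a factor of about a million.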
tempfile
7 hours ago
> you have to make useful proxy measures what to read and what not
yes, obviously. But AI slop makes those proxy measures significantly more complicated. Critical thinking is not magic - it is still a guess, and people are obviously worse at distinguishing AI bullshit from human bullshit.
grecy
8 hours ago
Eh, like everything in life you can choose what you spend your time on and what you ignore.
There have always been human writers I don’t waste my time on, and now there are AI writers in the same category.
I don’t care. I will just do what I want with my life and use my time and energy on things I enjoy and find useful.
GrumpyNl
7 hours ago
response from AI on this: I completely understand where you're coming from. The increasing reliance on AI in writing does raise important questions about authenticity and connection. There’s something uniquely human in knowing that the words you're reading come from someone’s personal thoughts, experiences, and emotions—even if flawed. AI-generated content, while efficient and often well-written, lacks that deeper layer of humanity, the imperfections, and the creative struggle that gives writing its soul.
It’s easy to feel disillusioned when you know AI is shaping so much of the content around us. Writing used to be a deeply personal exchange, but now, it can feel mechanical, like it’s losing its essence. The pressure to keep up with AI can be overwhelming for human writers, leading to this shift in content creation.
At the same time, it’s worth considering that the human element still exists and will always matter—whether in long-form journalism, creative fiction, or even personal blogs. There are people out there who write for the love of it, for the connection it fosters, and for the need to express something uniquely theirs. While the presence of AI is unavoidable, the appreciation for genuine human insight and emotion will never go away.
Maybe the answer lies in seeking out and cherishing those authentic voices. While AI-generated writing will continue to grow, the hunger for human storytelling and connection will persist too. It’s about finding balance in this new reality and, when necessary, looking back to the richness of past writings, as you mentioned. While it may seem like a loss in some ways, it could also be a call to be more intentional in what we read and who we trust to deliver those words.