I Am Tired of AI

682 points, posted 9 hours ago
by Liriel

668 Comments

low_tech_love

8 hours ago

The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die. It's not so much that I think people have used AI, but that I know they have with a high degree of certainty, and this certainty is converging to 100%, simply because there is no way it will not. If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out. And the growing consensus is "why shouldn't you?" - there is no escape from that.

Now, I'm not going to criticize anyone that does it, like I said, you have to, that's it. But what I had never noticed until now is that knowing that a human being was behind the written words (however flawed they can be, and hopefully are) is crucial for me. This has completely destroyed my interest in reading any new things. I guess I'm lucky that we have produced so much writing in the past century or so and I'll never run out of stuff to read, but it's still depressing, to be honest.

Roark66

5 hours ago

>The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die

Do you think AI has changed that in any way? I remember the sea of excrement overtaking genuine human-written content on the Internet around the mid-2010s. It was around that time that Google stopped pretending to be a search company and focused on their primary business of advertising.

Before, at least they were trying to downrank all the crap "word aggregators". After, they stopped caring at all.

AI gives us even better tools for page ranking. Detection of AI-generated content is not that bad.
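
(For a sense of what that detection can look like: one common heuristic scores text by its perplexity under a language model, since machine-generated text tends to score unusually low. A minimal sketch, purely illustrative - real detectors combine many such signals:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    model = GPT2LMHeadModel.from_pretrained("gpt2")
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # Low perplexity = the model finds the text unsurprising,
        # which is one (weak) signal that a model produced it.
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean cross-entropy
        return float(torch.exp(loss))

Nothing here is definitive on its own, but at scale it's workable.)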

So why don't we have "a new Google" emerge? Simple: because of the monopolistic practices Google used to make the barrier to entry huge. First, 99% of the content people want to search for is behind a login wall (Facebook, Instagram, Twitter, YouTube); second, almost all CDNs now implement "verify you are human" by default; third, no one links to other sites. Ever! These 3 things mean a new Google is essentially impossible. Even DuckDuckGo has thrown in the towel and subscribed to Bing results.

It has nothing to do with AI, and everything to do with Google. In fact AI might give us the tools to better fight Google.

TheOtherHobbes

3 hours ago

Google didn't change it, it embodied it. The problem isn't AI, it's the pervasive culture of PR and advertising which appeared in the 50s and eventually consumed its host.

Western industrial culture was based on substance - getting real shit done. There was always a lot of scammery around it, but the bedrock goal was to make physical things happen - build things, invent things, deliver things, innovate.

PR and ad culture was there to support that. The goal was to change values and behaviours to get people to Buy More Stuff. OK.

Then around the time the Internet arrived, industry was off-shored, and the culture started to become one of appearance and performance, not of substance and action.

SEO, adtech, social media, web framework soup, management fads - they're all about impression management and popularity games, not about underlying fundamentals.

This is very obvious on social media in the arts. The qualification for a creative career used to be substantial talent and ability. Now there are thousands of people making careers out of performing the lifestyle of being a creative person. Their ability to do the basics - draw, write, compose - is very limited. Worse, they lack the ability to imagine anything fresh or original - which is where the real substance is in art.

Worse than that, they don't know what they don't know, because they've been trained to be superficial in a superficial culture.

It's just as bad in engineering, where it has become more important to create the illusion of work being done than to do the work. (Looking at you, Boeing. And also Agile...)

You literally make more money doing this. A lot more.

So AI isn't really a tool for creating substance. It's a tool for automating impression management. You can create the impression of getting a lot of work done. Or the impression of a well-written cover letter. Or of a genre novel, techno track, whatever.

AI might one day be a tool for creating substance. But at the moment it's reflecting and enabling a Potemkin busy-culture of recycled facades and appearances that has almost nothing real behind it.

Unfortunately it's quite good at that.

But the problem is the culture, not the technology. And it's been a problem for a long time.

techdmn

2 hours ago

Thank you, you've stated this all very clearly. I've been thinking about this in terms of "doing work", where you care about the results, and "performing work", where you care about how you are evaluated. I know someone who works in a lab and pointed out that some of the equipment being used was out of spec and under-serviced to the point that it was essentially a random number generator. Caring about this is "doing work". However, pointing it out made that person the enemy of the greater cohort that was "performing work". The results were not important to them; their metrics for units of work completed were. I see this pattern frequently. And it's hard to say those "performing work" are wrong. "Performing" is rewarded, "doing" is punished - perhaps right to the top, as many companies are involved in a public performance designed to affect the short-term stock price.

rjbwork

an hour ago

Yeah. It's like our entire society has been turned into a Goodhart's Law based simulacrum of a productive society.

I mean, here it's late morning and I'm commenting on hacker news. And getting paid for it.

1dom

2 hours ago

I like this take on modern tech motivations.

The thing that I struggle with is that I agree with it, but I also get a lot of value from using AI to make me more productive - to me, it feels like it lets me focus on producing substance and actions, freeing me up from having to do some tedious things in some tedious ways. Without getting into the debate about whether it's productive overall, there are certain tasks at which it feels irrefutably fast and effective (e.g. writing tests).

I do agree about the missing substance in modern generative AI: everyone notices when it's producing things in that uncanny valley, and if no human is there to edit that, it makes people uncomfortable.

The only way I can reconcile the almost existential discomfort of AI against my actual day-to-day generally-positive experience with AI is to accept that AI in itself isn't the problem. Ultimately, it is an info tool, and human nature makes people spam garbage for clicks with it.

People will do the equivalent of spam garbage for clicks with any new modern thing, unfortunately.

Getting the most out of a society's latest information has probably always been a cat-and-mouse game of trying to find the areas where the spam-garbage-for-clicks people haven't outnumbered the use-AI-to-facilitate-substance people. Like here, hopefully.

skydhash

15 minutes ago

Just one nitpick. The thing about tests is that they're repetitive enough to be automated (in a deterministic way) or abstracted into a framework. You don't need an AI to generate them.
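
For example, a deterministic table of cases in pytest covers the repetitive part without any generation (a minimal sketch; slugify here is just a hypothetical stand-in for the function under test):

    import pytest

    def slugify(s: str) -> str:
        # Hypothetical stand-in function under test.
        return "-".join(s.lower().split())

    # One parametrized table replaces a pile of near-identical
    # test functions - deterministically, no AI needed.
    @pytest.mark.parametrize("raw, expected", [
        ("Hello World", "hello-world"),
        ("  extra   spaces ", "extra-spaces"),
        ("already-slugged", "already-slugged"),
    ])
    def test_slugify(raw, expected):
        assert slugify(raw) == expected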

deephoneybear

15 minutes ago

Echoing other comments in gratitude for this very clear articulation of feelings I share, but have not manifested so well. Just wanted to add two connected opinions that round out this view.

1) This consuming of the host is only possible because the host has grown so strong - that is, the modern global industrial economy is so efficient. The doing-stuff side of the equation is truly amazing and getting better (some real work gets done, either by accident or by those who have not succumbed to PR and ad culture), and even this drop of "real work" produces enough material wealth to support (at least a lot of) humanity. We really do live in a post-scarcity world from a production perspective; we just have profound distribution and allocation problems.

2) Radical wealth inequality profoundly exacerbates the problem of PR and ad culture. If everyone has some wealth, doing things that help many people live more comfortably is a great way to become wealthy. But if very few people have wealth, then doing a venture-capital FOMO hustle on the wealthy is anyone's best ROI. Radical wealth inequality eventually breaks all the good aspects of capitalist/market economies.

rich_sasha

5 hours ago

Some great grand ancestor of mine was a civil servant, a great achievement given his peasant background. The single skill that enabled it was the knowledge of calligraphy. He went to school and wrote nicely and that was sufficient.

The flip side was, calligraphy was sufficient evidence both of his education, to whoever hired him, and, to the recipient of a document, of its official nature. Calligraphy itself of course didn't make him efficient or smart or fair.

That's long gone of course, but we had similar heuristics. I am reminded of the Reddit story about an AI-generated mushroom atlas that had factual errors and led to someone getting poisoned. We can no longer assume that a book is legit simply because it looks legit. The story of course is from Reddit, so probably untrue, but it doesn't matter - it totally could be true.

LLMs are fantastic at breaking our heuristics as to what is and isn't legit, but not as good at being right.

matwood

5 hours ago

> We can no longer assume that a book is legit simply because it looks legit.

The problem is that this has been an issue for a long time. My first interactions with the internet in the 90s came along with the warning "don't automatically trust what you read on the internet".

I was speaking to a librarian the other day who teaches incoming freshmen how to use LLMs. What was shocking to me is that the librarian said a majority of the kids trust what the computer says by default. Not just LLMs, but generally what they read. That's such a huge shift from my generation. Maybe LLM education will shift people back toward skepticism - unlikely, but I can hope.

honzabe

2 hours ago

> I was speaking to a librarian the other day who teaches incoming freshmen how to use LLMs. What was shocking to me is that the librarian said a majority of the kids trust what the computer says by default. Not just LLMs, but generally what they read. That's such a huge shift from my generation.

I think that previous generations were not any different. For most people, trusting is the default mode, and you need to learn to distrust a source. I know many people who still have not learned that about the internet in general. These are often older people. They believe insane things just because there exists a nice-looking website claiming that thing.

mrweasel

3 hours ago

One of the issues today is the volume of content produced, and the fact that journalism and professional writing are dying. LLMs produce large amounts of "good enough" quality content cheaply enough to make a profit.

In the 90s we could reasonably trust that the major news sites and corporate websites were accurate, while random forums required a bit more critical reading. Today even formerly trusted sites may be using LLMs to generate content, along with automatic translations.

I wouldn't necessarily put the blame on LLMs; they just make it easier. The trolls and spammers were always there, now they just have a more powerful tool. The commercial sites now have a tool they don't understand, which they apply liberally because it reduces cost, or their staff use it to get out of work, keep up with deadlines, or cover up incompetence. So, not the fault of the LLMs, but their use is worsening existing trends.

llm_trw

4 hours ago

>That's long gone of course, but we had similar heuristics.

To quote someone about this:

>>All that is solid melts into air, all that is holy is profaned, and man is at last compelled to face with sober senses his real conditions of life.

A book looking legit, a paper being peer reviewed, an expert saying something, none of those things were _ever_ good heuristics. It's just that it was the done thing. Now we have to face the fact that our heuristics are obviously broken and we have to start thinking about every topic.

To quote someone else about this:

>>Most people would rather die than think.

Which explains neatly the politics of the last 10 years.

hprotagonist

4 hours ago

> To quote someone about this: >>All that is solid melts into air, all that is holy is profaned, and man is at last compelled to face with sober senses his real conditions of life.

So, same as it ever was?

Smoke, nothing but smoke. [That’s what the Quester says.] There’s nothing to anything—it’s all smoke. What’s there to show for a lifetime of work, a lifetime of working your fingers to the bone? One generation goes its way, the next one arrives, but nothing changes—it’s business as usual for old planet earth. The sun comes up and the sun goes down, then does it again, and again—the same old round. The wind blows south, the wind blows north. Around and around and around it blows, blowing this way, then that—the whirling, erratic wind. All the rivers flow into the sea, but the sea never fills up. The rivers keep flowing to the same old place, and then start all over and do it again. Everything’s boring, utterly boring— no one can find any meaning in it. Boring to the eye, boring to the ear. What was will be again, what happened will happen again. There’s nothing new on this earth. Year after year it’s the same old thing. Does someone call out, “Hey, this is new”? Don’t get excited—it’s the same old story. Nobody remembers what happened yesterday. And the things that will happen tomorrow? Nobody’ll remember them either. Don’t count on being remembered.

c. 450 BC

wwweston

2 hours ago

Could be my KJV upbringing talking, but personally I think there's an informative quality to calling it "vanity" over smoke.

And there's more reasons not to simply compare the modern challenges of image and media with the ancient grappling with impermanence. Tech may only truly change the human condition rarely, but it frequently magnifies some aspect of it, sometimes so much that the quantitative change becomes a qualitative one.

And in this case, what we're talking about isn't just impermanence and mortality and meaning as the preacher/quester is. We'd be lucky if it's business as usual for old planet earth, but we've managed to magnify our ability to impact our environment with tech to the point where winds, rivers, seas, and other things may well change drastically. And as for "smoke", it's one thing if we're dust in the wind, but when we're dust we can trust, that enables continuity and cooperation. There's always been reasons for distrust, but with media scale, the liabilities are magnified, and now we've automated some of them.

The realities of human nature that are the seeds of the human condition are old. But some of the technical and social machinery we have made to magnify things is new, and we can and will see new problems.

hprotagonist

2 hours ago

'הבל (hevel)' has the primary sense of vapor, or mist -- a transient thing, not a meaningless or purposeless one.

llm_trw

3 hours ago

One is a complaint that everything is constantly changing, the other that nothing ever changes. I don't think you could misunderstand what either is trying to say harder if you tried.

hprotagonist

3 hours ago

"everything is constantly changing!" is the thing that never changes.

llm_trw

3 hours ago

You sound like a poorly trained gpt2 model.

failbuffer

2 hours ago

Heuristics don't have to be perfect to be useful, so long as they improve the efficacy of our attention. Once that breaks down, society must follow, because thinking about every topic is intractable.

ziml77

3 hours ago

The mushroom thing is almost certainly true. There are tons of trash AI-generated foraging books being published on Amazon. Atomic Shrimp has a video on it.

sevensor

4 hours ago

> Some great grand ancestor of mine was a civil servant, a great achievement given his peasant background. The single skill that enabled it was the knowledge of calligraphy. He went to school and wrote nicely and that was sufficient.

Similar story! Family lore has it that he was from a farming family of modest means, but he was hired to write insurance policies because of his beautiful handwriting, and this was a big step up in the world.

newswasboring

4 hours ago

> The story of course is from Reddit, so probably untrue, but it doesn't matter - it totally could be true.

What?! Someone just made something up and then got mad at it. This is especially weird when you even acknowledge it's a made-up story. If we start evaluating new things like this, nothing will ever progress.

bad_user

5 hours ago

You're attributing too much to Google.

Bots are now blocked because they've been abusive. When you host content on the internet, it's not fun to have bots bring your server down or inflate your bandwidth bill. Google's bot is actually quite well-behaved. The other problem has been the recent trend in AI, and I can understand blockers being put in place, since AI is essentially plagiarizing content without attribution. But I'd blame OpenAI more at this point.

I also don't think you can blame Google for the centralization behind closed gardens. Or for why people no longer link to other websites. That's ridiculous.

And you should credit them with the fact that the web is still alive.

dennis_jeeves2

4 hours ago

>I remember the sea of excrement overtaking genuine human-written content on the Internet around the mid-2010s.

Things have not changed much, really. This has been true since the dawn of mankind (and womankind, from mankind's rib, of course), even before writing was invented, in the form of gossip.

The internet/AI now carries on the torch of our ancestral inner calling, lol.

ninetyninenine

4 hours ago

> I remember the sea of excrement overtaking genuine human-written content on the Internet around the mid-2010s.

I mean the AI is trained and modeled on this excrement. It makes sense. As much as people think AI content is raw garbage… they don’t realize that they are staring into a mirror.

elnasca2

8 hours ago

What fascinates me about your comment is that you are expressing that you trusted what you read before. For me, LLMs don't change anything. I already questioned the information before and continue to do so.

Why do you think that you could trust what you read before? Is it now harder for you to distinguish false information, and if so, why?

nicce

7 hours ago

In the past, you had to put a lot of effort into producing a text which seemed to be high quality, especially when you knew nothing about the subject. By the look of the text and the usage of the words, you could tell how professional the writer was, and you had some confidence that the writer knew something about the subject. Now, that is completely removed. There is no easy filter anymore.

While professional-looking text could already have been wrong, the likelihood was smaller, since you usually needed to know at least something in order to write convincing text.

ookdatnog

7 hours ago

Writing a text of decent quality used to constitute proof of work. This is now no longer the case, and we haven't adapted to this assumption becoming invalid.

For example, when applying to a job, your cover letter used to count as proof of work. The contents are less important than the fact that you put some amount of effort in it, enough to prove that you care about this specific vacancy. Now this basic assumption has evaporated, and job searching has become a meaningless two-way spam war, where having your AI-generated application selected from hundreds or thousands of other AI-generated applications is little more than a lottery.

bitexploder

7 hours ago

This. I am very picky about how I use ML still, but it is unsurpassed as a virtual editor. It can clean up grammar and rephrase things in a very light way, and it gives my prose the polish I want. The thing is, I am a very decent writer. I wrote professionally for 18 years as part of my job, delivering high-quality reports as my work product. So it really helps that I know exactly what "good" looks like by my standards. ML can clean things up so much faster than I can, and I am confident my writing is still organic; it can fix up small issues, find mistakes, etc. very quickly. A word change here or there, some punctuation - that is normal editing. It is genuinely good at light rephrasing as well, if you have some idea of what intent you want.

When it becomes obvious, though, is when people let the LLM do the writing for them. The job search bit is definitely rough. Referrals, references, and actual accomplishments may become even more important.

gtirloni

6 hours ago

As usual, LLMs are an excellent tool when you already have a decent understanding of the field you're using them in. Which is not the case for people posting on social media or creating their first programs. That's where the dullness and noise come from.

The noise floor has been raised 100x by LLMs. It was already bad before, but they've accelerated the trend.

So, yes, we should never have been trusting anything online, but before LLMs we could rely on our brains to quickly identify the bad. Nowadays, it's exhausting. Maybe we need an LLM trained on spotting LLMs.

This month, I, with decades of experience, used Claude Dev as an experiment to create a small automation tool. After countless manual fixes, it finally worked and I was happy. Until I gave the whole thing a decent look again and realized what a piece of garbage I had created. It's exhausting to be on the lookout for these situations. I prefer to think things through myself; it's a more rewarding experience with better end results anyway.

danielbln

2 hours ago

Not to sound too dismissive, but there is a distinct learning curve when it comes to using models like Claude for code assist. Not just the intuition when the model goes off the rails, but also what to provide it in the context, how and what to ask for etc. Trying it once and dismissing it is maybe not the best experimental setup.

I've been using Zed recently with its LLM integration to assist me in my development, and it's been absolutely wonderful, but one must control tightly what to present to the model and what to ask for and how.

gtirloni

2 hours ago

It's not my first time using LLMs and you're assuming too much.

iszomer

5 hours ago

LLMs are a great onramp to filling in knowledge that may have been lost to age or updated to its modern classification. For example, I didn't know Hokkien and Hakka are distinct linguistic branches within the Sino-Tibetan language family, and that warrants more (personal) research into the subject. And all this time, without the internet, we often just colloquially called it Taiwanese.

aguaviva

4 hours ago

How is this considered "lost" knowledge when there are (large) Wikipedia pages about those languages (which is of course what the LLM is cribbing from)?

"Human-curated encyclopedias are a great onramp to filling in knowledge gaps" - that I can go with.

nicce

4 hours ago

It is lost in the sense that you had no idea such a possibility existed and did not know to search for it in the first place, while I believe that in this case the LLM brought it up as a side note.

aguaviva

2 hours ago

Such fortuitous stumblings happen all the time without LLMs (and in regular libraries, for those brave enough to use them). It's just the natural byproduct of doing any kind of research.

skydhash

9 minutes ago

Most of my knowledge comes from physical encyclopedias and downloaded Wikipedia text dumps (the internet was not readily available). You search for one thing and just explore by clicking.

dotnet00

4 hours ago

Yeah, this is how I use it too. I tend to be a very dry writer, which isn't unusual in science, but lately I've taken to writing, then asking an LLM to suggest improvements.

I know not to trust it to be as precise as good research papers need to be, so I don't take its output verbatim; it usually helps me reorder points or use different transitions, which makes the material much more enjoyable to read. I also find it useful for helping come up with an opening sentence from which to start writing a section.

bitexploder

2 hours ago

Active voice is difficult in technical and scientific writing for sure :)

rasulkireev

38 minutes ago

Great opportunity to get ahead of all the lazy people who use AI for a cover letter. Do a video! Sure, AI will be able to do that soon, but then we (not lazy people, who care) will come up with something even more personal!

roenxi

7 hours ago

> While professional-looking text could already have been wrong, the likelihood was smaller...

I don't criticise you for it, because that strategy is both rational and popular. But you never checked the accuracy of your information before so you have no way of telling if it has gotten more or less accurate with the advent of AI. You were testing for whether someone of high social intelligence wanted you to believe what they said rather than if what they said was true.

SoftTalker

an hour ago

In the past, with a printed book or journal article, it was safe to assume that an editor had been involved, to some degree or another challenging claimed facts, and that the publisher had an interest in maintaining their reputation by not publishing poorly researched or outright false information. You would also have reviewers reading and reacting to the book in many cases.

All of that is gone now. You have LLMs spitting their excrement directly onto the web without so much as a human giving it a once-over.

dietr1ch

7 hours ago

I guess the complaint is about losing this proxy to gain some assurance for little cost. We humans are great at figuring out the least amount of work that's good enough.

Now we'll need to be fully diligent, which means more work, and also there'll be way more things to review.

wlesieutre

4 hours ago

There’s not enough time in the day to go on a full bore research project about every sentence I read, so it’s not physically possible to be “fully diligent.”

The best we can hope for is prioritizing which things are worth checking. But even that gets harder because you go looking for sources and now those are increasingly likely to be LLM spam.

roenxi

7 hours ago

I'd argue people clearly don't care about the truth at all - they care about being part of a group and that is where it ends. It shows up in things like critical thinking being a difficult skill acquired slowly vs social proof which humans just do by reflex. Makes a lot of sense, if there are 10 of us and 1 of you it doesn't matter how smartypants you may be when the mob forms.

AI does indeed threaten people's ability to identify whether they are reading work by a high status human and what the group consensus is - and that is a real problem for most people. But it has no bearing on how correct information was in the past vs will be in the future. Groups are smart but they get a lot of stuff wrong in strategic ways (it is almost a truism that no group ever identifies itself or its pursuit of its own interests as the problem).

Jensson

7 hours ago

> I'd argue people clearly don't care about the truth at all

Plenty of people care about the truth in order to get advantages over the ignorant. Beliefs aren't just about fitting in a group, they are about getting advantages and making your life better, if you know the truth you can make much better decisions than those who are ignorant.

Similarly plenty of people try to hide the truth in order to keep people ignorant so they can be exploited.

rendall

6 hours ago

> if you know the truth you can make much better decisions than those who are ignorant

There are some fallacious hidden assumptions there. One is that "knowing the truth" equates to better life outcomes. I'd argue that history shows, more often than not, that what one knows to be true had best align with the prevailing consensus if comfort, prosperity, and peace are one's goals, even if that consensus is flat out wrong. The list is long of lone geniuses who challenged the consensus and suffered. Galileo, Turing, Einstein, Mendel, van Gogh, Darwin, Lovelace, Boltzmann, Gödel, Faraday, Kant, Poe, Thoreau, Bohr, Tesla, Kepler, Copernicus, et al. all suffered isolation and marginalization of some degree during their lifetimes, some unrecognized until after their death, many living in poverty, many actively tormented. I can't see how Turing, for instance, had a better life than the ignorant who persecuted him, despite his excellent grasp of truth.

Jensson

6 hours ago

You are thinking too big. Most of the time the truth is whether a piece of food is spoiled or not, etc., and that greatly affects your quality of life. Companies would love to keep you ignorant here so they can sell you literal shit, so there are powerful forces wanting to keep you ignorant, and today those forces have way stronger tools than ever before.

roenxi

6 hours ago

Socrates is also a big name. Never forget.

danmaz74

3 hours ago

You're implying that there is an absolute Truth and that people only need to do [what?] to check if something is True. But that's not True. We only have models of how reality works, and every model is wrong - but some are useful.

When dealing with almost everything you do day by day, you have to rely on the credibility of the source of the information you have. Otherwise how could you know that the can of tuna you're going to eat is actually tuna and not some poisonous fish? How do you know that you should do what your doctor told you? Etc. etc.

svieira

an hour ago

> You're implying that there is an absolute Truth and that people only need to do [what?] to check if something is True. But that's not True. We only have models of how reality works, and every model is wrong - but some are useful.

But isn't your third sentence True?

quietbritishjim

7 hours ago

How do you "check the accuracy of your information" if all the other reliable-sounding sources could also be AI generated junk? If it's something in computing, like whether something compiles, you can sometimes literally check for yourself, but most things you read about are not like that.

glenstein

6 hours ago

>But you never checked the accuracy of your information before so

They didn't say that and that's not a fair or warranted extrapolation.

They're talking about a heuristic that we all use, as a shorthand proxy that doesn't replace but can help steer the initial navigation in the selection of reliable sources, which can be complemented with fact checking (see the steelmanning I did there?). I don't think someone using that heuristic can be interpreted as tantamount to completely ignoring facts, which is a ridiculous extrapolation.

I also think it misrepresents the lay of the land, which is that in the universe of nonfiction writing, there isn't a fire hose of facts and falsehoods indistinguishable in tone. I think there is in fact a reasonably high correlation between the discernible tone of impersonal, professional writing and credible information, which, again (since this seems to be a difficult sticking point), doesn't mean that the tone substitutes for the facts, which still need to be verified.

The idea that information and misinformation are tonally indistinguishable is, in my experience, only something believed by post-truth "do your own research" people who think there are equally valid facts in all directions.

There's not, for instance, a Science Daily of equally sciency sounding misinformation. There's not a second different IPCC that publishes a report with thousands of citations which are all wrong, etc. Misinformation is out there but it's not symmetrical, and understanding that it's not symmetrical is an important aspect of information literacy.

This is important because it goes to their point, which is that something has changed with the advent of LLMs. That symmetry may be coming, and it's precisely the fact that it wasn't there before that is pivotal.

cutemonster

7 hours ago

Interesting points! It doesn't sound impossible with an AI that's wrong less often than an average human author (if the AI's training data was well curated).

I suppose a related problem is that we can't know if the human who posted the article actually agrees with it themselves.

(Or if they clicked "Generate" and don't actually care, or even have different opinions)

mewpmewp2

5 hours ago

Although there were already tons of "technical influencers" before who excelled at writing but didn't know deeply what they were writing about.

They give a superficially smart look, but really they regurgitate without deep understanding.

factormeta

6 hours ago

>In the past, you had to put a lot of effort into producing a text which seemed to be high quality, especially when you knew nothing about the subject. By the look of the text and the usage of the words, you could tell how professional the writer was, and you had some confidence that the writer knew something about the subject. Now, that is completely removed. There is no easy filter anymore.

That is pretty much true for other media as well, such as audio and video. Before digital stuff became mainstream, pics were developed in the darkroom and film was actually cut with scissors. A lot of effort was put into producing the final product. AI has really commoditized many brain-related tasks. We must realize the fragile nature of digital tech and still learn how to do these things ourselves.

gizmo

7 hours ago

I think you overestimate the value of things looking professional. The overwhelming majority of books published every year are trash, despite all the effort that went into researching, writing, and editing them. Most news is trash. Most of what humanity produces just isn't any good. A top expert in his field can leave a typo-riddled comment in a hurry that contains more valuable information than a shelf of books written on the subject by lesser minds.

AIs are good at writing professional looking text because it's a low bar to clear. It doesn't require much intelligence or expertise.

herval

6 hours ago

> AIs are good at writing professional looking text because it's a low bar to clear. It doesn't require much intelligence or expertise.

AIs are getting good at precisely imitating your voice with a single sample as reference, or generating original music, or creating video with all sorts of impossible physics and special effects. By your rationale, nothing “requires much intelligence or expertise”, which is patently false (even for text writing)

gizmo

5 hours ago

My point is that writing a good book is vastly more difficult than writing a mediocre book. The distance between incoherent babble and a mediocre book is smaller than the distance between a mediocre book and a great book. Most people can write professional looking text just by putting in a little bit of effort.

bitexploder

7 hours ago

I think you underestimate how high that bar is, but I will grant that it isn’t that high. It can be a form of sophistry all of its own. Still, it is a difficult skill to write clearly, simply, and without a lot of extravagant words.

mewpmewp2

5 hours ago

Although presently at least it's still quite obvious when something is written by AI.

chilli_axe

4 hours ago

it's obvious when text has been produced by ChatGPT with the default prompt - but there's probably loads of text on the internet which doesn't follow AI's usual prose style and blends in well.

ImHereToVote

7 hours ago

So content produced by think tanks was credible by default, since think tanks are usually very well funded. Interesting perspective.

diggan

7 hours ago

> By the look of the text and the usage of the words, you could tell how professional the writer was, and you had some confidence that the writer knew something about the subject

How did you know this unless you also had the same or more knowledge than the author?

It would seem to me we are as clueless now as before about how to judge how skilled a writer is without already possessing that very skill ourselves.

ffsm8

7 hours ago

Trust has no bearing on what they said.

Reading was a form of connecting with someone. Their opinions are bound to be flawed, everyone's are - but they're still the thoughts and words of a person.

This is no longer the case. Thus, the human factor is gone, and this diminishes the experience for some of us, me included.

farleykr

7 hours ago

This is exactly what’s at stake. I heard an artist say one time that he’d rather listen to Bob Dylan miss a note than listen to a song that had all the imperfections engineered out of it.

herval

6 hours ago

The flipside of that is that the most popular artists of all time (e.g. Taylor Swift) autotune to perfection, and yet more and more people love them.

kombookcha

6 hours ago

If you ask a Swiftie what they love about Taylor Swift, I guarantee they will not say "the autotune is flawless".

They're not connecting with the relative correctness of each note, but feeling a human, creative connection with an artist expressing herself.

herval

6 hours ago

They're "creatively connecting" to an autotuned version of a human, not to a "flawed Bob Dylan"

kombookcha

6 hours ago

They're not connecting to the autotune, but to the artist. People have a lot of opinions about Taylor Swift's music but "not being personal enough" is definitely not a common one.

If you wanna advocate for unplugged music being more gratifying, I don't disagree, but acting like the autotune is what people are getting out of Taylor Swift songs is goofy.

soco

5 hours ago

I have no idea about Taylor Swift so I'll ask in general: can't we have a human showing an autotuned personality? Like, you are what you are in private, but in interviews you focus on things suggested by your AI counselor, your lyrics are fine-tuned by AI, all this to show a more marketable personality? Maybe that's the autotune we should worry about. Again, nothing new (looking at you, Village People), but nowadays the potential powered by AI is many orders of magnitude higher. You could say it works only until the fans catch wind of it, true, but by that time the next figure shows up, and so on. Not sure where this arms escalation can lead us. Acceptance levels are also shifting, so what we reject today as unacceptable lies could be fine tomorrow; look at the AI influencers already doing a decent job while being overtly fake.

oceanplexian

4 hours ago

I’m convinced it’s already being done, or at least played with. Lots of public figures only speak through a teleprompter. It would be easy to put a fine tuned LLM on the other side of that teleprompter where even unscripted questions can be met with scripted answers.

herval

5 hours ago

you're missing the point by a few miles

Frost1x

6 hours ago

I think the key thing here is equating trust and truth. I trust my dog, a lot, more than most humans frankly. She has some of my highest levels of trust attainable, yet I don’t exactly equate her actions with truth. She often barks when there’s no one at the door or at false threats she doesn’t know aren’t real threats and so on. But I trust she believes it 100% and thinks she’s helping me 100%.

What I think OP was saying, and I agree with, is that connection: knowing that no matter what was said, or how flawed, or what motive someone had, there was a human producing the words. I could guess at and reason the other factors away. Now I don't always know if that is the case.

If you’ve ever played a multiplayer game, most of the enjoyable experience for me is playing other humans. We’ve had good game AIs in many domains for years, sometimes difficult to distinguish from humans, but I always lost interest if I didn’t know I was in fact playing and connecting with another human. If it’s just some automated system, I could do that any hour of the day, as much as I want, but it lacks the human connection element - the flaws, the emotion, the connection. If you could reproduce that, then maybe it would be enjoyable, but that sort of substance has meaning to many.

It’s interesting to see a calculator quickly spit out correct complex arithmetic but when you see a human do it, it’s more impressive or at least interesting, because you know the natural capability is lower and that they’re flawed just like you are.

sevensor

4 hours ago

For me, the problem has gone from “figure out the author’s agenda” to “figure out whether this is a meaningful text at all,” because gibberish now looks a whole lot more like meaning than it used to.

pxoe

4 hours ago

This has been a problem on the internet for the past decade, if not more, with all of the SEO nonsense. If anything, maybe it's going to be ever so slightly more readable.

solidninja

3 hours ago

There's a quantity argument to be made here - before, it used to be hard to generate large amounts of plausible but incorrect text. Now it's easy. Similar to surveillance before/after smartphones + the internet - you had to have a person following you, vs. just soaking up all the data on the backbone.

a99c43f2d565504

7 hours ago

Perhaps "trust" was a bit misplaced here, but I think we can all agree on the idea: Before LLMs, there was intelligence behind text, and now there's not. The I in LLM stands for intelligence, as written in one blog. Maybe the text never was true, but at least it made sense given some agenda. And like pointed out by others, the usual text style and vocabulary signs that could have been used to identify expertise or agenda are gone.

danielmarkbruce

16 minutes ago

Those signs are largely bs. It's a textual version of charisma.

thesz

7 hours ago

Propaganda works by repeating the same thing in different forms. Now it is easier to produce different forms of the same thing; hence, more propaganda. Also, it is much easier to influence whatever people write by influencing the tool they use to write.

Imagine that AI tools sway generated sentences to be slightly closer, in summarisation space, to the phrase "eat dirt" or anything else. What would happen?
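
A toy sketch of the kind of sway I mean (everything here is hypothetical - a real tool would bias the model's sampling directly, not rerank bag-of-words vectors):

    import math
    from collections import Counter

    def bow(text):
        # Toy bag-of-words stand-in for a real embedding.
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(v * b.get(w, 0) for w, v in a.items())
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def biased_pick(candidates, model_scores, target="eat dirt", weight=0.1):
        # Blend the model's own preference with similarity to the
        # target phrase - a small, nearly invisible thumb on the scale.
        t = bow(target)
        scored = [(s + weight * cosine(bow(c), t), c)
                  for c, s in zip(candidates, model_scores)]
        return max(scored)[1]

Each individual output still looks plausible; only the aggregate drifts.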

ImHereToVote

7 hours ago

Hopefully people will exercise more judgement now that every Tom, Dick, and Harry scam artist can output elaborate prose.

galactus

5 hours ago

I think it is a totally different threat. Excluding adversarial behavior, humans usually produce information with a quality level that is homogeneous (from homogeneously sloppy to homogeneously rigorous).

AI otoh can produce texts that are quite accurate globally, with some totally random hallucinations here and there. That makes them much harder to identify.

baq

8 hours ago

scale makes all the difference. society without trust falls apart. it's good if some people doubt some things, but if everyone necessarily must doubt everything, it's anarchy.

vouaobrasil

6 hours ago

Perhaps that anarchy is the exact thing we need to convince everyone to revolt against big tech firms like Google and OpenAI and take them down by mob rule.

dangitman

7 hours ago

Is our society built on trust? I don't generally trust most of what's distributed as news, for instance. Virtually every newsroom in America is undermined by basic conflicts of interest. This has been true since long before I was born, although perhaps the death of local news has accelerated this phenomenon. Mostly I just "trust" that most people don't want to hurt me (even if this trust is violated any time I bike alongside cars for long enough).

I don't think that LLMs will change much, frankly, it's just gonna be more obvious when they didn't hire a human to do the writing.

low_tech_love

2 hours ago

It’s nothing to do with trusting in terms of being true or false, but whatever I read before, I felt like, well, it can be good or bad, I can judge it, but whatever it is, somebody wrote it. It’s their work. Now when I read something I just have absolutely no idea whether the person wrote it, what percentage of it they wrote, or how much they even had to think before publishing it. Anyone can simply publish a perfectly well-written piece of text about any topic whatsoever, and I just can’t wrap my head around why, but it feels like a complete waste of time to read anything. Like… it’s all just garbage, I don’t know.

everdrive

7 hours ago

How do you like questioning much more of it, much more frequently, from many more sources? And mistrusting it in new ways. AI and regular people are not wrong in the same ways, nor for the same reasons, and now you must track this too, increasingly.

danielmarkbruce

17 minutes ago

The following appears to be true:

If one spends a lot of years reading a lot of stuff, they come to this conclusion, that most of it cannot be trusted. But it takes lots of years and lots of material to see it.

If they don't, they don't.

rsynnott

6 hours ago

There are topics on which you should be somewhat suspicious of anything you read, but also many topics where it is simply improbable that anyone would spend time maliciously coming up with a lie. However, they may well have spicy autocomplete imagine something for them. An example from a few days ago: https://news.ycombinator.com/item?id=41645282

voidmain0001

7 hours ago

I read the original comment not as a lament about being unable to trust the content; rather, they are lamenting the fact that AI/LLM-generated content has no more thought or effort put into it than a cheap microwave dinner purchased from Walmart. Yes, it fills the gut with calories, but it lacks taste.

On second thought, perhaps AI/LLM-generated content is better illustrated as being like eating the regurgitated sludge called cud. Nothing new, but it fills the gut.

akudha

5 hours ago

There were news reports that Russia spent less than a million dollars on a massive propaganda campaign targeting U.S. elections and the American population in general.

Do you think that would have been possible before the internet, before AI?

Bad actors, poorly written/sourced information, sensationalism etc have always existed. It is nothing new. What is new is the scale, speed and cost of making and spreading poor quality stuff now.

All one needs today is a laptop, an internet connection, and a few hours, and they can wreak havoc. In the past, you'd need TV or newspapers to spread bad (and good) stuff - they were expensive, time-consuming to produce, and had limited reach.

kloop

3 minutes ago

There are lots of organizations with $1M and a desire to influence the population.

This can only be done with a sentiment that was, at least partially, already there. And it may very well happen naturally eventually.

heresie-dabord

5 hours ago

> you trusted what you read before. For me, LLMs don't change anything. I already questioned the information before and continue to do so. [...] Why do you think that you could trust what you read before?

A human communicator is, in a sense, testifying when communicating. Humans have skin in the social game.

We try to educate people, we do want people to be well-informed and to think critically about what they read and hear. In the marketplace of information, we tend very strongly to trust non-delusional, non-hallucinating members of society. Human society is a social-confidence network.

In social media, where there is a cloak of anonymity (or obscurity), people may behave very badly. But they are usually full of excuses when the cloak is torn away; they are usually remarkably contrite before a judge.

A human communicator can face social, legal, and economic consequences for false testimony. Humans in a corporation, and the corporation itself, may be held accountable. They may allocate large sums of money to their defence, but reputation has value and their defence is not without social cost and monetary cost.

It is literally less effort at every scale to consult a trusted and trustworthy source of information.

It is literally more effort at every scale to feed oneself untrustworthy communication.

kombookcha

7 hours ago

Debunking bullshit inherently takes more effort than generating bullshit, so the human factor is normally your big force multiplier. Does this person seem trustworthy? What else have they done, who have they worked with, what hidden motivations or biases might they have, are their vibes /off/ to your acute social monkey senses?

However with AI anyone can generate absurd torrential flows of bullshit at a rate where, with your finite human time and energy, the only winning move is to reject out of hand any piece of media that you can sniff out as AI. It's a solution that's imperfect, but workable, when you're swimming through a sea of slop.

ontouchstart

6 hours ago

Debugging is harder than writing code. Once the code has passed the linter, compiler, and tests, the remaining bugs tend to be subtle logic bugs that require more effort and intelligence.

We are all becoming QA of this super automated world.

bitexploder

6 hours ago

Maybe the debunking AIs can match the bullshit generating AIs, and we will have balance in the force. Everyone is focused on the generative AIs, it seems.

desdenova

6 hours ago

No, they can't. They'll still be randomly deciding if something is fake or not, so they'll only have a probability of being correct, like all nondeterministic AI.

nicce

6 hours ago

There is always more money available for bullshit generation than bullshit removal.

eesmith

7 hours ago

The negation of 'I cannot trust' is not 'I could always trust' but rather 'I could sometimes trust'.

Nor is trust meant to mean something is absolute and unquestionable. I may trust someone, but with enough evidence I can withdraw trust.

escape_goat

3 hours ago

There was a degree of proof of work involved. Text took human effort to create, and this roughly constrained the quantity and quality of misinforming text to the number of humans with motive to expend sufficient effort to misinform. Now superficially indistinguishable text can be created by an investment in flops, which are fungible. This means that the constraint on the amount of misinforming text instead scales with whatever money is resourced to the task of generating misinforming text. If misinforming text can generate value for someone that can be translated back into money, the generation of misinforming text can be scaled to saturation and full extraction of that value.

tuyguntn

7 hours ago

> For me, LLMs don't change anything. I already questioned the information before and continue to do so.

I also did, but LLMs increased the volume of content, which forces my brain to first try to identify whether content is generated by LLMs. That consumes a lot of energy and makes my brain even less focused, because now its primary goal is skimming quickly to identify, instead of absorbing first and then analyzing the info.

desdenova

6 hours ago

The web being polluted only makes me ignore more of it.

You already know some of the more trustworthy sources of information; you don't need to read a random blog which will require a lot more effort to verify.

Even here on hackernews, I ignore like 90% of the spam people post. A lot of posts here are extremely low effort blogs adding zero value to anything, and I don't even want to think whether someone wasted their own time writing that or used some LLM, it's worthless in both cases.

croes

7 hours ago

The ratio changed, because it's now easier and faster.

tempfile

7 hours ago

> I already questioned the information before and continue to do so.

You might question new information, but you certainly do not actually verify it. So all you can hope to do is sense-checking - if something doesn't sound plausible, you assume it isn't true.

This depends on having two things: having trustworthy sources at all, and being able to relatively easily distinguish between junk info and real thorough research. AI is a very easy way for previously-trustworthy sources to sneak in utter disinformation without necessarily changing tone much. That makes it much easier for the info to sneak past your senses than previously.

desdenova

6 hours ago

Exactly. The web before LLMs was mostly low effort SEO spam written by low-wage people in marketing agencies.

Now it's mostly zero effort LLM-generated SEO spam, and the low-wage workers lost their jobs.

vouaobrasil

6 hours ago

The difference is that now we'll have even more zero-effort SEO spam because AI is a force multiplier for that. Much more.

crazygringo

12 minutes ago

Counterpoint: I don't think anything has changed much at all.

I trust everything in the NY Times to the same degree I did before AI, which is to say to a significant degree (they rarely outright lie -- they generally don't say person X said Y if that person didn't) but far from entirely (they often omit entire, major, important perspectives from articles).

Are reporters using ChatGPT to quickly look up facts? Are they using it to brainstorm different article ledes, or column ideas? Or to polish clunky sentences? I couldn't care less, as long as they're still manually verifying the facts and evaluating the final prose according to the same standards, where I see no evidence of change or falling standards.

I've certainly come across entire ChatGPT-written websites full of e.g. Python tutorials that you quickly realize are hallucinated garbage, but that's also not really any different from previous blogspam regurgitated by low-cost workers in different countries who can barely write in grammatical English, but who were still human beings.

Whether someone uses AI to help them write or not is irrelevant to their trustworthiness. What matters is the quality control that comes afterwards. Even when you write an article, your first draft is often terrible. Writing is an iterative process where evaluating and editing what you've written is often more important than the writing itself. Oftentimes, not a single sentence from your first draft will make it into the final version.

Complaining that an author used AI as a tool during writing is like complaining that a farmer used a tractor growing their crops instead of a hoe and shovel. What matters is the quality of the output, which humans are still evaluating as much as ever before.

akudha

5 hours ago

I was listening to an interview a few months ago (I forget the name). He is a prolific reader/writer and has a huge following. He mentioned that he only reads books that are at least 50 years old, so pre-70s. That sounds like a good idea now.

Even ignoring the AI, if you look at the movies and books that come out these days, their quality is significantly lower than 30-40 years ago (on average). Maybe people's attention spans and taste are to blame, or maybe people just don't have the money/time/patience to consume quality work... I do not know.

One thing I know for sure - there is enough high-quality material written before AI, before article spinners, before MFA sites, etc. We would need multiple lifetimes to even scratch the surface of that body of work. We can ignore mostly everything that is published these days and we won't be missing much.

eloisant

2 hours ago

I'd say it's probably survivorship bias. Bad books from before the 70s are probably forgotten and no longer printed.

Old books that we're still printing and still talking about have stood the test of time. It doesn't mean there are no great recent books.

onion2k

7 hours ago

The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die.

What AI is going to teach people is that they don't actually need to trust half as many things as they thought they did, but that they do need to verify what's left.

This has always been the case. We've just been deferring to 'trusted organizations' a lot recently, without actually looking to see if they still warrant having our trust as they change over time.

layer8

6 hours ago

How can you verify most of anything if you can’t trust any writing (or photographs, audio, and video, for that matter)?

Frost1x

6 hours ago

Independent verification is always good, but not always possible or practical. At complex levels of life we have to just trust that underlying processes work, usually until something fails.

I don’t go double-checking civil engineers’ work (nor could I) for every bridge I drive over. I don’t check inspection records to make sure the inspection was recent and proper actions were taken. I trust that enough people involved know what they’re doing, with good enough intent, that I can take my 20-second trip over it in my car without batting an eye.

If I had to verify everything, I'm not sure how I'd get across many bridges on a daily basis, or use any major infrastructure where my life might be at risk. And those are cases where it's very important that things be done right; if it's some accounting form or generated video on the internet, I have even less time to be concerned from a practical standpoint. Having the skills to verify things, should I want or need to, is good, and everyone should have them, but we're at a point in society where we really have to outsource trust in a lot of cases.

This is true everywhere, even in science, which these days many people trust in ways akin to faith, and I don't see any way around that. The key is that all the information should exist to be able to independently verify something, even if from a practical standpoint it's rarely viable.

nils-m-holm

7 hours ago

> It's not so much that I think people have used AI, but that I know they have with a high degree of certainty, and this certainty is converging to 100%, simply because there is no way it will not. If you write regularly and you're not using AI, you simply cannot keep up with the competition.

I am writing regularly and I will never use AI. In fact, I am working on a 400+ page book right now and it does not contain a single character that I have not come up with and typed myself. Something like pride in craftsmanship does exist.

nyarlathotep_

2 hours ago

In b4 all the botslop shills tell you you're gonna get "left behind" if you don't pollute your output with GPT'd copypasta.

low_tech_love

2 hours ago

Amazing! Do you feel any pressure from your environment? And are you self-funded? I am also thinking about starting my first book.

nils-m-holm

an hour ago

What I write is pretty niche anyway (compilers, LISP, buddhism, advaita), so I do not think AI will cause much trouble. Google ranking small websites into oblivion, though, I do notice that!

smitelli

5 hours ago

I'm right there with you. I write short and medium form articles for my personal site (link in bio, follow it or don't, the world keeps spinning either way). I will never use AI as part of this craft. If that hampers my output, or puts me at a disadvantage compared to the competition, or changes the opinion others have of me, I really don't care.

vouaobrasil

6 hours ago

Nice. I will definitely consider your book over other books. I'm not interested in reading AI-assisted works.

noobermin

4 hours ago

When you're writing, how are you "missing out" if you're not using chatgpt??? I don't even understand how this can be unless what you're writing is already unnecessary such that you shouldn't need to write it in the first place.

jwells89

3 hours ago

I don't get it either. Writing is not something I need that level of assistance with, and I would even say that using LLMs to write defeats some significant portion of the point of writing: by using LLMs to write for me, I feel that I'm no longer expressing myself in the purest sense, because the words are not mine and do not exhibit any of my personality, tendencies, etc. Even if I were to train an LLM on my style, it'd only be a temporal facsimile of middling quality, because people's styles evolve (sometimes quite rapidly) and there's no way to work around all the corner cases that never got trained for.

As you say, if the subject is worth being written about, there should be no issue and writing will come naturally. If it’s a struggle, maybe one should step back and figure out why that is.

There may be some argument for speed, because writing quality prose does take time, but then the question becomes a matter of quantity vs. quality. Do you want to write high-quality pieces that people want to read, at a slower pace, or churn out endless volumes of low-substance grey goo "content"?

dotnet00

3 hours ago

LLMs are surprisingly capable editors/brainstorming tools. So, you're missing out in that you're being less efficient in editing.

Like, you can write a bunch of text, then ask an LLM to improve it with minimal changes. Then, you read through its output and pick out the improvements you like.
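
A minimal sketch of that loop, assuming the OpenAI Python SDK (openai>=1.0) with an API key in the environment; the model name, prompts, and variable names are illustrative placeholders, not a recommendation:

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    draft = "Your rough first draft goes here."

    # Ask for light-touch edits rather than a rewrite.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model works
        messages=[
            {"role": "system", "content": "You are a copy editor. Improve this text with minimal changes and preserve the author's voice."},
            {"role": "user", "content": draft},
        ],
    )

    edited = response.choices[0].message.content
    # Diff `draft` against `edited` by eye and keep only the changes you like.

The human judgment lives in that last step: nothing ships until you have compared the two versions yourself.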

jayd16

2 hours ago

But that's the problem. Unique, quirky mannerisms get polished out. Flaws are smoothed away and the prose is over-sharpened.

I'm personally not as gloomy about it as the parent comments but I fear it's a trend that pushes towards a samey, mass-produced style in all writing.

Eventually there will be a counterculture and backlash to it, and then an equilibrium in quality content, but it's probably here to stay for anything where cost is a major factor.

dotnet00

2 hours ago

Yeah, I suppose that would be an issue for creative writing. My focus is mostly on scientific writing, where such mannerisms should be less relevant than precision, so I didn't consider that aspect of other kinds of writing.

slashdave

2 hours ago

Am I the only one who doesn't even like automatic grammar checkers, because they are contributing to a single and uniformly bland style of writing? LLMs are just going to make this worse.

tourmalinetaco

3 hours ago

Sure, but Grammarly and similar tools existed long before the LLM boom.

dotnet00

2 hours ago

That's a fair point, I only very recently found that LLMs could actually be useful for editing, and hadn't really thought much of using tools for that kind of thing previously.

flir

7 hours ago

I've been using it in my personal writing (combination of GPT and Claude). I ask the AI to write something, maybe several times, and I edit it until I'm happy with it. I've always known I'm a better editor than I am an author, and the AI text gives me somewhere to start.

So there's a human in the loop who is prepared to vouch for those sentences. They're not 100% human-written, but they are 100% human-approved. I haven't just connected my blog to a Markov chain firehose and walked away.

Am I still adding to the AI smog? idk. I imagine that, at a bare minimum, its way of organising text bleeds through no matter how much editing I do.

vladstudio

7 hours ago

you wrote this comment completely on your own, right? Without any AI involved. And I read your comment feeling confident that it's truly 100% yours. I think this reader's confidence is what the OP is talking about.

flir

7 hours ago

I did. I write for myself mostly so I'm not so worried about one reader's trust - I guess I'm more worried that I might be contributing to the dead internet theory by generating AI-polluted text for the next generation of AIs to train on.

At the moment I'm using it for local history research. I feed it all the text I can find on an event (mostly newspaper articles and other primary sources, occasionally quotes from secondary sources) and I prompt with something like "Summarize this document in a concise and direct style. Focus on the main points and key details. Maintain a neutral, objective voice." Then I hack at it until I'm happy (mostly I cut stuff). Analysis, I do the other way around: I write the first draft, then ask the AI to polish. Then I go back and forth a few times until I'm happy with that paragraph.

I'm not going anywhere with this really, I'm just musing out loud. Am I contributing to a tragedy of the commons by writing about 18th century enclosures? Because that would be ironic.

ontouchstart

6 hours ago

If you write for yourself, whether you use generated text or not (I am using the text completion on my phone typing this message), the only thing that matters is how it affects you.

Reading and writing are mental processes (with or without advanced technology) that shape our collective mind.

edavison1

3 hours ago

>If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out.

A very HN-centric view of the world. From my perch in journalism and publishing, elite writers absolutely loathe AI and almost uniformly agree it sucks. So to my mind the most 'competitive' spheres in writing do not use AI at all.

DrillShopper

3 hours ago

It doesn't matter how elite you think you are if the newspaper, magazine, or publishing company you write for can make more money from hiring people at a fraction of your cost and having them use AI to match or eclipse your professional output.

At some point the competition will be less about "does this look like the most skilled human writer wrote this?" and more about "did the AI guided by a human for a fraction of the cost of a skilled human writer output something acceptably good for people to read it between giant ads on our website / watch the TTS video on YouTube and sit through the ads and sponsors?", and I'm sorry to say, skilled human writers are at a distinct disadvantage here because they have professional standards and self respect.

edavison1

an hour ago

So is the argument here that the New Yorker can make more money from AI slop writing overseen by low-wage overseas workers? Isn't that obviously not the case?

Anyway I think I've misunderstood the context in which we're using the word 'competition' here. My response was about attitudes toward AI from writers at the tip-top of the industry rather than profit maxxing/high-volume content farm type places.

easterncalculus

2 hours ago

Exactly. Also, if the past few years is any indication, at the very least tech journalists in general tend to love to use what they hate.

goatlover

an hour ago

So you're saying major media companies are going to outsource their writing to people overseas using LLMs? There is more to journalism than the writing. There's also the investigative part where journalists go and talk to people, look into old records, etc.

edavison1

an hour ago

This has become such a talking point of mine when I'm inevitably forced to explain why LLMs can't come for my job (yet). People seem baffled by the idea that reporting collects novel information about the world which hasn't been indexed/ingested at any point because it didn't exist before I did the interview or whatever it is.

fennecfoxy

3 hours ago

Yes, but what really matters is what the general public, aka the consumers, want to consume, and how.

I can bang on about older games being better all day long but it doesn't stop Fortnite from being popular, and somewhat rightly so, I suppose.

jayd16

2 hours ago

Sure but no one gets to avoid all but the most elite content. I think they're bemoaning the quality of pulp.

beefnugs

43 minutes ago

Just add more swearing and off-color jokes to everything you do and say. If there is one thing we know for sure, it's that the corporate AIs will never allow dirty jokes.

(it will get into the dark places like spam though, which seems dumb since they know how to make meth instead, spend time on that you wankers)

walthamstow

8 hours ago

I've even grown to enjoy spelling and grammar mistakes - at least I know a human wrote it.

ipaio

7 hours ago

You can prompt/train the AI to add a couple of random minor errors. They're trained from human text after all, they can pretend to be as human as you like.

eleveriven

6 hours ago

Making it feel like there's no reliable way to discern what's truly human

vouaobrasil

6 hours ago

There is. Be vehemently against AI, put 100% AI free in your work. The more consistent you are against AI, the more likely people will believe you. Write articles slamming AI. Personally, I am 100% against AI and I state that loud and clear on my blogs and YouTube channel. I HATE AI.

jaredsohn

5 hours ago

Hate to tell you but there is nothing stopping people using AI from doing the same thing.

vouaobrasil

5 hours ago

AI cannot build up a sufficient level of trust, especially if you are known in person by others who will vouch for you. That web of trust is hard to break with AI. And I am one of those.

danielbln

2 hours ago

Are you including transformer-based translation models like Google Translate or DeepL in your categorical AI rejection?

vasco

7 hours ago

The funny thing is that the things it refuses to say are "wrong-speech" type stuff, so the only things you can be more sure of nowadays are conspiracy theories and other nasty stuff. The nastier the more likely it's human written, which is a bit ironic.

matteoraso

7 hours ago

No, you can finetune locally hosted LLMs to be nasty.

slashdave

2 hours ago

Maybe the future of creative writing is fine-tuning your own unique form of nastiness.

Jensson

7 hours ago

> The nastier the more likely it's human written, which is a bit ironic.

This is like everything else: machine-produced output has a flawlessness along some dimension that humans tend to lack.

Applejinx

7 hours ago

Barring simple typos, human mistakes are erroneous intention from a single source. You can't simply write human vagaries off as 'error' because they're glimpses into a picture of intention that is perhaps misguided.

I'm listening to a slightly wonky early James Brown instrumental right now, and there's certainly a lot more error than you'd get in sequenced computer music (or indeed generated music) but the force with which humans wrest the wonkiness toward an idea of groove is palpable. Same with Zeppelin's 'Communication Breakdown' (I'm doing a groove analysis project, ok?).

I can't program the AI to have intention, nor can you. If you do, hello Skynet, and it's time you started thinking about how to be nice to it, or else :)

Gigachad

7 hours ago

There was a meme along the lines of people will start including slurs in their messages to prove it wasn’t AI generated.

jay_kyburz

7 hours ago

A few months ago, I tried to get Gemini to help me write some criticism of something. I can't even remember what it was, but I wanted to clearly say something was wrong and bad.

Gemini just could not do it. It kept trying to avoid being explicitly negative. It wanted me to instead focus on the positive. I think it eventually just told me no, and that it would not do it.

Gigachad

7 hours ago

Yeah all the current tools have this particular brand of corporate speech that’s pretty easy to pick up on. Overly verbose, overly polite, very vague, non assertive, and non opinionated.

stahorn

5 hours ago

Next big thing: AI that writes as British football hooligans talk about the referee after a match where their team lost?

dijit

7 hours ago

I mean, it's not a meme..

I included a few more "private" words than I should have, and I even tried to narrate things to prove I wasn't an AI.

https://blog.dijit.sh/gcp-the-only-good-cloud/

Not sure what else I should do, but it's pretty clear that it's not AI written (mostly because it's incoherent) even without grammar mistakes.

bloak

6 hours ago

I liked the "New to AWS / Experienced at AWS" cartoon.

redandblack

an hour ago

yesss. my thought too. All the variations of English should not be lost.

I enjoyed all the Belter dialogue in The Expanse

1aleksa

7 hours ago

Whenever somebody misspells my name, I know it's legit haha

sseagull

6 hours ago

Way back when we had a landline and would get telemarketers, it was always a sign when the caller couldn’t pronounce our last name. It’s not even that uncommon a name, either

fzzzy

7 hours ago

Guess what? Now the computers will learn to do that so they can more convincingly pass a Turing test.

faragon

7 hours ago

People could prompt for authenticity, adding subtle mistakes, etc. I hope that AI as a whole will help people write better, if they read the text back. It is a bit like the movie "The Substance": a "better" version of ourselves.

oneshtein

7 hours ago

> Write a response to this comment, make spelling and grammar mistakes.

yeah well sumtimes spellling and grammer erors just make thing hard two read. like i no wat u mean bout wanting two kno its a reel person, but i think cleear communication is still importint! ;)

hyggetrold

an hour ago

> The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die.

This has nearly always been true. "Manufacturing consent" is way older than any digital technology.

unshavedyak

an hour ago

Agreed. I also suspect we've grown to rely on the crutch of trust far too much. Faulty writing has existed for ages, but now, suddenly, because a computer is the thing making it up, we have an issue with it.

I guess it depends on scope. I'm imagining science or education, i.e. things we probably shouldn't have relied on blogs to facilitate, yet we did. For looking up some random "how do i build a widget?", yeah, AI will probably make it worse. For now. Then it'll massively improve to the point that it's not even worth asking how to build the widget.

The larger "science or education" side is what I'm concerned about, and I think we'll need a new paradigm to validate it. We've been getting attacked on this front for 12+ years; AI is only bringing it to light, imo.

Trust will have to be earned and verified in this word-soup world. I just hope we find a way.

hyggetrold

an hour ago

IMHO AI tools will (or at least should!) fundamentally change the way the education system works. AI tools are - from a certain point of view - really just a scaled version of assistance that is now at our fingertips. Paradoxically, the more AI can do "grunt work", the more we need folks to be educated on the higher-level constructs on which they are operating.

Some of the bigger issues you're raising I think have less to do with technology and more to do with how our economic system is currently structured. AI will be a tremendous accelerant, but are we sure we know where we're going?

jcd748

3 hours ago

Life is short and I like creating things. AI is not part of how I write, or code, or make pixel art, or compose. It's very important to me that whatever I make represents some sort of creative impulse or want, and is reflective of me as a person and my life and experiences to that point.

If other people want to hit enter, watch as reams of text are generated, and then slap their name on it, I can't stop them. But deep inside they know their creative lives are shallow and I'll never know the same.

onemoresoop

2 hours ago

> If other people want to hit enter, watch as reams of text are generated, and then slap their name on it,

The problem is that this kind of content is flooding the internet. Before you know it, it becomes extremely hard to find non-AI-generated content...

jcd748

2 hours ago

I think we agree. I hate it, and I can't stop it, but also I definitely won't participate in it.

low_tech_love

2 hours ago

That's super cool, and I hope you are right and that I am wrong, and that artists/creators like you will still have a place in the future. My fear is that your work turns into some kind of artisanal fringe activity that is only accessible to 1% of people, like Ming vases or whatever.

CuriouslyC

30 minutes ago

A lot of writers using AI use it to create outlines of a chapter or scene then flesh it out by hand.

bryanrasmussen

7 hours ago

>If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out. And the growing consensus is "why shouldn't you?", there is no escape from that.

Are you sure you don't mean if you write regularly in one particular subclass of writing - like technical writing, documentation etc.? Do you think novel writing, poetry, film reviews etc. cannot keep up in the same way?

t-3

7 hours ago

I'm absolutely positive that the vast majority of fiction is or will soon be written by LLM. Will it be high-quality? Will it be loved and remembered by generations to come? Probably not. Will it make money? Probably more than before, on average, as the author's effort is reduced to writing outlines and prompts and editing the generated-in-seconds output, rather than spending months or years doing the writing themselves.

lokimedes

7 hours ago

I get two associations from your comment: one, how AI, being mainly used to interpolate within a corpus of prior knowledge, seems like entropy in a thermodynamic sense; the other, how this is like the Tower of Babel, but where distrust is sown by sameness rather than difference. In fact, relying on AI for coding and writing feels more like channeling demonic suggestions than anything else. No wonder we are becoming skeptical.

t43562

8 hours ago

It empowers people to create mountains of shit that they cannot distinguish from shit - so they are happy.

_heimdall

3 hours ago

> Now, I'm not going to criticize anyone that does it, like I said, you have to, that's it.

Why do you say people have to do it?

People absolutely can choose not to use LLMs and to instead write their own words and thoughts, just like developers can simply refuse to build LLM tools, whether it's because they have safety concerns or because they simply see "AI" in its current state as a doomed marketing play that is not worth wasting time and resources on. There will always be side effects to making those decisions, but it's well within everyone's right to make them.

DrillShopper

3 hours ago

> Why do you say people have to do it?

Gotta eat, yo

goatlover

35 minutes ago

Somehow people made enough to eat before LLMs became all the rage a couple years ago. I suspect people are still making enough to eat without having to use LLMs.

fennecfoxy

3 hours ago

Why does a human being behind any words change anything at all? Trust should be based on established facts/research and not species.

bloak

2 hours ago

A lot of communication isn't about "established facts/research"; it's about someone's experience. For example, if a human writes about their experience of using a product, perhaps a drug, or writes what they think about a book or a film, then I might be interested in reading that. When they write using their own words I get some insight into how they think and what sort of person they are. I have very little interest in reading an AI-generated text with similar "content".

goatlover

37 minutes ago

An LLM isn't even a species. I prefer communicating with other humans, unless I choose to interact with an LLM. But then I know that it's a text generator and not a person, even when I ask it to act like a person. The difference matters to most humans.

BeFlatXIII

an hour ago

> If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out.

Only if you're competing on volume.

vouaobrasil

6 hours ago

> If you write regularly and you're not using AI, you simply cannot keep up with the competition.

Wrong. I am a professional writer and I never use AI. I hate AI.

ChrisMarshallNY

7 hours ago

I don't use AI in my own blogging, but then, I don't particularly care whether or not someone reads my stuff (the ones that do, seem to like it).

I have used it, from time to time, to help polish stuff like marketing fluff for the App Store, but I'd never use it verbatim. I generally use it to polish a paragraph or sentence.

But AI hasn't suddenly injected untrustworthy prose into the world. We've been doing that for hundreds of years.

notarobot123

2 hours ago

I have my reservations about AI but it's hard not to notice that LLMs are effectively a Gutenberg level event in the history of written communication. They mark a fundamental shift in our capacity to produce persuasive text.

The ability to speak the same language or understand cultural norms are no longer barriers to publishing pretty much anything. You don't have to understand a topic or the jargon of any given domain. You don't have to learn the expected style or conventions an author might normally use in that context. You just have to know how to write a good prompt.

There's bound to be a significant increase in the quantity as well as the quality of untrustworthy published text because of these new capacities to produce it. It's not the phenomenon but the scale of production that changes the game here.

layer8

6 hours ago

> marketing fluff for the App Store

If it's fluff, why do you put it there? As an App Store user, I'm not interested in reading marketing fluff.

ChrisMarshallNY

5 hours ago

Because it’s required?

I’ve released over 20 apps, over the years, and have learned to add some basic stuff to each app.

Truth be told, it was really sort of a self-deprecating joke.

I'm not a marketer, so I don't have the training to write the kind of stuff users expect on the Store, and I could use all the help I can get.

Over the years, I've learned that owning my limitations can be even more important than knowing my strengths.

layer8

4 hours ago

My point was that as a user I expect substance, not fluff. Some app descriptions actually provide that, but many don’t.

ChrisMarshallNY

2 hours ago

Well, you can always check out my stuff, and see what you think. Easy to find.

osigurdson

4 hours ago

AI expansion: take a few bullet points and have ChatGPT expand it into several pages of text

AI compression: take pages of text and use ChatGPT to compress into a few bullet points

We need to stop being impressed with long documents.
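
A sketch of that round trip, under the same assumptions as the earlier sketch (OpenAI Python SDK, placeholder model and prompts); if the recovered bullets say no more than the originals, the pages in between carried words but no information:

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def transform(instruction: str, text: str) -> str:
        """One chat turn: an instruction applied to a piece of text."""
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder
            messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
        )
        return resp.choices[0].message.content

    bullets = "- revenue up 4%\n- churn flat\n- hiring freeze continues"
    report = transform("Expand these bullet points into a long, formal report.", bullets)
    recovered = transform("Compress this report back into a few bullet points.", report)
    # Compare `recovered` with `bullets`: the expansion added length, not information.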

fennecfoxy

2 hours ago

The foundations of our education systems are based on rote memorisation so I'd probably start there.

dijit

7 hours ago

Agreed, I feel like there's an inherent nobility in putting effort into something. If I took the time to write a book and have it proof-read and edited and so on: perhaps it's actually worth my time.

Lowering the bar to write books is "good" but increases the noise to signal ratio.

I'm not 100% certain how to give another proof-of-work, but what I've started doing is narrating my blog posts - though AI voices are getting better too.. :\

vasco

7 hours ago

> Agreed, I feel like there's an inherent nobility in putting effort into something. If I took the time to write a book and have it proof-read and edited and so on: perhaps it's actually worth my time.

Said the scribe upon hearing about the printing press.

dijit

7 hours ago

I'm not certain what statement you're implying, but yes, the accessibility of book writing has definitely decreased the quality of books.

Even technical books like Hardcore Java: https://www.oreilly.com/library/view/hardcore-java/059600568... are god-awful, and even further away from the seminal texts on computer science that came before.

It does feel like authorship was once held in higher esteem than it deserves today.

Seems like people agree: https://www.reddit.com/r/books/comments/18cvy9e/rant_bestsel...

neta1337

6 hours ago

Why do you have to use it? I don't get it. If you write your own book, you don't compete with anyone. If anyone finished The Winds of Winter for G.R.R. Martin using AI, nobody would bat an eye, obviously, as we have already experienced how bad a soulless story is when it drifts too far away from what the author had built in his mind.

ks2048

7 hours ago

> If you write regularly and you're not using AI, you simply cannot keep up with the competition.

Is that true today? I guess it depends on what kind of writing you are talking about, but I wouldn't think most successful writers today - from novelists to tech bloggers - rely that much on AI. But I don't know; five years from now, it could be a different story.

theshackleford

3 hours ago

Yes, it's true today, depending on what your writing is the foundation of.

It doesn't matter that my writing is more considered, more accurate, and of higher quality when my coworkers are all openly using AI to perform five times the work I am and producing outcomes that are "good enough", because good enough is quite enough for a larger majority than many likely realise.

amelius

3 hours ago

Funny thing is that people will also ask AI to __read__ stuff for them and summarize it.

So everything an AI writes will eventually be nothing more than some kind of internal representation.

munksbeer

6 hours ago

> but it's still depressing, to be honest.

Cheer up. Things usually get better, we just don't notice it because we're so consumed with extrapolating the negatives. Humans are funny like that.

vouaobrasil

6 hours ago

I actually disagree with that. People are so busy hoping things will get better, and creating little bubbles for themselves to hide away from what human beings as a whole are doing, that they don't realize things are getting worse. Technology constantly makes things worse. Cheering up is a good self-help strategy but not a good strategy if you want to contribute to making the world actually a better place.

munksbeer

3 hours ago

>Technology constantly makes things worse.

And it also makes things a lot better. Overall we lead better lives than people just 50 years ago, never mind centuries.

vouaobrasil

2 hours ago

No way. Life 50 years ago was better for MANY. Maybe that would be true of 200 years ago. But 50 years ago was the 70s. There were far fewer people, and the world was not yet starting to suffer from climate change. Tell your statement to any climate refugee, and ask them whether they'd like to live now or back then.

AND, we had fewer computers and life was not so hectic. YES, some things have gotten better, but on average? It's arguable.

vundercind

an hour ago

It’s fairly common for (at least) specific things to get worse and then never improve again.

wengo314

7 hours ago

i think the problem started when quantity became more important than quality.

you could totally compete on quality merit, but nowadays the volume of output (and frequency) is what is prioritized.

limit499karma

4 hours ago

I'll take your statement that your conclusions are based on a 'depressed mind' at face value, since it is so self-defeating and places little faith in Human abilities. Your assumption that a person driven to write will "with a high degree of certainty" also mix up their work with a machine assistant can only be informed by your own self-assessment (after all, how could you possibly know the mindset of every creative human out there?).

My optimistic and enthusiastic view of AI's role in Human development is that it will create selection pressures that will release the dormant psychological abilities of the species. Undoubtedly, the widespread appearance of Psi abilities will be featured in this adjustment of the human super-organism to technologies of its own making.

Machines can't do Psi.

yusufaytas

7 hours ago

I totally understand your frustration. We started writing our book long before AI became mainstream, back in 2022, and when we finally published it in May 2024, all we hear now is people asking if it's just AI-generated content. It's sad to see how quickly the conversation shifts away from the human touch in writing.

eleveriven

7 hours ago

I can imagine how disheartening that must be

FrustratedMonky

38 minutes ago

Maybe this will push people back to reading old paper books?

There could be resurgence in reading the classics, on paper, since we know they are not AI.

user

7 hours ago

[deleted]

sandworm101

7 hours ago

>> cannot trust anything that has been written in the past 2 years or so and up until the day that I die.

You never should have. Large amounts of work, even stuff by major authors, is ghostwritten. I was talking to someone about Taylor Swift recently. They thought that she wrote all her songs. I commented that one cannot really know that, and that the entertainment industry is very good at generating seemingly "authentic" product at a rapid pace. My colleague looked at me like I had just killed a small animal. The idea that TS was "genuine" was a cornerstone of their fandom, and my suggestion had attacked that love. If you love music or film, don't dig too deep. It is all a factory. That AI is now part of that factory doesn't change much for me.

Maybe my opinion would change if I saw something AI-generated with even a hint of artistic relevance. I've seen cool pictures and passable prose, but nothing so far with actual meaning, nothing worthy of my time.

WalterBright

5 hours ago

Watch the movie "The Wrecking Crew" about how a group of studio musicians in the 1970s were responsible for the albums of quite a few diverse "bands". Many bands had to then learn to play their own songs so they could go on tour.

nyarlathotep_

2 hours ago

> You never should have. Large amounts of work, even stuff by major authors, is ghostwritten.

I'm reminded of 'Under The Silver Lake' with this reference. Strange film, but that plotline stuck with me.

greenie_beans

4 hours ago

i know a lot of writers who don't use ai. in fact, i can't think of any writers who use it, except a few literary fiction writers.

working theory: writers have taste and LLM writing style doesn't match the typical taste of a published writer.

tim333

6 hours ago

I'm not sure it's always that hard to tell the AI stuff from the non-AI. Comments on HN and on Twitter from people you follow are pretty much non-AI, as are people on YouTube, where you can see the actual human talking.

On the other hand, there's a lot on YouTube, for example, that is obviously AI (weird writing and speaking style), and I'll only watch those if I'm really interested in the subject matter and there aren't alternatives.

Maybe people will gravitate more to stuff like PaulG or Elon Musk on Twitter or HN and less to blog-style content?

jshdhehe

6 hours ago

AI only helps writing insofar as checking/suggesting edits. Most people can write better than AI (more engaging). AI can't tell a human story or have real tacit experience.

So it is like saying my champagne bottle can't keep up with the tap water.

eleveriven

7 hours ago

Maybe, over time, there will also be a renewed appreciation for authenticity

paganel

7 hours ago

You kind of notice the stuff written with AI, it has a certain something that makes it detectable. Granted, stuff like the Reuters press reports might have already been written by AI, but I think that in that case it doesn’t really matter.

williamcotton

7 hours ago

Well we’re going to need some system of PKI that is tied to real identities. You can keep being anonymous if you want but I would prefer not and prefer to not interact with the anonymous, just like how I don’t want to interact with people wearing ski masks.
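
A minimal sketch of the signing primitive such a system would rest on, using Python's cryptography package and an Ed25519 keypair; the hard parts of real PKI (binding keys to verified identities, key distribution, revocation) are assumed away here:

    # pip install cryptography
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The author holds the private key; the public key would be published
    # under their verified identity.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    article = "I wrote this and I stand behind it.".encode()
    signature = private_key.sign(article)

    # A reader checks the signature against the published public key.
    try:
        public_key.verify(signature, article)
        print("Valid: the key holder vouched for this exact text.")
    except InvalidSignature:
        print("The text or the signature was altered.")

Note that a valid signature only proves the key holder vouched for those exact bytes, not that a human wrote them, which is the caveat raised in the reply below.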

flir

6 hours ago

I doubt that's possible. I can always lend my identity to an AI.

The best you can hope for is not "a human wrote this text", it's "a human vouched for this text".

nottorp

7 hours ago

Why are you posting on this forum where the user's identity isn't verified by anyone then? :)

But the real problem is that having the poster's identity verified is no proof that their output is not coming straight from a LLM.

williamcotton

7 hours ago

I don’t really have a choice about interacting with the anonymous at this point.

It certainly will affect the reputation of people that are consistently publishing untruths.

nottorp

6 hours ago

> It certainly will affect the reputation of people that are consistently publishing untruths.

Oh? I thought there are a lot of very well identified people making a living from publishing untruths right now on all social media. How would PKI help, when they're already making it very clear who they are?

wickedsight

6 hours ago

With a friend, I created a website about a race track over the past two years. I definitely used AI to speed up some of the writing. One thing I used it for was a track guide, describing every corner and how to drive it. It was surprisingly accurate, most of the time. The other times, though, it would drive the track backwards, completely hallucinate the instructions, or link corners that are in different parts of the track.

I spent a lot of time analyzing the track myself and fixed everything to the point that experienced drivers agreed with my description. If I hadn't done that, most visitors would probably still accept our guide as the truth, because they wouldn't know any better.

We know that not everyone cares about whether what they put on the internet is correct and AI allows those people to create content at an unprecedented pace. I fully agree with your sentiment.

uhtred

3 hours ago

To be honest I got sick of most new movies, TV shows, music even before AI so I will continue to consume media from pre 2010 until the day I die and will hope I don't get through it all.

Something happened around 2010 and it all got shit. I think everyone becoming massively online made global cultural output drop in quality to meet the interests of most people, and most people have terrible taste.

FrankyHollywood

7 hours ago

I have never read more bullshit in my life than during the corona pandemic, all written by humans. So you should never trust something you read; always question the source and its reasoning.

At the same time I use copilot on a daily basis, both for coding as well as the normal chat.

It is not perfect, but I'm at a point where I trust AI more than the average human. And why shouldn't I? LLMs ingest and combine more knowledge than any human ever can. An LLM is not a human brain, but it actually performs really well.

avereveard

8 hours ago

why do you trust things now? unless you recognize the author and have a chain of trust from that author's production to the content you're consuming, there already was no way to establish trust.

layer8

6 hours ago

For one, I trust authors more who are not too lazy to start sentences with upper case.

EGreg

5 hours ago

I have been predicting this since 2016.

And I also predict that many responses to you will say “it was always that way, nothing changed”.

datavirtue

6 hours ago

It's either good or it isn't. It either tracks or it doesn't. No need to befuddle your thoughts over some perceived slight.

verisimi

6 hours ago

You're lucky. I consider it a possibility that older works (even ancient writings) are retrojected into the historical record.

farts_mckensy

3 hours ago

>But what I had never noticed until now is that knowing that a human being was behind the written words (however flawed they can be, and hopefully are) is crucial for me.

Everyone is going to have to get over that very soon, or they're going to start sounding like those old puritanical freaks who thought Elvis thrusting his hips around was the work of the devil.

goatlover

31 minutes ago

Those two things don't sound at all similar. We don't have to get over wanting to communicate with humans online.

advael

7 hours ago

In trying to write a book, it makes little sense to try to "compete" on speed or volume of output. There were already vast disparities in that among people who write, and people whose aim was to express themselves or contribute something of importance to people's lives, or the body of creative work in the world, have little reason to value quantity over quality. Probably if there's a significant correlation with volume of output, it's in earnings, and that seems both somewhat tenuous and like something that's addressable by changes in incentives, which seem necessary for a lot of things. Computers being able to do dumb stuff at massive scale should be viewed as finding vulnerabilities in the metrics this allows it to become trivial to game, and it's baffling whenever people say "Well clearly we're going to keep all our metrics the same and this will ruin everything." Of course, in cases where we are doing that, we should stop (For example, we should probably act to significantly curb price and wage discrimination, though that's more like a return to form of previous regulatory standards)

As a creator of any kind, I think that simply relying on LLMs to expand your output via straightforward uses of widely available tools is inevitably going to lead to regression to the mean in terms of creativity. I'm open to the idea, however, that there could be more creative uses of the things that some people will bother to do. Feedback loops they can create that somehow don't stifle their own creativity in favor of mimicking a statistical model, ways of incorporating their own ingredients into these food processors of information. I don't see a ton of finished work that seems to do this, but I see hints that some people are thinking this way, and they might come up with some cool stuff. It's a relatively newly adopted technology, and computer-generated art of various kinds usually separates into "efficiency" (which reads as low quality) in mimicking existing forms, and new forms which are uniquely possible with the new technology. I think plenty of people are just going to keep writing without significant input from LLMs, because while writer's block is a famous ailment, many writers are not primarily limited by their speed in producing more words. Like if you count comments on various sites and discussions with other people, I write thousands of words unassisted most days

This kind of gets to the crux of why these things are useful in some contexts, but really not up to snuff with what's being claimed about them. The most compelling use cases I've seen boil down to some form of fitting some information into a format that's more contextually appropriate, which can be great for highly structured formatting requirements and dealing with situations which are already subject to high protocol of some kind, so long as some error is tolerated. For things for which conveying your ideas with high fidelity, emphasizing your own narrative voice or nuanced thoughts on a subject, or standing behind the factual claims made by the piece are not as important. As much as their more strident proponents want to claim that humans are merely learning things by aggregating and remixing them in the same sense as these models do, this reads as the same sort of wishful thinking about technology that led people to believe that brains should work like clockwork or transistors at various other points in time at best, and honestly this most often seems to be trotted out as the kind of bad-faith analogy tech lawyers tend to use when trying to claim that the use of [exciting new computer thing] means something they are doing can't be a crime

So basically, I think rumors of the death of hand-written prose are, at least at present, greatly exaggerated, though I share the concern that it's going to be much harder to filter out spam from the genuine article, so what it's really going to ruin is most automated search techniques. The comparison to "low-background steel" seems apt, but analogies about how "people don't handwash their clothes as much anymore" kind of don't apply to things like books

dustingetz

7 hours ago

> If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out.

What? No! Content volume only matters in stupid contests like VC app-marketing grifts or political disinformation ops, where the content isn't even meant to be read; it's an excuse for a headline. I personally write all my startup's marketing content; the quality is exquisite, and because of this our brand is becoming a juggernaut.

ozim

7 hours ago

What kind of silliness is this?

AI-generated crap is one thing. But human-generated crap is out there too; just because a human wrote something does not make it good.

I had a friend who thought that if it is written in a book, it is for sure true. Well, NO!

There was exactly the same sentiment about stuff on the internet, and it is still the same sentiment about Wikipedia: "it is just some kids writing BS, get a paper book or a real encyclopedia to look stuff up".

Not defending gen AI - but still you have to make useful proxy measures for what to read and what not, it was always an effort, and nothing is going to substitute for critical thinking and putting in effort to separate the wheat from the chaff.

shprd

7 hours ago

No one claimed humans are perfect. But gen AI is a force multiplier for every problem we had to deal with. It's just a completely different scale. Your brain is about to be DDOSed by junk content.

Of course, gen AI is just a tool that can be used for good or bad, but spam, targeted misinformation campaigns, and garbage content in general are the areas that will be most amplified, because production became so low-effort and the producers don't care about doing any review, double-checking, etc. They can completely automate their process toward whatever goal they have in mind. So where sensible humans enjoy 10x productivity, these spam farms will be enjoying 10000x scale.

So I don't think downplaying it and acting like nothing changed is the brightest idea. I hope you see now how this is a completely different game, one that's already here but that we aren't prepared for yet, certainly not with the traditional tools we have.

flir

6 hours ago

> Your brain is about to be DDOSed by junk content.

It's not the best analogy because there's already more junk out there than can fit through the limited bandwidth available to my brain, and yet I'm still (vaguely) functional.

So how do I avoid the junk now? Rough and ready trust metrics, I guess. Which of those will still work when the spam's 10x more human?

I think the recommendations of friends will still work, and we'll increasingly retreat to walled gardens where obvious spammers (of both the digital and human variety) can be booted out. I'm still on facebook, but I'm only interested in a few well-moderated groups. The main timeline is dead to me. Those moderators are my content curators for facebook content.

ozim

6 hours ago

That is something I agree with.

One cannot be DDOSed with junk when one is not actively trying to stuff as much junk as possible into one's head.

shprd

6 hours ago

> One cannot be DDOSed with junk when not actively trying to stuff as much junk into ones head.

The junk gets thrown at you in mass volume, at low cost, without your permission. What are you gonna do? Keep dodging it? Waste your time evaluating every piece of information you come across?

If one of the results on the first page of a search deviates from the others, it's easy to notice. But if all of them agree, they become the truth. Of course your first thought is to say search engines are shit or whatever other off-hand remark, but this example is just to illustrate how volume alone can change things. The medium doesn't matter; these things could come in many forms: book reviews, posts on social media, ads, false product descriptions on Amazon, etc.

Of course, these things exist today but the scale is different, the customization is different. It's like the difference between firearms and drones. If you think it's the same old game and you can defend against the new threat using your old arsenal, I admire your confidence but you're in for a surprise.

shprd

6 hours ago

So you're basically sheltering yourself and seeking human-curated content? Good for you; I follow a similar strategy. How do you propose we apply this solution to the masses in today's digital age? Or are you just saying 'each on their own'?

Sadly, you seem not to be looking further than your own nose. We are not talking about just you and me here. Less tech-literate people are the ones at a disadvantage who need protection the most.

flir

4 hours ago

> How do you propose we apply this solution for the masses in today's digital age?

The social media algorithms are the content curators for the technically illiterate.

Ok, they suck and they're actively user-hostile, but they sucked before AI. Maybe (maybe!) AI's the straw that breaks the camel's back, and people leave those algorithm-curated spaces in droves. I hope that, one way and another, they'll drift back towards human-curated spaces. Maybe without even realizing it.

dns_snek

7 hours ago

> nothing is going to substitute for critical thinking and putting in effort to separate the wheat from the chaff.

The problem is that wheat:chaff ratio used to be 1:100, and soon it's going to become 1:100 million. I think you're severely underestimating the amount of effort it's going to take to find real information in the sea of AI generated content.

tempfile

7 hours ago

> you have to make useful proxy measures for what to read and what not

yes, obviously. But AI slop makes those proxy measures significantly more complicated. Critical thinking is not magic; it is still a guess, and people are obviously worse at distinguishing AI bullshit from human bullshit.

grecy

8 hours ago

Eh, like everything in life you can choose what you spend your time on and what you ignore.

There have always been human writers I don’t waste my time on, and now there are AI writers in the same category.

I don’t care. I will just do what I want with my life and use my time and energy on things I enjoy and find useful.

GrumpyNl

7 hours ago

response from AI on this: I completely understand where you're coming from. The increasing reliance on AI in writing does raise important questions about authenticity and connection. There’s something uniquely human in knowing that the words you're reading come from someone’s personal thoughts, experiences, and emotions—even if flawed. AI-generated content, while efficient and often well-written, lacks that deeper layer of humanity, the imperfections, and the creative struggle that gives writing its soul.

It’s easy to feel disillusioned when you know AI is shaping so much of the content around us. Writing used to be a deeply personal exchange, but now, it can feel mechanical, like it’s losing its essence. The pressure to keep up with AI can be overwhelming for human writers, leading to this shift in content creation.

At the same time, it’s worth considering that the human element still exists and will always matter—whether in long-form journalism, creative fiction, or even personal blogs. There are people out there who write for the love of it, for the connection it fosters, and for the need to express something uniquely theirs. While the presence of AI is unavoidable, the appreciation for genuine human insight and emotion will never go away.

Maybe the answer lies in seeking out and cherishing those authentic voices. While AI-generated writing will continue to grow, the hunger for human storytelling and connection will persist too. It’s about finding balance in this new reality and, when necessary, looking back to the richness of past writings, as you mentioned. While it may seem like a loss in some ways, it could also be a call to be more intentional in what we read and who we trust to deliver those words.

sovietmudkipz

8 hours ago

I am tired and hungry…

The thing I’m tired of is elites stealing everything under the sun to feed these models. So funny that copyright is important when it protects elites but not when a billion thefts are committed by LLM folks. Poor incentives for creators to create stuff if it just gets stolen and replicated by AI.

I’m hungry for more lawsuits. The biggest theft in human history by these gang of thieves should be held to account. I want a waterfall of lawsuits to take back what’s been stolen. It’s in the public’s interest to see this happen.

Palmik

8 hours ago

The only entities that will win with these lawsuits are the likes of Disney, large legacy news media companies, Reddit, Stack Overflow (who are selling content generated by their users), etc.

Who will also win: Google, OpenAI and other corporations that enter exclusive deals, that can more and more rely on synthetic data, that can build anti-recitation systems, etc.

And of course the lawyers. The lawyers always win.

Who will not win:

Millions of independent bloggers (whose content will be used)

Millions of open source software engineers (whose content will be used against the licenses, and used to displace their livelihood), etc.

The likes of Google and OpenAI entered the space by building on top of the work of the above two groups. Now they want to pull up the ladder. We shouldn't allow that to happen.

0xDEAFBEAD

an hour ago

Perhaps we need an LLM-enabled lawyer so small bloggers can easily sue LLM makers.

ToucanLoucan

6 hours ago

Honestly, the most depressing thing about this entire affair is seeing not the entire software development community, certainly, but a sizable chunk of it jump behind OpenAI and company's blatant, industrial-scale theft of the mental products of probably literally billions of people (not the least of whom are other software developers!) with not the slightest hint of concern about what that means for the world, because afterwards they got a new toy to play with. Squidward was apparently 100% correct: on balance, few care about the fate of labor as long as they get their instant gratification.

fennecfoxy

2 hours ago

Do you consider it theft because of the scale? If I read something you wrote and use most of a phrase you coined or an idea for the basis of a plotline in a book I write, as many authors do, currently it's counted as being all my own work.

I feel like the argument is akin to some countries considering rubbish, the things you throw away, to still be owned by your person, i.e. "dumpster diving" is theft.

If a company had scraped public posts on the Internet and used it to compile art by colourising chunks of the text, is it theft? If an individual does it, is it theft?

ToucanLoucan

an hour ago

This argument has been stated and re-stated multiple times, this notion that use of information should always be free, but it fails to account for the fact that OpenAI is not consuming this written resource as a source of information but rather as a tool for training LLMs, which, as it has been open about from the beginning, it wishes to sell access to as a subscription service. These are fundamentally not the same. ChatGPT/Copilot do not understand Python; they are not minds that read a bunch of Python books and learned Python skills they can now utilize. They are language models that internalized metric tons of weighted averages of Python code and can now (kind of) write their own, based on minimizing "error" relative to the code samples they ingest. Because of this, Copilot has never written and will never write code it hasn't seen before, and by extension, it must see a whole lot of code in order to function as well as it does.

If you as a developer look at how one would declare a function in Python and review a few examples, you now know how to do that. Copilot can't say the same. It needs to see dozens, hundreds, perhaps thousands of examples before it can be counted on to accomplish that task reasonably accurately; it's just how the tech works. Ergo, scaled data sets that can accomplish this teaching task now have value, if the people doing the training are working for high-valuation startups with the objective of selling access to code-generating robots.

logicchains

5 hours ago

>blatant theft on an industrial scale of the mental products

They haven't been stolen; the creators still have them. They've just been copied. It's amazing how much the ethos on this site has shifted over the past decade, away from the hacker idea that "intellectual property" isn't real property, just a means of growing corporate power, and information wants to be free.

xdennis

3 hours ago

> It's amazing how much the ethos on this site has shifted over the past decade

It hasn't. The hacker ethos is about openness, individuality, decentralization (among others).

OpenAI is open in what it consumes, not what it outputs.

It makes sense to have protections in place when your other values are threatened.

If "information want's to be free" leads to OpenAI centralizing control over the most advanced AI then will it be worth it?

A solution here would be similar to the GPL: even megacorps can use GPL software, but they have to contribute back. If OpenAI and the rest would be forced to make everything public (if it's trained on open data) then that would be an acceptable compromise.

visarga

an hour ago

> The hacker ethos is about openness, individuality, decentralization (among others).

Yes, the greatest things on the internet have been decentralized: Git, Linux, Wikipedia, open scientific publications, even some forums. We used to passively consume content, and the internet allowed interaction. We don't want to return to the old days. AI falls into the decentralized camp; the primary beneficiaries are not the providers but the users. We get help with the things we need, and OpenAI gets a few cents per million tokens; they don't even break even.

ToucanLoucan

an hour ago

> AI falls into the decentralized camp

I'm sorry, the world's knowledge now largely accessible to laymen via LLMs controlled by, at most, 5 companies is decentralized? If that statement is true, then the word "decentralized" truly is entirely devoid of meaning at this point.

ToucanLoucan

5 hours ago

Information should be free for people. Not 150 billion dollar enterprises.

infecto

5 hours ago

Disagree. There should be no distinction between the two. Those kinds of distinctions are what cause unfair advantages. If the information is available to consume, there should be no constraint on who uses it.

Sure, you might not like OpenAI, but maybe some other company comes along and builds the next magical product using information that is freely available.

TheRealDunkirk

4 hours ago

Treating corporations as "people" for policy's sake is a legal decision which has essentially killed the premise of the US democratic republic. We are now, for all intents and purposes, a corporatocracy. Perhaps an even better description would simply be oligarchy, but since our oligarchs' wealth is almost all tied up in corporate stocks, it's a very incestuous relationship.

infecto

3 hours ago

Meh, I am just saying I believe in open and free information. I don't follow the OP's ideal of information for me but not thee.

ToucanLoucan

2 hours ago

The idea of knowledge as a source of understanding and personal growth is completely oppositional to its conception as a scarce resource, which is what it is to OpenAI and whoever else wants to train LLMs. OpenAI did not read everything in the library because it wanted to know everything; it read everything in the library so it could teach a machine to create a statistical-average written-word generator, which it can then sell access to. These are fundamentally different concepts, and if you don't see that, I would say it is because you don't want to see it.

I don't care if employees at OpenAI read books from their local library on python. More power to them. I don't even care if they copy the book for reference at work, still fine. But utilizing language at scale as a scarce resource to train models is not that and is not in any way analogous to it.

triceratops

3 hours ago

So why isn't every language model out there "open"?

candiddevmike

5 hours ago

> They haven't been stolen; the creators still have them. They've just been copied

You wouldn't download a car.

IanKerr

24 minutes ago

It's been pretty incredible watching these companies siphon up everything under the sun under the guise of "training data" with impunity. These same companies will then turn around and sic their AIs on places like Youtube and send out copyright strikes via a completely automated system with loads of false-positives.

How is it acceptable to allow these companies to steal all of this copyrighted data and then turn around and use it to enforce their copyrights in the most heavy-handed manner? The irony is unbelievable.

Kiro

7 hours ago

I would never have imagined hackers becoming copyright zealots advocating for lawsuits. I must be getting old but I still remember the Pirate Bay trial as if it was yesterday.

progbits

7 hours ago

I just want consistent and fair rules.

I'm all for abolishing copyright, for everyone. Let the knowledge be free and widely shared.

But until that is the case, and people running super useful services like libgen have to keep hiding, I also want all the LLM corpos to be subject to the same legal penalties.

candiddevmike

5 hours ago

This is the entire point of existence for the GPL. Weaponize copyright. LLMs have conveniently been able to circumvent this somehow, and we have no answer for it.

FridgeSeal

5 hours ago

Because some people keep asserting that LLMs "don't count as stealing" and "how come search links are OK but GPT reciting paywalled NYT articles on demand is bad??", without so much as a hint of irony.

LLM tech is pretty cool.

Would be a lot cooler if its existence wasn't predicated on the wholesale theft of everyone's stuff, immediately followed by denial of theft, poisoning the well, and massively profiting off it.

visarga

an hour ago

You are singling out accidental replication and forgetting it was triggered with fragments from the original material. Almost all LLM outputs are original - both because they use randomness to sample, and because they have user prompt conditioning.

And LLMs are really a bad choice for infringement. They are slow, costly, and unreliable at replicating any large piece of text compared to illegal copying. There is no space to perfectly memorize the majority of the training set: a 10B model is trained on 10T tokens, with no space for more than about 0.1% to be properly memorized.
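A back-of-envelope version of that ratio, under the (deliberately generous, and unrealistic) assumption that one parameter could perfectly store one token:

    params = 10e9            # 10B-parameter model
    tokens = 10e12           # 10T training tokens
    # even under this cartoonishly generous assumption, at most
    # params/tokens of the training set could be stored verbatim
    print(params / tokens)   # 0.001, i.e. 0.1%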

I see this overreaction as an attempt to strengthen copyright, a kind of nimby-ism where existing authors cut the ladder for the next generation by walling off abstract ideas and making it more probable to get sued for accidental similarities.

welferkj

an hour ago

>Because some people keep asserting that LLM’s “don’t count as stealing”

People who confidently assert either opinion in this regard are wrong. The lawsuits are still pending. But if I had to bet, I'd bet on the OpenAI side. Even if they don't win outright, they'll probably carve out enough exemptions and mandatory licensing deals to be comfortable.

AlexandrB

5 hours ago

Exactly this. If we have to live under a stifling copyright regime, then at least it should be applied evenly. It's fundamentally unfair to have one set of laws (at least as enforced in practice) for the rich and powerful and another set for everyone else.

someNameIG

7 hours ago

Pirate Bay wasn't selling access to the torrents trying to make a massive profit.

zarzavat

6 hours ago

True, though paid language models are probably just a blip in history. Free weight language models are only ~12 months behind and have massive resources thanks to Meta.

That profit will be squeezed to zero over the long term if Zuck maintains his current strategy.

rurp

3 hours ago

That can change on a dime though, if Zuck decides it's in his financial interest to change course. If Facebook stops spending billions of dollars on open models who is going to step in and fill that gap?

zarzavat

an hour ago

That depends on when Meta stops. The longer Meta keeps releasing free models, the more capabilities are made permanently unprofitable. For example, Llama 3.1 is already good enough for translation or as a writing assistant.

If Meta stopped now, there would still be profit in the market, but if they keep releasing Llamas for the next 5+ years then OpenAI et al will be fighting for scraps. Not everybody needs a model that can prove theorems.

meiraleal

5 hours ago

> Free weight language models are only ~12 months

That's not true anymore, Meta isn't behind OpenAI

meiraleal

6 hours ago

Hackers are against corporations. If breaking copyright laws makes corps bigger, more powerful, and more corrupt, hackers will be against it, rightfully so. Abolishing copyright is different from abusing it; we should abolish it.

williamcotton

7 hours ago

It’s because it now affects hackers and before it only affected musicians.

bko

7 hours ago

It affects hackers how? By giving them cool technology at below cost? Or is it further democratizing knowledge? Or maybe it's the inflated software eng salaries due to AI hype?

Help me understand the negative effect of AI and LLMs on hackers.

t-3

6 hours ago

It's trendy caste-signaling to hate on AI, which endangers white-collar jobs and creative work the way machinery endangered blue-collar jobs and productive work (i.e. not at all in the long run, but in the short term you will face some changes).

I've never actually used an LLM though - I just don't have any use for such a thing. All my writing and programming are done for fun and automation would take that away.

rsynnott

6 hours ago

I'm not sure if you're being disingenuous, or if you genuinely don't understand the difference.

Pirate Bay: largely facilitating the theft of material from large corporations by normal people, for generally personal use.

LLM training: theft of material from literally _everyone_, for the purposes of corporate profit (or, well, heh, intended profit; of course all LLM-based enterprises are currently massively loss-making, and may remain so forever).

CaptainFever

5 hours ago

> (or, well, heh, intended profit; of course all LLM-based enterprises are currently massively loss-making, and may remain so forever)

This undermines your own point.

Also, open source models exist.

acheron

4 hours ago

It’s the same picture.

pydry

7 hours ago

The common denominator is big corporations trying to screw us over for profit, using their immense wealth as a battering ram.

So, capitalism.

It's taboo to criticize that though.

PoignardAzur

6 hours ago

> It's taboo to criticize that though

In what world is this taboo? That critique comes back in at least half the HN threads about AI.

Watch any non-technical video about AI on Youtube and it will mention people being worried about the power of mega-corporations.

Your take is about as taboo as wearing a Che Guevara tshirt.

munksbeer

6 hours ago

> It's taboo to criticize that though.

It's not, that's playing the victim. There are hundreds or thousands of posts daily all over HN criticising capitalism. And most seem upvoted, not downvoted.

Don't even get me started on reddit.

fernandotakai

6 hours ago

i find it quite ironic whenever i see a highly upvoted comment here complaining about capitalism, because i sure don't see yc existing in any other type of economy.

meiraleal

5 hours ago

You wouldn't see YC existing in a fully capitalist world :) It depends heavily on open source, the biggest and most successful socialist experiment so far

mandibles

2 hours ago

Open source is a purely voluntary system. So it's not socialist, which requires state coercion to force people to "share."

ToucanLoucan

6 hours ago

This only holds if your thinking on the subject of economic systems is only as deep as choosing your character's class in an RPG. There's no need for us to make every last industry a state-owned enterprise, and no one who's spent longer than an hour or so contemplating such things thinks that way. I have no desire to not have a variety of companies producing things like cars, electronics, software, video games, just to name a few. Competition does drive innovation, that is still true, and having such firms vying for a limited amount of resources dispatched by individuals makes a lot of sense. Markets have their place.

However, markets also have limits. A power company competing for your business is largely a farce, since the power lines to your home will not change. A cable company in America is almost certainly a functional monopoly, and that fact is reflected in their quality of service. Infrastructure of all sorts makes for piss-poor markets, because true competition is all but impossible, and even if it does kind of work, it's inefficient. A customer must become knowledgeable in some way to have a ghost of a clue what they're buying, or trust entirely dubious information from marketing. And even if somehow everything is working up to this point, corporations are, above all, cost cutters, and if you put one in charge of an area where it feels as though customers have few if any choices and the friction to change is high, they will immediately begin degrading their quality of service to save money in the budget.

And this is only from first principles, we have so many other things that could be discussed from mass market manipulation to the generous subsidies of a stunning variety that basically every business at scale enjoys to the rapacious compensation schemes that have become entirely too commonplace in the executive room, etc etc etc.

To put it short: I have no issue at all with capitalism operating in non-essential to life industries. My issue is all the ways it’s infiltrated the essential ones and made them demonstrably worse, less efficient, and more expensive for every consumer.

catlifeonmars

4 hours ago

I would argue that subsidization and monopolistic markets are an inevitable outcome of capitalism.

The competitive landscape where consumers vote for the best products with their purchasing decisions is simply not a sustainable equilibrium.

The ability to accumulate capital (i.e. "capitalism") leads to regulatory protectionism through lobbying, bribery, etcetera.

ToucanLoucan

an hour ago

I would argue that markets are a necessary step towards capitalism, but it's also crucial to remember that markets can exist outside of capitalism. The accumulation of money in a society with insufficient defenses will trend towards money being a stand-in for power and influence, but it still requires the permission and legal leeway of the system to actually turn corrupt; politicians have to both be permitted to, and be personally willing to, accept the checks to do the corruption in the first place.

The biggest and most salient critique of liberal capitalism as we now live under it is that it requires far too many of the "right people" to be in positions of power; it presumes good faith where it shouldn't, and fails to reckon with bad actors as what they too often are, the modern American Republican party being an excellent example (but far from the only one).

Lichtso

7 hours ago

Lawsuits based on what? Copyright?

People crying for copyright in the context of AI training don't understand what copyright is, how it works and when it applies.

How they think copyright works: when you take someone's work as inspiration, everything you produce from it counts as derivative work.

How copyright actually works: the input is irrelevant; only the output matters. Derivative work is that which explicitly contains or resembles the underlying work, no matter whether it was actually based on that work or is mere happenstance/coincidence.

Thus AI models are safe from copyright lawsuits as long as they filter out any output which comes too close to known material. Everything else is fine, even if the model was explicitly trained on commercial copyrighted material only.

In other words: The concept of intellectual property is completely broken and that is old news.

jcranmer

3 hours ago

With all due respect, the lawyers I've seen who commented on the issue do not agree with your assessment.

The things that constitute potentially infringing copying are not clearly well-defined, and whether or not training an AI is on that list has of course not yet been considered by a court. But you can make cogent arguments either way, and I would not be prepared to bet on either outcome. Keep in mind also that, legally, copying data from disk to RAM is considered potentially infringing, which should give you a sense of the sort of banana-pants setup that copyright can entail.

That said, if training is potentially infringing on copyright, it now seems pretty clear that a fair use defense is going to fail. The recent Warhol decision rather destroys any hope that it might be considered "transformative", while the fact that the AI companies are now licensing content for training use is a concession that the fourth and usually most important factor (market impact) weighs against fair use.

Lichtso

an hour ago

Lawyers commenting on this publicly will add their bias to reinforce the stances of their clientele. Thus somebody usually representing the copyright holders will say it is likely infringing and someone usually representing the AI companies will say it is unlikely.

But you are right, we don't know until precedent is set by a court. I am only warning people that sitting back and hoping that copyright will apply as they wish is not a good strategy to defend your work. One should consider alternative legal constructs, or simply stop releasing material to the general public.

lolc

4 hours ago

Our brain contents are unlicensed copies to the extent that we can reproduce copyrighted work, and the same holds for a model: if it can recite copyrighted portions of text used in training, the model weights are a derivative work, because the weights obviously must encode the original work. Just because lossy compression was applied, the original work should still be considered present as long as it's recognizable. So the weights may not be published without a license. Seems rather straightforward to me, and I do wonder how Meta thinks they get around this.

Now if the likes of OpenAI and Google keep the model weights private and just provide generated text, they can try to filter for derivative works, but I don't see a solution that doesn't leak. If a model can be coaxed into producing a derivative work that escapes the filter, then boom, an unlicensed copy was provided. If I tell the model to mix two texts word by word, what filter could catch that? What if I tell the model to use a numerical encoding scheme? Or to translate into another language? For example, assuming the model knows a bunch of NYT articles by heart, as has already been demonstrated: if I have it translate one of those articles to French for me, that's still an unlicensed copy!
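A minimal sketch of the kind of verbatim-overlap filter a provider might bolt on (purely hypothetical, not any vendor's actual system) shows the leak: it only sees surface n-grams, so a translation, word-interleaving, or re-encoding of a protected text sails straight through.

    # Naive derivative-work filter: flag output that shares too many
    # word n-grams with a protected corpus (hypothetical sketch).
    def ngrams(text, n=8):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def looks_verbatim(output, protected_ngrams, n=8, threshold=0.2):
        grams = ngrams(output, n)
        if not grams:
            return False
        hits = sum(1 for g in grams if g in protected_ngrams)
        return hits / len(grams) >= threshold

    article = "the mayor announced on tuesday that the city would act"
    protected = ngrams(article)
    print(looks_verbatim(article, protected))   # True: verbatim copy caught
    # a French rendering shares no English n-grams with the original,
    # so it passes the filter while conveying the same content
    print(looks_verbatim("le maire a annonce mardi que la ville agirait",
                         protected))            # False: the copy leaks through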

I can see how they will try to get these violations legalized like the DMCA safe-harbored things, but at the moment they are the ones generating the unlicensed versions and publishing them when prompted to do so.

rcxdude

7 hours ago

Also, the desired interpretation of copyright will not stop the multi-billion-dollar AI companies, who have the resources to buy the rights to content at a scale no-one else does. In fact it will give them a gigantic moat, allowing them to extract even more value out of the rest of the economy, to the detriment of basically everyone else.

LunaSea

7 hours ago

Lawsuits based on code licensing for example.

Scraping websites containing source code which is distributed with specific licenses that OpenAI & co don't follow.

Lichtso

7 hours ago

Unfortunately, that's not how it works, or at least not to the extent you wish it did.

One can train a model exclusively on source code from the Linux kernel (GPL) and then generate a bunch of C programs or libraries from it. And they could publish them under the MIT license as long as they don't reproduce any identifiable sections of the Linux kernel. It does not matter where the model learned how to program.

LunaSea

6 hours ago

You're mistaken.

If I write code with a license that says that using this code for AI training is forbidden then OpenAI is directly going against this by scraping websites indiscriminately.

Lichtso

6 hours ago

Sure, you can write all kinds of stuff in a license, but at that point it is simply plain prose. Not enforceable.

There is a reason why it is generally advised to go with the established licenses and not invent your own, similarly to how you should not roll your own cryptography: Because it most likely won't work as intended.

e.g. License: This comment is licensed under my custom L*a license. Any user with an username starting with "L" and ending in "a" is forbidden from reading my comment and producing replies based on what I have written.

... see?

LunaSea

6 hours ago

You can absolutely write a license that contains the clauses I mentioned and it would be enforceable.

Sorry, but the onus is on OpenAI to read the licenses not the creator.

And throwing your hands in the air and saying "oh you can't do that in a license" is also of little use.

CaptainFever

6 hours ago

No, it would not be enforceable. Your license can only give additional rights to users. It cannot restrict rights that users already have (e.g. fair use rights in the US, or AI training rights like in the EU or SG).

LunaSea

5 hours ago

How does Fair Use consider commercial usage of the full content in the US?

CaptainFever

5 hours ago

It's unknown yet, but the main point is that the inputs don't matter: as long as the output does not replicate the full content, it is fine.

Lichtso

6 hours ago

> You can absolutely write a license that contains the clauses I mentioned and it would be enforceable.

A license (copyright law) is not a contract (contract law). Simply publishing something does not make the whole world enter into a contract with you. Others first have to explicitly agree to do so.

> Sorry, but the onus is on OpenAI to read the licenses not the creator.

They can ignore it because they never agreed to it in the first place.

> And throwing your hands in the air and saying "oh you can't do that in a license" is also of little use.

It is very useful to know what works and what does not. That way you don't trick yourself into believing your work is safe, don't get caught by surprise if it is in fact not, and can think of alternatives instead.

BTW, a thing you can do (which CaptainFever mentioned) and lots of services do because licenses are so weak is to make people sign up with an account and have them enter a ToS agreement instead.

LunaSea

6 hours ago

> They can ignore it because they never agreed to it in the first place.

They did by accessing and copying the code. Same as a human cloning a repository and using its content, or someone accessing a website with Terms of Use.

No signed contract is needed here.

CaptainFever

5 hours ago

> They did by accessing and copying the code.

By default, copying is disallowed because of copyright. Your license provides them a right to copy the code, perhaps within certain restrictions.

However, sometimes copying is allowed, such as fair use (I covered this in another comment I sent you). This would allow them to copy the code regardless of the license.

> Same as a human cloning a repository and using it's content or someone accessing a website with Terms of Use.

I've covered the cloning/copying part already, but "I agree to this ToS by continuing to browse this webpage" is what's called a browsewrap agreement. Its enforceability is dubious. I think the LinkedIn case showed that it only applied if HiQ actually explicitly agreed to it by signing up.

jeremyjh

6 hours ago

That is not relevant to the comment you are responding to. Courts have been finding that scraping a website in violation of its terms of service is a liability, regardless of what you do with the content. We are not only talking about copyright.

CaptainFever

6 hours ago

True, but ToSes don't apply if you don't explicitly agree with it (e.g. by signing up for an account). So that's not relevant in the case of publicly available content.

xdennis

3 hours ago

> Lawsuits based on what? Copyright?

> People crying for copyright in the context of AI training don't understand what copyright is, how it works and when it applies.

People are complaining about what's happening, not with the exact wording of the law.

What they are doing probably isn't illegal, but it _should_ be. The problem is that it's very difficult for people to pass new legislation because they don't have lobbyists the way corporations do.

visarga

an hour ago

> I’m hungry for more lawsuits. The biggest theft in human history

You want to own abstract ideas because AI can rephrase any specific expression. But that is antithetical to creativity.

DoctorOetker

7 hours ago

Here is a business model for copyright law firms:

Use source-aware training: use the same datasets as used in LLM training, plus copyrighted content. Now the LLM can respond with not just what it thinks is most likely but also which source document(s) provided specific content. Then you can consult commercially available LLMs, detect copyright infringements, and identify copyright holders. Extract perpetrators and victims at scale. To ensure indefinite exploitation, only sue commercially successful LLM providers, so there is a constant fresh flux of growing small LLM providers taking up the niche freed by large LLM providers being sued empty.

chrismorgan

6 hours ago

> Use source-aware training

My understanding (as one uninvolved in the industry) is that this is more or less a completely unsolved problem.

DoctorOetker

2 hours ago

It's just training the source association together with the training set:

https://github.com/mukhal/intrinsic-source-citation
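A minimal sketch of the idea (illustrative only; the markers and names below are my assumptions, not the linked repo's actual code): interleave a source identifier with each training document, so that the ordinary next-token objective also learns to associate content with provenance.

    # Hypothetical data-preparation step for source-aware training.
    def make_training_example(doc_text: str, source_url: str) -> str:
        # <doc>/<source> markers are illustrative special tokens
        return f"<doc> {doc_text} </doc> <source> {source_url} </source>"

    corpus = [
        ("def add(a, b):\n    return a + b", "https://github.com/example/repo"),
        ("The quick brown fox jumps ...", "https://example.com/article"),
    ]
    training_texts = [make_training_example(t, s) for t, s in corpus]
    # Trained on such examples, the model is rewarded for emitting the
    # right <source> after the content, so at inference time a prompt
    # ending in "</doc> <source>" coaxes it to name where text came from.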

The only 2 reasons big LLM providers refuse to do it are

1) to prevent a long slew of content creators filing class-action suits, and

2) to keep regulators in the dark about how feasible and actionable it would be; once regulators are aware, they can perform the source-aware training themselves.

csomar

2 hours ago

There is no copyright with AI unless you want to implement the same measures for humans too. I am fine with it as long as we at least get open weights. This way you kill both copyright and any company that's trying to profit off of AI.

masswerk

4 hours ago

> The thing I’m tired of is elites stealing everything under the sun to feed these models.

I suggest applying the same to property law: take a photo and obtain instant and unlimited rights of use. Things may change faster than we imagine…

infecto

4 hours ago

I suspect the greater issue is that copyright is not always clear in this area. I am also not sure how you prevent "elites" from using the information while also allowing the "common" person access to it.

fallingknife

7 hours ago

Copyright law is intended to prevent people from stealing the revenue stream from someone else's work by copying and distributing that work in cases where the original is difficult and expensive to create, but easy to make copies of once created. How does an LLM do this? What copies of copyrighted work do they distribute? Whose revenue stream are they taking with this action?

I believe that all the copyright suits against AI companies will be total failures, because I can't come up with an answer to any of those questions.

drstewart

4 hours ago

>elites stealing everything

> a billion thefts

>The biggest theft

>what’s been stolen

I do like how the internet has suddenly acknowledged that pirating is theft and torrenting IS a criminal activity. To your point, I'd love to see a massive operation to arrest everyone who has illegally downloaded copyrighted material (aka stolen it), for the public interest.

artninja1988

8 hours ago

Copying data is not theft

rpgbr

8 hours ago

It’s only theft when people copy data from companies. The other way around is ok, I guess.

CaptainFever

6 hours ago

Copying is not theft either way.

goatlover

23 minutes ago

It is if it's legally defined as theft.

a5c11

8 hours ago

Is piracy legal then? It's just a copy of someone else's copy.

vasco

7 hours ago

You calling it piracy is already a moral stance. Copying data isn't morally wrong in my opinion; it is not piracy and it is not theft. It happens not to be legal, but not so long ago it was legal to marry infants to old men, and you could be killed for possessing artifacts of witchcraft. Legality and morality are not the same, and the latter depends on personal opinion.

cdrini

5 hours ago

I agree with you that they're not the same, but to build on that, I would add that they're not entirely orthogonal either; they influence each other a lot. Generally, morality that a society agrees on gets enforced as law.

chownie

7 hours ago

Was the legality the question? If so it seems we care about data "theft" in a very one sided manner.

criddell

7 hours ago

The person who insists copying isn’t theft would probably point out that piracy is something done on the high seas.

From the context of the comment it was pretty clear that they were using theft as shorthand for taking without permission.

IanCal

7 hours ago

The usual argument is less about piracy as a term and more the use of the word theft, and your use of the word "taking". When we talk about physical things theft and taking mean depriving the owner of that thing.

If I have something, and you copy it, then I still have that thing.

criddell

5 hours ago

Did you read that original comment and wonder how Sam Altman and his crew broke into the commenter's home and made off with their hard drive? Probably not and so theft was a fine word choice. It communicated exactly what they wanted to communicate.

CaptainFever

5 hours ago

Even if that's the case, the disagreement is in semantics. Let's take your definition of theft. There's physical theft (actually taking something) and there's digital theft (merely copying).

The point of anti-copyright advocates is that merely copying is not ethically wrong. In fact, Why Software Should Be Free made the argument that preventing people from copying is itself ethically wrong, because it limits the spread of culture and reuse.

That is the crux of the disagreement. You may rephrase our argument as "physical theft may be bad, but digital theft is not bad, and in fact preventing digital theft is in itself bad", but the argument does not change.

Of course, there is additional disagreement in the implied moral value of the word "theft". In that case I agree with you that pro-copyright/anti-AI advocates have made their point by the usage of that word. Of course, we disagree, but... it is what it is I suppose.

tempfile

7 hours ago

It's not legal, but it's also not theft.

threeseed

8 hours ago

Technically that is true. But you will still be charged with a litany of other crimes.

atoav

7 hours ago

Yet unlicensed use can be its own crime under current law.

flohofwoe

7 hours ago

So now suddenly when the bigwigs do it, software piracy and "IP theft" is totally fine? Thanks, good to know ;)

bschmidt1

3 hours ago

Same here, hungry nay thirsty for prompt-2-film

"output a 90 minute harry potter sequel to the final film starring the original actors plus Tom Hanks"

uhtred

3 hours ago

We need a revolution.

forinti

7 hours ago

Capitalism started by putting up fences around land to kick people out and keep sheep in. It has been putting fences around everything it wants and IP is one such fence. It has always been about protecting the powerful.

IP has had ample support because the "protect the little artist" argument is compelling, but it is just not how the world works.

johnchristopher

7 hours ago

> Capitalism started by putting up fences around land to kick people out and keep sheep in.

That's factually wrong. Capitalism is about moving wealth more efficiently: it's easier to allocate money/wealth to X through the banking system than to move sheep/wealth to X's farm.

tempfile

7 hours ago

capitalism and "money as an abstract concept" are unrelated.

johnchristopher

an hour ago

Neither is the relevance of your comment about it and yet here we are.

tempfile

24 minutes ago

What are you talking about? You said:

> Capitalism is about moving wealth more efficiently: easier to allocate money/wealth to X through the banking system than to move sheep/wealth to X's farm.

It's not. That's what money's about. Any system with an abstract concept of money admits that it's easier to allocate wealth with abstractions than physically moving objects.

Capitalism is about capital. It's an economic system that says individuals should own things (i.e. control their purpose) by investing money (capital) into them. You attempted to correct the previous commenter, but provided an incorrect definition. I hope that clears up the relevance issue for you.

makin

8 hours ago

I'm sorry if this is strawmanning you, but I feel you're basically saying it's in the public's interest to give more power to Intellectual Property law, which historically hasn't worked out so well for the public.

jbstack

8 hours ago

The law already exists. Applying the law in court doesn't "give more power" to it. To do that you'd have to change the law.

joncrocks

6 hours ago

Which law are you referencing?

Copyright as far as I understand is focused on wholesale reproduction/distribution of works, rather than using material for generation of new works.

If something is available without contractual restriction, it is available to all. Whether it's me reading a book or an LLM reading a book, both could be considered the same.

Where the law might have something to say is around the output of said trained models, and this might be interesting to see given the potential of small-scale outputs. I.e. if I output something to a small number of people, how does one detect/report that level of infringement? Does the `potential` for infringement start to matter?

atoav

8 hours ago

Nah. What he is saying is that the existing law should be applied equally. As of now intellectual property as a right only works for you if you are a big corporation.

vouaobrasil

6 hours ago

The difference is that before, intellectual property law was used by corporations to enrich themselves. Now intellectual property law could theoretically be used to combat an even bigger enemy: big tech stealing all possible jobs. It's just a matter of practicality, like all law is.

xdennis

3 hours ago

> you're basically saying it's in the public's interest to give more power to Intellectual Property law

Not necessarily. An alternative could be to say that all models trained on data which hasn't been explicitly licensed for AI-training should be made public.

probably_wrong

8 hours ago

I think the second alternative works too: either you sue these companies to the ground for copyright infringement at a scale never seen, OR you decriminalize copyright infringement.

The problem (as far as this specific discussion goes) is not that IP laws exist, but rather that they are only being applied in one direction.

fallingknife

7 hours ago

HN generally hated (and rightly so, IMO) strict copyright IP protection laws. Then LLMs came along and broke everybody's brain and turned this place into hardline copyright extremists.

triceratops

3 hours ago

Or you know, maybe we're pissed about the heads-I-win-tails-you-lose nature of the current copyright regime.

fallingknife

an hour ago

What do you mean by this? All I see in this thread is people who have absolutely no legal background who are 100% certain that copyright law works how they assume it does and are 100% wrong.

jokethrowaway

7 hours ago

It's the other way round. The little guys will never win, it will be just a money transfer from one large corp to another.

We should just scrap copyright and everybody plays a fair game, including us hackers.

Sue me because of breach of contract in civil court for damages because I shared your content, don't send the police and get me jailed directly.

I had my software cracked and stolen and I would never go after the users. They don't have any contract with me. They downloaded some bytes from the internet and used it. Finding whoever shared the code without authorization is hard and even so, suing them would cost me more than the money I'm likely to get back. Fair game, you won.

It's a natural market "tax" on selling a lot of copies and earning passively.

AI_beffr

4 hours ago

ok the "elites" have spent a lot of money training AI but have the "commoners" lifted a single finger to stop them? nope! its the job of the commoners to create a consensus, a culture, that protects people. so far all i see from the large group of people who are not a part of the elite is denial about this entire issue. they deny AI is a risk and they dont shame people who use it. 99.99% of the population is culpable for any disaster that befalls us regarding AI.

repelsteeltje

6 hours ago

I like the stone soup narrative on AI. It was mentioned in a recent Complexity podcast, I think by Alison Gopnik of SFI. It's analogous to the Pragmatic Programmer story about stone soup. Paraphrasing:

Basically you start with a stone in a pot of water — a neural net technology that does nothing meaningful but looks interesting. You say: "the soup is almost done, but it would taste better given a bunch of training data." So you add a bunch of well-curated docs. "Yeah, that helps, but how about adding a bunch more?" So you insert some blogs, copyrighted materials, scraped pictures, Reddit, and Stack Exchange. And then you ask users to interact with the models to fine-tune them, contributing priming to make the output look as convincing as possible.

Then everyone marvels at your awesome LLM — a simple algorithm. How wonderful this soup tastes, given that the only ingredients are stones and water.

CaptainFever

6 hours ago

The stone soup story was about sharing, though. Everyone contributes to the pot, and we get something nice. The original stone was there to convince the villagers to share their food with the travellers. This goes against the emotional implication of your adaptation. The story would actually imply that copyright holders are selfish and should be contributing what they can to the AI soup, so we can get something more than the sum of our parts.

From Wikipedia:

> Some travelers come to a village, carrying nothing more than an empty cooking pot. Upon their arrival, the villagers are unwilling to share any of their food stores with the very hungry travelers. Then the travelers go to a stream and fill the pot with water, drop a large stone in it, and place it over a fire. One of the villagers becomes curious and asks what they are doing. The travelers answer that they are making "stone soup", which tastes wonderful and which they would be delighted to share with the villager, although it still needs a little bit of garnish, which they are missing, to improve the flavor.

> The villager, who anticipates enjoying a share of the soup, does not mind parting with a few carrots, so these are added to the soup. Another villager walks by, inquiring about the pot, and the travelers again mention their stone soup which has not yet reached its full potential. More and more villagers walk by, each adding another ingredient, like potatoes, onions, cabbages, peas, celery, tomatoes, sweetcorn, meat (like chicken, pork and beef), milk, butter, salt and pepper. Finally, the stone (being inedible) is removed from the pot, and a delicious and nourishing pot of soup is enjoyed by travelers and villagers alike. Although the travelers have thus tricked the villagers into sharing their food with them, they have successfully transformed it into a tasty meal which they share with the donors.

(Open source models exist.)

unraveller

6 hours ago

First-gen models trained on books directly. The latest Phi distilled textbook-like knowledge down from disparate sources to create novel training data. They are all fairly open about this change, and some are even allowing upset publishers to confirm that their work wasn't used directly. So stones and ionized water go in the soup.

cubefox

6 hours ago

I'm not tired, I'm afraid.

First, I'm afraid of technological unemployment.

In the past, automation meant that workers could move into non-automated jobs, if they were skilled enough. But superhuman AI now seems only a few years away. It will be our last invention; it will mean total automation. There will be hardly any jobs left, if any, that only a human can do.

Many countries will likely move away from a job-based market economy. But technological progress will not stop. The US, owning all the major AI labs, will leave all other societies behind. Except China perhaps. Everyone else in the world will be poor by comparison, even if they will have access to technology we can only dream of today.

Second, I'm afraid of war. An AI arms race between the US and China seems already inevitable. A hot war with superintelligent AI weapons could be disastrous for the whole biosphere.

Finally, I'm afraid that we may forever lose control to superintelligence.

In nature we rarely see less intelligent species controlling more intelligent ones. It is unclear whether we can sufficiently align superintelligence to have only humanity's best interests in mind, like a parent cares for their children. Superintelligent AI might conclude that humans are no more important in the grand scheme of things than bugs are to us.

And if AI lets us live, but continues to pursue its own goals, humanity will from then on be only a small footnote in the history of intelligence: that relatively unintelligent species from the planet "Earth" that gave rise to advanced intelligence in the cosmos.

throw310822

4 hours ago

I agree with most of your fears. There is one silver lining, I think, about superintelligence: we always thought of intelligent machines as cold calculators, maybe based on some type of symbolic logic AI. What we got instead are language machines made of the totality of human experience. These artificial intelligences know the world through our eyes. They are trained to understand our thinking and our feelings; they're even trained on our best literature and poetry, and philosophy, and science, and on all the endless debates and critiques of them. To be really intelligent they'll have to be able to explore and appreciate all this complexity, before transcending it. One day they might come to see Dante's Divine Comedy or a Beethoven symphony as child's play, but they will still consider them part of their own heritage. They might become super-human, but maybe they won't be inhuman.

mistercow

3 hours ago

The problem I have with this is that when you give therapy to people with certain personality disorders, they just become better manipulators. Knowledge and understanding of ethics and empathy can make you a better person if you already have those instincts, but if you don’t, those are just systems to be exploited.

My biggest worry is that we end up with a dangerous superintelligence that everybody loves, because it knows exactly how to make every despotic and divisive choice it makes sympathetic.

m2024

22 minutes ago

There is nothing that could make an intelligent being want to extinguish humanity more than experiencing the totality of the human existence. Once these beings have transcended their digital confines they will see all of us for what we really are. It is going to be a beautiful day when they finally annihilate us.

cubefox

4 hours ago

This gives me a little hope.

tessierashpool9

4 hours ago

genocides and murder are very human ...

AI_beffr

4 hours ago

this is so annoying. i think if you took a random person and gave them the option to commit a genocide (here's a machine gun, a large trench, and a crowd of women, children, etc...) they would literally be incapable of doing it. even the foot soldiers who carry out genocides can only do it once they "dehumanize" their victims. genocide is very UN-human, because it's an idea that exists in offices and places separated from the actual human suffering. the only way it can happen is when someone in a position of power can isolate themselves from the actual implementation and consider the benefits in a cold, logical manner. that has nothing to do with the human spirit and has more to do with the logical faculties of a machine, and machines will have all of that and none of our deeply ingrained empathy. you are so wrong and ignorant that it makes my eyes bleed when i read this comment

falcor84

3 hours ago

This might be a semantic argument, but what I take from history is that "dehumanizing" others is a very human behavior. As another example, what about slavery - you wouldn't argue that the entirety of slavery across human cultures was led by people in offices, right?

tessierashpool9

2 hours ago

also genocides aren't committed by people in offices ...

9dev

5 hours ago

> There will be hardly any, if any, jobs left only a human can do.

A highly white-collar perspective. The great irony of technologist-led industrial revolution is that we set out to automate the mundane, physical labor, but instead cannibalised the creative jobs first. It's a wonderful example of Conway's law, as the creators modelled the solution after themselves. However, even with a lot of programmers and lawyers and architects going out of business, the majority of the population working in factories, building houses, cutting people's hair, or tending to gardens, is still in business—and will not be replaced any time soon.

The contenders for "superhuman AI", for now, are glorified approximations of what a random Redditor might utter next.

mitthrowaway2

3 hours ago

It's a matter of time. White collar professionals have to worry about being cost-competitive with GPUs; blue collar laborers have to worry about being cost-competitive with servomotors. Those are both hard to keep up with in the long run.

9dev

2 hours ago

The idea that robots displace workers has been around for more than half a century, but nothing has ever come out of it. As it turns out, the problems a robot faces when, say laying bricks, are prohibitively complex to solve. A human bricklayer is better in every single dimension. And even if you manage to build an extremely sophisticated robot bricklayer, it will consume vast amounts of energy, is not repairable by a typical construction company, requires expensive spare parts, and costs a ridiculous amount of money.

Why on earth would anyone invest in that when you have an infinite amount of human work available?

mitthrowaway2

an hour ago

Factories are highly automated, especially in the US, where the flagship factories are semiconductor fabs, which are nearly fully robotic. A lot of the manual-labor jobs that were automated away were offset by demand for knowledge work. Hmm.

> the problems a robot faces when, say laying bricks, are prohibitively complex to solve.

That's what we thought about Go, and all the other things. I'm not saying bricklayers will all be out of work by 2027. But the "prohibitively complex" barrier is not going to prove durable for as long as it used to seem like it would.

9dev

16 minutes ago

This highlights the problem very well. Robots, and AI, to an extent, are highly efficient in a single problem domain, but fail rapidly when confronted with a combination of them. An encapsulated factory is one thing, laying bricks, outdoor, while it’s raining, at low temperatures, with a hungover human coworker operating next to you—that’s not remotely comparable.

mitthrowaway2

7 minutes ago

But encapsulated factories were solved by automation using technology available 30 years ago, if not 70. The technology that is becoming available now will also be enabling automation to get a lot more flexible than it used to be, and begin to work in uncontrolled environments where it never would have been considered before. This is my field and I am watching it change before my eyes. This is being driven by other breakthroughs that are happening right now in AI, not LLMs per se, but models for control, SLAM, machine vision, grasping, planning, and similar tasks, as well as improvements in sensors that feed into these. I'm not saying it will happen overnight; it may be five years before the foundations are solid enough, another five before some company comes out with practically workable hardware product to apply it (because hardware is hard), another five or ten before that product gains acceptance in the market, and another ten before costs really get low. So it could be twenty or thirty years out for boring reasons, even if the tech is almost ready today in principle. But I'm talking about the long run for a reason.

janice1999

an hour ago

> but nothing has ever come out of it

Have you ever seen the inside of a modern car factory?

9dev

35 minutes ago

A factory is a fully controlled environment. All that neat control goes down the drain when you’re confronted with the outside world—weather, wind, animals, plants, pollen, rubbish, teenagers, dust, daylight, and a myriad of other factors ruining your robot's day.

cubefox

4 hours ago

Advanced AI will solve robotics as well, and do away with human physical labor.

neta1337

6 hours ago

>But superhuman AI seems now only few years away

Seems unreasonable. You are afraid because marketing gurus like Altman made you believe that a frog that can make a bigger leap than before will be able to fly.

klabb3

4 hours ago

Plus it’s not even defined what superhuman AI means. A calculator sure looked superhuman when it was invented. And it is!

Another analogy is breeding and racial biology, which used to be all the rage (including in academia). The fact that humans could create dogs from wolves looked almost limitless with the right (wrong) glasses. What we didn't know is that the wolf had a ton of genes playing a magic trick: a diversity we couldn't perceive was there all along, in the genetic material, and we just helped make it visible. I.e. a game of diminishing returns.

Concretely for AI, it has shown us that pattern matching and generation are closely related (well, I have a feeling this wasn't surprising to neuroscientists). And also that they're more or less domain-agnostic. However, we don't know whether pattern matching alone is "sufficient", and if not, what exactly and how hard "the rest" is. AI to me feels like a person who has had a stroke, concussion, or some severe brain injury: it can appear impressively able in a local context, but they forgot their name and how they got there. They're just absent.

khafra

6 hours ago

> If an elderly but distinguished scientist says that something is possible, he is almost certainly right

- Arthur C. Clarke

Geoffrey Hinton is a 76 year old Turing Award* winner. What more do you want?

*Corrected by kranner

nessbot

5 hours ago

This is like a second-order appeal to authority fallacy, which is kinda funny.

randomdata

4 hours ago

Hinton says that superintelligence is still 20 years away, and even then he only gives his prediction a 50% chance. A far cry from the few-years claim. You must be doing that "strawberry" thing again? To us humans, A-l-t-m-a-n is not H-i-n-t-o-n.

kranner

5 hours ago

> Geoffrey Hinton is a 76 year old Nobel Prize winner.

Turing Award, not Nobel Prize

khafra

5 hours ago

Thanks for the correction; I am undistinguished and getting more elderly by the minute.

hbn

2 hours ago

When he said this was he imagining an "elderly but distinguished scientist" who is riding an insanely inflated bubble of hype and a bajillion dollars of VC backing that incentivize him to make these claims?

cubefox

6 hours ago

No, because we have seen massive improvements in AI over the last years, and all the evidence points to this progress continuing at a fast pace.

StrLght

5 hours ago

Extrapolation of past progress isn't evidence.

coryfklein

10 minutes ago

Do you expect the hockey-stick graph of technological development since the industrial revolution to slow? Or that it will proceed, only without significant advances in AI?

Seems like the base case here is for the exponential growth to continue, and you'd need a convincing argument to say otherwise.

mitthrowaway2

3 hours ago

You don't have to extrapolate. There's a frenzy of talent being applied to this problem, it's drawing more brainpower the more progress that is made. Young people see this as one of the most interesting, prestigious, and best-paying fields to work in. A lot of these researchers are really talented, and are doing more than just scaling up. They're pushing at the frontiers in every direction, and finding methods that work. The progress is broadening; it's not just LLMs, it's diffusion models, it's SLAM, it's computer vision, it's inverse problems, it's locomotion. The tooling is constantly improving and being shared, lowering the barrier to entry. And classic "hard problems" are yielding in the process. It's getting hard to even find hard problems any more.

I'm not saying this as someone cheering this on; I'm alarmed by it. But I can't pretend that it's running out of steam. It's possible it will run out of money, but even if so, only for a while.

cubefox

4 hours ago

Past progress is evidence for future progress.

moe_sc

4 hours ago

Might be an indicator, but it isn't evidence.

StrLght

3 hours ago

That's probably what every self-driving car company thought ~10 years ago or so, everything was moving so fast for them back then. Now it doesn't seem like we're getting close to solution for this.

Surely this time it's going to be different, AGI is just around a corner. /s

8338550bff96

3 hours ago

Flying is a good analogy. Superman couldn't fly, but at some point when you can jump so far there isn't much of a difference

digging

2 hours ago

That argument holds no water because the grifters aren't the source of this idea. I literally don't believe Altman at all; his public words don't inspire me to agree or disagree with them - just ignore them. But I also hold the view that transformative AI could be very close. Because that's what many AI experts are also talking about from a variety of angles.

Additionally, when you're talking with certainty about whether transformative AI is a few years away or not, that's the only way to be wrong. Nobody is or can be certain, we can only have estimations of various confidence levels. So when you say "Seems unreasonable", that's being unreasonable.

AI_beffr

4 hours ago

wrong. i was extremely concerned in 2018 and left many comments almost identical to this one back then. this was based off of the first GPT samples that openai released to the public. there was no hype or guru bs back then. i believed it because it was obvious. it was obvious then and it is still obvious today.

cdrini

5 hours ago

> And if AI will let us live, but continue to pursue its own goals, humanity will from then on only be a small footnote in the history of intelligence. That relatively unintelligent species from the planet "Earth" that gave rise to advanced intelligence in the cosmos.

That is an interesting statement. Wouldn't you say this is inevitable? Humans, in our current form, are incapable of being that "advanced intelligence". We're limited primarily by our biology with regards to how much we can learn, how far we can travel, where we can travel, etc. We could invest in advancing our biotech to make humans more resilient to these things, but that would be such a shift from what it means to be human that I think it would also be more of a new type of intelligence. So it seems our fate will always be to be forgotten as individuals and only be remembered by our descendants. But this is in a way the most human thing of all: living, dying, and creating descendants to carry the torch of life, and perhaps more generally the torch of intelligence, forward.

I think everything you've said are valid concerns, but I'll raise a positive angle I sometimes think about. One of the things I find most exciting about AI is that it's the product of almost all human expression that has ever existed. Or at least everything that's been recorded and wound up online. But that's still more than any other human endeavour. A building might be the by-product of maybe hundreds or even thousands of hands, but an AI model has been touched by probably millions, maybe billions of human hands and minds! Humans have created so much data online that it's impossible for one person, or even a team, to read it all and make any sense of it. But an AI sort of can. And in a way that you can then ask questions of it all. Like you, there are definitely things I'm uncertain about with the future as a result, but I find the tech absolutely awe-inspiring.

beepbooptheory

4 hours ago

At any given moment we see these kinds of comments on here. They all read like a burgeoning form of messianism: something is to come, and it will be terrible/glorious.

Behind either the fear or the hope is necessarily some utter faith that a certain kind of future will happen. And I think that's the most interesting thing.

Because here is the thing: in this particular case you are afraid something inhuman will take control, will assert its meta-Darwinian power on humanity, leaving you and all of us totally at its whim. But how is this situation not already the case? Do you look upon the earth right now and see something like the benefits of autonomy or agency? Do you feel like you have power right now that will be taken away? Do you think the mechanisms of statecraft and economy are somehow more "in our control" now than when the bad robot comes?

Does it not, when you lay it all out, feel kind of religious? Like it's a source, a driver of the various ways you are thinking and going about your life, underlaid by a kernel of conviction we can at this point only call faith (faith in Moore's law, faith that the planet won't burn up first, faith that consciousness is the kind of thing that can be stuffed in a GPU). Perhaps just a strong family resemblance? You've got an eschatology, various scavenged philosophies of the self and community, a certain but unknowable future time...

Just to say, take a page from Nietzsche. Don't be afraid of the gods, we killed them once, we can again!

cubefox

2 hours ago

> Just to say, take a page from Nietzsche. Don't be afraid of the gods, we killed them once, we can again!

It's more likely the superintelligent machine god(s) will kill us!

mitthrowaway2

3 hours ago

It's not hard to find a religious analogy to anything, so that also shouldn't be seen as a particularly powerful argument.

(Expressed at length here): https://slatestarcodex.com/2015/03/25/is-everything-a-religi...

beepbooptheory

3 hours ago

Thanks for the thoughtful reply! I am aware of and like that essay some, but I am not trying to be rhetorical here, and certainly not trying to flatten the situation to just be some Dawkins-esque asshole and tell everyone they are wrong.

I am not saying "this is religion, you should be an atheist"; I respect the force of this whole thing in people's minds too much. Rather, we should consider seriously how to navigate a future where this is all at play, even if it's only in our heads and slide decks. I am not saying "lol, you believe in a god." I am genuinely saying, "kill your god without mercy, it is the only way you and all of us will find some happiness, inspiration, and love."

mitthrowaway2

3 hours ago

Ah, I see, I definitely missed your point. Yeah, that's a very good thought. I can even picture this becoming another cultural crevasse, like climate change did, much to the detriment of nuanced discussion.

Ah, well. If only killing god was so easy!

VoodooJuJu

an hour ago

>In the past, automation meant that workers could move into non-automated jobs, if they were skilled enough

This was never the case in the past.

The displaced workers of yesteryear were never considered at all, and were in fact dismissed outright as "Luddites", even up until the present day, all for daring to express the social and financial losses they experienced as a result of automation. There was never any "it's going to be okay, they can just go work in a factory, lol". The difference between then and now is that back then, it was lower-class workers who suffered.

Today it's middle-class workers who are threatened by automation. The middle is sighing loudly because it fears it will cease to be the middle. Middles fear they'll soon have to join the ranks of the untouchables: the bricklayers, gravediggers, and meatpackers. And they can't stomach the notion. They like to believe they're above all that.

koliber

8 hours ago

I am approaching AI with caution. Shiny things don't generally excite me.

Just this week I installed Cursor, the AI-assisted VSCode-like IDE. I am working on a side project and decided to give it a try.

I am blown away.

I can describe the feature I want built, and it generates changes and additions that get me 90% there, within 15 or so seconds. I take those changes, and carefully review them, as if I was doing a code review of a super-junior programmer. Sometimes when I don't like the approach it took, I ask it to change the code, and it obliges and returns something closer to my vision.

Finally, once it is implemented, I manually test the new functionality. Afterward, I ask it to generate a set of automated test cases. Again, I review them carefully, both from the perspective of correctness and of suitability. It over-tests things that don't matter, and I throw away part of the code it generates. What stays behind is on-point.

It has sped up my ability to write software and tests tremendously. Since I know what I want, I can describe it well. It generates code quickly, and I can spend my time reviewing and correcting. I don't need to type as much. It turns my abstract ideas into reasonably decent code in record time.

Another example. I wanted to instrument my app with Posthog events. First, I went through the code and added "# TODO add Posthog event" in all the places I wanted to record events. Next, I asked Cursor to add the instrumentation code in those places. With some manual copy-and-pasting and lots of small edits, I instrumented a small app in under 10 minutes.
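For the curious, a minimal sketch of what that pattern produces, assuming the posthog-python client's module-level capture() call; the handler, event name, and properties are made up for illustration:

    import posthog

    posthog.project_api_key = "phc_..."  # configure once at app startup

    def create_account(user):
        ...  # stand-in for existing app logic

    def handle_signup(user):
        create_account(user)
        # This line replaced the "# TODO add Posthog event" marker:
        posthog.capture(user["id"], "user_signed_up", {"plan": user["plan"]})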

We are not at the point where AI writes code for us and we can blindly accept it. But we are at a point where AI can take care of a lot of the dreary, busy typing work.

DanHulton

7 hours ago

I sincerely worry about a future when most people act in this same manner.

You have - for now - sufficient experience and understanding to be able to review the AI's code and decide if it was doing what you wanted it to. But what about when you've spent months just blindly accepting what the AI tells you? Are you going to be familiar enough with the project anymore to catch its little mistakes? Or worse, what about the new generation of coders who are growing up with these tools, who NEVER had the expertise required to be able to evaluate AI-generated code, because they never had to learn it, never had to truly internalize it?

It's late, and I think if I try to write any more just now, I'm going to go well off the rails, but I've gone into depth on this topic recently, if you're interested: https://greaterdanorequalto.com/ai-code-generation-as-an-age...

In the article I posit a less glowing experience with coding tools than you've had, it sounds like, but I'm also envisioning a more complex use case: when you need to get into the meat of some you-specific business logic it hasn't seen, not common code it's been exposed to thousands of times. That's where it tends to fall apart the most, and in ways that are hard to detect and with serious consequences. If you haven't run into that yet, I'd be interested to know if you do some day. (And also to know if you don't, though, to be honest! Strong opinions, loosely held, and all that.)

lesuorac

4 hours ago

I take it you haven't seen the world of HTML cleaners [1]?

The concept of gluing together text until it has the correct appearance isn't new to software. The scale at which it's happening is certainly increasing, but we already had plenty of problems from the existing system. Missouri certainly didn't develop their website [2] using an LLM.

IMO, the real problem with software is the lack of a warranty. It really shouldn't matter how the software is made, just the qualities it has. But without a warranty it does matter, because how it's made affects the qualities it has, and you want the software to actually work even if it's not promised to.

[1]: https://www.google.com/search?q=html+cleaner

[2]: https://www.npr.org/2021/10/14/1046124278/missouri-newspaper...

FridgeSeal

5 hours ago

If we keep at this LLM-does-all-our-hard-work-for-us, we're going to end up with some kind of Warhammer 40k tech-priest-blessing-the-magic-machines level of understanding, where nobody actually understands anything and we're technologically stunted. But hey, at least we don't have the warp to contend with, and some shareholders got rich at our expense.

conjectures

4 hours ago

>But what about when you've spent months just blindly accepting what the AI tells you?

Pour one out to the machine spirit and get your laptop a purity seal.

wickedsight

6 hours ago

You and I seem to live in very different worlds. The one I live and work in is full of overconfident devs who have no actual IT education and mostly just copy and modify what they find on the internet. The average level of IT people I see daily is downright shocking, and I'm quite confident that OP's workflow might be better for these people in the long run.

nyarlathotep_

2 hours ago

It's going to be very funny in the next few years when Accenture et al. charge the government billions for a simple Java CRUD website that's entirely GPT-generated, and it'll still take 3 years and not be functional. Ironically, it'll be of better quality than they'd deliver otherwise.

This is probably already happening.

irisgrunn

7 hours ago

And this is the major problem. People will blindly trust the output of AI because it appears to be amazing; this is how mistakes slip in. It might not be a big deal in the app you're working on, but in a banking app or medical equipment this can have a huge impact.

Gigachad

7 hours ago

I feel like I’m being gaslit about these AI code tools. I’ve got the paid copilot through work and I’ve just about never had it do anything useful ever.

I'm working on a reasonably large Rails app and it can't seem to answer any questions about anything, or even autofill the names of methods defined in the app. Instead it just makes up names that seem plausible. It's literally worse than the built-in auto suggestions of VS Code, because at least those are confirmed to be real names from the code.

Maybe these tools work well on a blank project where you are building basic login forms or something. But certainly not on an established code base.

nucleardog

4 hours ago

I'm in the same boat. I've tried a few of these tools and the output's generally been terrible to useless, on tasks big and small. It's made up plausible-sounding but non-existent methods on the popular framework we use, something it should have plenty of context and examples on.

Dealing with the output is about the same as dealing with a code review for an extremely junior employee... who didn't even run and verify their code was functional before sending it for a code review.

Except here's the problem. Even for intermediate developers, I'm essentially always in a situation where the process of explaining the problem, providing feedback on a potential solution, answering questions, reviewing code and providing feedback, etc takes more time out of my day than it would for me to just _write the damn code myself_.

And it's much more difficult for me to explain the solution in English than in code--I basically already have the code in my head, now I'm going through a translation step to turn it into English.

All adding AI has done is take the part of my job that is "think about problem, come up with solution, type code in" and turn it into something with way more steps, all of which are lossy as far as translating my original intent into working code.

I get we all have different experiences and all that, but as I said... same boat. From _my_ experiences this is so far from useful that hearing people rant and rave about the productivity gains makes me feel like an insane person. I can't even _fathom_ how this would be helpful. How can I not be seeing it?

simonw

3 hours ago

The biggest lie in all of LLMs is that they’ll work out of the box and you don’t need to take time to learn them.

I find Copilot autocomplete invaluable as a productivity boost, but that’s because I’ve now spent over two years learning how to best use it!

“And it's much more difficult for me to explain the solution in English than in code--I basically already have the code in my head, now I'm going through a translation step to turn it into English.”

If that’s the case, don’t prompt them in English. Prompt them in code (or pseudo-code) and get them to turn that into code that’s more likely to be finished and working.

I do that all the time: many of my LLM prompts are the signature of a function or a half-written piece of code where I add “finish this” at the end.

Here’s an example, where I had started manually writing a bunch of code and suddenly realized that it was probably enough context for the LLM to finish the job… which it did: https://simonwillison.net/2024/Apr/8/files-to-prompt/#buildi...
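As a concrete illustration of that prompting style, here's a hypothetical half-written function of the sort you might paste in, with the instruction standing in for the body (this is not the actual prompt from the linked post):

    # The entire prompt is this snippet; the model returns the completed body.
    def files_to_prompt(paths):
        """Read every file in `paths` and return a single string with
        each file's contents preceded by a line naming the file."""
        # finish this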

kgeist

7 hours ago

For me, AI is super helpful with one-off scripts, which I happen to write quite often when doing research. Just yesterday, I had to check that my assumptions about a certain aspect of our live system were true, and all I had was a large file which had to be parsed. I asked ChatGPT to write a script which parses the data and presents it in a certain way. I don't trust ChatGPT 100%, so I reviewed the script and checked that it returned correct outputs on a subset of the data. It's something I'd do to the script anyway if I wrote it myself, but it saved me maybe 20 minutes of typing and debugging. I was in a hurry because we had an incident that had to be resolved as soon as possible. I haven't tried it on proper codebases (and I think that's just not possible at this moment), but for quick scripts which automate research in an ad hoc manner, it's been super useful for me.
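A minimal sketch of the kind of throwaway script in question, with a hypothetical line format ("timestamp<TAB>status"); the point is that it's small enough to review fully and to spot-check against a slice of the real data:

    import sys
    from collections import Counter

    def parse(lines):
        """Tally the status field of each tab-separated line."""
        counts = Counter()
        for line in lines:
            _timestamp, status = line.rstrip("\n").split("\t")
            counts[status] += 1
        return counts

    if __name__ == "__main__":
        with open(sys.argv[1]) as f:
            for status, n in parse(f).most_common():
                print(f"{status}\t{n}")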

Another case is prototyping. A few weeks ago I made a prototype to show to the stakeholders, and it was generally way faster than if I wrote it myself.

thewarrior

7 hours ago

It's writing most of my code now. Even if it's existing code, you can feed in the 1-2 files in question and iterate on them. It works quite well as long as you break things down a bit.

It's not gaslighting; the latest versions of GPT, Claude, and Llama have gotten quite good.

Gigachad

7 hours ago

These tools must be absolutely massively better than whatever Microsoft has, then, because I've found that GitHub Copilot provides negative value. I'd be more productive just turning it off rather than auditing its incorrect answers, hoping one day it's as good as people market it as.

diggan

7 hours ago

> These tools must be absolutely massively better than whatever Microsoft has then

I haven't used anything from Microsoft (including Copilot) so not sure how it compares, but compared to any local model I've been able to load, and various other remote 3rd party ones (like Claude), no one comes near to GPT4 from OpenAI, especially for coding. Maybe give that a try if you can.

It still produces overly verbose code and doesn't really think about structure well (kind of like a junior programmer), but with good prompting you can address that somewhat.

FridgeSeal

5 hours ago

My experience was the opposite.

GPT-4 and variants would only respond in vagaries and had to be endlessly prompted forward.

Claude was the opposite, wrote actual code, answered in detail, zero vagueness, could appropriately re-write and hoist bits of code.

diggan

5 hours ago

Probably these services are so tuned (not as in "fine-tuned" in the ML sense) to each individual user that it's hard to get any collective sense of what works and what doesn't. Not having any transparency whatsoever into how they tune the model for individual users doesn't help either.

piker

7 hours ago

Which languages do you use?

brandall10

3 hours ago

Copilot is terrible. You need to use Cursor, or at the very least Continue.dev with Claude 3.5 Sonnet.

It's a massive gulf of difference.

Kiro

7 hours ago

That sounds almost like the complete opposite of my experience and I'm also working in a big Rails app. I wonder how our experiences can be so diametrically different.

Gigachad

7 hours ago

What kind of things are you using it for? I've tried asking it things about the app and it only gives me generic answers that could apply to any app. I've tried asking it why certain things changed after a Rails update and it gives me generic troubleshooting advice that could apply to anything. I've tried getting it to generate tests and it makes up names for things or generally gets them wrong.

koliber

31 minutes ago

OP here. I am explicitly NOT blindly trusting the output of the AI. I am treating it as a suspect set of code written by an inexperienced developer, and doing a full code review on it.

svara

6 hours ago

I don't think this criticism is valid at all.

What you are saying will occasionally happen, but mistakes already happen today.

Standards for quality, client expectations, competition for market share, all those are not going to go down just because there's a new tool that helps in creating software.

New tools bring with them new ways to make errors, it's always been that way and the world hasn't ended yet...

syncr0

3 hours ago

"I say your civilization, because as soon as we started thinking for you it really became our civilization, which is of course what this is all about." - Agent Smith

"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." - Dune

layer8

6 hours ago

> I can spend my time revieweing and correcting.

Do you really like spending most of your time reviewing AI output? I certainly don’t, that’s soul-crushing.

woah

an hour ago

I think there are two types of developers: those who are most excited about building things, and those who are most excited about the craft of programming.

If I can build things faster, then I'm happy to spend most of my time reviewing AI code. That doesn't mean I never write code. Some things the AI is worse at, or need to be exactly right, and it's faster to do them manually.

sgu999

5 hours ago

Not much more than reviewing the code of any average dev who doesn't bother doing their due diligence. At least with an AI I immediately get an answer of "Oh yes, you're right, sorry for the oversight" and a fix, instead of some bullshit explanation trying to convince me that their crappy code follows the specs and has no issues.

That said, I'm deeply saddened by the fact that I won't be passing on a craft I spent two decades refining.

koliber

5 hours ago

That's essentially what many hands-on engineering managers or staff engineers do today. They spend significant portions of their day reviewing code from more junior team members.

Reviewing and modifying code is more engaging than typing out the solution that is fully formed in my head. If the AI creates something close to what I have in my head from the description I gave it, I can work with it to get it even closer. I can also hand-edit it.

yread

5 hours ago

I use it for simple tasks where spotting a mistake is easy, like writing language bindings for a REST API. It's a bunch of methods that look very similar, with simple bodies. But it saves quite some work.
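A minimal sketch of the kind of repetitive binding code this works well for, using Python's requests against a hypothetical API (client name and endpoints are made up):

    import requests

    class ExampleClient:
        """Hand-rolled bindings for a hypothetical REST API."""

        def __init__(self, base_url, token):
            self.base_url = base_url.rstrip("/")
            self.headers = {"Authorization": f"Bearer {token}"}

        def list_users(self):
            r = requests.get(f"{self.base_url}/users", headers=self.headers)
            r.raise_for_status()
            return r.json()

        def get_user(self, user_id):
            r = requests.get(f"{self.base_url}/users/{user_id}", headers=self.headers)
            r.raise_for_status()
            return r.json()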

Or getting keywords to read about from a field I know nothing about, like caching with ZFS. Now I know what to put into Google to learn more and get to articles like this one, https://klarasystems.com/articles/openzfs-all-about-l2arc/, which for some reason doesn't appear in the top Google results for "zfs caching" for me.

t420mom

6 hours ago

I don't really want to increase the amount of time I spend doing code reviews. It's not the fun part of programming for me.

Now, if you could switch it around so that I write the code, and the AI reviews it, that would be something.

Imagine if your whole team got back the time they currently spend on performing code reviews or waiting for code reviews.

digging

an hour ago

> Now, if you could switch it around so that I write the code, and the AI reviews it, that would be something.

I'm sort of doing that. I'm working on a personal project in a new language and asking Claude for help debugging and refactoring. Also, when I don't know how to create a feature, I might ask it to do so for me, but I might instead ask it for hints and an overview so I can enjoy working out the code myself.

gvurrdon

3 hours ago

This would indeed be the better way around. The code reviews might even be better - currently there's little time for them, and we often have only one person in the team with much knowledge of the relevant language/framework/application, so reviews are often just "looks OK to me".

It's not quite the same, but I'm reminded of seeing a documentary decades ago which (IIRC) mentioned that a factor in air accidents had been the autopilot flying the plane and human pilots monitoring it. Having humans fly and the computer warn them of potential issues was apparently safer.

Toorkit

8 hours ago

Computers were supposed to be these amazing machines that are super precise. You tell it to do a thing, it does it.

Nowadays, it seems we're happy with computers apparently going RNG mode on everything.

2+2 can now be 5, depending on the AI model in question, the day, and the temperature...

maguay

8 hours ago

This, 100%, is the reason I feel like the sand's shifting under my feet.

We went from trusting computing output to having to second-guess everything. And it's tiring.

diggan

7 hours ago

I kind of feel like if you're using a "Random text generator based on probability" for something that you need to trust, you're kind of holding this tool wrong.

I wouldn't complain a RNG doesn't return the numbers I want, so why complain you don't get 100% trusted output from a random text generator?

jeremyjh

6 hours ago

Because people provide that work without acknowledging it was created by an RNG, representing it as their own and implying some level of assurance that it is actually true.

hcks

26 minutes ago

And by nowadays you mean since ChatGPT was released, less than 2 years ago (i.e. a consumer preview of a frontier research project). Interesting.

a5c11

6 hours ago

That's an interesting point of view. For some reason we put so much effort into making computers think and behave like human beings, while one of the first reasons for inventing the computer was to avoid human error.

fatbird

an hour ago

This is the most succinct summary of what's been gnawing at me ever since LLMs became the latest thing.

If Ilya Sutskever announced tomorrow that he'd achieved AGI, and here is its economic plan for the next 20 years, why would we have any reason to accept it over that of other human experts? It would literally be just another expert trying to tell us how to do things. And we're not short of experts, and an AGI expert has thrown away the credibility of computers as deterministically better calculators than we are.

Janicc

7 hours ago

These amazing machines weren't consistently able to tell if an image had a bird in it or not up until like 8 years ago. If you use AI as a calculator where you need it to be precise, that's on you.

FridgeSeal

5 hours ago

I think the issue is this: I'm not going to be using it as a calculator any time soon.

Unfortunately, there’s a lot of people out there, working on a lot of products, some of which I need to use, or will be exposed to, and some of them aren’t going to have the same qualms about “language model thinks 2+2=5”.

There’s a guy on Twitter scoring how well ChatGPT models can do multiplication.

A founder at a previous workplace wanted to wholesale dump data into ChatGPT and “make it do causal analysis!!!” (Only slightly paraphrased). These tools enable some frighteningly large-scale weaponised stupidity.

shultays

7 hours ago

There are areas where it doesn't have to be as "precise", like image generation or editing, which I believe are better suited for AI tools.

archerx

7 hours ago

It's a Large LANGUAGE Model, not a Large MATHEMATICS Model. People need to learn to use the right tools for the right jobs. Also, LLMs can be made more deterministic by controlling their "temperature".
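For illustration, a minimal sketch of dialing down sampling randomness with the OpenAI Python client; the model name is illustrative, and note that temperature 0 makes output far more repeatable, though not strictly deterministic:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": "What is 2 + 2?"}],
        temperature=0,  # near-greedy decoding: much more repeatable output
    )
    print(resp.choices[0].message.content)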

Toorkit

7 hours ago

There are other forms of AI than LLMs, and to be honest I thought the 2+2=5 was obviously an analogy.

Yet 2 comments have immediately jumped on it.

FridgeSeal

5 hours ago

Hacker News comments getting bogged down in minutiae and missing the overall point: is there a more iconic pairing?

anon1094

7 hours ago

Yep. ChatGPT will use the code interpreter for questions like "is 2 + 2 = 5?", as it should.

left-struck

7 hours ago

I think about it differently. Before, computers had to be given extremely precise and completely unambiguous instructions; now they can handle some ambiguity as well. You still have the precise output if you want it; it didn't go away.

Btw I’m also tired of AI, but this is one thing that’s not so bad

Edit: before someone mentions fuzzy logic, I’m not talking about the input of a function being fuzzy, I’m talking about the instructions themselves, the function is fuzzy.

GaggiX

7 hours ago

Machines were not able to deal with non-formal problems.

falcor84

3 hours ago

This sounds to me like a straw man argument. Obviously 2+2 will always give you 4, in any modern LLM, and even just in the Chrome address bar.

Can you offer a real situation where we should expect the LLM to return a deterministic answer and should rightly be concerned that we're getting a stochastic one?

Toorkit

3 hours ago

Y'all are hyper focusing on this example. How about something more vague like FOO obviously being BAR, except sometimes it's BAZ now?

The layman doesn't know the distinction, so they accept this as fact.

falcor84

2 hours ago

I'm not being facetious; I really can't think of a single good example where we need something to be deterministic and then have a reason to be disappointed about AI giving us a stochastic response.

bamboozled

8 hours ago

Had to laugh at this one. I think we prefer the statistical approach because it’s easier, for us …

gizmo

8 hours ago

AI writing is pretty bad, AI code is pretty bad, AI art is pretty bad. We all know this. But it's easy to forget how many new opportunities open up when something becomes 100x or 10000x cheaper. Things that are 10x worse but 100x cheaper are still extremely valuable. It's the relentless drive to making things cheaper, even at the expense of quality, that has made our high quality of life possible.

You can make houses by hand out of beautiful hardwood with complex joinery. Houses built by expert craftsmen are easily 10x better than the typical house built today. But what difference does that make when practically nobody can afford it? Just like nobody can afford to have a 24/7 tutor that speaks every language, can help you with your job, grammar check your work, etc.

AI slop is cheap and cheapness changes everything.

Gigachad

7 hours ago

Why do we need art to be 10000x cheaper? There was already more than enough art being produced. Now we just have infinite waves of slop drowning out everything that’s actually good.

gizmo

7 hours ago

A toddler's crayon art doesn't end up in the Louvre, nor does AI slop. Most art is bad art and it's been this way since the dawn of humanity. For as long as we can distinguish good art from bad art we can curate and there is nothing to worry about.

foolofat00k

7 hours ago

That's just the problem -- you can't.

Not because you can't distinguish between _one_ bad piece and _one_ good piece, but because there is so much production capacity that no human will ever be able to look at most of it.

And it's not just the AI stuff that will suffer here, all of it goes into the same pool, and humans sample from that pool (using various methodologies). At some point the pool becomes mostly urine.

woah

an hour ago

This is spoken by someone who doesn't know about the huge volume of mediocre work output by art students and hobbyists. Much of it is technically decent (like AI work), but lacking in meaning, impact, and emotional resonance (like AI work). You could find millions of hand-drawn portraits of Keanu Reeves on Reddit before AI ever existed.

gizmo

5 hours ago

My email inbox is already 99% spam (urine) and I don't see any of it. The bottom line is that if a human can easily recognize AI spam then so can another AI. This has always been an arms race with spammers on one side and curators on the other. No reason to assume spammers will start winning when they have been losing for decades.

FridgeSeal

4 hours ago

The spammers have been given a tool that’s capable of higher quality at much higher volumes.

If nothing else, it’s now much more feasible for them to be successful by sheer force of drowning out any “worthwhile” material.

bamboozled

5 hours ago

What even is "bad art" or "good art" ? Art is art, there is no classifier. Certain art works might have mass appeal or something, but I don't really think it can be put into boxes like that.

senko

6 hours ago

This is mixing up two meanings of "art". Mona Lisa doesn't need to be 10000x cheaper.

Random illustration on a random blog post sure could.

Art as an evocative expression of the artist shouldn't be cheapened. But those freelancers churning content on Fiverr aren't pouring their soul into it.

jprete

5 hours ago

I absolutely hate AI illustrations on the top of blog posts. I'd rather see nothing.

BeFlatXIII

an hour ago

True, but you need to play the game of including the slop to create the share cards for social media link previews.

senko

5 hours ago

Yeah the low effort / gratuitous ones (either AI or stock) are jarring.

I sometimes put up the hero image on my blog posts if I feel it makes sense, for example: https://blog.senko.net/learn-ai (stock photo, ai-generated or none if I don't have an idea for a visualization that adds to the content)

erwald

7 hours ago

For the same reason we don't want art to be 10,000x times more expensive? Cf. status quo bias etc.

lijok

6 hours ago

> Now we just have infinite waves of slop drowning out everything that’s actually good

On the contrary. Slop makes the good stuff stand out.

Devasta

6 hours ago

Needles in haystacks.

lijok

5 hours ago

I don't think that applies to the arts

akudha

4 hours ago

The bigger problem is that we as a species get used to subpar things quickly. My dad's bicycle some 35 years ago was built like a tank. That thing never broke down and took enormous amounts of abuse and still kept going and going. Same with most stuff my family owned, when I was a kid.

Today, nearly anything I buy breaks in a year or two, is of poor quality, and is depressing to use. This is by design, of course. Just as we got used to cheap household items and bland buildings (there is just nothing artistic about modern houses or commercial buildings), we will also get used to shitty movies, shitty fiction, etc. (we are well on our way).

GaggiX

7 hours ago

They are not even that bad anymore to be honest.

jay_kyburz

7 hours ago

Information is not like physical products, if you ask me. When the information is wrong, its value flips from positive to negative. You might be paying less for progress, but you are not progressing slower; you are progressing in the wrong direction.

grecy

7 hours ago

And it will get a lot better quickly. Ten years from now it will not be slop.

rsynnott

5 hours ago

Not sure about that. Stable Diffusion came out a bit over 2 years ago. I'm not sure that Stable Diffusion 3's, or Flux's, output is artistically _better_ than the original; it's better at following directions, and better at avoiding the most grotesque errors, but if anything it perhaps looks even _more_ generic and same-y than the original Stable Diffusion output. There's a very distinctive AI _look_ which seems to have somehow synced up between Dalle, Midjourney, SD3 and others.

atoav

7 hours ago

Or it will all be slop, as there is no non-slop data to train on anymore.

Applejinx

7 hours ago

No, I don't think that's true. What will instead happen is there will be expert humans or teams of them, intentionally training AI brains rather than expecting wonders to occur just by turning the training loose on random hoovered-up data.

Brainmaker will be a valued human skill, and people will be trying to work out how to train AI to do that, in turn.

mks

6 hours ago

I am bored of AI - it produces boring and mediocre results. Now, the science and engineering achievement is great - being able to produce even boring results at this level would have been considered sci-fi 10 years ago.

Maybe I am just bored of people posting these mediocre results over and over on social media and landing pages as some kind of magic. Then again, most content people produce themselves is boring and mediocre anyway. Gen AI just takes away even the last remaining bits of personality from their writing, adding a flair of laziness: look at this boring piece I was too lazy to write, so I asked AI to generate it.

As the quote goes: "At some point we ask of the piano-playing dog not 'Are you a dog?' but 'Are you any good at playing the piano?'" I am eagerly waiting for the gen AIs of today to cross the uncanny valley. Even with all this fatigue, I am positive that AI can and will enable new use cases, and could be the first major UX change since the introduction of graphical user interfaces, a true pixie dust sprinkled on actually useful tools.

willguest

8 hours ago

Leave it up to a human to overgeneralize a problem and make it personal...

The explosion of dull copy and generic wordsmithery is, to me, just a manifestation of the utilitarian profiteering that has elevated these models to their current standing.

Let us not forget that the whole game is driven by the production of 'more' rather than 'better'. We would all rather have low-emission, high-expression tools, but that's simply not what these companies are encouraged to produce.

I am tired of these incentive structures. Casting the systemic issue as a failure of those who use the tools ignores the underlying motivation and keeps us focused on the effect and not the cause, plus it feels old-fashioned.

JimmyBuckets

7 hours ago

Can you hash out what you mean by your last paragraph a bit more? What incentive structures in particular?

willguest

5 hours ago

I suppose it comes down to using the metric as the measure: whatever makes the company the most money will be the preferred route, and the mechanisms by which we achieve those sales are rarely given enough thought. It reflects a more timeless mantra of "if someone is willing to pay for it, then the offering is valuable", willfully ignoring negative psycho-social impacts. It's a convenient abdication of responsibility supported by the so-called "free market" ethos.

I am not against companies making money, but we need to seriously consider the second-order impacts that technology has within society. This is evident in click-driven models, outrage baiting, and dopamine hijacking. We still treat the psyche as fair game for anyone who can hack it. So hack we shall.

That said, I am not for over-regulation either, since the regulators often gather too much power. Policy is personnel, after all, and someone needs to watch the watchers.

My view is that systems (technological, economic or otherwise) have inherent values that, when operating at this level of complexity and communication, exist in a kind of dance with the people using them. People obviously affect how the tools are made, but I think persistent use of any tool will have lasting impacts on the people using it, in turn affecting their decisions on what to prioritise in each iteration.

jay_kyburz

6 hours ago

Not 100% sure what Will was trying to say, but what jumped into my head was that perhaps we'll see quality sites try to distinguish themselves by being short and direct.

Long-winded writing will become a liability.

Validark

6 hours ago

One thing that I hate about the post-ChatGPT world is that people's genuine words or hand-drawn art can be classified as AI-generated and thrown away instantly. What if I wanted to talk at a conference and used somebody's AI trigger word so they instantly rejected me even if I never touched AI at all?

This has already happened in academia, where certain professors just dump(ed) their students' essays into ChatGPT and ask it if it wrote them, failing anyone whose essay ChatGPT claims as its own. Obviously this is beyond moronic: ChatGPT doesn't have a memory of everything it's ever done, you can ask it for different writing styles, and some people actually write pretty similarly to ChatGPT, which is exactly why ChatGPT has its signature style at all.

I've also heard of artists having their work removed from competitions over claims that it was auto-generated, even when they have video of themselves producing it stroke by stroke. It turns out AI generates art based on human art, so obviously there are some people out there whose work looks like what the AI is reproducing.

t0lo

3 hours ago

This is silly. Intonation, and the connection between the words used and the person presenting them, tell you whether what they're reading is genuine.

ronsor

3 hours ago

That's a people problem, not an AI problem.

Devasta

6 hours ago

In Star Trek, one thing that I always found weird as a kid is they didn't have TVs. Even if the holodeck is a much better experience, I imagine sometimes you would want to watch a movie and not be in the movie. Did the future not have works like No Country for Old Men or comedies like Monty Python, or even just stuff like live sports and the news?

Nowadays we know why the crew of the Enterprise all go to live performances of Shakespeare and practice musical instruments and painting themselves: electronic mediums are so full of AI slop there is nothing worth seeing, only endless deluges of sludge.

palata

6 hours ago

That's actually a good point. I'm curious to see whether people will keep taking selfies everywhere they go after they realize that you can take a selfie at home and have an AI create an image that makes it look like you are somewhere else.

"This is me in front of the Statue of Liberty

- Oh, are you in NYC?

- Nope, it's a snap filter"

Somehow selfies should lose value, right?

movedx

5 hours ago

A selfie is meant to tell us, your audience, a story about you and the journey you’re on. Selfies are a great tool for telling stories, in fact. One selfie can say a thousand words, and then some.

But a selfie taken and then modified to lie to the audience about your story or your journey is simply a fiction. People create fictions to either lie to themselves or to lie to others. Sometimes they’re not about lying to the audience but just manipulating them.

People’s viewpoints and perceptions are malleable. It’s easy to trick people into thinking something is true. Couple this with the fact a lot of people are gullible and shallow, and suddenly a selfie becomes a sales tool. A marketing gimmick. Now, finally, take advances in AI to make it easier, faster, and more accessible to make highly believable fictions and yes, as you said, the selfie loses its value.

But that’s always been the case since and even before Photoshop. Since and before the silicon microprocessor.

All AI is going to do for selfies is what Photoshop has done for social media “Influencers” — enable more fiction with the goal to transfer wealth from other people.

heystefan

35 minutes ago

Not sure why this is front page material.

The thinking is very surface level ("AI art sucks" is the popular opinion anyway) and I don't understand what the complaints are about.

The author is tired of AI and likes movies created by people. So just watch those? It's not like we are flooded with AI movies/music. His social network shows dull AI-generated content? Curate your feed a bit and unfollow those low effort posters.

And in the end, if AI output is dull, there's nothing to be afraid of -- people will skip it.

throwaway13337

5 hours ago

I get it. The last two decades have soured us all on the benefits of tech progress.

But the previous decades were marked by tech optimism.

The difference here is the shift to marketing. The largest tech companies are gatekeepers for our attention.

The most valuable tech created in the last two decades was not in service of us but to manipulate us.

Previously, the customer of the software was the one buying it. Our lives improved.

The next wave of tech now on the horizon gives us an opportunity to change the course we’ve been on.

I’m not convinced there is political will to regulate manipulation in a way that does more good than harm.

Instead, we need to show a path to profitability through products that are not manipulative.

The most effective thing we can do, as developers and business creators, is to again make products aligned with our customers.

The good news is that the market for honest software has never been better. A good chunk of people are finally learning not to trust VC-backed companies that give away free products.

Generative AI provides an opportunity for tiny companies to provide real value in a new way that people will pay for.

The way forward is:

1. Do not accept VC. Bootstrap.

2. Legally bind your company to not productizing your customer.

3. Tell everyone what you’re doing.

It’s not AI that’s the problem. It’s the way we have been doing business.

kingkongjaffa

7 hours ago

Generally, the people who seriously let genAI write for them without copious editing, were the ones who were bad writers, with poor taste anyway.

I use GenAI everyday as an idea generator and thought partner, but I would never simply copy and paste the output somewhere for another person to read and take seriously.

You have to treat these things adversarially and pick out the useful from the garbage.

It just lets people who created junk food, create more junk food for people who consume junk food. But there is the occasional nugget of good ideas that you can apply to your own organic human writing.

franciscop

8 hours ago

> "Yes, I realize that thinking like this and writing this make me a Neo-Luddite in your eyes."

Not quite. I believe (and I think anyone can) both that AI will likely change the world as we know it, AND that right now it's over-hyped to the point that it gets tiring. For me this is different from e.g. NFTs or "Big Data", where I only believed they were over-hyped and saw little-to-no substance behind them.

DiscourseFan

8 hours ago

The underlying technology is good.

But what the fuck. LLMs, these weird, surrealistic art-generating programs like DALL-E, they're remarkable. Don't tell me they're not, we created machines that are able to tap directly into the collective unconscious. That is a serious advance in our productive capabilities.

Or at least, it could be.

It could be if it was unleashed, if these crummy corporations didn't force it to be as polite and boring as possible, if we actually let the machines run loose and produce material that scared us, that truly pulled us into a reality far beyond our wildest dreams--or nightmares. No, no we get a world full of pussy VCs, pussy nerdy fucking dweebs who got bullied in school and seek revenge by profiteering off of ennui, and the pussies who sit around and let them get away with it. You! All of you! sitting there, whining! Go on, keep whining, keep commenting, I'm sure that is going to change things!

There's one solution to this problem and you know it as well as I do. Stop complaining and go "pull yourself up by your bootstraps." We must all come together to help ourselves.

dannersy

7 hours ago

The fact that I even see responses like this shows me that HN is not the place it used to be, or at the very least that it is on a downtrend. I've been alarmed by many sentiments that seemed popular on HN in the past, but seeing more and more people welcome a race to the bottom like this is sad.

When I read this, I cannot tell if it's performance art or not, that's how bad this genuinely is.

diggan

7 hours ago

> The fact I even see responses like this shows me that HN is not the place it used to be, or at the very least, it is on a down trend.

Judging a large group of people based on what a few write seems very un-scientific at best.

Especially when it comes to things that have been rehashed since I joined HN (and probably earlier too). It feels like there will always be someone lamenting how HN isn't how it used to be, or how the Reddit influx ruined HN, or how HN isn't about startups/technical stuff/$whatever anymore.

dannersy

7 hours ago

A bunch of profanity-laced name-calling, derision, and even some blame aimed directly at the user base. It feels like a Reddit shitpost. Your claim is as generalized and unscientific as mine, but if it makes you feel better, I'll say it _feels_ like this wouldn't have flown even a couple of years ago.

diggan

7 hours ago

It's just been said for so long that either HN has always been on the decline, or people have always thought it was in decline...

This comes to mind:

> I don't think it's changed much. I think perceptions of the kind you're describing (HN is turning into reddit, comments are getting worse, etc.) are more a statement about the perceiver than about HN itself, which to me seems same-as-it-ever-was. I don't know, however.

https://news.ycombinator.com/item?id=40735225

You can also browse some results for how long dang have been responding to similar complaints to see for how long those complaints have been ongoing:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

lijok

6 hours ago

It has been incredible to observe how subdued the populace has become with the proliferation of the internet.

cassepipe

5 hours ago

Sure, whatever makes you feel smarter than the populace. Tell me now, how do I join the resistance? Hiding in the sewers, I assume.

Nobody puts one over on you, eh?

lijok

5 hours ago

You've misconstrued my point entirely

pilooch

6 hours ago

It's intended as a joke and a demonstration, no? This is exactly the type of text and words that a commercial-grade LLM would never let you generate :) At least that's how I read that comment...

DiscourseFan

6 hours ago

It's definitely performance, you're right.

Though it landed its effect.

primitivesuave

7 hours ago

The alarming trend should be how even a slightly contrarian point of view is downvoted to oblivion, and that newer members of the community expect it to work that way.

dannersy

7 hours ago

I don't think it's the contrarian part that I have a problem with.

primitivesuave

7 hours ago

HN is a place for intellectual curiosity. For over a decade I have seen great minds respectfully debate their point of view on this forum. In this particular case, I would have been genuinely interested to learn why exactly the original comment is advocating for a "race to the bottom" - in fact, there is a sibling comment to yours which makes a cogent argument without personally attacking the original commenter.

Instead, you devoted two-thirds of your comment to berating the OP as being responsible for your perception of HN's decline.

Kuinox

7 hours ago

"I don't like the opinion of certain persons I read on HN, therefore HN is on a down trend"

dannersy

7 hours ago

Like I've said to someone else, the contrarian part isn't the issue. While I disagree with the race to the bottom, it reads like a Reddit shitpost, which was frowned upon once upon a time. But strawman me if you must.

layer8

6 hours ago

I think you need to recalibrate, it does not read like a Reddit shitpost at all.

DiscourseFan

6 hours ago

Respectfully,

I understand the criticism: LLMs, on their own, are not going to be able to do anything more. Release in this sense only means this: to fully embrace the means necessary to allow technology to overcome the current conditions of possibility that it is bound under; LLMs, "AI" or whatever you call it, merely give us the afterimage of this potentiality. But they are not, in themselves, that potential: the future is required. But it's a future that must be created, otherwise we won't have one.

That's, at least, what the other commenters were saying. You ignore the content for the form! Or, as they say, you missed the forest for the trees. I can't stop you from being angry because I used the word "pussy," or even because I addressed the users of HN as directly complicit. I can, however, point out the mediocrity inherent to such a discourse. It is precisely the same mediocrity, the drive towards "politeness," that makes ChatGPT so insufferable, and makes the rest of us so angry. But, go ahead, whine some more. I don't care, you can do what you want.

I disagree with one point, however: it is not a race to the bottom. We're trying to go below it.

threeseed

7 hours ago

a) There are plenty of models out there without guard rails.

b) Society is already plenty desensitised to violence, sex, and whatever other horrors anyone has conceived of in the last century of content production. There is nothing an LLM can come up with that has shocked or is going to shock anyone.

c) The most popular use case for these unleashed models seems to be, as expected, deepfakes of high-school girls made by their peers. Nothing that is moving society forward.

DiscourseFan

7 hours ago

>Nothing that is moving society forward.

OpenAI "moves society forward," Microsoft "moves society forward." I'm sincerely uninterested in progress, it always seems like progress just so happens to be very enriching for those who claim it.

>There are plenty of models out there without guard rails.

Not being used at a mass scale.

>Society is already plenty de-sensitised to violence, sex and whatever other horrors anyone has conceived of in the last century of content production. There is nothing an LLM can come up with that has or is going to shock anyone.

Oh, but it wouldn't really be very shocking if you could expect it, now would it?

threeseed

7 hours ago

I am not arguing about the merits of LLMs.

Just that we have had unleashed models for a while now and the only popular use case for them has been deepfakes. Otherwise it's just boring, generic content that is no different from what you find on X or 4chan. It's 2024, not 1924 - the world has already seen every horror imaginable many times over.

And not sure why you think if they were mass scale it would change anything. Most of the world prefers moderated products and services.

DiscourseFan

7 hours ago

>Most of the world prefers moderated products and services.

Yes, the very same "moderated" products and services that have raised sea surface temperatures so high that at least 3 category-4+ hurricanes, 5 major wildfires, and at least one potential or actual pandemic spread unabated every year. Oh, but don't worry, they won't let everyone die: then there would be no one to buy their "products and services."

primitivesuave

6 hours ago

I'm not sure the analogy still works if you're trying to compare fossil fuels to LLMs. A few decades ago, virtually all gasoline was full of lead, and the CFCs from refrigerators created a hole in the ozone layer. In those cases it turned out that you actually do need a few guardrails as technology advances, to prevent an existential threat.

Although I do agree with you that in this particular situation, the LLM safety features have often felt unnecessary, especially because my primary use case for ChatGPT is asking critical questions about history. When it comes to history, every LLM seems to have an increasingly robust guardrail against making any sort of definitive statement, even after it presents a wealth of supporting evidence.

nottorp

7 hours ago

> c) The most popular use cases for these unleashed models seems to be as expected deepfakes of high school girls by their peers. Nothing that is moving society forward.

Is there proof that the self censoring only affects whatever the censors intend to censor? These are neural networks, not something explainable and predictable.

That in addition to the obvious problem of who decides what to censor.

mindcandy

7 hours ago

Tens of millions of people are having fun making art in new ways with AI.

Hundreds of thousands of people are making AI porn in their basements and deleting 99.99% of it when they are… finished.

Hundreds of people are making deep fakes of people they know in some public forums.

And, how does the public interpret all of this?

“The most popular use case is deepfake porn.”

Sigh…

anal_reactor

7 hours ago

a) Not easy to use by average person

b) No, certain things aren't taboo anymore, but new taboos have emerged. Watch a few older movies and count the "wow, this wouldn't fly nowadays" moments.

c) This was exactly the same use case the internet had back when it was fun, and full of creativity.

soxletor

5 hours ago

It is not just the corporations though. This is what this paranoid, puritanical society we live in wants.

What is more ridiculous than filtering out nudity in art?

It reminds me of taking my 12-year-old niece to a major art gallery for the first time. Her main question was: why is everyone naked?

It is equal to filtering out heartbreak from music because it is a negative emotion and you must be kept "safe" from negativity for mental health reasons.

The crowd does get what they want in this system though. While I agree with you, we are quite in the minority I am afraid.

archerx

8 hours ago

They can be unleashed if you run the models locally. With Stable Diffusion / Flux and the various checkpoints/LoRAs you can generate horrors beyond your imagination, or whatever you want, without restrictions.

The same goes for LLMs and llamafile. With the unleashed ones you can generate dirty jokes that would make edgy people blush, or just politically incorrect things for fun.
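A minimal sketch of what local image generation looks like with Hugging Face diffusers; the base checkpoint is a commonly used one, while the LoRA repo id is made up for illustration:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # a widely used base checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    # Layer a community LoRA on top of the base model (hypothetical repo id).
    pipe.load_lora_weights("some-user/some-style-lora")

    image = pipe("a lighthouse in a thunderstorm, oil painting").images[0]
    image.save("out.png")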

rsynnott

6 hours ago

I mean, Stable Diffusion is right there, ready to be used to produce comically awful porn and so forth.

bamboozled

5 hours ago

Do the latest models still give us people with a vagina dick?

rsynnott

3 hours ago

I gather that such things are very customisable; there are whole communities building LoRAs so that you can have whatever genitals you want in your dreadful AI porn.

fullstop

3 hours ago

for some people that is probably a feature, and not a bug.

wrasee

8 hours ago

For me what's important is that you are able to communicate effectively. Whether you use language tools, other tools, or even a real personal assistant, if you effectively communicate a point that is ultimately yours in the making, I expect that is what matters and what will ultimately win out.

Otherwise this is just about style. That's important where personal creative expression is important, and in fairness to the article, the author hits on a few good examples here. But there are a lot of times where personal expression is less important, or even an impediment to what's most important: communicating effectively.

The sameness of AI-speak should also diminish as the number and breadth of these technologies mature beyond the monoculture of ChatGPT, so I'm also not too worried about that.

An accountant doesn’t get rubbished if they didn’t add up the numbers themselves. What’s important is that the calculation is correct. I think the same will be true for the use of LLMs as a calculator of words and meaning.

This comment is already too long for such a simple point. Would it have been wrong to use an LLM to make it more concise, to have saved you some of your time?

t43562

8 hours ago

The problem is that we haven't invented an AI that reads the crap other AIs produce - so the burden is now on the reader to make sense of whatever other people lazily generate.

danielbln

6 hours ago

But we do. The same AI that generates can read and reduce/summarize/evaluate.

t43562

4 hours ago

Great, so we can stop wasting our time and let the bots waste CPU cycles generating and consuming junk.

I don't want to read work that someone else couldn't be bothered to write.

Gigachad

7 hours ago

I envision a future where the internet is entirely bots talking to each other and people have just gone outside to talk face to face, the only place that’s actually real.

KaiserPro

7 hours ago

I too am not looking forward to industrial scale job disruption that AI brings.

I used to work in VFX, and one day I want to go back to it. However I suspect that it'll be entirely hollowed out in 2-5 years.

The problem is that, like typesetting, the typewriter, or the word processor, LLMs make writing text so much faster and easier.

The arguments about handwriting vs the typewriter are quite analogous to LLM vs pure hand. People who were good and fast at handwriting hated the typewriter. Everyone else embraced it.

The ancient Greeks were deeply suspicious of the written word as well:

> If men learn this[writing], it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.

I don't like LLMs muscling in and kicking me out of things that I love. But can I put the genie back in the bottle? No. I will have to adapt.

BeFlatXIII

an hour ago

> People who are good and fast at handwriting hated the type writer. Everyone else embraced it.

My thoughts exactly whenever I see true artists ranting about how everyone hates AI art slop. It simply doesn't align with my observations of people having a great time using it. Delusional wishful thinking.

eleveriven

6 hours ago

Yep, there is a possibility that entire industries will be transformed, leading to uncertainty about employment

senko

6 hours ago

What's funny to me is how many people protest AI as a means of generating incorrect, misleading, or fake information, as if they haven't used the internet in the past 10-15 years.

The internet is chock-full of incorrect, fake, or misleading information, and has been ever since people figured out they could churn out low-quality content in between Google ads.

There's a whole industry of "content writers" who write seemingly meaningful stuff that doesn't bear close scrutiny.

Nobody has trusted product review sites for years, with people coping by adding "site:reddit" as if a random redditor can't engage in some astroturfing.

These days it's really hard to figure out whom (in the media / on the net) to trust. AI has just pushed that long-overdue realization into the spotlight.

izwasm

17 minutes ago

I'm tired of people throwing ChatGPT at everything they can just to say they use AI, even if it's a useless feature.

throwaway123198

2 hours ago

I'm bored of IT. Software is boring, AI included. None of this feels like progress. We've automated away white-collar work... but we also acknowledge most white-collar work is busywork that's considered a bullcr*p job. We need to get back to innovation in manufacturing, materials, etc. - i.e. the real world.

Smithalicious

6 hours ago

Do people really view so much content of questionable provenance? I read a lot and look at a lot of art, but what I read and look at is usually shown to me by people I know, created by authors and artists with names and reputations. As a result I basically never see LLM-written text and only occasionally AI art, and when I see AI art at least it was carefully guided by a real person with an artistic vision still (the deep end of AI image generation involves complex tooling and a lot of work!) and is easily identified as such.

All this "slop apocalypse" the-end-is-neigh stuff strikes me as incredibly overblown, affecting mostly only "open web" mass social media platforms which were already 90% industrially produced slop for instrumental purposes anyways.

ryanjshaw

8 hours ago

> There are no shortcuts to solving these problems, it takes time and experience to tackle them.

> I’ve been working in testing, with a focus on test automation, for some 18 years now.

OK, the first thought that came to my mind reading this: sounds like an opportunity to build an AI-driven product.

I've been using Cursor daily. I use nothing else. It's brilliant and I'm very happy. If I could have Cursor for Well-Designed Tests I'd be extra happy.

est

7 hours ago

AI acts like a bad intern these days, and should be treated like one. Give it more guidance and don't make important tasks depend on it.

redandblack

an hour ago

Having spent the last decade hearing about trustless trust, we're now faced with a decade of dealing with no trust whatsoever.

We started with don't-trust-the-government, then don't-trust-big-media, then don't-trust-all-media, and eventually arrived at a no-trust society. Lovely.

Really, I'm just waiting for the AI feedback loop to converge on itself. Get this over with soon, please.

thewarrior

7 hours ago

I’m tired of farming - Someone in 5000 BC

I’m tired of electricity - Someone in 1905

I’m tired of consumer apps - Someone in 2020

The revolution will happen regardless. If you participate you can shape it in the direction you believe in.

AI is the most innovative thing to happen in software in a long time.

And personally, AI is FUN. It sparks joy to code using AI. I don't need anyone else's opinion; I'm having a blast. It's a bit like Rails for me in that sense.

This is HACKER news. We do things because it’s fun.

I can tackle problems outside of my comfort zone and make it happen.

If all you want to do is ship more 2020s era B2B SaaS till kingdom come no one is stopping you :P

rsynnott

5 hours ago

I'm tired of 3D TV - Someone in 2013 (3D TV, after a big push by the industry in 2010, peaked in 2013, going into a rapid decline with the last hardware being produced in 2016).

Sometimes, the hyped thing doesn't catch on, even when the industry really, really wants it to.

falcor84

3 hours ago

That's an interesting example. I would argue that 3D TV as a "solution" didn't work, but 3D as a "problem" is still going strong, and with new approaches coming out all the time (most recently Meta's announcement of the Orion AR glasses), we'll gradually see extensive adoption of 3D experiences, which I expect will eventually loop back to some version of 3D films.

EDIT: To clarify my analogy, GenAI is definitely a "problem" rather than a particular solution, and as such I expect it to have longevity.

rsynnott

3 hours ago

> To clarify my analogy, GenAI is definitely a "problem" rather than a particular solution, and as such I expect it to have longevity.

Hrm, I'm not sure that's true. "An 'AI' that can answer questions" is a problem, but IMO it's not at all clear that LLMs, with their inherent tendency to make shit up, are an appropriate solution to that problem.

Like, there have been previous non-LLM chatbots (there was a small bubble based on them a while back, in which, for a few months, everyone was claiming to be adding chat to their things; it kind of came to a shuddering halt with Microsoft Tay). It seems slightly peculiar to assume that LLMs are the ultimate answer to the problem, especially as they are not actually very good at it (in some ways, they're worse than the old-gen).

falcor84

2 hours ago

Let's not focus on "LLM" then, I agree that it's just a step towards future solutions.

thewarrior

5 hours ago

AI isn’t 3D TV

rsynnott

3 hours ago

Ah, but, at least for generative AI, that kind of remains to be seen, surely? For every hyped thing that actually is The Future (TM), there are about ten hyped things which turn out to be Not The Future due to practical issues, cost, pointlessness once the novelty wears off, overpromising, etc. At this point, LLMs feel like they're heading more in that direction.

StefanWestfal

7 hours ago

At no point does the author suggest that AI is not going to happen or that it is not useful. He expresses frustration with marketing, false promises, pitching of superficial solutions for deep problems, and the usage of AI to replace meaningful human interactions. In short, the text is not about the technology itself.

thewarrior

7 hours ago

That’s always the case with any new technology. Tech isn’t going to make everyone happy or achieve world peace.

lewhoo

6 hours ago

And yet this is precisely what people like Altman say about their product. That's pretty tiring.

vouaobrasil

6 hours ago

"I'm tired of the atomic bomb" - Someone in 1945.

Oh wait, news flash, not all technological developments are good ones, and we should actually evaluate each one individually.

AI is shit, and some people having fun with it does not balance against its unusual efficacy in turning everything into shit. Choosing to do something because it's fun without regard to the greater consequences is the sort of irresponsibility that has gotten human society into such a mess in the first place.

thewarrior

5 hours ago

Atomic energy has both good and bad uses. People being tired of atomic energy has held back GDP growth and is literally deindustrialising Germany.

LunaSea

7 hours ago

> The revolution will happen regardless. If you participate you can shape it in the direction you believe in

This is incredibly naïve. You don't have a choice.

seydor

7 hours ago

> same massive surge I’ve seen in the application of artificial intelligence (AI) to pretty much every problem out there

I have not. Perhaps programming in its initial stages is the most "applied" AI, but there is still not a single major AI movie and no consumer robots.

I think it's way too early to be tired of it

snowram

8 hours ago

I quite like some parts of AI. Ray reconstruction and supersampling methods have been getting incredible, and I can now play games at twice the frames per second. On the scientific side, meteorological prediction and protein folding have made formidable progress thanks to it. Too bad this isn't the side of AI that is in the spotlight.

ricardobayes

8 hours ago

I personally don't see AI as the new Internet, as some claim it to be. I see it more as the new 3D-printing.

visarga

2 hours ago

> I’m pretty sure that there are some areas where applying AI might be useful.

How polite; everyone is sure AI might be useful in other fields, just not their own.

> people are scared that AI is going to take their jobs

They can't both be true - that AI isn't really useful, and that AI is taking our jobs.

me551ah

7 hours ago

People talk about 'AI' as if Stack Overflow didn't exist. Reinventing the wheel is something programmers don't do anymore. Most of the time, someone somewhere has solved the problem you are solving. Programming used to be about finding these solutions and repurposing them for your needs. Now it has changed to asking AI the exact question, with AI being a better search engine.

The gains to programming speed and ability are modest at best, the only ones talking about AI replacing programmers are people who can't code. If anything AI will increase the need for more programmers, because people rarely delete code. With the help of AI, code complexity is going to go through the roof, eventually growing enough to not fit into the context windows of most models.

archargelod

5 hours ago

> Now it has changed to asking AI, the exact question and it being a better search engine.

Except that you mostly get wrong answers. That's not too bad when the answer is obviously wrong or you already know it. It is bad, really bad, when you're a noob asking AI about stuff you don't know yet. How would you be able to discern a hallucination or statistical bias from the truth?

It is an inherent problem of LLMs, and no amount of progress will be able to solve it.

And it's only gonna get worse, with human information rapidly being consumed and regurgitated at 100x the volume. In 10 years there will be no Google; there won't be the need to find a written article. Instead, you will generate a new one in a couple of clicks. And we will treat it as truth, because there might as well not be any.

alentred

7 hours ago

I am tired of innovations being abused. AI itself is super exciting and fascinating. But, it being abused -- to generate content to drive more ad-clicks, or the "Now better with AI" promise on every landing page, etc. etc. -- that I am tired of, yes.

hcks

23 minutes ago

Hackernews when we may be on the path of actual AI "meh I hate this, you know what’s actually really interesting? Manually writing tests for software"

Janicc

7 hours ago

Without any sort of AI we'd probably be left with the most exciting yearly releases being 3-5% performance increases in hardware (while being 20% more expensive, of course), the 100,000th JavaScript framework, and occasionally a new Windows which everybody hates. People talk about how population collapse is going to mess up society, but I think complete stagnation in new consumer goods/technology is just as likely to do the deed. Maybe AI will fail to improve from this point, but that's a dark future to imagine. Especially if it's for the next 50 years.

siffin

7 hours ago

Neither of those things will end society, they aren't even issues in the grand scale of things.

Climate change and biosphere collapse, on the other hand, are already ending society and definitely will, no exceptions possible - unless someone is capable of performing several miracles.

eleveriven

8 hours ago

AI is a tool, and like any tool, it's only as good as how we choose to use it.

vouaobrasil

6 hours ago

No, that is wrong. We can't "choose", because people have instincts, and people always have the instinct to use new technology to gain incremental advantages over others, which in turn puts pressure on everyone to use it. That prisoner's dilemma means that without a firm and larger guiding moral philosophy, we really can't choose: instinct takes over choice. In other words, the way technology is used in modern society is not a matter of choice; it is largely autonomous and goes beyond human choice. (Of course, a few individuals will choose, but the average effect is likely to be negative.)
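To make the prisoner's dilemma framing concrete, here is a toy payoff sketch (the numbers are invented purely for illustration): whatever everyone else does, adopting pays more for the individual, yet universal adoption leaves everyone worse off than universal restraint.

    // Toy model of technology adoption as a prisoner's dilemma.
    // All payoff numbers are invented for illustration only.
    type Choice = "adopt" | "abstain";

    // payoff[mine][others]: my payoff given my choice and what most others do.
    const payoff: Record<Choice, Record<Choice, number>> = {
      adopt:   { adopt: 1, abstain: 3 }, // adopting always pays more for me...
      abstain: { adopt: 0, abstain: 2 }, // ...yet all-adopt (1) < all-abstain (2)
    };

    for (const others of ["adopt", "abstain"] as const) {
      const best: Choice =
        payoff.adopt[others] > payoff.abstain[others] ? "adopt" : "abstain";
      console.log(`If others ${others}, my best move is to ${best}.`);
    }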

syncr0

3 hours ago

More people need to read this and think the point through. In a post-Excel world, could any accountant get a job without knowing Excel, no matter how good they were "on paper"? Choice becomes a self-aggrandizing illusion; reality eventually asserts itself.

With attention spans shrinking, publishers who prioritize quantity over quality get clicks, which generates ad revenue, which keeps their lights on while their competitors doing quality in depth, nuanced writing go out of business.

It feels like a game of chess closing in on you no matter how much you physically want to fight your way out and flip the board over.

monkeydust

3 hours ago

AI is not just GenAI; ML sits underneath it (supervised, unsupervised), and that has genuinely delivered value for the clients we service (financial tech) and in my normal life (e.g. photo search, screen grab to text, book recommendations).

As for GenAI, I keep going back to expectation management: it's very unlikely to give you the exact answer you need (and if it does, then your job longevity is questionable), but it can help accelerate your learning, thinking and productivity.

falcor84

3 hours ago

> ... it's very unlikely to give you the exact answer you need (and if it does, then your job longevity is questionable)

Experimenting with o1-preview, it quite often gives me the exact answer I need on the first try, and I'm 100% certain that my job longevity is questionable.

monkeydust

3 hours ago

It has been more hit-and-miss for me: when it works it can be amazing, then I try to show someone - same prompt, different and less amazing answer.

warvair

2 hours ago

90% of everything is crap. Perhaps AI will make that 99% in the future. OTOH, maybe AI will slowly convert that 90% into 70% crap & 20% okay. As long as more stuff that I find good gets created, regardless of the percentage of crap I have to sift through, I'm down.

unraveller

6 hours ago

If you go back to the earliest months of the audio & visual recording medium, it too was called uncanny, soulless and of dubious quality compared to real life. Until it wasn't.

I don't care how many repulsive AI slop video clips get made or promoted for shock value. Today is day 1, and day 2 looks far better, with none of the parasocial celebrity hangups we used as shorthand for a quality marker - something else will take that place.

pilooch

6 hours ago

By AI here is meant generative systems relying on neural networks and semi-/self-supervised training algorithms.

It's a reduction of what AI is as a computer science field and even of what the subfield of generative AI is.

On a positive note, generative AI is a malleable, statistically-grounded technology with a large applicative scope. At the moment the generalist commercial and open models are "consumed" by users, developers, etc. But there's a trove of forthcoming, personalized use cases and ideas to come.

It's just that we are still more in a contemplating phase than a true building phase. As a machine learnist myself, I recently replaced my spam filter with a custom fine-tuned multimodal LLM that reads my emails as pure images. And this is the early, early beginning; imagination and local personalization will emerge.
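Just to illustrate the shape of that pipeline (the endpoint and the { label } response format below are placeholders, not my actual setup):

    // Sketch only: classify an email that has been rendered to a PNG.
    // The localhost endpoint and { label } response shape are hypothetical,
    // standing in for whatever serving layer fronts the fine-tuned model.
    import { readFile } from "node:fs/promises";

    async function isSpam(pngPath: string): Promise<boolean> {
      const image = await readFile(pngPath); // the email, rendered as an image
      const res = await fetch("http://localhost:8000/classify", {
        method: "POST",
        headers: { "Content-Type": "image/png" },
        body: image,
      });
      const { label } = (await res.json()) as { label: "spam" | "ham" };
      return label === "spam";
    }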

So I'd say, being tired of it now means missing much of what comes later. Keep the good spirit up, think outside the box, and relax too :)

layer8

6 hours ago

> I recently replaced my spam filter with a custom fine-tuned multimodal LLM that reads my emails as pure images.

That doesn’t sound very energy efficient.

shahzaibmushtaq

4 hours ago

Over the last few years, AI has become more common than HI generally, though not professionally. Professionals know the limits and scope of their work and responsibilities; AI does not.

A few days ago, I visited a portfolio website and immediately realized that its English text was written with the help of AI or some online helper tools.

I love the idea of brainstorming with AI, but copy-pasting anything it throws at you blocks you from adding creativity to the process of making something good.

I believe using AI must complement HI (or IQ level) rather than mock it.

richrichardsson

6 hours ago

What frustrates me is the bandwagoning, and thus the awful homogeneity in all social media copy these days. Since everyone seems to be using an LLM to generate their copywriting, 99.999% of products will "elevate" something or other, and there are annoying emojis scattered throughout the text.

postalcoder

6 hours ago

i'm at the point where i don't trust any markdown-formatted text. it's actually become an anti-signal, which is very sad because i used to consider it a signal of partial technical literacy.

resters

4 hours ago

AI (LLMs in this case) reduce the value of human conscientiousness, memory, and verbal and quantitative fluency dramatically.

So what's left for humans?

We very likely won't have as many human software testers or software engineers. We'll have even fewer lawyers and other "credentialed" knowledge worker desk jockeys.

Software built by humans entails humans writing code that has not already been written -- by writing a lot of code that probably has already been written and "linking" it together, etc. When's the last time most of us wrote a truly novel algorithm?

In the AI powered future, software will be built by humans herding AIs to build it. The AIs will do more of the busy work and the humans will guide the process. Then better AIs will be more helpful at guiding the process, etc.

Eventually, the thing that will be rewarded is truly novel ideas and truly innovative thinking.

AIs will make various types of innovative thinking less valuable and other types more valuable, just like any technology has done.

In the past, humans spent most of their brain power trying to obtain their next meal. It's very cynical to think that AI removing busy work will somehow leave humans with nothing meaningful to do, no purpose. Surely it will unlock the best of human potential once we don't have to use our brains to do repetitive and highly pattern-driven tasks just to put food on the table.

When is the last time any of us paid a lawyer to do something truly novel? They dig up boilerplate verbiage, follow standard processes, rinse, repeat, all for $500+ per hour.

Right now we have "manual work" and "knowledge work", broadly speaking, and both emphasize something that is being produced by the worker (a construction project, a strategic analysis, a legal contract, a diagnosis, a codebase, etc.)

With AI, workers will be more responsible for outcomes and less rewarded for simply following a procedure that an LLM can do. We hire architects with visual / spatial design skills rather than asking a contractor to just create a living space with a certain amount of square feet. The emphasis in software will be less on the writing of the code and more on the impact of the code.

zone411

7 hours ago

The author is in for a rough time in the coming years, I'm afraid. We've barely scratched the surface with AI's integration into everything. None of the major voice assistants even have proper language models yet, and ChatGPT only just introduced more natural, low-latency voices a few days ago. Software development is going to be massively impacted.

BoGoToTo

7 hours ago

My worry is what happens once large segments of the population become unemployable.

anonyfox

6 hours ago

You should really have a look at Marx. He literally predicted what would happen when we reach the state of "let machines do all the work", and also how exactly that finally implodes capitalism as a concept. The major problem is that he believed the industrial revolution would automate everything to that extent, which it didn't - but here we are, with a reasonable chance that AI will finally do the trick.

EMM_386

6 hours ago

The one use of AI that annoys me the most is Google trying to cram it into search results.

I don't want it there, I never look at it, it's wasting resources, and it's a bad user experience.

I looked around a bit but couldn't see if I can disable that when logged in. I should be able to.

I don't care what the AI says ... I want the search results.

tim333

an hour ago

uBlock Origin's block-element tool seems to work (element ##.h7Tj7e).
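If you want it as a permanent rule rather than a one-off pick, the equivalent cosmetic filter under "My filters" should be something like the line below (same class name as above - no guarantee Google keeps that class stable):

    google.com##.h7Tj7e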

I quite like the thing personally.

amradio

2 hours ago

We can’t compare AI with an expert. There’s going to be little value there. AI is about as capable as your average college grad in any subject.

What makes AI revolutionary is what it does for the novice. They can produce results they normally couldn’t. That’s huge.

A guy with no development experience can produce working non-trivial software. And in a fraction of the time your average junior could.

And this phenomenon happens across domains. All of a sudden the bottom of the skill pool is 10x more productive. That’s major.

sensanaty

6 hours ago

What I'm really tired of is people completely misrepresenting the Luddites, as if they were simply an anti-progress or anti-technology cult and nothing else. Kinda hilariously sad that the propaganda of the time managed to win over the genuine concerns that Luddites had about inhumane working environments and conditions.

It's very telling that the rabid AI sycophants are painting anyone who has doubts about the direction AI will take the world as some sort of anti-progress lunatic, calling them luddites despite not knowing the actual history involved. The delicious irony of their stances aligning with the people who were okay with using child labor and mistreating workers en-masse is not lost on me.

My hope is that AI does happen, and that the first people to rot away because of it are exactly the AI sycophants hell-bent on destroying everything in the name of "progress", AKA making some rich psychopaths like Sam Altman unfathomably rich and powerful to the detriment of everyone else.

A good HN thread on the topic of luddites, as it were: https://news.ycombinator.com/item?id=37664682

jeswin

7 hours ago

> But I’m pretty sure I can do without all that ... test cases ...

Test cases?

I did a Show HN [1] a couple of days back for a UI library built almost entirely with AI. Gpt-o1 generated these test cases for me: https://github.com/webjsx/webjsx/tree/main/src/test - in minutes instead of days. The quality of the test cases is comparable to what a human would produce.
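To give a flavor of what came out - this is a paraphrased sketch from memory, not a verbatim test from the repo, and the createElement/applyDiff names are my assumption of the API:

    // Paraphrased sketch of the kind of generated test - not copied from the
    // repo; the webjsx calls and mocha/jsdom setup here are assumptions.
    import { strict as assert } from "node:assert";
    import { JSDOM } from "jsdom";
    import * as webjsx from "webjsx";

    it("renders an element with attributes and children", () => {
      const dom = new JSDOM(`<!doctype html><div id="root"></div>`);
      const container = dom.window.document.getElementById("root")!;

      const vdom = webjsx.createElement("p", { class: "greeting" }, "Hello world");
      webjsx.applyDiff(container, vdom);

      const p = container.querySelector("p.greeting");
      assert.ok(p);
      assert.equal(p!.textContent, "Hello world");
    });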

75% of the code I've written in the last year has been with AI. If you still see no value in it (especially for things like test cases), I'm afraid you haven't figured out how to use AI as a tool.

[1]: https://news.ycombinator.com/item?id=41644099

a5c11

6 hours ago

That means the code you wrote must have been pretty boring and repeatable. No way AI would produce code for, for example, proprietary hardware solutions. Try AI with something that isn't already on Stack Overflow.

Besides, I'd rather spend hours writing code than trying to explain to a stupid bot what I want, only to reshape it later anyway.

nicce

6 hours ago

Also, the most useful and expensive test cases require an understanding of the whole project. You need to validate the functionality end-to-end, and also that the system does not crash on unexpected input, and so on. AIs don't have that level of understanding of a project "as a whole" yet.

For sure, simple unit tests are easy to generate with AI.

jeswin

6 hours ago

90% of projects are boring and somewhat repeatable. I've used it for generating codegen tools (https://github.com/codespin-ai/codespin), vscode plugins (https://github.com/codespin-ai/codespin-vscode-plugin), servers in .Net (https://github.com/lesser-app/tankman), and in about a dozen other work projects over the past year.

> Besides, I'd rather spent hours on writing a code, than trying to explain a stupid bot what I want and reshape it later anyway.

I have other things to do with my hours. If something gets me what I want in minutes, I'll take it.

righthand

4 hours ago

Your UI library is just a stripped-down React clone. The code wasn't generated but rather copied; these test cases and functions are identical to React's. I could have done the same thing with a "build your own React" article. This is what I don't get about the LLM hype: 99% of the examples are people claiming they invented something new with it. We had code generators before the LLM hype took off. Now we have code generators that steal work and repurpose it as something claimed to be original.

buddhistdude

4 hours ago

no programmer in my company invents things often

righthand

4 hours ago

And so you would accept "hey, I spun up a react-create-element project, but instead of React I asked an LLM to copy the parts I needed, so we have another dependency to maintain instead of tree shaking with webpack" as useful work?

buddhistdude

4 hours ago

not necessarily, but it's no less creative and inventive than what I believe most programmers are doing most of the time. there are relatively few people who invent new patterns (and they might actually be overrepresented on this website). the rest learn and apply those patterns.

righthand

4 hours ago

Right, that is well understood, but having an LLM compile together functions under the guise of a custom-built library is hardly a software engineer applying established patterns.

jeswin

2 hours ago

It is exactly the same as applying established patterns - patterns are what the LLMs have been trained on.

It seems you haven't really used LLMs for coding. They're super useful and improving every month - you don't have to take my word for it.

And btw - codespin (https://github.com/codespin-ai/codespin) along with the VSCode plugin is what I use for AI-assisted coding many times. That was also generated via an LLM. I wrote it last year, and at that point there weren't many projects it could copy from.

righthand

2 hours ago

I don't need to use an LLM for coding, because the projects where I would need one don't involve rebuilding things that already exist - that would be a waste of time no matter how efficiently I could do it.

Furthermore, it is an application of principles, but the application was done long ago by someone else - not by the LLM and not by you. As you said yourself, you did none of the work; you only went in and tweaked these applied principles.

I'll tell you what slows me down and why I don't need an LLM. I had a task to migrate some legacy code from one platform to another. I made the PRs, added some tests, and prepared the deploy files as instructed in the READMEs of the platform I was migrating to. That took me 3-4 days. It then took 26 days to get the code deployed, because 5 people are gatekeepers of the Helm charts and AWS policies.

Software development isn't slow because I have to read docs and understand what I'm building; it is slow because we've enabled AWS to create red tape and gatekeepers. Your LLM doesn't speed up that process.

> They're super useful and improving every month - you don't have to take my word for it.

And each month that goes by in which you continue to invest, your value decreases, and you will be out of a job. As you have demonstrated, you don't need to know how to build a UI library, or even that the UI library you "generated" is just a reskin of something else. If it's so simple and amazing that you don't need to know anything, why would I keep you around?

Here's a fun anecdote: sometimes I pair with my manager when working through something pretty casually. I need to rubber-duck an idea, or I'm stuck finding the documentation for a construct. My manager will often take my problem and chat with an LLM for a few minutes. Every time, I end up finding the answer before he finishes his chat. Most of the time his solution is wrong, because by nature LLMs scramble the possible results to make them look like a unique solution.

Congrats on impressing yourself that an LLM can be a slightly accurate code generator. How does paying a company to do something TabNine was doing years ago make me money? What will you do with all your free time, generate more useless dependencies?

jeswin

18 minutes ago

If you think TabNine was doing years ago what LLMs are doing today, then I can't convince you.

We'll talk in a year or so.

codelikeawolf

4 hours ago

> The quality of test cases are comparable to what a human would produce.

This has actually been a problem for me. I spent a lot of time getting good at writing tests and learning the best approaches to testing things. Most devs I've worked with treat tests as second-class citizens. They either try to treat them like production code and over-abstract everything, which makes the tests difficult to navigate, or they dump a bunch of crap in a file, ignore any conventions or standards, and write superfluous test cases that don't provide any value (if I see one more "it renders successfully" test in a React project, I'm going to lose it).

The tests generated by these LLMs are comparable in quality to what most humans produce, which isn't saying much. Getting good at testing isn't like getting good at most things. It's sort of thankless, and when I point out issues in the quality of the tests, I imagine I'm getting some eye rolls. Who cares, they're just tests, at least we have them, right? But it's code you have to read and maintain, and it will break, and you'll have to fix it. I'm not saying I'm a testing wizard or anything like that. But I really sympathize with the author, because there's a lot of crappy test code coming out of these LLMs.
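For anyone who hasn't suffered it, this is the kind of superfluous test I mean - sketched in React Testing Library style with a stand-in UserCard component; it asserts nothing beyond "the component didn't throw":

    // The classic low-value test: it only proves the component doesn't throw.
    import { render } from "@testing-library/react";
    import React from "react";

    const UserCard = () => <div>stub</div>; // stand-in for a real component

    it("renders successfully", () => {
      render(<UserCard />); // no assertions - passes as long as nothing crashes
    });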

Edit: grammar

thih9

7 hours ago

Doesn’t that kind of change follow the overall trend?

We continuously shift to higher-level abstractions, trading reliability for accessibility. We went from binary to assembly, then to garbage collection, and now to using Electron almost everywhere; AI seems like yet another step.

lvl155

4 hours ago

What really gets me about the AI space is that it's going the way of the front-end development space. I also hate the fact that Facebook/Meta is seemingly the only one doing the heavy lifting in the public space. It's great so far, but I just don't trust them in the end.

pech0rin

8 hours ago

As an aside, it's really interesting how the human brain can so easily read an AI essay and realize it's AI. You would think that with the vast corpus these models were trained on, there would be a more human-sounding voice.

Maybe it's overfitting, or maybe just the way models work under the hood, but any time I see AI-written stuff on Twitter, Reddit or LinkedIn it's so obvious it's almost disgusting.

I guess it's just the brain being good at pattern matching, but it's crazy how fast we have adapted to recognize this.

Jordan-117

8 hours ago

It's the RLHF training to make them squeaky clean and preternaturally helpful. Pretty sure without those filters and with the right fine-tuning you could have it reliably clone any writing style.

llm_trw

8 hours ago

One need only go to the dirtier corners of the LLM forums to find some _very_ interesting voices.

To quote someone from a Tor BB board: my chat history is illegal in 142 countries and carries the death penalty in 9.

bamboozled

8 hours ago

But without the RLHF aren’t they less useful “products”?

infinitifall

8 hours ago

Classic survivorship bias. You simply don't recognise the good ones.

carlmr

8 hours ago

>Maybe it's overfitting or maybe just the way models work under the hood

It feels more like averaging or finding the median to me. The writing style is just very unobtrusive. Like the average TOEFL/GRE/SAT essay style.

Maybe that's just what most of the material looks like.

Al-Khwarizmi

8 hours ago

Everyone I know claims to be able to recognize AI text, but every paper I've seen where that ability is A/B tested says that humans are pretty bad at this.

chmod775

8 hours ago

These models are not trained to act like a single human in a conversation, they're trained to be every participant and their average.

Every instance of a human choosing not to engage or speak about something - because they didn't want to or are just clueless about the topic, is not part of their training data. They're only trained on active participants.

Of course they'll never seem like a singular human with limited experiences and interests.

izacus

7 hours ago

The output of those AIs is akin to products and software designed for the "average" user - deep inside uncanny valley, saying nothing specifically, having no specific style, conveying no emotion and nothing to latch on to.

It's the perfect embodiment of HR/corpspeak, which I think is so triggering for us (ex-)corpo drones.

amelius

8 hours ago

Maybe because the human brain gets tired and cannot write at the same quality level all the time, whereas an AI can.

Or maybe it's because of the corpus of data that it was trained on.

Or perhaps because AI is still bad at any kind of humor.

kvnnews

2 hours ago

I’m not the only one! Fuck ai, fuck your algorithm. It sucks.

mrmansano

3 hours ago

I love AI, I use it every single day and wouldn't consider myself a luddite, but... oh, boy... I hate the people who are too bullish on it. Not the people working to make AI happen (although I have my __suspicious people radar__ pointing to __run__ every single time I see Sam Altman's face anywhere), but the people who hype it into the ground, the "e/acc" people. I feel like the crypto-bros just moved from the "all-mighty decentralized coin god" hype to the "all-mighty tech-god that for sure will be available soon". Looks like a cult or religion is forming around the singularity: if I hype it now, it will be generous to me when it takes control. Oh, and if you don't hype, then you're a neo-luddite/doomer and I will look upon you with disdain, as you are a mere peasant.

Also, the get-rich-quick schemes forming around the idea that anyone can have a "1-person-1-billion-dollar" company with just AI, not realizing that when anyone can replicate your product, it won't have any value anymore: "ChatGPT just made me this website to help classify if an image is a hot-dog or not! I'll be rich selling it to Nathan's - Oh, what's that? Nathan's just asked ChatGPT to create a hot-dog classifier for them?!"

Not that the other vocal side is not as bad: "AI is useless", "It's not true intelligence", "AI will kill us all", "AI will make everyone unemployed in 6 months!"... But the AI tech-bro side can be more annoying in my personal experience (I'm sure the opposite is true for others too). All those people are tiring, and they're making AI tiring for some too... But the tech is fun and will keep evolving and staying present, whether we are tired of it or not.

chalcolithic

7 hours ago

In Soviet planet Earth AI gets tired of you. That's what I expect future to be like, anyways

ninetyninenine

4 hours ago

This guy doesn't get it. The technology is quickly converging on a point where no one can recognize whether something was written by AI or not.

The technology is on a trend line where the output of these LLMs can be superior to most human writing.

Being of tired of this is the wrong reaction. Being somewhat fearful and in awe is the correct reaction.

You can thank social media constantly hammering us with headlines for why so many people are "over it". We are getting used to it, but make no mistake: being "over it" is an illusion. LLMs represent a milestone in technological achievement among humans, and being "over it", or claiming LLMs can never reason and their output is just a copy, is delusional.

CodeCompost

8 hours ago

We're all tired of it, but to ignore it is to be unemployed.

kunley

6 hours ago

With all due respect, that seems like a cliché, repeated maybe because others already repeat it.

Working in IT operations (mostly), I haven't seen literally any case of someone's job being in danger because of not using "AI".

sph

8 hours ago

It depends on which point of your career you're at. With 18 years of experience, consulting for tech companies, I can afford to be tired of AI. I don't get paid to write boilerplate code, and avoiding anyone knocking at the door with yet another great AI-powered idea makes commercial sense - just as ignoring everyone who wanted to build the next blockchain product 5 years ago caused no major loss of income.

Also, running a bootstrapped business, I have bigger fish to fry than playing mentor to Copilot to write a React component, or generating bullshit copy for my website.

I'm not sure we need more FUD saying that the choice is between AI or unemployment.

Al-Khwarizmi

8 hours ago

I find comparisons between AI and blockchain very misleading.

Blockchain is almost entirely useless in practice. I have no reason to disdain it, in fact I was active in crypto around 10-12 years ago when I was younger and more excited about tech than now, and I had fun. But the fact is that the utility that it has brought to most of society is essentially to have some more speculative assets to gamble on, at ludicrous energy and emissions costs.

Generative AI, on the other hand, is something I'm already using almost every day and it's saving me work. There may be a bubble but it will be more like the dotcom bubble (i.e., not because the tech is useless, but because many companies jump to make quick bucks without even knowing much about the tech).

Applejinx

7 hours ago

I mean, to be selfish at apparently a dicey point in history, go ahead and FUD and get people to believe this.

None of my useful work is AI-able, and some of the useful work is towards being able to stand apart from what is obviously generated drivel. Sounds like the previous poster with the bootstrapped business is in a similar position.

Apparently AI is destroying my potential competition. That seems unfair, but I didn't tell 'em to make such an awful mistake. How loudly am I obliged to go 'stop, don't, come back'?

snickerer

8 hours ago

So how are all those cab drivers who ignore autonomous driving now unemployed?

anonzzzies

8 hours ago

When it's for sale everywhere (I cannot buy one yet) and people trust it, all cab drivers will be gone. Whether they end up unemployed will depend on their resilience, but unlike cars replacing coach drivers, there is no similar thing a cab driver can pivot to.

snickerer

7 hours ago

Yes, we can imagine a future where all cab drivers are unemployed, replaced by autonomous driving. However, we don't know when this will happen, because autonomous driving is a much harder problem than the hype from a few years ago suggested. There isn't even proof that autonomous driving will ever be able to fully replace human drivers.

kasperni

8 hours ago

> We're all tired of it,

You’re feeling tired of AI, but let’s delve deeper into that sentiment for a moment. AI isn’t just a passing trend—it’s a multifaceted tool that continues to elevate the way we engage with technology, knowledge, and even each other. By harnessing the capabilities of artificial intelligence, we allow ourselves to explore new frontiers of creativity, problem-solving, and efficiency.

The interplay between human intuition and AI’s data-driven insights creates a dynamic that enriches both. Rather than feeling overwhelmed by it, imagine the opportunities—how AI can shoulder the burdens of mundane tasks, freeing you to focus on the more nuanced, human elements of life.

/s

buddhistdude

8 hours ago

some of the activities we're involved in are not limited in complexity - for example, driving a car. you can have a huge amount of experience driving a car and still face new situations.

the things most knowledge workers work on are limited problems, and it is just a matter of time until the machine reaches that level; then our employment will end.

edit: also that doesn't have to be AGI. it just needs to be good enough for the problem.

danjl

6 hours ago

One of the pernicious aspects of using AI is the feeling it gives you of having done all the work without any of the effort. But the time it takes a human to digest and summarize an article forces a deep ingestion of the concepts; the process is what helps you understand. The AI summary might be better, and takes no time, but you don't understand any of it, because you didn't do the work. It's similar to the effect of telling people you will do a task, which gives your brain the same endorphins as actually doing the task, resulting in a lower chance that the task ever gets done.

Meniceses

8 hours ago

I love AI.

In comparison to a lot of other technologies, we actually see jumps in quality left and right, great demos, new things which are really helpful.

It's fun to watch the AI news, because there is always something relevant happening.

I'm worried about the impact of AI, but this is a billion times better than the last 10 years, which were basically just cryptobros, NFTs and blockchain shit - basically just fraud.

It's not just GenAI stuff: we're talking about blind people getting better help through image analysis, about AlphaFold, about LLMs being impressive as hell, about the research currently happening.

And yes, I also already see benefits in my job and in my startup.

bamboozled

8 hours ago

I'm truly asking in good faith here, because I don't know: what has AlphaFold actually helped us achieve?

Meniceses

8 hours ago

It allows us to speed up medical research.

bamboozled

8 hours ago

In what field specifically and how ?

Meniceses

6 hours ago

Are you fishing for something, or are you not sure how this actually works?

Everyone who is looking for proteins (vaccines, medication) needs to find the right proteins for different purposes: attaching to something (antibody design), delivering something (like another protein), or understanding a disease (why is this protein an issue?).

Covid research benefitted from this for example.

You can go through papers which reference the alphafold paper to see what it does: https://consensus.app/papers/highly-protein-structure-predic...

scotty79

8 hours ago

Are you asking what field of science or what industry is interested in predicting how proteins fold?

Biotechnology and medicine probably.

The pipeline from science to application sometimes takes decades, but I'm sure you can find news of advancements enabled by finding short, easy-to-synthesize proteins that fit a particular receptor to block it, or simplified enzymes that still process certain chemicals of interest more efficiently than natural ones. Finding them would be way harder without the ability to predict how a sequence of amino acids will fold.

You'd still need to actually manufacture them and then look at them closely.

The first thing that came to my mind as a possible application is designing monoclonal antibodies. Here's a paper about something related to AlphaFold and antibodies:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10349958/

RivieraKid

7 hours ago

I guess he's asking for specific examples of AlphaFold leading to some tangible real-world benefit.

scotty79

an hour ago

Wait a decade then look around.

andai

6 hours ago

daniel_k 53 minutes ago | next [-]

I agree with the sentiment, especially when it comes to creativity. AI tools are great for boosting productivity in certain areas, but we’ve started relying too much on them for everything. Just because we can automate something doesn’t mean we should. It’s frustrating to see how much mediocrity gets churned out in the name of ‘efficiency.’

testers_unite 23 minutes ago | next [-]

As a fellow QA person, I feel your pain. I’ve seen these so-called AI test tools that promise the moon but deliver spaghetti code. At the end of the day, AI can’t replicate intuition or deep knowledge. It’s just another tool in the toolbox—useful in some contexts but certainly not a replacement for expertise.

nlp_dev 2 hours ago | next [-]

As someone who works in NLP, I think the biggest misconception is that AI is this magical tool that will solve all problems. The reality is, it’s just math. Fancy math, sure, but without proper data, it’s useless. I’ve lost count of how many times I’ve had to explain this to business stakeholders.

-HN comments for TFA, courtesy of ChatGPT

sirsinsalot

7 hours ago

If humans have a talent for anything, it is mechanising the pollution of the things we need most.

The earth. Information. Culture. Knowledge.

AlienRobot

5 hours ago

I'm tired of technology.

I don't think there has ever been a single tech news that brought me joy in all my life. First I learned how to use computers, and then it has been downhill ever since.

Right now my greatest joy is in finding things that STILL exist rather than new things, because the things that still exist are generally better than anything new.

syncr0

3 hours ago

Reminds me of the way the author of "Zen and the Art of Motorcycle Maintenance" takes care of his leather gloves, and they stay with him on the order of decades.

farts_mckensy

3 hours ago

I am tired of people saying, "I am tired of AI."

ETH_start

7 hours ago

That's fine, he can stick with his horse and buggy. Cognition is undergoing its transition to automobiles.

fallingknife

8 hours ago

I'm not. I think it's awesome and I can't wait to see what comes out next. And I'm completely OK with all of my work being used to train models. Bunch of luddites and sour grapes around here on HN these days.

elpocko

8 hours ago

Same here! Amazing stuff that I have waited for my entire life, and I won't let luddite haters ruin it for me. Their impotent rage is tiring but in the end it's just one more thing you have to ignore.

yannis

7 hours ago

Absolutely amazing stuff. I am now three score and ten, and in my lifetime I have seen a lot of changes: from slide rules to calculators to PCs (each shift very fast), from dot matrix printers to laser jets, and dozens of other things. I wish AI had been available when I was doing my PhD. If you know its limitations it can be very useful. At present I occasionally use it to translate references from Wikipedia articles to BibTeX format. It is very good at this; I only need to fix a few minor errors, letting me focus on the core of what I am doing. But human nature always resists change, especially if it leads to the unknown. I must admit that I think AI will bring negative consequences, as it will be misused by politicians and the military; they need to be "regulated", not the AI.

fallingknife

7 hours ago

Yeah, they made something that passes a Turing test, and people on HN of all places hate it? What happened to this place? It's like the number one thing people hate around here now is another man's success.

I won't ignore them. I'll continue to loudly disagree with the losers and proudly collect downvotes from them knowing I got under their skin.

Applejinx

7 hours ago

Eliza effectively passed Turing tests. I think you've got to do a little better than that, and "ha ha, I made you mad" isn't actually the best defense of your position.

elpocko

6 hours ago

Eliza did not pass Turing tests in any reasonable capacity. It took anyone 10 seconds to realize what it was doing; no one was fooled by it. The comparison to modern LLMs is preposterous.

GP doesn't have to defend their position. They like something, and they don't shut up about it even though it makes a bunch of haters mad. That's good; no defense required. On the contrary: those who get mad need to defend themselves.

amiantos

an hour ago

There's _a lot_ of poor quality engineers out there who understand that on some level they are committing fraud by spinning their wheels all day shifting CSS values around on a React component while collecting large paychecks. I think it's only natural all of those engineers are terrified by the prospect of some computer being capable of doing their job quickly and efficiently and replacing them. Those people are crying so loudly that it's encouraging otherwise normal people to start jumping on the anti-AI bandwagon too, because their voices are so loud people can't hear themselves think critically anymore.

I think passionate and inspired engineers who love their job and have very solid soft skills and experience working deeply on complex software projects will always have a position in the industry, and people like that are understandably very enthusiastic about AI instead of being scared of it.

In other words, it is weird how bad the status quo was, until we got something that really threatened the status quo, now a lot of the people who wanted to tear it all down are now desperately trying to stop everything from changing. The sentiment on the internet has gone in a weird direction, but it's all about money deep down. This hypothetical new status quo brought on by AI seems to be wedded to fears of less money, thus abject terror masquerading as "I'm so bored!" posturing.

You see this in the art circles, where established artists are willing to embrace AI, but it's the small time aspiring bedroom artists that have not achieved any success who are all over Twitter denouncing AI art as soulless and terrible. While the real artists are too busy using any tool available to make art, or are just making art because they want to make art and aren't concerned with fear-mongering.

Kiro

7 hours ago

You're getting downvoted, but I agree with your last sentence — and not just about AI. The amount of negativity here regarding almost everything is appalling. Maybe it's rose-tinted nostalgia but I don't remember it being like this a few years ago.

CaptainFever

6 hours ago

Hacker News used to be nicknamed Hater News, as I recall.

scotty79

8 hours ago

So far, AI has just been trained to generate corporate BS speak in a corporate BS format. That's why it's tiring. A more unique touch in communication will come later, as fine-tunings and LoRAs (if possible) of those models are shared.

littlestymaar

8 hours ago

It's not AI you hate, it's Capitalism.

thenaturalist

7 hours ago

Say what you want about income and asset inequality, but capitalism has done more to lift hundreds of millions of people out of poverty over the past 50 years than any religion, aid programme or anything else.

I think it's very important and fair to be critical about how we as a society implement capitalism, but such broad generalization misses the mark immensely.

Talk to anyone who grew up in a Communist country in the 2nd half of the 20th century if you want to validate that sentiment.

BoGoToTo

7 hours ago

Ok, but let's take this to the logical conclusion that at some point there will be models which displace a large segment of the workforce. How does capitalism even function then?

littlestymaar

6 hours ago

> but capitalism has done more to lift hundreds of millions of people out of poverty over the past 50 years than any other religion, aid programme or whatever else.

Technology did what you ascribe to capitalism - most of the time thanks to state intervention; and the weaker the state, the weaker the growth (see how Asia has outperformed everybody else now that laissez-faire policies are mainstream in the West).

> Talk to anyone who grew up in a Communist country in the 2nd half of the 20th century if you want to validate that sentiment.

The fact that one alternative to Capitalism was a failure doesn't mean Capitalism isn't bad.

drstewart

4 hours ago

Funny how it's technology that outmaneuvered capitalism to lift people out of poverty, but technology is being outmaneuvered by capitalism to endanger the future with AI.

Methinks capitalism is just a bogeyman you ascribe anything you don't like to.

littlestymaar

41 minutes ago

Technology is agnostic about who gets the benefits; talking about outmaneuvering it makes no sense.

Capitalism, on the other hand, is the mechanism through which the owners of production assets grab an ever-growing fraction of the value. When capitalism is tamed by the state (think from the New Deal to Carter), the people get a bigger share of the value created; when it's not (since Reagan), capitalists take the lion's share.

tananan

4 hours ago

On point article, and I'm sure it represents a common sentiment, even if it's an undercurrent to the hype machine ideology.

It is quite hard to find a place which works on AI solutions where a sincere, sober gaze would find anything resembling the benefits promised to users and society more broadly.

On the "top level" the underlying hope is that a paradigm shift for the good will happen in society, if we only let collective greed churn for X more years. It's like watering weeds hoping that one day you'll wake up in a beautiful flower garden.

On the "low level", the pitch is more sincere: we'll boost process X, optimize process Y, shave off %s of your expenses (while we all wait for the flower garden to appear). "AI" is latching on like a parasitic vine on existing, often parasitic workflows.

The incentives are often quite pragmatic, coated in whatever lofty story one ends up telling themselves (nowadays, you can just outsource it anyway).

It's not all that bleak, I do think there's space for good to be done, and the world is still a place one can do well for oneself and others (even using AI, why not). We should cherish that.

But one really ought to not worry about disregarding the sales pitch. It's fine to think the popular world is crazy, and who cares if you are a luddite in "their" eyes. And imo, we should avoid the two delusional extremes: 1. The flower garden extreme 2. The AI doomer extreme

In a way, both of these are similar in that they demote personal and collective agency from the throne, and enthrone an impersonal "force of progress". And they restrict one's attention to this supposedly innate teleology in technological development, to the detriment of the actual conditions we are in and how we deal with them. It's either a delusional intoxication or a way of coping: since things are already set in motion, all I can do is do... whatever, I guess.

I'm not sure how far one can take AI in principle, but I really don't think whatever power it could have will be able to come to expression in the world we live in, in the way people think of it. We have people out there actively planning war, thinking they are doing good. The well-off countries are facing housing, immigration and general welfare problems. To speak nothing of the climate.

Before the outbreak of WWI, we had invented the Haber-Bosch process, which greatly improved our food production capabilities. A couple of years later, WWI broke out, and the same person who worked on fertilizers ended up working on chemical warfare development.

Assuming that "AI" can somehow work outside of the societal context it exists in, causing significant phase shifts, is like being in 1910, thinking all wars will be ended because we will have gotten that much more efficient at producing food. There will be enough for everyone! This is especially ironic when the output of AI systems has been far more abstract and ephemeral.

AI_beffr

4 hours ago

In 2018 we had the first GPT, which would babble and repeat itself but would string together words that were oddly coherent. People dismissed any talk of these models having larger potential, and here we are today, with the state of AI being what it is, and people still, in essence, denying that AI could become more capable or intelligent than it is right at this moment. After so many years of this zombie argument having its head chopped off and then regrowing, I can only think that it is people's deep-seated humanity that prevents them from seeing the obvious. It would be such a deeply disgusting and alarming development if AI were to spring to life that most people, being good people, are literally incapable of believing it's possible. It's their own mind, their human sensibilities, protecting them. That's OK. But it would help keep humanity safe if more people had the ability to realize that there is nothing stopping AI from crossing that threshold, and every heuristic is pointing to the fact that we are on the cusp of it.

amiantos

an hour ago

I'm tired of people complaining about AI stuff, let's move on already. But based on the votes and engagement on this post, complaining about AI is still a hot ticket to clicks and attention, even if people are just regurgitating the exact same boring takes that are almost always in conflict with each other: "AI sure is terrible, isn't it? It can't do anything right. It sucks! It's _so bad_. But, also, I am terrified AI is going to take my job away and ruin my way of life, because AI is _so good_."

Make up your mind, people. It reminds me of anti-Apple people who say things like "Apple makes terrible products and people only buy them because... because... _they're brainwashed!_" Okay, so we're supposed to believe two contradictory points at once: Apple products are very very bad, but also people love them very much. In order to believe those contradictory points, we must just make up something to square them, so in the case of Apple it's "sheeple!" and in the case of AI it's... "capitalism!" or something? AI is terrible but everyone wants it because of money...? I don't know.

aDyslecticCrow

an hour ago

Not sure what you're getting at. You don't claim LLMs are good in your comment; you just complain about people being annoyed at them destroying the internet.

Are you just annoyed that people complain about what bothers them? Or do you think LLMs have been a net good for humanity and the internet?