I know you didn't write this

93 points, posted 5 hours ago
by cjlm

131 Comments

messe

5 hours ago

> Suspicions aroused, I clicked on the “Document History” button in the top right and saw a clean history of empty document – and then wham – fully-formed plan, as if it had just spilled out of someone’s brain, straight onto the screen, ready to share.

This isn't always a great indicator.

I can't stand Google Docs as an interface to write with, so I use Vim and then copy/paste the completed document into it.

el_benhameen

5 hours ago

Yep. I do this because I explicitly do not want a third party to see my thought process. If I wanted the reader to see my edits and second thoughts, I would have included them in the final document.

twothamendment

5 hours ago

Another copy/paste reason: I can't count the number of times I've written something up for work on my own Google account by mistake, then pasted it into a new doc on the work account so I could share it.

QuercusMax

5 hours ago

You really should use separate browser profiles...

yjftsjthsd-h

5 hours ago

Or separate machines. It's not impossible to maintain sufficient separation in software, but it's a lot easier to skip the whole mess.

GaryBluto

5 hours ago

It's bizarre to me that this possibility didn't even occur to the post author.

NitpickLawyer

5 hours ago

As with many other things (em dashes, emojis, bullet lists, it's-not-x-it's-y constructs, triple adjectives, etc) seeing any one of them isn't a tell. Seeing all of them, or many of them in a single piece of content, is probably the tell.

When you use these tools you get a knack for what they do in "vanilla" situations. If you're doing a quick prompt, no guidance, no context and no specifics, you'll get a type of answer that checks many of the "smells" above. Getting the same over and over again gets you to a point where you can "spot" this pretty effectively.

pessimizer

4 hours ago

The author did not do this. The author thought it was wonderful, read the entire thing, then on a lark (they "twigged" it) checked out the edit history. They took the lack of it as instant confirmation ("So it’s definitely AI.")

The rest of the blog is just random subjective morality wank with implications of larger implications, constructed by borrowing the central points of a series of popular articles in their entirety and adding recently popular clichés ("why should I bother reading it if you couldn't bother to write it?")

No other explanation is offered of why this was a bad document, or about this particular event at all, but lots of self-debate about how we should detect, deal with, and feel about bad documents. All documents written by LLMs are assumed to be bad, and no discussion is attempted about degrees of LLM assistance.

If I used AI to write some long detailed plan, I'd end up going back and forth with it and having it remove, rewrite, rethink, and refactor multiple times. It would have an edit history, because I'd have to hold on to old drafts in case my suggested improvements turned out not to be improvements.

The weirdest thing about the article is that it's about the burden of "verification," but it thinks that what people should be verifying is that LLMs had no part in what they've received. The discussion I've had about "verification" when it comes to LLMs is the verification that the content is not buggy garbage filled with inhuman mistakes. I don't care if it's LLM-created or assisted, other than a lot of people aren't reading and debugging their LLM code, and LLMs are dumb. I'm not hunting for em-dashes.

-----

edit: my 2¢: if you use LLMs to write something, you basically found it. If you send it to me, I want to read your review of it, i.e., where you think it might have problems and why you think it would help me. I also want to hear about your process for determining those things.

People are confusing problems with low-effort contributors with problems with LLMs. The problem with low-effort contributors is that what they did with the LLM was low-effort and isn't saving you any work. You can also spend 5 minutes with the LLM. If you get some good LLM output that you think is worth showing to me, and you think it would take significant effort for me to get it myself, give me the prompts. That's the work you did, and there's nothing wrong with being proud of it.

jandrese

5 hours ago

Or the tell that the guy who usually writes fairly succinctly suddenly dumps five thousand words with all of the details that most people wouldn't bother to write down.

It would be interesting to see the history where the whole document is dumped in the file at once, but then edits and corrections are applied top to bottom to that document. Using AI isn't so much the problem as trusting it blindly.

plorkyeran

4 hours ago

Dumping the entire file into Google Docs and then applying edits and corrections top to bottom is exactly my normal workflow. I do my writing in vim, paste it into Google Docs, and then do a final editing pass while fixing the formatting.

like_any_other

4 hours ago

> the whole document is dumped in the file at once, but then edits and corrections are applied top to bottom to that document

This also happens if one first writes in an editor without spellchecking, then pastes into the Google Doc (or HN text box) that does have spellchecking.

Lerc

4 hours ago

I have seen a number of write-ups where I think the only logical explanation is that they are not conveying what literally happened, but spinning a narrative to express their point.

There was an article the other day where the writer said something along the lines of it suddenly occurring to them that others might read content they had access to. They described themselves as a security researcher. I can't imagine that suddenly occurring to a security researcher; I would think it's a concept continually present in their understanding of what data is. I am not a security researcher, and it's certainly something I'm fairly constantly aware of.

Similarly I'm not convinced the "shouldn't this plan be better" question is in good faith either. Perhaps it just betrays a fundamental misunderstanding of the operation being performed by a model, but my intuition is that they never expected it to be very good and are feigning surprise that it is not.

SkyeCA

2 hours ago

A world full of AI-generated content and a world where we trust what we see seem to be mutually exclusive. I expect the default for a number of people going forward (myself included) will be extreme suspicion when presented with new images/videos/documents.

pgwhalen

4 hours ago

It probably did, but they didn't feel the need to fully explain why they were confident it was AI generated, since that's not the point of the article.

zephen

5 hours ago

> This isn't always a great indicator.

Right. Certainly not dispositive.

> use VIM and the copy/paste the completed document into it.

But he did mention tables. You'd think if they weren't just ASCII art, there'd be _some_ Google Docs history of fixing them up.

Izkata

5 hours ago

Different-sized headers too.

jadedtuna

5 hours ago

Fairly certain e.g. copy-pasting from Obsidian will copy over the tables as well.

fragmede

4 hours ago

every other time, paste with styling is the devil, but ever so occasionally, I do actually want that

clickety_clack

5 hours ago

I also interact with Google Docs as little as possible. I draft in Notes or Obsidian and copy the text in. I just hate the platform.

superultra

5 hours ago

I don't need everyone seeing the dirty laundry of my first draft and edits. I too work in a working doc and then, when it's completed, I drop it all at once into the final Google Doc.

jchw

5 hours ago

On another similar but different note, I don't think I've ever uploaded any code written by LLMs to GitHub, but I do sometimes upload fully complete projects under my "initial commit". Some people may legitimately just hide the edit history on purpose just because they don't want to "show their work". It's not really a particularly good habit, but I think a lot of us can relate.

LanceH

4 hours ago

A legit reason to hide your edit history is you might not remember what was in there. Say you have a moment of frustration and type out "this is an absolute garbage assignment by a braindead professor". Or you jot a quick note from the doctor because it happens to be open.

The simple fact is that the reader has no business reading the edit history, and the ability to make this happen should probably be far more prominent in document applications like Word or Google Docs.

user

3 hours ago

[deleted]

mystifyingpoi

4 hours ago

Same here. Confluence web editor has a thousand options but no option to comfortably edit text. I always write the entire document in Neovim and then format it later (or never, in case of yet another "please explain this thing only you know but we will ignore this page and call you anyway when it breaks").

el_benhameen

4 hours ago

Oh, another fun one: I once got an offer letter via Docs. The edit history included the original paste from another candidate’s offer letter, including their name and salary. Useful for benchmarking!

nereye

4 hours ago

Also, in some countries (e.g. Germany), applications explicitly do not track that information (such as how long a document was edited) for legal reasons related to privacy laws.

elgertam

5 hours ago

I do something similar. I write markdown, then render it and copy-paste that in.

necubi

5 hours ago

You can now paste markdown directly into Google docs (Edit -> Paste From Markdown)

(I have the same workflow, via Obsidian)

Veen

4 hours ago

Yes, I write everything in Obsidian and use "Paste from Markdown" in Google Docs. It's a habit I picked up years ago when Docs was much less reliable and lost work.

Plus, I want to deliver the completed document, not my edit history. Even on the occasions that I have written directly in Google Docs, I've copied the doc to obliterate the version history.

exe34

5 hours ago

yep, emacs, version control that doesn't suck, all my notes in one place. I'll copy and paste what I need to share into whatever hellscape you want to live in, but my copy will remain safe.

kianN

4 hours ago

I do the exact same thing, and this was my first thought. To be fair, I would probably not be able to format tables in a single copy/paste.

karaterobot

5 hours ago

I'm sometimes asked to produce meaningless 30-page documents that nobody ever reads. I mean literally nobody, since I can see the history of who has accessed it. Me and a proof-reader, and occasionally someone will open it up to check that it exists. But nobody reads them, let alone reads them closely. Not the distant funder who added it as a line-item requirement to their grant (their job is adding line items to grants, not reading documents), nor the actual people involved in the project, who don't have time to read a meaningless document, and don't need to. It's of use to no one, it's just something that must be done because we live in a stupid world.

I've started having AI write those documents. Each one used to take me a full week to produce, now it's maybe one day, including editing. I don't feel bad about it. I'm ecstatic about it, actually; this shouldn't be part of my job, so reducing its footprint in my life is a blessing. Someday, someone will realize that such documents do not need to exist in the first place, but that's not the world we live in right now, and I can't change it. I'm just glad AI exists for this kind of pointless yeoman's work.

zephen

4 hours ago

It's like burning fuel to till the soil so you can plant corn to make ethanol.

Almost an inverse Kafka universe; there are tools that can empower you to work the system in such a way that the effects of the externalities are very diffuse.

Still not good, but better than a typical Catch-22.

jiggawatts

2 hours ago

This is the same argument as “why is software X so bloated when nobody uses more than 10% of the features?”

Because everyone uses a different 10%.

I write these documents too and I’ve watched people “read” them. They all do the same thing: flip to the conclusions and then if there is a need they will skim the section that’s relevant to their role.

The project manager cares only about the risks, costs, and time estimates.

The architect just wants to see the diagram and maybe check that the naming conventions have been followed.

Sysops just wants to know what they’re on the hook for after go-live.

None of them read the whole document, but the whole document ends up being read.

PS: I’ve found I have to take care of distributing documents myself. All organisations big and small are shockingly bad at disseminating information. Help them!

isodev

4 hours ago

Once, I had a very frustrating Slack chat with a fellow developer. We were discussing edge cases for a new feature, and the experience from my perspective was that for each of my messages, I'd get an "in case of … how about …" style reply. The topic was focused on iOS vs. Android app lifecycle. Every now and then my colleague would suggest APIs or events that simply don't exist.

This was before vibe coding, around the days of GPT 3.5. At the time I just thought it was a challenging topic and my colleague was probably preoccupied with other things so we parked the talk.

A few weeks later, while exploring ways to use GPT for technical tasks, I suddenly remembered that Slack chat and realised the person had been copy-pasting my messages to GPT and back. I really felt bad at that moment, like… how can you do this to someone…? It's not bad that you try tools to find information or whatever, but not disclosing that you're effectively replacing your agency with that of a bot is just very suboptimal and probably disrespectful.

teaearlgraycold

4 hours ago

Anyone doing this should be fired. Both for the lack of trust they bring to the team but also because they’re just making themselves a middle man to an LLM. Why not cut out the middle man?

zephen

4 hours ago

People who make things don't make any money.

People who claim that they are disrupting with disintermediation, but actually simply replace the old intermediary with their own?

Those people get filthy rich.

People who _should_ be making things but are trying this intermediation technique themselves will most likely find that it's like other forms of lying. Go big or go home.

thatjoeoverthr

5 hours ago

“Any time saved by (their) AI prompting gets consumed by verification overhead, …”

This

When I receive a PR, of course it’s natural an AI is involved.

The mortal sin is the rubber stamp.

If they haven’t read their own PR, I only have so many warnings in me. And yes, it is highly visible.

a1j9o94

5 hours ago

I know I'm an outlier on HN, but I really don't care if AI was used to write something I'm reading. I just care whether or not the ideas are good and clear. And if we're talking about work output, 99% of what people were putting out before AI wasn't particularly good. And in my genuine experience, AI's output is better than things people I worked with would spend hours and days on.

I feel like more time is wasted trying to catch your coworkers using AI vs just engaging with the plan. If it's a bad plan, say that and make sure your coworker is held accountable for presenting a bad plan. But it shouldn't matter if he gave 5 bullets to ChatGPT and it expanded them into a full page with a detailed plan.

skwirl

4 hours ago

>But it shouldn't matter if he gave 5 bullets to ChatGPT and it expanded them into a full page with a detailed plan.

The coworker should just give me the five bullet points they put into ChatGPT. I can trivially dump it into ChatGPT or any other LLM myself to turn it into a "plan."

mrisoli

4 hours ago

I feel the same way. If all one is doing is feeding stuff into AI without doing any actual work themselves, just include the prompt and workflow for how you got the AI to spit this content out; it might be useful for others learning how to use these LLMs, and it shows your train of thought.

I had a coworker schedule a meeting to discuss the technical design of an upcoming feature. I didn't have much time, so I only checked the research doc moments before the meeting: it was 26 pages long with over 70 references, of which 30+ were Reddit links. This wasn't a huge architectural decision, so I was dumbfounded; it seemed he had barely edited the document to his own preferences. The actual meeting was maybe the most awkward meeting I've ever attended, as we were expected to weigh in on the options presented, but no one had opinions on the whole thing, not even the author. It was just too much of an AI document to even process.

dj_mc_merlin

4 hours ago

If ChatGPT can make a good plan for you from 5 bullet points, why was there a ticket for making a plan in the first place? If it makes a bad plan then the coworker submitted a bad plan and there's already avenues for when coworkers do bad work.

poemxo

4 hours ago

How do you know the coworker didn't bully the LLM for 20 minutes to get the desired output? It isn't often trivial to one-shot a task unless it's very basic and you don't care about details.

Asking for the prompt is also far more hostile than your coworker providing LLM-assisted word docs.

a1j9o94

4 hours ago

Honestly if you have a working relationship/communication norms where that's expected, I agree just send the 5 bullets.

In most of my work contexts, people want more formal documents with clean headings and titles, and detailed risks, even if they're the same risks we've put on every project.

amarant

4 hours ago

Agreed! I've reached the conclusion that a lot of people have completely misunderstood why we work.

It's all about the utility provided. That's the only thing that matters in the end.

Some people seem to think work is an exchange of suffering for money, and omg some colleagues are not suffering as much as they're supposed to!

The plan (or any other document) has to be judged on its own merits. Always. It doesn't matter how it was written. It really doesn't.

Does that mean AI usage can never be problematic? Of course not! If a colleague feeds their tasks to a LLM and never does anything to verify quality, and frequently submits poor quality documents for colleagues to verify and correct, that's obviously bad. But think about it: a colleague who submits poor quality work is problematic regardless of if they wrote it themselves or if they had an AI do it.

A good document is a good document. And a bad one is a bad one. Doesn't matter if it was written using vim, Emacs or Gemini 3

meowface

4 hours ago

Ever since some non-native-English-speaking people within my company started using LLMs, I've found it much easier to interact and communicate with them in Jira tickets. The LLM conveys what they intend to say more clearly and comprehensively. It's obviously an LLM that's writing but I'm overall more productive and satisfied by talking to the LLM.

If it's fiction writing or otherwise an attempt at somewhat artful prose, having an LLM write for you isn't cool (both due to stolen valor and the lame, trite style all current LLMs output), but for relatively low-stakes white collar job tasks I think it's often fine or even an upgrade. Definitely not always, and even when it's "fine" the slopstyle can be grating, but overall it's not that bad. As the LLMs get smarter it'll be less and less of an issue.

mystifyingpoi

4 hours ago

> I just care whether or not the ideas are good and clear

That's the thing. It actually really matters whether the ideas presented are coming from a coworker or from an LLM.

I've seen way too many scenarios where I'm asking a coworker whether we should do X or Y, and all I get is a useless wall of spewed text, with complete disregard for the project and circumstances at hand. I need YOUR input, from YOUR head, right now. If I could ask Copilot, I'd do that myself, thanks.

a1j9o94

4 hours ago

I would argue that's just your coworker giving you a bad answer. If you prompt a chatbot with the right business context, look at what it spits out, and layer in your judgement before you hit send, then it's fine if the AI typed it out.

If they answer your question with irrelevant context, then that's the problem, not that it was AI.

axus

5 hours ago

Why can't the plan be judged on its merits? Rigorous verification of the idea is a good thing that should happen anyways. The main potential problem I see is transmission of privileged information to a third party.

I assume they are working at a business to make money, not a school or a writing competition.

andy99

5 hours ago

Because AI can generate meritless works far faster than anyone can judge their merits. Asking someone to read your AI thing is basically asking someone to do the work for you. If you respect your colleagues' time, you should be sharing your best version of inputs, not raw material. Not only that, you should have thought about it and be able to defend it. If you throw some AI thing over the fence, you haven't thought about it either, so why would you expect your colleague to?

I’d add to that, long form AI output is really bad and basically unsuitable for anything.

Something like “I got GPT to make a few bullet points to structure the conversation” is probably acceptable in some cases if it’s short. The worst I can imagine is giving someone a “deep research” article to read as if that’s different from sending them to google.

tediousgraffit1

4 hours ago

This is a trust issue. If someone I trust hands me a big PR, I focus on the important details. If someone I don't trust hands me a big PR, I just reject it and ask them to break the problem down further. I don't waste my time on this kind of thing, regardless of whether it was hand-written or generated.

axus

4 hours ago

Yes, I made the assumption that the person who "put the plan together" did their own due diligence of reviewing it before emailing, but maybe that is too charitable for an "AI plagiarist".

If someone sends me incomplete work I will judge them for that, the history of the work relationship matters and I didn't see it in the blog post.

zephen

5 hours ago

The unstated elephant in the room is that you can't possibly know how much thought the originator has given to this.

You can't know if it has been reviewed and checked for minimal sanity, or just chucked over the fence.

So you have to fully vet it.

And, if you have to fully vet it, then what value has the originator added? Might as well eliminate their position.

dj_mc_merlin

5 hours ago

> The unstated elephant in the room is that you can't possibly know how much thought the originator has given to this.

You can just ask them if they reviewed it in detail.

xeckr

5 hours ago

>Might as well eliminate their position.

It's where we're headed.

ben_w

4 hours ago

> Why can't the plan be judged on its merits? Rigorous verification of the idea is a good thing that should happen anyways.

Situational.

I don't know this blogger or what the plan involved; but for the sake of argument, let's say it was a business plan, and let's say in isolation it's really good: 99.9% chance of success with 10x returns kind of good.

Everyone in whatever problem space this is probably just got the same quality of advice from their own LLM prompting. That 99.9% is no longer "in isolation"; it is a correlated failure, where everyone else doing the same thing as you makes it less viable.

That's a good reason not to use a public tool, even when the output is good.

Correlated risk disguised as uncorrelated risk was a big part of the global financial crisis in the late 00s.

recursive

5 hours ago

The problem comes from the asymmetry between the effort that goes into generating and the effort that goes into judging. You can have one person spinning out documents that can keep a whole team busy, dragging everyone down.

Along the same lines as "A lie travels around the globe while the truth is putting on its shoes."

dj_mc_merlin

5 hours ago

If the documents they're putting out are bad, then they're doing bad work, and that eventually comes with consequences from your coworkers and superiors. If they're doing good work, then great! Who cares if an LLM wrote most of it and they just edited it? That's not super different from the current relationship between senior and line workers.

recursive

3 hours ago

I guess I'm making some assumptions here. But I've been asked to review some documents before. Maybe I didn't notice the ones that were good. But my general assumption is that if someone gives me the output of an LLM to review, it's not going to be good work. In my experience it hasn't been good work generally.

unyttigfjelltol

4 hours ago

So many technologists offended at the use of technology. Next they’ll insist on pen-on-paper for truly authentic work product, and after that, 3 days’ wilderness meditation on it, to prove you really internalized it.

Look, it's now like email in 2004. You see spam; spam has found email. It doesn't mean you refuse to interact with anyone by email, or write Geocities posts mocking email users. You just acknowledge that the technology (email) can be used for efficiency and results, and that it can also be misused as a giant time-waster.

The author of the article here is basically saying "technology was used = work product is trash". The "spam" these folks are seeing must be horrible to evoke this kind of condemnatory response.

acedTrex

5 hours ago

Because judging something on its merit is intrinsically tied to judging the underlying amount of effort that was put into it.

bakugo

4 hours ago

> Why can't the plan be judged on its merits?

Because of the difference in effort involved in generating it vs effort required to judge it.

Why are you entitled to "your" work being judged on its merits by a real human, when the work itself was not created by you, or any human? If you couldn't be bothered to write it, why should someone else be bothered to read it?

uhhhd

an hour ago

This is petty and bad business. No serious entrepreneur or leader worth his salt cares about this.

user

4 hours ago

[deleted]

uhhhd

an hour ago

Nowhere in the article does he explain why the use of AI is inherently problematic or why it necessitates rewriting the project plan. Work product is either good or bad, and responsibility for its quality rests with the person delivering it. The tools used to produce the work are irrelevant. In fact, for those who prioritize execution speed, the use of labor-saving tools should be encouraged wherever feasible.

This comment was generated by chatgpt (inspired by me).

jvanderbot

5 hours ago

If I discover you fed me AI output, directly from AI, it really makes me wonder what you are doing here. What did you add to this equation when I could have done it myself?

At least a "Generated by AI, reviewed and edited by xyz" tag would be some indicator of effort and accountability.

It may not be wrong to use AI to generate things whole cloth, but it definitely sidesteps something important and calls into question the "prompter's" contributions to the whole thing.

throw310822

2 hours ago

Sorry, but if you don't trust your coworker to review and shape the AI's output, why would you trust them with actually writing the whole document?

And if you think that at this point you could have done it yourself, then why don't you? The only important thing is that the document is fine; if it takes you too much effort to verify it, then you need to trust your colleague. That was their job.

ArcHound

5 hours ago

> If you know in your heart of hearts that you didn’t put the work in, you’re undermining the social contract between you and your reader.

There's been a lot of social contract undermining lately. Does anyone know of something that can be done to try to revert back? A social contract of "F you, I got mine" isn't very appealing to me, but that seems to be the current approach.

jdashg

4 hours ago

We literally have to be willing to get taken advantage of sometimes, and we have to come down hard on the "don't hate the player, hate the game" f-you-got-mine assholes.

It is not weakness, but strength, to make yourself (reasonably!) vulnerable to being taken advantage of. It is not strength, but weakness, to let bad behavior happen around you. You don't have to do everything, but you have to do something, or nothing changes.

We gotta spend less time explaining away (and tacitly excusing) bad behavior as unfortunate game theory, and more time coming down hard on people who violate trust.

Ante trust gladly, but come down hard on defectors.

ArcHound

4 hours ago

Consider this situation: security review before a project go-live.

I have never seen this team before and I'll "never" see this team after the fact. They might be contracted externally, they might leave before the second review.

Let's say I can suss out people doing this. I don't have the option of giving them the benefit of the doubt, and they have the motivation to trick me.

I guess I've answered my own question a bit, such an environment isn't built to foster trust at all.

zephen

4 hours ago

Upvoted because this is true, but we need to establish coping mechanisms for this.

For example:

"Sorry, yes, I know the report is due tomorrow, but I don't have time to review it again because I wasted 2 hours on the first version."

or

"I found these three problems on the first page and stopped reading."

What else?

GMoromisato

4 hours ago

Before AI, if someone submitted a well-formatted, well-structured document, we could assume they spent a lot of time on it and probably got the substance right. It's like the document is a proof-of-work that means I can probably trust the results.

Maybe we need a different document structure--something that has verification/justification built in.

I'd like to see a conclusion up front ("We should invest $x billion on a new factory in Malaysia") followed by an interrogation dialogue with all the obvious questions answered: "Why Malaysia and not Indonesia?", "Why $x and not $y billion?", etc.

At that point, maybe I don't care if the whole thing was produced by AI. As long as I have the justification in front of me, I'm happy. And this format makes it easy to see what's missing. If there's a question I would have asked that's not in the document, then it's not ready.

Sharlin

5 hours ago

When it comes to LLMs, the only thing I hate more than the "I don't know, the AI wrote it" people is the "I wrote this" crowd. No you didn't, you asked someone else to write it. If you couldn't claim copyright for it in an IP court, you did not write it. Period.

zdragnar

5 hours ago

Has this actually been tried? Plenty of people have released AI generated (in part or nearly whole) media as their own, especially in music and fiction literature.

Personally, I'd love to see most of this stuff disappear from services that advertise it on par with human-generated media, like Spotify and Amazon (though I'll also admit to having a soft spot for the soul-style AI covers of 50 Cent and others).

zephen

4 hours ago

> Has this actually been tried?

Yes, Thaler v. Perlmutter.

I'm pretty sure, even though that's recent, that it fully comports with decades-old law on patents, as well.

I can't find an older case, but Thaler v. Vidal is a recent patent case.

zdragnar

2 hours ago

All that case settled is that AI cannot hold a copyright. Humans can still claim copyright if they put their name on a largely AI produced work.

Your original complaint was that humans were saying "I wrote this", and those people are definitely going to be claiming copyright for it in court at some point... In fact, Thaler v. Perlmutter only makes that more likely as AI programs definitely cannot claim copyright themselves.

Hence my confusion. In principle I definitely agree with your original point, though: people should produce content to express themselves, rather than becoming an expression of AI.

lbrito

5 hours ago

>Regardless of their intent I realised something subtle had happened. Any time saved by (their) AI prompting gets consumed by verification overhead, the work just gets passed along to someone else – in this case me.

This is _exactly_ how I feel. Any time saved by precooking a "plan" (typically half-baked ideas) with AI isn't really time saved; it is a transfer of work from the planner to whoever is going to implement the plan.

ecshafer

4 hours ago

Let's steelman this:

1. If the output is solid, does it matter?

2. The author could simply have done the research, created the plan, and then given an LLM the bullet-point list of research and told it to "make this into a presentable plan". The author does the heavy work and actually does the creative work, and outsources the manual formatting to the LLM. My wife speaks English as a second language; she much prefers telling an LLM what she is trying to say and having it generate a business-friendly email over writing it herself and letting in grammatical mistakes.

3. If I were to write a paper in my favorite text editor and then put it through pandoc to generate a Word doc, it would do the same thing.
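For what it's worth, that pandoc step is a one-liner. A minimal sketch, assuming a hypothetical Markdown source named paper.md:

    # Convert Markdown to a Word document; pandoc infers both formats
    # from the file extensions.
    pandoc paper.md -o paper.docx

Upload the result to Google Docs and the version history will show the whole document appearing at once, with no AI involved.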

phyzome

4 hours ago

How can you tell the output is solid?

The creation of a plan also implies that some work has gone into making sure it's a good one. That's one human (the author) asserting that it's solid. But now you're not even sure if that one vote exists.

user

an hour ago

[deleted]

eric-p7

5 hours ago

"Chat, expand these 3 points into 10 pages."

Later, at someone else's desk:

"Chat, summarize these 10 pages into 3 points."

andy99

5 hours ago

I don't really find it better when someone adds a disclaimer. What am I supposed to do then? There's still an expected default behavior of reading it, and if I don't, I need to confront them and say "I don't care what you got an LLM to say, why not give me your view". It's inappropriate under any circumstances imo.

embedding-shape

5 hours ago

When I receive something that either I suspect an LLM wrote 90% of, or the author is up front about it, I always ask "Did you check all of this yourself to verify before I pick it up?" Maybe half of the time I get a no, and then they return a day later with a new document and some fixes. The other half of the time people say yes, I start digging into it, start finding a bunch of weird stuff, and send it back to the person.

I don't really care if it's a person or the LLM getting it wrong; if you're sending me stuff that you checked or haven't checked but is wrong/ambiguous anyway, I'm sending it back to you to fix.

zephen

5 hours ago

> I don't really care if it's a person or the LLM getting it wrong,

You're nicer than some of us.

If it's an LLM getting it wrong, and it's not caught before it gets to you, then what value is the intermediary adding to the process?

embedding-shape

5 hours ago

I meant it in a way that I'll blame the person regardless of who actually wrote the text. The LLM messed up and you failed to notice? I blame you. You messed up and failed to notice? I still blame you.

zephen

4 hours ago

I get that.

But, as discussed in some other threads, the leverage provided by the LLM allows the miscreant to inundate you with slop by only pressing a few buttons.

And rejection is work. So they can produce more slop, requiring more rejections, faster than you can read the slop.

This is what's new. You reject it, they feed your rejection back into the LLM, and hand you something 5 minutes later with so many formatting changes that diff is unhelpful, and enough subtle substantive changes embedded in it that, if you don't read the entire thing, you might miss something important.

embedding-shape

43 minutes ago

Yeah, that's not how it works, in my experience. They hand me something, I notice it's likely written by an LLM, ask them. They say "Yeah, some quick things blah blah" and then I ask "Did you check all the details yourself?" and they say "Yeah, most of them" and then I answer "Ok, check all the details, then I'll go through it" and then later they come back and things look a lot better.

If you're ending up doing this back and forth with someone more than once, just outright refuse to work with someone so unprofessional that they don't even validate their own work. It wouldn't fly in most workplaces I've worked in.

tediousgraffit1

4 hours ago

Nowhere in here does it indicate that the generated plan was wrong or broken. I don't care if you use AI to write. I care if you write well. If the author trusted the other person, then it shouldn't matter. If the author didn't trust the other person, then they'd have to validate their output anyway. Granted, the tech allows people I don't trust to generate a lot more BS, a lot faster. But I just reject it and move on with my life in that case. I am no AI booster, but a lot of people are expressing distaste for tools when they should be expressing distaste for fools.

GMoromisato

4 hours ago

It's just that AI gives fools more power.

It used to be that a well-written document was a proof-of-work that the author thought things through (or at least spent some time thinking about it).

I'm all for AI; I use it all the time. But I think our current style of work needs to change to adapt to both the strengths and weaknesses of AI.

tediousgraffit1

2 hours ago

> It used to be that a well-written document was a proof-of-work that the author thought things through (or at least spent some time thinking about it).

I think you hit the nail on the head here. The problem isn't so much that people can do bad work faster than ever now; it's that we can no longer rely on the same heuristics for quickly assessing a given piece of work. I don't have a great answer. But I do still think it has something to do with trust and how we build relationships with each other.

zephen

4 hours ago

> Granted, the tech allows people I don't trust to generate a lot more BS, a lot faster. But I just reject it and move on with my life in that case.

But even a rejection is work. So if they're generating more bs faster, they are generating more work for you. And, in some organizations, they will receive rewards for occasionally pressing buttons and inundating you with crap.

> a lot of people are expressing distaste for tools when they should be expressing distaste for fools.

I'm pretty sure that the original article, and most of the derogatory comments here, are expressing distaste for fools rather than tools. Specifically, tool-using fools.

kazinator

4 hours ago

Plans usually start as short lists of ideas without a lot of detail. When people discuss and agree on things, choices get decided upon. The branches of the "search tree" which are not being taken are pruned away, and detail is added to the path taken.

If someone just generates an incredibly detailed plan in one go, that destroys the process. Others now are wasting time looking at details in something that may not even be a good idea if you step back.

The successive refinement flow doesn't preclude consideration of input from AI.

uragur27754

4 hours ago

I recently joined a project where the manager cheerfully told me that my new task was fully described and specified in a technical architecture doc. The manager left shortly after and considered my onboarding complete. It took the next couple of weeks to realize that the docs were AI-generated: surprisingly detailed and accurate. However, they were largely irrelevant to the actual problem the client had.

I was later asked why it was taking so long to complete the task when the document had a step-by-step recipe. I had to explain why the AI was solving the wrong problem in the wrong place. The PMs did not understand and scheduled more meetings to solve the problem. All they knew was that tickets were not moving on the board.

I suddenly realized that nobody had any idea what was going on at all on a technical level. Their contribution was to fret about target dates and executive reports. It's like a pyramid scheme of technical ignorance. The consequence is some ICs forced to do uncompensated overtime to actually make working software.

These are the unintended consequences of the AI hype that CEOs are evangelizing.

juujian

5 hours ago

> But if you ship it and people use it, you’ve created an implicit promise: that you can maintain, debug, and extend what you’ve built. If AI assembled it and you can’t answer basic questions about how it works, you’ve misled users about what they can depend on.

Agree with the premise but this part is off. When I find a project online, I assume it will be abandoned within a year unless I see evidence of a substantive team and/or prior long-term time investments.

ascendantlogic

5 hours ago

The content of the document matters too. I don't really care if someone was AI-assisted writing a project plan. As long as it's sane and clear I'm not gonna lose sleep over that. However for my performance review I definitely want my manager to put in the effort and actually tell me nuanced thoughts on my performance. I don't want AI output for that part.

SoftTalker

4 hours ago

Wait until you find out that most managers write feedback using copy/paste boilerplate with maybe a few tweaks to personalize it. And this was happening long before LLMs.

ascendantlogic

4 hours ago

Oh, I'm well aware. When I was an EM for a bit last year, a bunch of colleagues told me they used ChatGPT to write their reviews. It was gross, and I always hand-crafted small-batch artisanal reviews when I was in the manager's chair.

SoftTalker

5 hours ago

This will all devolve to people submitting LLM work to people who can't tell (or don't care) that it's LLM work.

macrael

4 hours ago

I think it quickly needs to become good manners to indicate when text was written by AI rather than a person. I read that text differently and I shouldn't have to spend my time guessing.

patrickmay

4 hours ago

A footnote with the prompt used would be even more polite. Then I can just read that and skip the generated text.

btilly

5 hours ago

There is knowing, and then there is knowing.

For example suppose that someone likes to work in Markdown using VSCode. To get the kind of Word document that everyone else expects, you just copy and paste into Word. AI isn't involved, but it will look exactly like AI to you.

And there are more complicated hybrids. For example, my wife has a workflow where everything that she does, communications and so on, winds up in Markdown in Obsidian. She adds information about who was at the meeting, including basic research into them done by an agent (company directory, title, LinkedIn, and so on; all good to know for someone working in sales). Her AI assistant then extracts bullet points, cross-references, and so on. She uses that to create summaries that she references whenever she goes back to that project. And if someone wants to know what has happened or is currently planned for that project, AI extracts that from the same repository.

There's lots of AI in this workflow. But the content and thought is mostly from her. (With facts from searches that an agent did.) The fact that she's automated a lot of her organizational scutwork to an AI doesn't make the output "AI slop".

etothepii

2 hours ago

I'm amazed by how few people appreciate the quote attributed to Abraham Lincoln, "If I had more time I'd write a shorter letter."

Why aren't people using LLMs to shorten rather than lengthen their plans? You know what you meant, so you can validate whether the shorter version still hits the points you care about. Whereas if I use an LLM to shorten your email, there is always a risk I've now missed your main point.

Cleaning up grammar, punctuation, spelling, etc. is a good thing worth doing, but adding padding is exclusively irritating.

meowface

5 hours ago

I will admit to being an LLM workslopper. I don't ever send anything written by an LLM (because anyone who's seen enough LLM writing will recognize it's an LLM) without rewriting it by hand first - with exceptions for parts of READMEs - but for any other task it's pretty much 100% LLM.

I look at the output and ask it to re-re-verify its results, but at the end of the day the LLM is doing the work and I am handing that off to others.

rbbydotdev

4 hours ago

I would likely feel betrayed in a situation like this, but _sometimes_ there may be a sentence or two capable of being more succinctly expressed via AI. This has happened to me personally: I have written something, come to a "tip of the tongue" moment, then had AI help me express it.

When used right, ideas could be distilled, not extrapolated into slop. So maybe it's not ALL BAD?

I propose a new quotation system, the 3 quote marker to disclose text written or assisted by ai:

'''You are absolutely right'''

ln809

4 hours ago

Many responses are (correctly) identifying that edit history is not a reliable "tell", but this misses the broader point of the original article.

EGreg

4 hours ago

I think that we should have revision control for intermediate stages - for code, documents, even paintings. So we can at least have some idea of provenance, how it's made.

Until AI is used to fake that, too.

zephen

4 hours ago

There are some issues with this.

1) For things made with LLMs:

1a) The fact that older model versions aren't online forever. You literally might never be able to put the original prompt in and get the same result.

1b) A minor change in the input prompt can result in a huge output change, rendering the original prompt practically meaningless, especially if modifications were required to the output of the LLM.

2) For things made the old-fashioned way, most history is boring and not useful. The best git repos have carefully curated history, with cohesive change sets that are both readable, and usable when bisecting the commit history for regressions.
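To make the bisect point concrete, a minimal sketch of that workflow (the tag name here is hypothetical):

    git bisect start
    git bisect bad HEAD        # the current commit exhibits the regression
    git bisect good v1.2.0     # last release known to work
    # git now checks out midpoint commits; mark each one good or bad
    # until it prints the first bad commit, then clean up:
    git bisect reset

Curated, cohesive commits are exactly what make each midpoint buildable and testable along the way.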

EGreg

2 hours ago

I meant only for 2.

And I don’t care if it’s boring, it has to be available. Crime scene details or forgery details are mundane and boring too, but for the investigators they are essential.

zephen

2 hours ago

> And I don’t care if it’s boring, it has to be available.

Strong language, strong nope.

Demand to see shit I didn't even think was important when I was busy building stuff? Sucks to be you.

potsandpans

4 hours ago

It will probably be unpopular here, where people appear to have drawn the lines and formed unyielding positions, but...

The whole llm paranoia is devolving into hysteria. Lots of finger pointing without proof, lots of shoddy evidence put forward, and points that miss the nuance.

My stance is this: I don't really care whether someone used an llm or wrote it themselves. My observation is that in both cases people were mostly wrong and required strict reviews and verification, with the exception of those who did Great Work.

There are still people who do Great Work, and even when they use llms the output is exceptional.

So my job hasn't changed much, I'm just reading more emojis.

If you find yourself becoming irrationally upset by something that you're encountering that's largely outside of your control, consider going to therapy and not forming a borderline obsession with purity over something that has always been a bit slippery (creative originality).

SoftTalker

4 hours ago

If I find an emoji in a work document I'm rejecting it without further review.

zephen

4 hours ago

> My observation is that in both cases people were mostly wrong and required strict reviews and verification, with the exception of those who did Great Work.

Sure, but LLMs allow people to be wronger faster now, so they could conceivably inundate the reviewer with a new set of changes requiring a new two hour review, by only pressing buttons for two minutes.

> If you find yourself becoming irrationally upset by something that you're encountering that's largely outside of your control, consider going to therapy and not forming a borderline obsession with purity on something that has always been a bit slippery (creative originality ).

Maybe your take on it is slightly different because your job function is somewhat different?

I assume that many people complaining here about the LLM slop are more worried about functional correctness than creative originality.

potsandpans

4 hours ago

If it's important to the argument, my title is "Principal Software Engineer MTS". I review code, ADRs, meeting summaries, design docs, PRDs etc...

> I assume that many people complaining here about the LLM slop are more worried about functional correctness than creative originality.

My point is, I've been in the game for coming up on 16 years, mostly in large corporate FAANG-adjacent environments. People have always been functionally incorrect and not to be trusted. It used to be a meme said with endearment, "don't trust my code, I'm a bug machine!" Zero trust. That's why we do code reviews.

> Sure, but LLMs allow people to be wronger faster now, so they could conceivably inundate the reviewer...

With respect, "conceivably" is doing a lot of work here. I don't see it happening. I see more slop code, sure. But that doesn't mean I _have_ to review it with the same scrutiny.

My experience thus far has been that this is solved quite simply: After a quick scan, "Please give this more thought before resubmitting. Consider reviewing yourself, take a pass at refining and verify functionality."

> Maybe your take on it is slightly different because your job function is somewhat different?

> I assume that many people complaining here about the LLM slop are more worried about functional correctness than creative originality.

Interestingly, I see the opposite in the online space. First of all, as an aside, I don't see many people complaining at all in real life (other than the common commiseration of getting slop PRs, which has replaced the common commiseration of getting normal PRs of sub-par quality).

I primarily see people coming to the defense of human creativity and becoming incensed by reading (or I should say, "viewing" more generally) something that an llm has touched.

It appears that mostly people have accepted that llms are a useful tool for producing code and that when used unethically (first pass llm -> production), of course they're no good.

There is a moral outrage and indignation that I've observed, however (on HN and elsewhere), when an LLM has been used for the creative arts.

teeray

5 hours ago

> So it’s definitely AI. I felt betrayed and a little foolish. But why?

Because the prompter is basically gaslighting reviewers into doing work for them. They put their marks of authorship on the AI slop when they've barely looked at it at all which convinces the reviewer to look. When the comments come back, they pump the feedback into the LLM, more slop falls out and around we go again. The prompter isn't really doing work at all—the reviewers are.

zephen

4 hours ago

Not sure why this is being downvoted. It accurately and succinctly describes a likely reason for a _feeling_.

acedTrex

4 hours ago

There is nothing worse than this feeling. Like, fantastic, now I have to go read through this slop with incredible care and attention to minutiae. I may as well not read the slop and go redo all the work/thought myself; it will be easier that way.

turnsout

4 hours ago

Just a hot take, but if you ask someone to complete a rote task that AI can do, you should not be surprised when they use AI to do it.

The author does not mention whether the generated project plan actually looked good or plausible. If it is, where is the harm? Just that the manager had their feelings hurt?

mlhpdx

5 hours ago

Is writing with LLM assistance that different than writing with a typewriter 100+ years ago? Than using a computer and printer 30 years ago?

Each can be seen as using a tool to add false legitimacy. But ultimately they are just tools.

QuercusMax

5 hours ago

Those two things aren't even comparable. Both of those are using technology to physically imprint letters onto the page, but in both cases those are still your own ideas in your own words.

mlhpdx

3 hours ago

But not the appearance. It’s not the same, but it rhymes.

Edit: to clarify, people were judged by the clarity of their handwriting in the past and these tools made that impossible. Similarly, LLMs spackle over higher level language issues.

QuercusMax

an hour ago

Dictation has existed for millennia; alternatively, hiring someone to neatly write out your letters after making a messy draft has also existed for a very long time. My mom paid half her way through college in the '60s by typing papers for people who didn't know how to type properly.

These things are not remotely comparable.

zephen

5 hours ago

Yes, it's different.

All these tools provide leverage to the author, but only one of these tools provides non-deterministic leverage.

mlhpdx

3 hours ago

That’s a great distinction — the non-determinism is a huge difference.

user

5 hours ago

[deleted]

umanwizard

4 hours ago

Yes, it is obviously fundamentally different.

mlhpdx

3 hours ago

Explain?

umanwizard

3 hours ago

Because LLM-created content is not an expression of your own human creativity or intellect.

It's not like typewriters: in a written work, the content is the entire point, not the handwriting. So unlike previous tools, this one is replacing you for the part that actually matters.