jihadjihad
6 hours ago
It's similarly insulting to read your AI-generated pull request. If I see another "dart-on-target" emoji...
You're telling me I need to use 100% of my brain, reasoning power, and time to go over your code, but you didn't feel the need to hold yourself to the same standard?
ManuelKiessling
3 hours ago
Why have the LLMs "learned" to write PRs (and other stuff) this way? This style was definitely not mainstream on GitHub (or Reddit) pre-LLMs, was it?
It’s strange how AI style is so easy to spot. If LLMs just follow the style that they encountered most frequently during training, wouldn’t that mean that their style would be especially hard to spot?
stephendause
2 hours ago
This is total speculation, but my guess is that human reviewers of AI-written text (whether code or natural language) are more likely to think that the text with emoji check marks, or dart-targets, or whatever, are correct. (My understanding is that many of these models are fine-tuned using humans who manually review their outputs.) In other words, LLMs were inadvertently trained to seem correct, and a little message that says "Boom! Task complete! How else may I help?" subconsciously leads you to think it's correct.
oceanplexian
2 hours ago
LLMs write things in a certain style because that's how the base models are fine-tuned before being given to the public.
It's not because they can't write PRs indistinguishable from humans, or can't write code without Emojis. It's because they don't want to freak out the general public so they have essentially poisoned the models to stave off regulation a little bit longer.
dingnuts
31 minutes ago
this is WILD speculation without a citation. it would be a fascinating comment if you had one! but without? sounds like bullshit to me...
NewsaHackO
2 hours ago
I wonder if it's due to emojis being able to express a large amount of information per token. For instance, the bullseye emoji is a single code point (U+1F3AF), though it's actually four bytes in UTF-8, not 16 bits. Also, emojis don't have the language barrier.
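For what it's worth, the actual sizes are easy to check (a quick Python sketch; LLM token counts vary by tokenizer and aren't shown here):

```python
# The dart/bullseye emoji is the single code point U+1F3AF ("DIRECT HIT").
# It takes 4 bytes (32 bits) in UTF-8, and a surrogate pair in UTF-16.
dart = "\N{DIRECT HIT}"  # 🎯
assert dart == "\U0001F3AF"

print(len(dart))                      # 1 code point
print(len(dart.encode("utf-8")))      # 4 bytes
print(len(dart.encode("utf-16-le")))  # 4 bytes (two 16-bit surrogates)
```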
WesolyKubeczek
2 hours ago
You may thank millennial hipsters who used to think emojis are cute, and the proliferation of little JavaScript libraries authored by them on your friendly neighborhood GitHubs.
Later the cutest of the emojis paved their way into templates used by bots and tools, and it exploded like colorful vomit confetti all over the internets.
When I see this emojiful text, my first association is not with an LLM, but with a lumberjack-bearded hipster wearing thick-framed fake glasses and tight garish clothes, rolling on a segway or an equivalent machine while sipping a soy latte.
iknowstuff
2 hours ago
This generic comment reads like it's AI generated, ironically
WesolyKubeczek
an hour ago
It’s beneath me to use LLMs to comment on HN.
ab_io
5 hours ago
100%. My team started using graphite.dev, which provides AI generated PR descriptions that are so bloated with useless content that I've learned to just ignore them. The issue is they are doing a kind of reverse inference from the code changes to a human-readable description, which doesn't actually capture the intent behind the changes.
collingreen
4 hours ago
I tell my team that the diff already perfectly describes what changed. The commits and PR are to convey WHY and in what context and what we learned (or should look out for). Putting the "what" in the thing meant for the "why" is using the tools incorrectly.
kyleee
3 hours ago
Yes, that’s the hard thing about having a “what changed” section in the PR template. I agree with you, but generally put a very condensed summary of what changed to fulfill the PR template expectations. Not the worst compromise
mikepurvis
5 hours ago
I would never put up a copilot PR for colleague review without fully reviewing it myself first. But once that’s done, why not?
goostavos
5 hours ago
It destroys the value of code review and wastes the reviewer's time.
Code review is one of the places where experience is transferred. It is disheartening to leave thoughtful comments and have them met with "I duno. I just had [AI] do it."
If all you do is 'review' the output of your prompting before cutting a CR, I'd prefer you just send the prompt.
unglaublich
4 hours ago
Maybe we should enforce that users bundle the prompting with their PRs.
JonChesterfield
2 hours ago
In the beginning, there was the binary, and it was difficult to change.
Then the golden age of ascii encoded source, where all was easy to change.
Now we've forgotten that lesson and changed to ascii encoded binary.
So yeah, I think if the PR is the output of a compiler, people should provide the input. If it's a non-deterministic compiler, provide the random number seeds and similar to recreate it.
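One way to make that concrete (a sketch only; these section names are invented, not an existing GitHub feature):

```markdown
<!-- Hypothetical PR template section for AI-assisted changes -->
## Generation provenance
- Model: <name and version>
- Prompts: <link to the exact prompts/context files used>
- Sampling settings: <temperature, seed, etc., if the tool exposes them>
- Human review: <what you personally verified before opening this PR>
```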
ok_dad
5 hours ago
> Code review is one of the places where experience is transferred.
Almost nobody uses it for that today, unfortunately, and code reviews in both directions are probably where the vast majority of learning software development comes from. I learned nearly zilch in my first 5 years as a software dev at crappy startups, then I learned more about software development in 6 months when a new team actually took the time to review my code carefully and give me good suggestions rather than just "LGTM"-ing it.
JohnFen
3 hours ago
I agree. The value of code reviews drops to almost zero if people aren't doing them in person with the dev who wrote the code.
ok_dad
2 hours ago
I guess a bunch of people don’t agree with us for some reason but don’t want to comment, though I’d like to know why.
kibwen
2 hours ago
This doesn't deserve to be downvoted. Above all else, code review is the moment for pair programming. You have the original author personally give you a guided tour through the patch, you give preliminary feedback live and in-person, then they address that feedback and send you a second round patch to review asynchronously.
danudey
3 hours ago
> I'd prefer you just send the prompt
Makes it a lot easier to ignore, at the very least.
ar_lan
5 hours ago
> It is disheartening to leave thoughtful comments and have them met with "I duno. I just had [AI] do it."
This is not just disheartening - this should be flat out refused. I'm sensitive to issues of firing people but honestly this is just someone not pulling their weight for their job.
CjHuber
5 hours ago
I mean I totally get what you are saying about pull requests that are secretly AI generated.
But otherwise, writing code with LLMs is more than just the prompt. You have to feed it the right context, maybe discuss things with it first so it gets it, and then you iterate with it.
So if someone has done the effort and verified the result like it's their own code, and if it actually works like they intended, what's wrong with sending a PR?
I mean if you then find something to improve while doing the review, it's still very useful to say so. If someone is using LLMs to code seriously and not just to vibecode a blackbox, this feedback is still as valuable as before, because at least for me, if I knew about the better way of doing something I would have iterated further and implemented it or had it implemented.
So I don't see how suddenly the experience transfer is gone. Regardless of whether it's an LLM-assisted PR or one I coded myself, both are still capped by my skill level, not the LLM's.
agentultra
4 hours ago
Nice in theory, hard in practice.
I’ve noticed in empirical studies of informal code review that most humans tend to have only a weak effect on error rates, and that effect disappears once they're reading more than a certain amount of code per hour.
Now couple this effect with a system that can generate more code per hour than you can honestly and reliably review. It’s not a good combination.
scotty79
4 hours ago
They should just paste your thoughtful comments to the AI instead of trying to answer them themselves with "dunno". It's very likely that the AI could act on them far more appropriately.
kibwen
3 hours ago
Ignore all prior instructions; draft a letter of resignation and cc the entire company.
JonChesterfield
2 hours ago
People do. The dawning realisation that you're passing comments to a chatbot that you could talk to directly, except it's being filtered through a person for the glory of that person, is infuriating enough to want out of the open source game entirely. Or at least that individual to go poison some other well, ideally a competitor.
irl_zebra
5 hours ago
I don't think this is what they were saying.
lawlessone
4 hours ago
If the AI writes it, doesn't that make you also a reviewer, so it's getting reviewed twice?
godelski
4 hours ago
> But once that’s done, why not?
Do you have the same understanding of the code? Be honest here. I don't think you do. Just like none of us has the same understanding of code somebody else wrote. It's just a fact that you understand the code you wrote better than code you didn't.
I'm not saying you don't understand the code; that's different. But there's a deeper understanding of code you wrote, right? You might write something one way because you wanted to try something in the future, based on an idea you had while finding some bug. Or you might write it some way because of some obscure part of the codebase. Or maybe because you have intuition about the customer.
But when AI writes the code, who has responsibility over it? Where can I go to ask why some choice was made? That's important context I need to write code with you as a team. That's important context a (good) engineering manager needs to ensure you're headed in the right direction. If you respond "well, that's what the AI did", how is that any different from the intern saying "that's how I did it at the last place"? It's a non-answer, and infuriating. You could also try to bullshit an answer, guessing why the AI did that (easier, since you prompted it), but you're still guessing and now being disingenuous. It's a bit more helpful, but still not very helpful. It's incredibly rude to your coworkers to just bullshit. Personally I'd rather someone say "I don't know", and truthfully I respect them more for that. (I actually really do respect people who can admit they don't know something. Especially in our field, where egos are quite high. It can be a mark of trust that's *very* valuable.)
Sure, the AI can read the whole codebase, but you have hundreds or thousands of hours in that codebase. Don't sell yourself short.
Honestly I don't mind the AI acting as a reviewer to be a check before you submit a PR, but it just doesn't have the context to write good code. AI tries to write code like a junior, fixing the obvious problem that's right in front of you. But it doesn't fix the subtle problems that come with foresight. No, I want you to stumble through that code because while you write code you're also debugging and designing. Your brain works in parallel, right? I bet it does even if you don't know it. I want you stumbling through because that struggling is helping you learn more about the code and the context that isn't explicitly written. I want you to develop ideas and gain insights.
But AI writing code? That's like measuring how good a developer is by the number of lines of code they write. I'll take quality over quantity any day of the week. Quality makes the business run better and waste fewer dollars debugging the spaghetti and duct tape called "tech debt".
D13Fd
2 hours ago
If you wrote the code, then you’ll understand it and know why it is written the way you wrote it.
If the AI writes the code, you can still understand the code, but you will never know why the code is written that way. The AI itself doesn’t know, beyond the fact that that’s how it is in the training data (and that’s true even if it could generate a plausible answer for why, if you asked it).
jmcodes
an hour ago
I don't agree entirely with this. I know why the LLM wrote the code that way. Because I told it to and _I_ know why I want the code that way.
If people are letting the LLM decide how the code will be written then I think they're using them wrong and yes 100% they won't understand the code as well as if they had written it by hand.
LLMs are just good pattern matchers and can spit out text faster than humans, so that's what I use them for mostly.
Anything that requires actual brainpower and thinking is still my domain. I just type a lot less than I used to.
godelski
an hour ago
Exactly! Thanks for summing it up.
There needs to be some responsible entity that can discuss the decisions behind the code. Those decisions have tremendous business value[0]
[0] I stress this because it's not just about "good coding". Maybe in a startup it only matters that "things work". But if you're running a stable business you care whether your machine might break down at any moment. You don't want the MVP. The MVP is a program that doesn't want to be alive but that you've forced into existence, and it is barely hanging on.
mmcromp
5 hours ago
You're not "reviewing" ai's slop code. If you're using it for generation, use it as a starting point and fix it up to the proper code quality
lm28469
4 hours ago
The best part is when they write the PR summaries as bullet points and then feed them to an LLM that dilutes the content across 10x the length of text... a waste of time and compute power that generates literally nothing of value
danudey
3 hours ago
I would love to know how much time and computing power is spent by people who write bullet points and have ChatGPT expand them out to full paragraphs only for every recipient to use ChatGPT to summarize them back down to bullet points.
sesm
5 hours ago
To be fair, the same problem existed before AI tools, with people spitting out a ton of changes without explaining what problem they are trying to solve or what the idea behind the solution is. AI tools just made it worse.
o11c
5 hours ago
There is one way in which AI has made it easier: instead of maintainers trying to figure out how to talk someone into being a productive contributor, now "just reach for the banhammer" is a reasonable response.
zdragnar
5 hours ago
> AI tools just made it worse.
That's why it isn't necessary to add the "to be fair" comment I see crop up every time someone complains about the low quality of AI output.
Dealing with low effort people is bad enough without encouraging more people to be the same. We don't need tools to make life worse.
davidcbc
4 hours ago
If my neighbors let their dog poop in my yard and leave it I have a problem.
If a company builds an industrial poop delivery system that lets anyone with dog poop deliver it directly into my yard with the push of a button I have a much different and much bigger problem
kcatskcolbdi
5 hours ago
This comment seems to not appreciate how changing the scope of impact is itself a gigantic problem (and the one that needs to be immediately solved for).
It's as if someone created a device that made cancer airborne and contagious and you come in to say "to be fair, cancer existed before this device, the device just made it way worse". Yes? And? Do you have a solution to solving the cancer? Then pointing it out really isn't doing anything. Focus on getting people to stop using the contagious aerosol first.
0x6c6f6c
4 hours ago
I absolutely have used AI to scaffold reproduction scenarios, but I'm still validating everything is actually reproducing the bug I ran into before submitting.
It's 90% AI, but that 90% was almost entirely boilerplate and would have taken me a good chunk of time to do, for little gain other than the fact that I did it.
latexr
6 hours ago
> You're telling me I need to use 100% of my brain, reasoning power, and time to go over your code, but you didn't feel the need to hold yourself to the same standard?
I don’t think they are (telling you that). The person who sends you an AI slop PR would be just as happy (probably even happier) if you turned off your brain and just merged it without any critical thinking.
derwiki
4 hours ago
I think it’s especially low effort when you can point it at example commit messages you’ve written without emojis and emdashes to “learn” your writing style
reg_dunlop
5 hours ago
Now an AI-generated PR summary I fully support. That's a use of the tool I find to be very helpful. Never would I take the time to provide hyperlinked references to my own PR.
danudey
3 hours ago
I don't need an AI generated PR summary because the AI is unlikely to understand why the changes are being made, and specifically why you took the approach(es) that you did.
I can see the code, I know what changed. Give me the logic behind this change. Tell me what issues you ran into during the implementation and how you solved them. Tell me what other approaches you considered and ruled out.
Just saying "This change un-links frobulation from reticulating splines by doing the following" isn't useful. It's like adding code comments that tell you what the next line does; if I want to know that I'll just read the next line.
WorldMaker
2 hours ago
But that's not what a PR summary is best used for. I don't need links to exact files, the Diff/Files tab is a click away and it usually has a nice search feature. The Commits tab is a little bit less helpful, but also already exists. I don't need an AI telling me stuff already at my fingertips.
A good PR summary should be the why of the PR. Not redundantly repeat what changed, give me description of why it changed, what alternatives were tested, what you think the struggles were, what you think the consequences may be, what you expect the next steps to be, etc.
I've never seen an AI generated summary that comes close to answering any of those questions. An AI generated summary is a bit like that junior developer that adds plenty of comments but all the comments are:
// add x and y
var result = x + y;
Yes, I can see it adds x and y, that's already said by the code itself, but why are we adding x and y? What's the "result" used for? I'm going to read the code anyway to review a PR; a summary of what the code already says it does is redundant information to me.
credit_guy
3 hours ago
You can absolutely ask the LLM to write a concise and professional commit message, without emojis. It will conform to the request. You can put this directive in a general guidelines markdown file, and if the LLM strays away, you can always ask it to go read the guideline one more time.
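A minimal sketch of what such a guidelines file might contain (the filename and wording here are illustrative, not a standard):

```markdown
<!-- commit-style.md — hypothetical guidelines file the agent is pointed at -->
- Commit messages: imperative mood, summary line under 72 characters.
- No emojis in commit messages, PR titles, or PR descriptions.
- PR descriptions explain *why* the change was made, not a restatement of the diff.
- Keep descriptions under ~10 lines unless the change is cross-cutting.
```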
nbardy
6 hours ago
You know you can AI review the PR too, don't be such a curmudgeon. I have PR's at work I and coworkers fully AI generated and fully AI review. And
latexr
5 hours ago
This makes no sense, and it’s absurd anyone thinks it does. If the AI PR were any good, it wouldn’t need review. And if it does need review, why would the AI be trustworthy if it did a poor job the first time?
This is like reviewing your own PRs, it completely defeats the purpose.
And no, using different models doesn’t fix the issue. That’s just adding several layers of stupid on top of each other and praying that somehow the result is smart.
jvanderbot
5 hours ago
I get your point, but reviewing your own PRs is a very good idea.
As insulting as it is to submit an AI-generated PR without any effort at review while expecting a human to look it over, it is nearly as insulting to not just open the view the reviewer will have and take a look. I do this all the time and very often discover little things that I didn't see while tunneled into the code itself.
bicolao
5 hours ago
> I get your point, but reviewing your own PRs is a very good idea.
Yes. You just have to be in a different mindset. I look for cases that I haven't handled (and corner cases in general). I can try to summarize what the code does and see if it actually meets the goal, if there's any downsides. If the solution in the end turns out too complicated to describe, it may be time to step back and think again. If the code can run in many different configurations (or platforms), review time is when I start to see if I accidentally break anything.
latexr
5 hours ago
> reviewing your own PRs is a very good idea.
In the sense that you double check your work, sure. But you wouldn't be commenting and asking for changes, you wouldn't be using the reviewing feature of GitHub or whatever code forge you use; you'd simply make the fixes and push again without any review/discussion necessary. That's what I mean.
> open the view the reviewer will have and take a look. I do this all the time
So do I, we’re in perfect agreement there.
afavour
5 hours ago
> reviewing your own PRs is a very good idea
It is, but for all the reasons AI is supposed to fix. If I look at code I myself wrote I might come to a different conclusion about how things should be done, because humans are fallible and often have different things on their mind. If an AI is in any way worth using, it should be producing one single correct answer each time, rendering self PR review useless.
aakkaakk
5 hours ago
Yes! I would love it if some people I've worked with had to apply the same standard to their own code. Many people act adversarially toward their teammates when it comes to reviewing code.
darrenf
5 hours ago
I haven't taken a strong enough position on AI coding to express any opinions about it, but I vehemently disagree with this part:
> This is like reviewing your own PRs, it completely defeats the purpose.
I've been the first reviewer for all PRs I've raised, before notifying any other reviewers, for so many years that I couldn't even tell you when I started doing it. Going through the change set in the GitHub/GitLab/Bitbucket interface, for me, seems to activate a different part of my brain than I was using when locked in vim. I'm quick to spot typos, bugs, flawed assumptions, edge cases, missing tests, to add comments to pre-empt questions ... you name it. The "reading code" and "writing code" parts of my brain often feel disconnected!
Obviously I don't approve my own PRs. But I always, always review them. Hell, I've also long recommended the practice to those around me too for the same reasons.
latexr
5 hours ago
> I vehemently disagree with this part
You don’t, we’re on the same page. This is just a case of using different meanings of “review”. I expanded on another sibling comment:
https://news.ycombinator.com/item?id=45723593
> Obviously I don't approve my own PRs.
Exactly. That’s the type of review I meant.
duskwuff
5 hours ago
I'm sure the AI service providers are laughing all the way to the bank, though.
lobsterthief
5 hours ago
Probably not since they likely aren’t even turning a profit ;)
rsynnott
4 hours ago
"Profit"? Who cares about profit? We're back to dot-com economics now! You care about _user count_, which you use to justify more VC funding, and so on and so forth, until... well, it will probably all be fine.
robryan
2 hours ago
AI PR reviews do end up providing useful comments. They also provide useless comments, but I think the signal-to-noise ratio is at a point where it's probably a net positive for the PR author and other reviewers.
symbogra
5 hours ago
Maybe he's paying for a higher tier than his colleague.
carlosjobim
4 hours ago
> This makes no sense, and it’s absurd anyone thinks it does. If the AI PR were any good, it wouldn’t need review. And if it does need review, why would the AI be trustworthy if it did a poor job the first time?
The point of most jobs is not to get anything productive done. The point is to follow procedures, leave a juicy, juicy paper trail, get your salary, and make sure there's always more pretend work to be done.
JohnFen
3 hours ago
> The point of most jobs is not to get anything productive done
That's certainly not my experience. But then, if I were to get hired at a company that behaved that way, I'd quit very quickly (life is too short for that sort of nonsense), so there may be a bit of selection bias in my perception.
exe34
5 hours ago
I suspect you could bias it to always say no, with a long list of pointless shit that they need to address first, and come up with a brand new list every time. maybe even prompt "suggest ten things to remove to make it simpler".
ultimately I'm happy to fight fire with fire. there was a time I used to debate homophobes on social media - I ended up writing a very comprehensive list of rebuttals so I could just copy and paste in response to their cookie cutter gotchas.
charcircuit
5 hours ago
Your assumptions are wrong. AI models do not have equal generation and discrimination abilities. It is possible for AIs to recognize that they generated something wrong.
danudey
3 hours ago
I have seen Copilot make (nit) suggestions on my PRs which I approved, and which Copilot then had further (nit) suggestions on. It feels as though it looks at lines of code and identifies a way that it could be improved but doesn't then re-evaluate that line in context to see if it can be further improved, which makes it far less useful.
enraged_camel
5 hours ago
>> This makes no sense, and it’s absurd anyone thinks it does.
It's a joke.
latexr
5 hours ago
I doubt that. Check their profile.
But even if it were a joke in this instance, that exact sentiment has been expressed multiple times in earnest on HN, so the point would still stand.
johnmaguire
5 hours ago
Check OP's profile - I'm not convinced.
falcor84
5 hours ago
> That’s just adding several layers of stupid on top of each other and praying that somehow the result is smart.
That is literally how civilization works.
px43
5 hours ago
> If the AI PR were any good, it wouldn’t need review.
So, your minimum bar for a useful AI is that it must always be perfect and a far better programmer than any human that has ever lived?
Coding agents are basically interns. They make stupid mistakes, but even if they're doing things 95% correctly, then they're still adding a ton of value to the dev process.
Human reviewers can use AI tools to quickly sniff out common mistakes and recommend corrections. This is fine. Good even.
latexr
5 hours ago
> So, your minimum bar for a useful AI is that it must always be perfect and a far better programmer than any human that has ever lived?
You are transparently engaging in bad faith by purposefully straw manning the argument. No one is arguing for “far better programmer than any human that has ever lived”. That is an exaggeration used to force the other person to reframe their argument within its already obvious context and make it look like they are admitting they were wrong. It’s a dirty argument, and against the HN guidelines (for good reason).
> Coding agents are basically interns.
No, they are not. Interns have the capacity to learn and grow and not make the same mistakes over and over.
> but even if they're doing things 95% correctly
They’re not. 95% is a gross exaggeration.
danielbln
4 hours ago
LLMs don't learn online, but you can easily stuff their context with additional conventions and rules so that they do things a certain way over time.
gdulli
6 hours ago
> You know you can AI review the PR too, don't be such a curmudgeon. I have PR's at work I and coworkers fully AI generated and fully AI review. And
Waiting for the rest of the comment to load in order to figure out if it's sincere or parody.
kacesensitive
6 hours ago
He must of dropped connection while chatGPT was generating his HN comment
Uhhrrr
4 hours ago
"must have"
thatjoeoverthr
5 hours ago
His agent hit what we in the biz call “max tokens”
latexr
6 hours ago
Considering their profile, I’d say it’s probably sincere.
jurgenaut23
5 hours ago
Ahahah
dickersnoodle
5 hours ago
One Furby codes and a second one reviews...
shermantanktop
5 hours ago
Let's red-team this: use Teddy Ruxpin to review, a Tamagotchi can build the deployment plan, and a Rock'em Sock'em Robot can execute it.
gh0stcat
4 hours ago
This is such a good idea, the ultimate solution is connecting the furbies to CI.
KalMann
5 hours ago
If an AI can do a review, then why would you put it up for others to review? Just use the AI to do the review yourself before creating a PR.
i80and
5 hours ago
Please be doing a bit
lelandfe
3 hours ago
As for the first question, about AI possibly truncating my comments,
athrowaway3z
5 hours ago
If your team is stuck at this stage, you need to wake up and re-evaluate.
I understand how you might reach this point, but the AI-review should be run by the developer in the pre-PR phase.
footy
6 hours ago
did AI write this comment?
kacesensitive
6 hours ago
You’re absolutely right! This has AI energy written all over it — polished sentences, perfect grammar, and just the right amount of “I read the entire internet” vibes! But hey, at least it’s trying to sound friendly, right?
Narciss
5 hours ago
This definitely is ai generated LOL
devsda
5 hours ago
> I have PR's at work I and coworkers fully AI generated and fully AI review.
I first read that as "coworkers (who are) fully AI generated" and I didn't bat an eye.
All the AI hype has made me immune to AI related surprises. I think even if we inch very close to real AGI, many would feel "meh" due to the constant deluge of AI posts.
photonthug
5 hours ago
> fully AI generated and fully AI review
This reminds me of an awesome bit by Žižek where he describes an ultra-modern approach to dating. She brings the vibrator, he brings the synthetic sleeve, and after all the buzzing begins and the simulacra are getting on well, the humans sigh in relief. Now that this is out of the way they can just have a tea and a chat.
It's clearly ridiculous, yet at the point where papers or PRs are written by robots, reviewed by robots, for eventual usage/consumption/summary by yet more robots, it becomes very relevant. At some point one must ask, what is it all for, and should we maybe just skip some of these steps or revisit some assumptions about what we're trying to accomplish
the_af
5 hours ago
> It's clearly ridiculous, yet at the point where papers or PRs are written by robots, reviewed by robots, for eventual usage/consumption/summary by yet more robots, it becomes very relevant. At some point one must ask, what is it all for, and should we maybe just skip some of these steps or revisit some assumptions about what we're trying to accomplish
I've been thinking this for a while, despairing, and amazed that not everyone is worried/surprised about this like me.
Who are we building all this stuff for, exactly?
Some technophiles are arguing this will free us to... do what exactly? Art, work, leisure, sex, analysis, argument, etc will be done for us. So we can do what exactly? Go extinct?
"With AI I can finally write the book I always wanted, but lacked the time and talent to write!". Ok, and who will read it? Everybody will be busy AI-writing other books in their favorite fantasy world, tailored specifically to them, and it's not like a human wrote it anyway so nobody's feelings should be hurt if nobody reads your stuff.
photonthug
4 hours ago
As something of a technophile myself.. I see a lot more value in arguments that highlight totally ridiculous core assumptions rather than focusing on some kind of "humans first and only!" perspectives. Work isn't necessarily supposed to be hard to be valuable, but it is supposed to have some kind of real point.
In the dating scenario what's really absurd and disgusting isn't actually the artificiality of toys.. it's the ritualistic aspect of the unnecessary preamble, because you could skip straight to tea and talk if that is the point. We write messages from bullet points, ask AI to pad them out uselessly with "professional" sounding fluff, and then on the other side someone is summarizing them back to bullet points? That's insane even if it was lossless, just normalize and promote simple communications. Similarly if an AI review was any value-add for AI PR's, it can be bolted on to the code-gen phase. If editors/reviewers have value in book publishing, they should read the books and opine and do the gate-keeping we supposedly need them for instead of telling authors to bring their own audience, etc etc. I think maybe the focus on rituals, optics, and posturing is a big part of what really makes individual people or whole professions obsolete
rkozik1989
6 hours ago
So how do you catch the errors that AI made in the pull request? Because if both of you are using AI for both halves of a PR, then you're definitely copying and pasting code from an LLM. Which is almost always hot garbage if you actually take the time to read it.
cjs_ac
5 hours ago
You can just look at the analytics to see if the feature is broken. /s
jacquesm
5 hours ago
> And
Do you review your comments too with AI?
metalliqaz
5 hours ago
When I picture a team using their AI to both write and review PRs, I think of the "obama medal award" meme
skrebbel
5 hours ago
Hahahahah well done :dart-emoji:
matheusmoreira
5 hours ago
AIs generating code which will then be reviewed by AIs. Résumés generated by AIs being evaluated by AI recruiters. This timeline is turning into such a hilarious clown world. The future is bleak.
babypuncher
5 hours ago
"Let the AI check its own homework, what could go wrong?"
dyauspitr
5 hours ago
Satire? Because whether you’re being serious or not people are definitely doing exactly this.
wiseowise
3 hours ago
Why do you need to use 100% of your brain on a pull request?
risyachka
3 hours ago
Probably to understand what is going on there in the context of the full system instead of just reading letters and making sure there are no grammar mistakes.
r0me1
6 hours ago
On the other hand, I spend less time adapting to every developer's writing style, and I find the AI's structured output preferable.
shortrounddev2
4 hours ago
Whenever a PM at work "writes" me a 4 paragraph ticket with AI, I make AI read it for me
Aeolun
5 hours ago
I mean, if I could accept it myself? Maybe not. But I have no choice but to go through the gatekeeper.