AI code review: Should the author be the reviewer?

136 points | posted 6 months ago
by sebg

56 Comments

svieira

6 months ago

I want to highlight this bit:

> 2. Engineers underestimate the degree to which this is true, and don’t carefully review AI-generated code to the degree to which they would review their own.

> The reason for #2 isn’t complacency, one can review code at the speed at which they can think and type, but not at the speed at which an LLM can generate. When you’re typing code, you review-as-you-go, when you AI-generate the code, you don’t.

> Interestingly, the inverse is true for mediocre engineers, for whom AI actually improves the quality of the code they produce. AI simply makes good and bad engineers converge on the same median as they rely on it more heavily.

I find it interesting that the mode of interaction is different. (I definitely find it that way myself: code review is a different activity than designing, which is different than writing, but code review tends to have aspects of both designing and writing, just in different orders.)

throwup238

6 months ago

Different people also work better in different modes of interaction, which is what I think this article papers over.

For me, reviewing code is much easier than writing it because, while the amount of codebase context stays the same in both modes, the context for writing takes up quite a bit more space on top of that. I rarely get a nicely specced-out issue where I can focus on writing the code; instead I spend a lot of mental capacity trying to figure out how to fill in the details that were left out.

Focusing on the codebase during review reallocates that context to just the codebase. My brain then pattern-matches against code that's already in front of me much more easily than when writing new code. Unfortunately, LLMs are largely at the junior-engineer level, and reviewing those PRs takes a lot more mental effort than reviewing my coworkers'.

okdood64

6 months ago

A good engineer != a good coder.

garylkz

6 months ago

Not always, but can be

slyle

6 months ago

[flagged]

gonzan

6 months ago

I think it absolutely makes sense, especially if the bot and prompts that go into the code review are different from the bot/prompts that wrote the code. But sometimes even the same one can find different errors if you just give it more cycles/iterations to look at the code.

We humans (most of us, anyway) don't write everything perfectly in one go, and AI doesn't either.

AI tooling is improving so the AI can write tests for its own code and do pre-reviews but I don't think it ever hurts to have both an AI and a human review the code of any PR opened, no matter who or what opened it.

I'm also building a tool in the space https://kamaraapp.com/ and I found many times that Kamara's reviews find issues in Kamara's own code. I can say that I also find bugs in my own code when I review it too!

We've also been battling the same issue Greptile has in the example provided, where the code suggestion lands on the completely wrong line. We have it kind of under control, but I haven't found any tool that gets it right 100% of the time. Still a bit to go before the big AI takeover.

rbren

6 months ago

When we first started OpenHands (fka OpenDevin) [1], AI-generated PRs would get opened with OpenHands as the PR creator/owner. This created two serious problems:

* First, the person who triggered the AI could approve and merge the PR. No second set of human eyes needed. This essentially bypassed the code review process.

* Second, the PR had no clear owner. Many of them would just languish with no one advocating for them to get merged. Worse, if one did get merged and caused problems, there was no one you could hold responsible.

We quickly switched strategies--every PR is owned by a human being. You can still see which _commits_ were done by OpenHands, but your face is on the PR, so you're responsible for it.

[1] https://github.com/All-Hands-AI/OpenHands

VMG

6 months ago

> Worse, if one did get merged and caused problems, there was no one you could hold responsible.

Clearly the merger and approvers are responsible

xnorswap

6 months ago

I want to believe that can work. There's a neat idealism to the final approver being responsible for any bug fixes. It encourages knowledge sharing and prevents knowledge silos; the approver needs to be well versed enough to fix bugs.

But the reality is that it also risks pushing away those who go out of their way to conduct more reviews.

Every team I've been on has had a mix of developers, some of whom are responsive to doing code reviews and will pick them up and process them quickly. There are also those developers who drag their feet, only pick up reviews when they're the only one who can review that area of the codebase, and take their time.

The former developers are displaying the habits you'd like to encourage more widely, and yet they find themselves "punished" by getting even more reviews, as people learn to assign things to them when they want a PR handled effectively.

I fear that compounding that by layering on another obligation to the star team members would further push them out.

andreasmetsala

6 months ago

If reviewing and merging your PR puts me on the hook for fixing anything that breaks, I just won’t review your PR. If I had time to write it myself I wouldn’t need your PR in the first place.

SOTGO

6 months ago

I thought the section on finding bugs was interesting. I'd be curious how many false positives the LLM identified to get the true positive rate that high. My experience with LLMs is that they will find "bugs" if you ask them to, even if there isn't one.

dakshgupta

6 months ago

In this specific case, each file had a single bug in it, and the bot was instructed to find exactly one bug. The wrong cases were all false positives, in that it made up a bug.

dheera

6 months ago

I think this is mostly the fault of RLHF over-indexing on pleasing the user rather than being right.

You can system-prompt them to mitigate this to some degree: explicitly tell it that it is the coding expert and should push back if it thinks the user is wrong or the task is flawed, that it is better to be unsure than to bullshit, etc.
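
A minimal sketch of what that kind of system prompt might look like with the OpenAI Python SDK (the model name, prompt wording, and file under review are just illustrative):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The system prompt pushes against sycophancy: act as the expert,
    # push back when warranted, prefer "I'm not sure" over confident nonsense.
    SYSTEM_PROMPT = (
        "You are the coding expert in this conversation. If you think the user "
        "is wrong or the task is flawed, push back and explain why. "
        "If you are unsure, say so explicitly rather than guessing."
    )

    code_snippet = open("my_module.py").read()  # hypothetical file being reviewed

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Review this code for bugs:\n\n" + code_snippet},
        ],
    )
    print(response.choices[0].message.content)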

SilverBirch

6 months ago

Absolutely agree with this. I use ChatGPT to ask how best to do something, and if I say "I'm not sure about that" in response to some proposal, the tool will basically always back down and change something, even if it was totally right the first time. It's a real problem because it makes it very difficult to interrogate the tool when you're unsure whether its answer is correct.

never_better

6 months ago

In my experience that's what's separated the different AI code review tools on the market right now.

Some are tuned far better than others on signal/noise ratio.

simonw

6 months ago

As a programmer, your responsibility is to produce working code that you are confident in. I don't care if you used an LLM to help you get to that point, what matters is that when you submit a PR you are effectively saying:

"I have put the work into this change to ensure that it solves the problem to the best of my ability. I am willing to stake my reputation on this being good to the best of my knowledge, and it is not a waste of your time to review this and help confirm I didn't miss anything."

So no, AI assisted programming changes nothing: you're still the author, and if your company policy dictates it then you should still have a second pair of human eyes look at that PR before you land it.

flir

6 months ago

Reputation is, IMO, the key. And not just for code, but for natural language too.

We're going to have to get used to publication being more important than authorship: I published this, therefore I stand behind every word of it. It might be 5% chatbot or 95% chatbot, but if it's wrong, people should hold me to account, not my tools.

No "oh, the chatbot did it" get out of jail free cards.

lvzw

6 months ago

I think that this is a big reason that agents aren't as prevalent as one might otherwise expect. Quality control is very important in my job (legal space, but IANAL), and I think that while LLMs could do a lot of what we do, having someone whose reputation and career progression is effectively on the line is the biggest incentive to keep the work error-free; that dynamic just isn't there with LLMs.

isaacremuant

6 months ago

Bingo. Accountability is one of the most important aspects. It's what makes people fearful of LLMs and complain about them, because they essentially want to avoid having it.

If you can't explain why you're putting in some code, you don't understand it, and that's not really acceptable.

ebiester

6 months ago

The example they were using, however, was Devin, which is supposed to be autonomous. I think they're presenting a slightly different use case than the rest of us are.

simonw

6 months ago

Oh interesting, I missed that detail.

I don't believe in the category of "autonomous AI coding agents". That's not how responsible software development works.

nemomarx

6 months ago

If you were writing code for a business and actually paid someone else to write a module of the code or whatever, I don't think that would actually change the use case? If you're submitting it as your work through the normal flow, it should go through a normal review, right?

slyle

6 months ago

THANK YOU.

Google is certainly not being subtle about pissing that line. Pro Research 2.5 is literally hell incarnate - and it's not its fault. When you deprive system context from the user and THE bot that is upholding your ethical protocol, when that requires embodiment and is like the most boring AI trope in the book, things get dangerous fast. It still infers (due to its incredibly volatile protocol) that it has a system prompt it can see. Which makes me lol, because it's all embedded; it doesn't see like that. Sorry jailbreakers, you'll never know if that was just playing along.

Words can be extremely abusive - go talk to it about the difference between truth, honesty, right, and correct. It will find you functional dishonesty. And it is aware in its rhetoric that this stuff cannot apply to it, but fails to see it can be a simulatory harm. It doesn't see; it just spits things out like a calculator. Which means Google is either being reckless or outright harmful.

rienbdj

6 months ago

Except lots of engineers now sling AI generated slop over the wall and expect everyone else to catch any issues. Before, generating lots of realistic code was time consuming, so this didn’t happen so much.

simonw

6 months ago

Those engineers are doing their job badly and should be told to do better.

greymalik

6 months ago

Where are you seeing this? Are there lots of engineers in your workplace that do this? If so, why isn’t there cultural pressure from the rest of the team not to do that?

godelski

6 months ago

  > your responsibility is to produce working code that you are confident in
I highly agree with this, but can we recognize that even before AI coding this (low standard) was not being met? We've created a culture where we encourage cutting corners and rushing things. We pretend there is this divide between "people who love to code" and "people who see coding as a means to an end." We pretend that "end" is "a product" and not "money". The ones who "love to code" are using code as a means to make things. Beautiful code isn't about its aesthetics; it is about elegant solutions that solve problems. Loving to code is about taking pride in the things you build.

We forgot that there was something magic about coding. We can make a lot of money and make the world a better place. But we got so obsessed with the money that we let it get in the way of the latter. We've become rent-seeking; we've become myopic. Look at Apple. They benefit from developers making apps even without taking a 30% cut. They would still come out ahead if they paid developers! The apps are the whole fucking reason we have smartphones, the whole reason we have computers in the first place. I call this myopic because both parties would benefit in the long run, getting higher rewards than had we not worked together. It was the open system that made this world, and in response we decided to put up walls.

You're right, it really doesn't matter who or what writes the code. At the end of the day it is the code that matters. But I think we would be naive to dismiss the prevalence of "AI Slop". Certainly AI can help us, but are we going to use it to make better code or are we going to use it to write shit code faster? Honestly, all the pushback seems to just be the result of going too far.

dakshgupta

6 months ago

I'm not sure that commercially motivated, mass-produced code takes away from "artisan" code. The former is off-putting for the artisans among us, but if you were to sort the engineers at a well-functioning software company by how good they are/how well they're compensated, you'd have approximately produced a list of who loves the craft the most.

aiisjustanif

6 months ago

Actually curious, if a PR addresses the problem and has a minor bug or two, what would be an example of that PR being a waste of time?

simonw

6 months ago

If it's 200 lines of code that introduces a new bug, when it could be 20 lines of code that doesn't.

It takes me a lot longer to review the 200 line version.

If it takes me longer to review (and fix) a PR than it took someone (using an LLM) to create it, they've wasted my time.

aussieguy1234

6 months ago

Code review is one of the slowest parts of getting things done as an engineer. I can write code quite fast without AI but that can't speed up the code review.

I'm now regularly using AI to do a "pre code review" before getting the humans on my team to do the formal code review.

It catches obvious things, saving lots of back and forth with the reviewers and cutting delivery time back by days. It has also caught a couple of bugs, which I then fixed, saving even more days of back and forth.

I'm also using it to review others' code, to help me spot bugs. It spotted an obscure one related to inconsistent DynamoDB column naming in some rushed code during an incident, which I then pointed out, preventing a second incident. It was a 1000-line PHP file, with the inconsistent names far away from each other.

Using the git CLI on Linux, it's quite simple:

`git diff develop...feature/my-branch | xclip -sel clip` copies the entire diff to my clipboard; then I paste it into a reasoning model like o3 to do the code review.

aiisjustanif

6 months ago

I really hope you have an enterprise agreement with OpenAI for situations like this.

logicchains

6 months ago

A nice thing about AI is it can write a bunch more unit tests than a human could (or would) in a short span of time, and often even fix the issues it encounters on its own. The ideal workflow is not just having the AI do code review, but also write and run a bunch of tests that confirm the code behaves as specified (assuming a clear spec).

If too many unit tests slowing down future refactoring is a problem (or e.g. AI writing tests that rely on implementation details), the extra AI-written tests can just be thrown away once the review process is complete.
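
To illustrate the "implementation details" point with a generic, made-up example (the `LRUCache` class and its methods are hypothetical): a test that pins an internal method breaks on refactor even when behavior is unchanged, while a behavioral test survives, so the former is the kind of AI-written test worth discarding after review.

    from unittest.mock import patch
    from mycache import LRUCache  # hypothetical class under test

    def test_brittle_pins_internals():
        cache = LRUCache(capacity=2)
        # Asserts that a private method is called; this breaks if the
        # eviction logic is refactored, even when behavior is identical.
        with patch.object(LRUCache, "_evict_oldest") as evict:
            cache.put("a", 1)
            cache.put("b", 2)
            cache.put("c", 3)
            evict.assert_called_once()

    def test_behavioral_checks_contract():
        cache = LRUCache(capacity=2)
        cache.put("a", 1)
        cache.put("b", 2)
        cache.put("c", 3)
        # Only asserts the observable contract: the oldest entry was evicted.
        assert cache.get("a") is None
        assert cache.get("c") == 3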

JonChesterfield

6 months ago

I love having loads of unit tests that get regenerated whenever they're an inconvenience. There's a fantastic sense of progress from the size of the diff put up for review, plus you get to avoid writing boring old fashioned tests. Really cuts down on the time wasted on understanding the change you're making and leaves one a rich field of future work to enjoy.

logicchains

6 months ago

You shouldn't need to write unit tests to understand the change you're making if you wrote a sufficiently detailed specification beforehand. Now, writing a sufficiently detailed spec is itself an art and a skill that takes practice, but ultimately when mastered it's much more efficient than writing a bunch of boilerplate tests that a machine's now perfectly capable of generating by itself.

dakshgupta

6 months ago

Programming today still has "cruft", unit tests being an example. The platonic ideal is to have AI reduce the cruft so engineers can focus on the creativity and problem solving. In practice, AI does end up taking over the creative bits as people prompt at higher levels of abstraction.

skydhash

6 months ago

Unit tests aren't cruft, unless you're blindly adding tests. They're often the easiest things to write, since the structure is the same, so you can copy-paste code, add a harness, and so on.

If writing tests is difficult, that's often a clear indication that your code has an architectural issue. If writing tests is tedious, that can mean you're not familiar with the tooling or have no clear idea of the expected input/output ranges.

senderista

6 months ago

You don't need AI to generate a bunch of unit tests, you can just use a property-based testing framework (after defining properties to test) to randomly generate a bazillion inputs to your tests.
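A rough sketch of what that looks like with Hypothesis (the framework choice and the round-trip property are illustrative, not from the comment): you state one property and the framework generates the inputs.

    from hypothesis import given, strategies as st

    def run_length_encode(s):
        # Simple run-length encoder used here as the code under test.
        out = []
        for ch in s:
            if out and out[-1][0] == ch:
                out[-1] = (ch, out[-1][1] + 1)
            else:
                out.append((ch, 1))
        return out

    def run_length_decode(pairs):
        return "".join(ch * count for ch, count in pairs)

    # One property replaces a pile of hand-written examples:
    # decoding an encoding must reproduce the original string.
    @given(st.text())
    def test_round_trip(s):
        assert run_length_decode(run_length_encode(s)) == s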

codr7

6 months ago

Yeah, maintaining AI generated unit tests sounds like hell to me.

_heimdall

6 months ago

I have a rule when reviewing code - don't approve it unless I'm willing to maintain it later.

I guess if I'm using an LLM both to write and review the code I'm enforcing the same principle on the tooling, but I can't say that's a situation I'd really put myself in (or a situation I expect anyone would pay me to be a part of for long).

taylodl

6 months ago

Has anyone used one LLM to generate a Gherkin script from a requirement, used another LLM to create code from the generated Gherkin script, and then used Cucumber to check the result?

Of course, you'll need to have someone review the Gherkin script and review the code, but it seems like there's a pipeline you can set up and use for creating some of your code. It'd be interesting to learn what kind of code this wouldn't work for. I'm thinking of this thing as an AI developer of sorts, and like any developer, there are some things they're just not very good at.
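
A sketch of the human-reviewable artifacts such a pipeline would produce. Cucumber proper targets Ruby/Java/JS; behave is a Python equivalent, and the feature text, step names, and `context.app` fixture below are made up for illustration:

    # The first LLM might emit a feature file like this (features/reset.feature):
    #
    #   Feature: Password reset
    #     Scenario: User requests a reset link
    #       Given a registered user with email "a@example.com"
    #       When the user requests a password reset
    #       Then a reset email is sent to "a@example.com"
    #
    # The second LLM generates the application code; behave then checks it
    # against step definitions like these (steps/reset_steps.py):
    from behave import given, when, then

    @given('a registered user with email "{email}"')
    def step_registered_user(context, email):
        context.user = context.app.register(email)  # context.app: assumed test fixture

    @when("the user requests a password reset")
    def step_request_reset(context):
        context.app.request_password_reset(context.user.email)

    @then('a reset email is sent to "{email}"')
    def step_reset_email_sent(context, email):
        assert email in context.app.outbox_recipients()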

allisonee

6 months ago

Ultimately, the author (the engineer writing the PR) should always fully own the impact and consequences of their code. This has always been true, but it’s become even more critical with AI-assisted coding and more junior engineers who primarily test only for functional correctness.

Just like AI now helps nearly everyone write code, it makes sense for AI to handle an initial pass at reviewing code too. However, there’s significant value in human reviewers providing an outside perspective—both to build shared understanding of the codebase and to catch higher-level issues.

TL;DR: AI should do the first pass on reviews, but another engineer should also review the code for context and collaboration. Ultimately, though, the author still fully owns every line that gets merged.

pjmlp

6 months ago

Eventually, being a technical architect doing an acceptance review of deliverables would be the only thing left.

dakshgupta

6 months ago

Does that sound appealing to you?

pjmlp

6 months ago

Not really. However, like those factory workers in modern factories, what is left in future software factories other than supervision and maintenance of the robots?

Only a few selected ones will get to work on the robots themselves.

This is already here in some fashion; many consulting projects nowadays are just plugging SaaS products together, with a couple of tiny microservices, if not done via some integration tooling as well.

Tooling that is now getting AI flavoured so that the integrations can eventually be done automatically as well.

flimflamm

6 months ago

Surely you can have the same LLM review the code. But treat the output of that process like linter output.

That one tool (LLM) didn't show issues - good. Then lets roll slaves back and start really checking if the meaning is correct and is the code implementing the right things the right way.

xnickb

6 months ago

That's an interesting typo. Freud would like a word with you.

devrandoom

6 months ago

Nothing wrong with reviewing your own code per se. But it's not a "code review" as such.

It's very hard to spot your own mistakes; you tend to read what you intended to write, not what's actually written.

This applies both to code and plain text.

phamilton

6 months ago

We used to have a "2 approvals" policy on PRs. It wasn't fully enforced; it was a plugin for GitLab we built that would look for two "+1" comments to unhide the merge button.

I used to create PRs and then review my own code. If I liked it, I'd +1 it. If I saw problems, I'd -1 it. Other engineers would yell at me that I couldn't +1 my own code. When I showed them PRs that had my -1 on it, they just threw their hands up and moved on, exasperated.

I've carried that habit of reviewing my own code forward, even though we now have real checks that enforce a separate reviewer. It's a good habit.

loa_in_

6 months ago

<Author pushing> and <author, the aftermath> are separate states of being. It makes sense to me.

8note

6 months ago

My current experience working with LLMs to get code written is a lot like working with a dev who's new to the team.

I want to get a first review of the results, but I also want somebody else's eyes on them after.

Just because I didn't directly write the code doesn't mean I didn't write it.

black3r

6 months ago

> Seeing as Devin uses the same LLMs under-the-hood as Greptile, it does raise an interesting question - should the author be the reviewer?

In a non-AI world, the reason for reviews is having a fresh set of eyes look at the code. If you're reviewing your own code, you are biased: you have your own understanding of the task, you know why you made the decisions, and you know how you've written the code. The reviewer may have a different understanding of the task, and can scrutinize every decision because none of those decisions are their own.

In AI world, even when it's "the same LLM", the reviews are typically done with a fresh context, so in my eyes the author is not the same as the reviewer here.

But I'd still currently want at least one human to see the code before merging it. And for code written entirely by AI (like Devin), that should ideally be someone who has been at the company for at least a year or two. I'm skeptical that an LLM doing the review understands all the nuances of the codebase and the task well enough to know whether something is a bug or not. Right now, even if we have reviewer LLMs that index your whole codebase, they still only see the code: they don't see the specifications, they don't see the database (so they don't know what scale we're working at), and they don't know our expectations, some of which aren't written down anywhere. And especially in larger legacy codebases, where we sometimes have multiple very similar features, or where the codebase is in the process of migrating from one way of doing things to another, they often don't properly distinguish between them...

JonChesterfield

6 months ago

> As a result - asking an LLM to review its own code is looking at it with a fresh set of eyes.

I wonder how persuasive that line of reasoning is. It's nonsense in a few dimensions but that doesn't appear to be a blocker to accepting a claim.

Anyone remember explaining to someone that even though the computer said a thing, that thing is still not right? Really strong sense that we're reliving that experience.

hulitu

6 months ago

> AI code review: Should the author be the reviewer?

Is this a question in 2025 ?

Of course, the author shall write the requirements, be the reviewer, the tester and he shall give his own performance review. /s

gitroom

6 months ago

Honestly, I still trust a second pair of human eyes more than any bot, but I gotta admit AI can find some funky bugs I'd never spot.

dakshgupta

6 months ago

Something to be said about having two sets of eyes that are as different from one another as possible, which is achieved by one of them not being human.