AI Sucks at Code Reviews

26 points, posted a day ago
by ctrlaltelite

8 Comments

FloNeu

18 hours ago

Surprise: a thing that doesn’t understand context is bad at a task that requires understanding context and intent… Well… I haven’t read the article and never will.

bdjsiqoocwk

17 hours ago

I'd still like someone to explain to me how ChatGPT is so bad at solving leetcode exercises. They're literally copy-pasted, publicly available exercises.

tdeck

a day ago

Is it just me, or does everything in this article (both pros and cons) apply equally well to traditional static analysis tools? It's striking that adding "AI" doesn't seem to reduce the false alert rate or give anything particularly smart that a linter couldn't do (their big example is recognizing deprecated methods, which doesn't seem like it needs an LLM to me).
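To make the point concrete: detecting a call to a deprecated method is a plain syntax-tree walk, no LLM required. A minimal sketch using Python's `ast` module (the deprecated function name `load_config_v1` and the snippet being scanned are hypothetical, invented for illustration):

```python
import ast

# Hypothetical deny-list of deprecated function names.
DEPRECATED = {"load_config_v1"}

# Hypothetical code under review.
source = """
cfg = load_config_v1("app.ini")
print(cfg)
"""

def find_deprecated_calls(code: str) -> list[int]:
    """Return line numbers of calls to known-deprecated functions."""
    hits = []
    for node in ast.walk(ast.parse(code)):
        # Match simple calls like load_config_v1(...), by name alone.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DEPRECATED):
            hits.append(node.lineno)
    return hits

print(find_deprecated_calls(source))  # → [2]
```

Real linters (pylint, Ruff) ship rules built on exactly this kind of AST matching.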

scubbo

21 hours ago

That's true - but the important contrast is that no one's claiming that static analysis tools are solely sufficient for code review. Despite the inflammatory headline, I read this article (particularly the final section, "Conclusion: People Still Matter") as trying to say "AI is one useful tool in your arsenal to _improve_ code review, but for God's sake don't rely on it solely or blindly" - trying to temper some of the dangerous enthusiasm.

mstachowiak

21 hours ago

I'm the author of the article. Yes, I agree. I think AI reviewers, in their current state, are essentially glorified linters. Much of what they excel at can already be achieved with linting. However, I believe their edge lies in spotting semantic mistakes, whereas linters are ideally suited for syntactic or stylistic issues.
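A minimal illustration (hypothetical code, not from the article) of the semantic-vs-syntactic distinction: this function is syntactically clean and passes any linter, but the business logic is wrong in a way only an intent-aware reviewer would flag:

```python
# Hypothetical example: both functions lint cleanly; the bug is purely semantic.

def total_price(price: float, tax_rate: float, discount: float) -> float:
    """Intended behavior: apply the discount first, then tax the discounted price."""
    # Bug: tax is charged on the full price, then the discount is subtracted.
    return price * (1 + tax_rate) - discount

def total_price_fixed(price: float, tax_rate: float, discount: float) -> float:
    """Applies the discount before tax, as intended."""
    return (price - discount) * (1 + tax_rate)

print(total_price(100.0, 0.5, 10.0))        # → 140.0 (wrong)
print(total_price_fixed(100.0, 0.5, 10.0))  # → 135.0 (intended)
```

No syntax rule can catch this; the reviewer has to know which order the docstring (or spec) intends, which is where an AI reviewer could in principle add value over a linter.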

bodge5000

18 hours ago

The big difference is that static code analysis draws a hard line between what it can and can't do. It doesn't have the fuzzy area of "well, it can, but it shouldn't" that AI has (the problem being that the "but it shouldn't" part is often lost, not just with AI but in tech in general).

user

a day ago

[deleted]