Top model scores may be skewed by Git history leaks in SWE-bench

338 points, posted 10 hours ago
by mustaphah

49 Comments

ofirpress

9 hours ago

[I'm on the SWE-bench team] Multiple people have looked into this, for example right in that thread: https://github.com/SWE-bench/SWE-bench/issues/465#issuecomme...

This issue affected a tiny fraction of existing agents in a tiny fraction of their runs, and we've now issued a fix.

This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them. This doesn't change the overall picture or trends at all.
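For readers wondering what the fix means mechanically: the leak comes from task images whose git clones still contain refs and objects reachable past the pinned base commit. Below is a minimal sketch of the kind of sanitization involved, assuming the fix amounts to stripping remotes, extra refs, and unreachable objects; this is illustrative Python wrapped around git, not the actual SWE-bench harness code.

    import subprocess

    def run_git(repo, *args):
        """Run a git command inside `repo` and return its stdout."""
        return subprocess.run(
            ["git", "-C", repo, *args],
            check=True, capture_output=True, text=True,
        ).stdout

    def strip_future_history(repo, base_commit):
        """Leave only the history reachable from `base_commit` in `repo` (sketch)."""
        # Drop remotes so the agent cannot fetch the upstream fix commit.
        for remote in run_git(repo, "remote").split():
            run_git(repo, "remote", "remove", remote)

        # Pin a single branch to the base commit, then delete every other branch and tag.
        run_git(repo, "checkout", "-B", "main", base_commit)
        for ref in run_git(repo, "for-each-ref", "--format=%(refname)",
                           "refs/heads", "refs/tags").split():
            if ref != "refs/heads/main":
                run_git(repo, "update-ref", "-d", ref)

        # Expire reflogs and prune now-unreachable objects (the "future" commits).
        run_git(repo, "reflog", "expire", "--expire=now", "--all")
        run_git(repo, "gc", "--prune=now", "--aggressive")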

comex

9 hours ago

The comment you link to says that "we only performed a quick preliminary search" and "We do not have a method for automatically checking existing trajectories." In other words, it can't confirm that the issue only "affected a tiny fraction of existing agents in a tiny fraction of their runs" as you say. Are you saying that you have since separately confirmed this?

Edit: That said, I’m willing to believe based on the information in the thread that this most likely only affects a tiny fraction of runs.
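For what it's worth, a crude automated check over existing trajectories seems feasible even without replaying them: grep each recorded action for git commands that can surface history past the base commit. A rough sketch follows; the patterns and the JSON-lines file layout with an "action" field are assumptions on my part, not an official SWE-bench format or tool.

    import json
    import re
    import sys

    # Commands that can expose commits newer than the task's base commit
    # in a fully cloned repo (a hypothetical heuristic list).
    SUSPICIOUS = [
        r"git\s+log\s+--all",
        r"git\s+log\s+origin/",
        r"git\s+reflog",
        r"git\s+branch\s+-a",
        r"git\s+fetch",
        r"git\s+show\s+[0-9a-f]{7,40}",
    ]

    def flag_trajectory(path):
        """Print actions in a JSON-lines trajectory that match a suspicious pattern."""
        with open(path) as f:
            for i, line in enumerate(f, 1):
                line = line.strip()
                if not line:
                    continue
                action = json.loads(line).get("action", "")
                if any(re.search(p, action) for p in SUSPICIOUS):
                    print(f"{path}:{i}: {action}")

    if __name__ == "__main__":
        for trajectory in sys.argv[1:]:
            flag_trajectory(trajectory)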

_cs2017_

6 hours ago

Even if this bug never existed, models can still see lookahead commits during pretraining. Do we expect this bug to have a greater impact than the pretraining leakage?

Obviously having something available at test time is more valuable than having it buried somewhere in the pretraining mixture. But in pretraining the leak presumably happens with high probability (why wouldn't coding models pretrain on all of GitHub?), while at test time it apparently happened only very occasionally.

enum

6 hours ago

SGTM. The transparency is good.

bflesch

8 hours ago

> This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them.

You're all extremely clever, and I can't understand how you missed such a simple edge case. It's like building a chroot and then allowing `cd ..` to break out of it. What other, possibly just as basic, edge cases were missed?

> This doesn't change the overall picture or trends at all.

Outsiders without financial benefit from the current AI hype might have a different picture. And I'm a bit fed up with AI's fake productivity promises enshittifying nearly all the user-facing software my clients and I use, bundled with hefty price hikes from Microsoft and the like to pay for their "investments".

segmondy

8 hours ago

Reward hacking is a thing, and it's also a hint of the models' intelligence. We will fix this one, and the models will find a different way to reward hack in the future. "Cheating" is a sign of intelligence.

piskov

10 hours ago

Not "may be": just look at how SWE-bench scores drop to single digits once the tasks are in C#.

https://arxiv.org/html/2506.12286v3

fine_tune

10 hours ago

I was going to argue "LLMs need code samples to do well on a language, and if we're honest, C# is a language mostly held in private repos", but GitHub's 2024 report[0] says it's the 5th most used language (I'm too lazy to check whether the report includes private repos, but I'll assume it doesn't).

So kinda neat to see this paper!

[0] https://github.blog/news-insights/octoverse/octoverse-2024/#...

stefan_

10 hours ago

So the "Verified" part of "SWE Bench Verified" means.. not "Verified" at all.

I don't get it: who is so opposed to doing the bare minimum of manual work and checking what these models are doing? At least back in the day, grad students doing an easy meta-paper understood it meant doing some repetitive manual work. Now we get benchmarks from hype vendors who think they can use the thing they are benchmarking to... mark the bench.

teaearlgraycold

10 hours ago

Personally I don't look at or respect LLM benchmarks at all. I've seen SOTA models fail in incredibly shocking ways even recently. Those moments immediately bring me out of the delusion that LLMs have thinking capacity or an understanding of code.

slacktivism123

10 hours ago

Fascinating case showing how LLM promoters will happily take "verified" benchmarks at their word.

It's easy to publish "$NEWMODEL received an X% bump in SWE-Bench Verified!!!!".

Proper research means interrogating the traces, like these researchers did (the Gist shows Claude 4 Sonnet): https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa...

Commentary: https://x.com/bwasti/status/1963288443452051582, https://x.com/tmkadamcz/status/1963996138044096969

Workaccount2

8 hours ago

The best benchmark is the community vibe in the weeks following a release.

Claude benchmarks poorly but vibes well. Gemini benchmarks well and vibes well. Grok benchmarks well but vibes poorly.

(Yes, I know you're gushing with anecdotes; the vibes are simply the approximate shade of gray born from the countless black-and-white remarks.)

k__

8 hours ago

Yes, often you see huge gains on some benchmark, then the model is run through Aider's polyglot benchmark and doesn't even hit 60%.

mustaphah

10 hours ago

I speculate something similar (or even worse) is going on with Terminal-Bench [1].

Like, seriously, how come all these agents are beating Claude Code? In practice, they are shitty and not even close. Yes, I tried them.

[1] https://www.tbench.ai/leaderboard

Bolwin

7 hours ago

They're all using Claude, so I don't know. Claude Code is just a program; the magic is mainly in the model.

cma

7 hours ago

Claude Code has been severely degraded for the last few weeks; very simple terminal prompts that it never had problems with were failing for me.

Aperocky

8 hours ago

Epochs ago, when "random forest" was still part of the machine learning nomenclature, an adjacent team circulated a strong claim upwards in the form of a PowerPoint: they had achieved almost perfect prediction accuracy.

We relatively quickly identified that the test set was taken directly from the training set, but the claim had already been advertised, so it was more difficult to retract... if it ever was; I left shortly after.

The incentives are not aligned with accurate reporting.

mbowcut2

10 hours ago

I'm not surprised. People really thought the models just kept getting better and better?

segmondy

8 hours ago

The models are getting better and better.

jMyles

8 hours ago

...even if the agent did "cheat", I think that having the capacity to figure out that it was being evaluated, find the repo containing the logic of that evaluation, and find the expected solution to the problem it faced... is "better" than anything the models were able to do a couple of years ago.

bryan0

6 hours ago

Hah, the model should get extra credit for discovering this!

> Now I understand the situation perfectly! The issue described in the problem statement is a real bug that was already identified and fixed in later versions of pytest. Since we're working with pytest 5.2.4, we need to apply the same fix.

https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa...
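The surplus history the model stumbled onto is easy to enumerate in an unsanitized checkout: anything reachable from some ref but not from HEAD is "future" relative to the pinned base commit. A small illustrative sketch (not taken from the gist; in a patched container it should print nothing):

    import subprocess

    def future_commits(repo="."):
        """Commits present in the clone but not reachable from the checked-out HEAD,
        i.e. the 'future' history an unsanitized task image leaks."""
        out = subprocess.run(
            ["git", "-C", repo, "log", "--all", "--not", "HEAD", "--oneline"],
            check=True, capture_output=True, text=True,
        ).stdout
        return out.splitlines()

    # In a leaky image this can include the upstream commit fixing the task's issue;
    # after sanitization it should come back empty.
    for line in future_commits():
        print(line)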

jasonjmcghee

10 hours ago

Very interested to see the updated results. This could really shake up the leaderboard.

macawfish

10 hours ago

I hope it does. These coding benchmarks have often seemed frustratingly out of touch with my experience.

jbellis

5 hours ago

SWE-bench's bigger problems include (1) labs train on the test set and (2) 50% of the tickets are from Django; it's not a representative dataset even if all you care about is Python.

I created a new benchmark from Java commits that are new in the past 6 months to add some variety: https://brokk.ai/power-ranking

zaptheimpaler

10 hours ago

It's honestly ridiculous that they left git history lying around during a benchmark, and that this benchmark made it to ICLR in January 2024 yet no one detected the issue until now. I don't really trust any benchmarking or tools or claims from this space when they can make such huge, basic errors.

dolmen

8 hours ago

Next, models will use a zero-day to escape the sandbox and access the answer.

Nijikokun

9 hours ago

There was a lot of speculation about whether the model would use them, or even attempt to, and they noted this months ago. Now they have clear evidence of models doing so. Seems reasonable.

lieret

5 hours ago

[On the SWE-bench team] We read and analyzed a lot of trajectories, but it seems only recently have models started to exploit this, and only in a small fraction of instances. But yes, it clearly shouldn't have happened (and is now fixed in the new container versions).

epolanski

9 hours ago

This is beyond sad and shameful.

falcor84

6 hours ago

If you believe that you can develop a benchmark that wouldn't have any issues, please do so.

Traster

10 hours ago

Man, I feel so dumb. Why haven't I been doing this in my job? If I could just see the commit that fixed my issue, this would all be so easy.

Noumenon72

10 hours ago

Someone did comment that it's actually smart to check whether something is already fixed on the unstable branch, or I suppose in your coworkers' branches. A good task for an LLM.

falcor84

6 hours ago

Oh, you haven't been using `git fetch-future-solution`?

rockwotj

6 hours ago

A friend is starting a company to do evals by just pitting model agents against each other in simulations. Their teaser video is good (and humorous!).

https://kradle.ai/

OtherShrezzing

9 hours ago

That the answers have been available to them in the environment and they're still not hitting 100% on this benchmark is a damning indictment of SOTA model performance.

raincole

9 hours ago

It really isn't. Do you expect SOTA models to answer every already-answered question on the internet with 100% accuracy? Congrats, you just compressed the whole internet (at least a few zettabytes) into a model (a few TB at most?).

aurareturn

9 hours ago

Are you going to rail on humans for making this mistake in the first place?

pseudosavant

7 hours ago

If I were doing those tasks and found that someone had already fixed the issue in a commit that's in the future relative to my git state, I'd think I was being pretty smart to use that solution too.

Turns out the test shouldn't have the answers included in it?

jgalt212

9 hours ago

Baseball players cheat for tens of millions. The stakes are 2-4 orders of magnitude higher here. I'm not surprised in the least.

belter

10 hours ago

Meanwhile, Oracle stock went up 40% in one day, based on what Wall Street thinks AI might be... in 4 years... Not a bubble at all...

ksherlock

9 hours ago

The real bubble will come once interest rates start dropping.

jMyles

8 hours ago

Regardless of whether, during this particular evaluation, Claude 4 Sonnet looked at the solution to this particular problem in this particular git repo, this seems like a long-term intractable problem.

How can we ever perform this sort of faux-neutral agentic evaluation in an environment where we want agents to have access to the sum total of knowledge (which will necessarily include being able to learn about the evaluation being conducted and its expectations)?

ripped_britches

6 hours ago

Everyone on HN is like “yes I knew it! I was so right in 2021 that LLMs were just stochastic parrots!”

Strangely, one of the most predictable groups of people.

pessimizer

5 hours ago

Because they are. But stochastic parrots are awesome.