SeanLuke
3 hours ago
I developed and maintain a large and very widely used open source agent-based modeling toolkit. It's designed to be highly efficient: that's its calling card. But it's old: I released its first version around 2003 and have been updating it ever since.
Recently I was made aware by colleagues of a publication by authors of a new agent-based modeling toolkit in a different, hipper programming language. They compared their system to others, including mine, and made kind of a big checklist of who's better in what, and no surprise, theirs came out on top. But digging deeper, it quickly became clear that they didn't understand how to run my software correctly; and in many other places they bent over backwards to cherry-pick, and made a lot of bold and completely wrong claims. Correcting the record would place their software far below mine.
Mind you, I'm VERY happy to see newer toolkits which are better than mine -- I wrote this thing over 20 years ago after all, and have since moved on. But several colleagues demanded I set the record straight, so I filed a complaint with the journal. After a lot of back-and-forth, however, it became clear that the journal's editor was too embarrassed to require a retraction or revision. And the authors kept coming up with excuses for their errors. So the journal quietly dropped the complaint.
I'm afraid that this is very common.
bargle0
2 hours ago
If you’re the same Sean Luke I’m thinking of:
I was an undergraduate at the University of Maryland when you were a graduate student there in the mid nineties. A lot of what you had to say shaped the way I think about computer science. Thank you.
mnw21cam
an hour ago
A while back I wrote a piece of (academic) software. A couple of years ago I was asked to review a paper prior to publication about a piece of software that did the same-ish thing as mine. They had benchmarked against a set of older software, including mine, and of course they found that theirs was the best. However, their testing methodology was fundamentally flawed, not least because there is no "true" answer that the software's output can be compared to. So they used a different process to produce a "truth", then trained their software (machine learning, of course) to produce results matching this very flawed "truth". Naturally their software came out the best, because it produced results closest to that "truth", even though the other software might have been closer to the actual truth.
I recommended that the journal not publish the paper, and gave them a list of improvements for the authors to make before re-submitting. The journal agreed with me and rejected the paper.
A couple of months later, I saw it had been published unchanged in a different journal. It wasn't even a lower-quality journal; if I recall correctly, its impact factor was actually higher than the original's.
I despair of the scientific process.
timr
24 minutes ago
If it makes you feel any better, the problem you’re describing is as old as peer review. The authors of a paper only have to get accepted once, and they have a lot more incentive to do so than you do to reject their work as an editor or reviewer.
This is one of the reasons you should never accept a single publication at face value. But this isn't a bug; it's part of the algorithm. It's just that most muggles don't know how science actually works. Once you read enough papers in an area, you get a good sense of where the distribution of accepted knowledge sits, and if some flashy new result comes over the transom, you might be curious, but you're not going to accept it without a lot more evidence.
This situation is different, because it's a case where an extremely popular bit of accepted wisdom is wrong, and the system itself appears unwilling to acknowledge the error.
oawiejrlij
28 minutes ago
When I was a grad student I contacted a journal to tell them my PI had falsified their data. The journal never responded. I also contacted my university's legal department. They invited me in for an hour, said they would talk to me again soon, and never spoke to me or responded to my calls again after that. This was in a Top-10-in-the-USA CS program. I have close to zero trust in academia. This is why we have a "reproducibility crisis".
trogdor
43 minutes ago
> it became clear that the journal's editor was too embarrassed
How sad. Admitting and correcting a mistake may feel difficult, but it makes you credible.
As a reader, I would have much greater trust in a journal that solicited criticism and readily published corrections and retractions when warranted.