Upvoter33
4 days ago
Good article, neat research behind it.
I think the paper's contributions really don't have anything to do with ML; it's about the new side channel with interrupts, which is a cool find. ML just gets more people to read it, which I guess is ok. I mean, you could just use "statistics" here in much the same way.
I remember an advisor once telling me: once you figure out what a paper is really about, rewrite it, and remove the stuff you used to think it was about. The title of this paper should be about the new side channel, not about the ML story, imho.
But this is just a nitpick. Great work!
jackcook
4 days ago
Thanks for reading! The two stories are of course deeply intertwined: we wouldn’t have found the new side channel without the cautionary tale about machine learning.
But the finding about ML misinterpretation is particularly notable because it calls a lot of existing computer architecture research into question. In the past, attacks like this were very difficult to pull off without an in-depth understanding of the side channel being exploited. But ML models (in this case, an LSTM) go a bit beyond “statistics”: they unlock much greater accuracy, making it much easier to develop powerful attacks that exploit side channels that aren’t really understood. And a lot of ML-assisted attacks are created in this fashion today: the Shusterman et al. paper alone has almost 200 citations, a huge number for a computer architecture paper.
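To make the “beyond statistics” point concrete, here’s roughly the shape of one of these ML-assisted attacks. This is a minimal PyTorch sketch I wrote for this comment, not code from our paper or from Shusterman et al.; the model size, trace length, and number of candidate sites are made-up placeholders.

    import torch
    import torch.nn as nn

    # Illustrative only: train an LSTM to label which website produced a
    # raw timing trace, with no model of *why* the traces differ.
    class TraceClassifier(nn.Module):
        def __init__(self, n_sites, hidden=128):
            super().__init__()
            # Each trace is a 1-D sequence of counter readings over time.
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                                batch_first=True)
            self.head = nn.Linear(hidden, n_sites)

        def forward(self, traces):  # traces: (batch, seq_len, 1)
            _, (h_n, _) = self.lstm(traces)
            return self.head(h_n[-1])  # logits over candidate sites

    # Placeholder data: real traces would come from sampling some shared
    # resource while each site loads in the victim browser.
    model = TraceClassifier(n_sites=100)
    opt = torch.optim.Adam(model.parameters())
    traces = torch.randn(32, 5000, 1)      # fake batch of timing traces
    labels = torch.randint(0, 100, (32,))  # fake site labels

    opt.zero_grad()
    loss = nn.CrossEntropyLoss()(model(traces), labels)
    loss.backward()
    opt.step()

Note that nothing here encodes any understanding of the microarchitecture: the classifier latches onto whatever signal dominates the traces, which is exactly how an attack attributed to the cache can turn out to be driven by interrupts.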
The point of publishing this kind of research is to better understand our systems so we can build stronger defenses; the cost of getting this wrong and misleading the community is pretty high. And this would technically still be true even if we had ultimately found that the cache was responsible for the prior attack. But of course, it helps that we discovered a new side channel along the way, which really drove our point home. I probably could have emphasized this more in my blog post.
albert_e
4 days ago
yes - I also feel this doesn't have strong new findings about ML, beyond some common sense that all ML practitioners should have: don't interpret ML results as cause-and-effect explanations when the data you have captured and modelled doesn't warrant it.
Maybe in the real world this common sense gets lost when people are immersed in a sea of data and a deluge of correlations. But good experiment design and peer review should ideally sift out unsound conclusions and interpretations, and, to be fair, this replication study does an excellent job of exactly that!
Well done, and good luck to the OP!