I don't understand the metric they're using. Which is maybe to be expected of an article that looks LLM-written. But they started with ~250 URLs; that's a weirdly small sample. I'm sure there are tens of thousands of malicious websites cropping up monthly. And I bet that Safe Browsing flags more than 16% of those?
So how did they narrow it down to that small number? Why these sites specifically?... what's the false positive / negative rate of both approaches? What's even going on?
>what's the false positive / negative rate of both approaches
the false positive rate is 100%. they just say everything is phishing:
"When we ran the full dataset through the deep scan, it caught every single confirmed phishing site with zero false negatives. The tradeoff is that it flagged all 9 of the legitimate sites in our dataset as suspicious, which is worth it when you're actively investigating a link you don't trust."
it's 100% for what they call "deep scan", it's 66.7% for the "automatic scan". Practically unusable anyway
Probably could have been a bit more descriptive around the dataset. Our tooling pulls in a lot more than 250 URLs, but since we are manually confirming them, the dataset ends up smaller. In other words, out of the URLs we pulled in, these 250 were confirmed (by a human) as phishing. We did not do any selection beyond that. As for the article, LLMs were used to help with the graphs and grammatical checks, but that's it. This was our first month of going through this exercise and we definitely want to have larger datasets going forward as we expand capacity for review.
As for Safe Browsing catching more than 16%, it depends on the timeline. At the time these attacks are launched, it's likely Safe Browsing catches closer to 0%, but as time goes on that number definitely climbs.
I never loved the idea of GSB or centralized blocklists in general due to the consequences of being wrong, or the implications for censorship.
So for my master's thesis about 6-7 years ago now (sheesh), I proposed some alternative, privacy-preserving methods to help keep users safe with their web browsers: https://scholarsarchive.byu.edu/etd/7403/
I think Chrome adopted one or two of the ideas. Nowadays the methods might need to be updated especially in a world of LLMs, but regardless, my hope was/is that the industry will refine some of these approaches and ship them.
Block lists will always be used for one reason or another. In this case these are verified malicious sites; there is no subjective analysis element in the equation that could be misconstrued as censorship. But even if there were, censorship implies a right to speech, and in this case Google has the right to restrict the speech of its users if it so wishes. As a matter of fact, there are many Chrome extensions that do censor their users.
I know for a fact that GSB contains non-malicious sites in its dataset.
Maybe I’m an outlier but I’d rather this than accidentally block legit sites.
Otherwise this becomes just another tool for Google to wall in the subset of the internet they like.
One thing that often gets overlooked in these comparisons is distribution latency.
Detecting a phishing domain internally is one problem, but pushing a verified block to billions of browsers worldwide is a completely different operational challenge.
Systems like Safe Browsing have to worry about propagation time, cache layers, update intervals, and the risk of pushing a false positive globally. A specialized vendor can update instantly for a much smaller customer base.
That difference alone can easily look like a “miss” in snapshot-style measurements.
If you are not a bot, I suggest changing your voice so that you are distinguishable from one. You're not wrong, just like you weren't wrong about "one thing that trips people up about asyncio" yesterday, but I noticed the slop-speak immediately. I'm sure others have as well.
Multiple comments that start w/ "what's interesting about" by this user and very similar formatting kind of answers that question on human vs bot. Weird internet we live in these days.
Let me give you a simple detection algorithm. Apply OCR to the screenshot, because these pages often use brand logos. Also, parse the text from the HTML and compare it to the URL. You can catch a lot of spam this way. You can also examine many parameters in the JS and HTML code.
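A rough sketch of the HTML-text-vs-URL check described above, using only the Python standard library. The brand list here is invented for illustration (a real system would need a large, maintained brand/domain database, plus the OCR step for logos):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical brand -> legitimate-domain map, purely for illustration.
KNOWN_BRANDS = {
    "paypal": "paypal.com",
    "microsoft": "microsoft.com",
    "google": "google.com",
}

class TextExtractor(HTMLParser):
    """Collect all visible text chunks from an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def looks_like_phish(url: str, html: str) -> bool:
    """Flag pages whose text name-drops a brand the domain doesn't belong to."""
    host = urlparse(url).hostname or ""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.chunks).lower()
    for brand, domain in KNOWN_BRANDS.items():
        if brand in text and not (host == domain or host.endswith("." + domain)):
            return True
    return False

page = "<html><title>Sign in</title><body>PayPal account verification</body></html>"
print(looks_like_phish("https://secure-login.example.net/", page))  # True
print(looks_like_phish("https://www.paypal.com/signin", page))      # False
```

This only covers the cheapest signal (brand mention on a mismatched domain); it says nothing about the many pages that avoid brand names in text entirely, which is where the OCR-on-screenshot idea comes in.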
The most dangerous links recently have been from sharepoint.com, dropbox.com, etc. and nobody is going to block those.
Just yesterday I marked another Gmail phishing scam. This wouldn't be worth mentioning but they are using Google's own service for it. It has to be intentional, there is no other explanation. https://news.ycombinator.com/item?id=46665414
I'm all for stopping phishing - and the tool sounds great - but I have to say the Web Store Extension listing is very concerning - even with a new company/offering - there's only 4 users - and 1 rating (a 5 of course) - I'd like to try - but seems phishy :-(
Default deny and only permitting what you explicitly allow stops 90% of this in a corporate environment.
You don’t just leave all your ports open on the firewall and only close the ones exploited. You default deny and only allow the bare minimum you need in.
I'm getting some kind of chrome security warning when using zscaler now. Discussing all of this with non-techies, I think folks are overwhelmed by all of the security warnings they get and have stopped paying attention to them.
So what's the point of doing all of this if there isn't some kind of corresponding education on responsible computer use? There needs to be some personal responsibility here, you can't protect people against everything.
> When we ran the full dataset through the deep scan, it caught every single confirmed phishing site with zero false negatives. The tradeoff is that it flagged all 9 of the legitimate sites in our dataset as suspicious
Huh? Does this mean it just flagged everything as suspicious?
indeed... it seems like it just says everything is phishing... which they go on to say is desirable?
"The tradeoff is that it flagged all 9 of the legitimate sites in our dataset as suspicious, which is worth it when you're actively investigating a link you don't trust."
so, you dont really need the scanning product at all. if you just assume every website is a phishing website, you will have the same performance as the scanner!
Yeah, we probably could have done better at describing the methodology. The dataset is just the confirmed (manually, by a human) phishing URLs. We only included the FPs to show that the tooling isn't perfect; there were many TNs that we did not include. Going forward we could definitely frame these results better.
It would be interesting to see how many of the sites safe browsing does block are false positives.
Almost all email phishing attempts we receive come from GMail.
Why should I trust that "Norn Labs" knows what is and is not a phishing site?
On a tangent - gmail has a feature to report phishing emails, but it seems like it’s only available on the website. Their mobile app doesn’t seem to have the option (same with “mark as unread”). Is it hidden or just not available?
The mobile app definitely has mark as unread. It's the envelope icon next to the trashcan (the exact same icon as in the web interface). Never realized there was a report phishing option. I just mark those emails as spam, which is available in the app.
They put them directly in front of search results, why would they not miss them?
>We also ran the full dataset of 263 URLs (254 phishing, 9 confirmed legitimate) through Muninn's automatic scan. This is the scan that runs on every page you visit without any action on your part. On its own, the automatic scan correctly identified 238 of the 254 phishing sites and only incorrectly flagged 6 legitimate pages.
...so it has a false positive rate of 67%? On a ridiculously small dataset?
Fair point, in isolation that number doesn't look good. The important context is that this dataset was built to test phishing detection, not to measure false positive rates on normal traffic. It's sourced from our threat intelligence tooling, so it's almost entirely malicious URLs by design. The 9 clean sites aren't a random sample of everyday browsing; they're sites that were submitted as suspicious and turned out to be legitimate, so they're basically the hardest possible set of clean pages to correctly classify. This seems to be a common critique and we definitely could have done a better job of explaining the methodology. Going forward we will include numbers from daily use to give a better picture of the FP rate.
Anecdotal and loosely related, but I can say that since Gemini was forced into Gmail, much more obvious spam passes the filter.
So, the false negative rate was 84%, but what was the false positive rate?
They have a table "AUTOMATIC SCAN RESULTS (263 URLS)" that sort of presents this information. Of the 9 sites that were negatives, they say they incorrectly flagged 6 as phishing.
With a false positive rate of 66%, it's not surprising they were able to drive down their false negative rate. Also, the test set of 254 phishing sites with 9 legitimate ones is a strange choice.
(Or maybe they need to work on how they present data in tables; tl;dr the supporting text.)
The false positive rate was 66% for "automatic scan" and 100% (!) for "deep scan".
In other words, you can get these numbers if your deep scan filter is isSuspicious() { return true; }.
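Working the article's quoted counts through makes the point concrete: a trivial always-flag classifier reproduces the deep-scan numbers exactly. (A sketch using only the counts from the article; the function and variable names are mine, not theirs.)

```python
# Confusion counts from the article: 254 phishing URLs, 9 legitimate.
# Automatic scan: caught 238/254 phish, flagged 6/9 legit.
# Deep scan: caught 254/254 phish, flagged 9/9 legit.

def rates(tp, fn, fp, tn):
    fnr = fn / (tp + fn)   # missed phish / all phish
    fpr = fp / (fp + tn)   # flagged legit / all legit
    return fnr, fpr

auto = rates(tp=238, fn=16, fp=6, tn=3)
deep = rates(tp=254, fn=0, fp=9, tn=0)

# The degenerate classifier: isSuspicious() { return true; }
always_flag = rates(tp=254, fn=0, fp=9, tn=0)

print(auto)                  # ~ (0.063, 0.667)
print(deep == always_flag)   # True: identical confusion matrix
```

So the deep scan's confusion matrix on this dataset is indistinguishable from "flag everything"; only the automatic scan's numbers tell you anything about discrimination.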
Criminals can easily show Google crawlers "good" websites.
The fact that Safe Browsing even works is already good enough.
But it tracks 100% of your browsing.
Educate yourself on how it works before you say something like this.
Pun aside, I cannot fully trust a centralized URL checker on a remote server that I don’t own, even if they guarantee that my privacy is safe
Glass is half empty, I see.
How about GSB stopped 16% of phishing sites? that's still huge.
Would you use anything that was only 16% effective for its claimed purpose?
“Tylenol stops headaches in 16% of people” - it’s huge, right? That’s millions of people we’re talking about.
Would you use it?
99% of users don't even know they're being protected. There's no promise except "we work to make browsing safer" and cutting even 5% of malicious sites from a user's experience is an unmitigated win for that user at the low false positive rate Safe Browsing offers.
If the other options would just straight up kill innocent bystanders (e.g. false positives for legit shops) I think that is a tradeoff I am willing to make.
Countless medications have a <16% efficacy rate.
Idk why not? What’re the side effects?
I guess the glass is 16% full.
Blocklists assume you can separate malicious infrastructure from legitimate infrastructure. Once phishing moves to Google Sites and Weebly that model just doesn't work.
> ...full dataset of 263 URLs (254 phishing, 9 confirmed legitimate)
> ... automatic scan is optimized for precision (keeping false alarms low...
really?
> When we ran the full dataset through the deep scan, ... it flagged all 9 of the legitimate sites in our dataset as suspicious
lol
So I tested out the extension..
First the extension spammed me with "login required"..
So I click the notification to be taken to a login page..
Great? Now I have to create an account and verify a link..
Now I can test how great this is against a "fresh" facebook phishing page being actively promoted via Facebook Ads..
hxxps://r7ouhcqzdgae76-fsc0fydmbecefrap.z03.azurefd.net/new2/?utm_medium=paid&utm_source=fb&utm_id=6900429311725&utm_content=6900429312725&utm_term=6900429314125&utm_campaign=6900429311725
The "extension" did a "scan".
{"url":"https://r7ouhcqzdgae76-fsc0fydmbecefrap.z03.azurefd.net/new2..."}
response: {"classification":"clean"}
great work?
If I click "Deep scan".. I see a screenshot blob being sent over..
response: {
"classification": "phish",
"reasons": [
"Our system has previously flagged this webpage as malicious."
]
}
So if the site were already flagged, why does the "light" scan not show that?
The purpose of "Safe Browsing" is to send your URLs to Google.
Yeah, maybe let's change the title to remove that 84% rate. It's meaningless because it's just 254 websites, given the scale of what Google Safe Browsing deals with.
How is this serious? This is marketing slop. If the title isn't enough of an indicator, the ending should be:
> If you're interested in trying Muninn, it's available as a Chrome extension. We're in an early phase and would genuinely appreciate feedback from anyone willing to give it a shot. And if you run across phishing in the wild, consider submitting it to Yggdrasil so the data can help protect others.
But why did Apple choose to use this in Safari?
"If you're interested in trying Muninn, it's available as a Chrome extension. We're in an early phase "
Domain is less than 4 months old.. Software is "early phase".. Already making misleading marketing claims of usefulness..
When will Google remove scams, phishing and other nonsense from their advertising? Especially the scareware stuff, where AI videos say someone might be listened to / hacked, and here is the software that will help block it / find it, whatnot. Then they collect personal data.
There's probably like one engineer maintaining this as a side project at the company
Yeah, it would be interesting to know how much work is spent on it. I sometimes submit sites when I am targeted by a campaign, but I'm not sure if they end up in their deny-list.
These statistics would be a lot better if they were compared directly to the same measurements taken from dedicated cloud SWGs/SSEs like Zscaler. My somewhat subjective sense is that the whole industry is in a bit of a rough patch, the miss rate seems to be noticeably climbing all across the board.
Having spent some time in the anti-abuse and Trust & Safety space, I always take these vendor reports with a massive grain of salt. It’s a classic case of comparing apples to vendor-marketing oranges. A headline screaming about an 84% miss rate sounds like a systemic collapse until you look at the radically different constraint envelopes a global default like GSB and a specialized enterprise vendor operate under.
The biggest factor here is the false-positive cliff. Google Safe Browsing is the default safety net for billions of clients across Chrome, Safari, and Firefox. If GSB’s false-positive rate ticks up by even a fraction of a percent, they end up accidentally nuking legitimate small businesses, SaaS platforms, or municipal portals off the internet. Because of that massive blast radius, GSB fundamentally has to be deeply conservative. A boutique security vendor, on the other hand, can afford to be highly aggressive because an over-block in a corporate environment just results in a routine IT support ticket.
You also have to factor in the ephemeral nature of modern phishing infrastructure and basic selection bias. Threat actors heavily rely on automated DGAs and compromised hosts where the time-to-live for a payload is measured in hours, if not minutes. If a specialized vendor detects a zero-day phishing link at 10:00 AM, and GSB hasn't confidently propagated a global block to billions of edge clients by 10:15 AM, the vendor scores it as a "miss." Add in the fact that vendors naturally test against the specific subset of threats their proprietary engines are tuned to find, and that 84% number starts to make a lot more sense as a top-of-funnel marketing metric rather than a scientific baseline.
None of this is to say GSB is perfect right now. It has absolutely struggled to keep up with the recent explosion of automated, highly targeted spear-phishing and MFA-bypass proxy kits. But we should read this report for what it really is: a smart marketing push by a security vendor trying to sell a product, not a sign that the internet's baseline immune system is totally broken.
> We also ran the full dataset of 263 URLs (254 phishing, 9 confirmed legitimate) through Muninn's automatic scan. This is the scan that runs on every page you visit without any action on your part. On its own, the automatic scan correctly identified 238 of the 254 phishing sites and only incorrectly flagged 6 legitimate pages.
[...]
The tradeoff is that it flagged all 9 of the legitimate sites in our dataset as suspicious, ...
Am I missing something, or is that a 66%/100% false positive rate on legitimate sites?
If GSB had that ratio, it would be absolutely unusable... so comparing these two is absolutely wrong...
The 9/9 is actually crazy, and then they posted about it as if they found something? What they did was find a major issue in their own process and then told the world about it, that just doesn't seem right.
Crazy, and also like, 9? The sample size in that part of your test suite is 9?
It would seem their service identifies only phishing sites as legitimate ones: 100% of the sites they deem legitimate are phishing sites. Incredible.
The deep scan detected all phishing sites correctly with the unfortunate tagging of legit sites as phishing too. I imagine their code looks something like isPhishing = true.
> I always take these vendor reports with a massive grain of salt. It’s a classic case of comparing apples to vendor-marketing oranges. A headline screaming about an 84% miss rate sounds like a systemic collapse until...
I've seen this before in the ip blocklist space... if you're layering up firewall rules, you're bound to see the higher priority layers more often.
That doesn't mean the other layers suck, security isn't always an A or B situation...
On the other hand, I don't know how I feel about how GSB is implemented... you're telling google every website you go to, but chances are the site already has google analytics or SSO...
i thought it checked against a local list of hashes? with frequent updates
this is how Firefox does it. can't speak for the rest.
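For anyone curious, the hash-prefix design the parent describes can be sketched like this. It's a toy illustration of the privacy-relevant shape of the lookup only; the real Safe Browsing Update API canonicalizes each URL into many host/path expressions and uses variable-length prefixes:

```python
import hashlib

PREFIX_LEN = 4  # bytes of truncated SHA-256, as in the toy scheme below

def url_prefix(url: str) -> bytes:
    """Truncated hash of a (pre-canonicalized) URL."""
    return hashlib.sha256(url.encode()).digest()[:PREFIX_LEN]

# Local database: only truncated hashes, downloaded in bulk on a schedule.
local_prefixes = {url_prefix("http://evil.example/login")}

def needs_server_check(url: str) -> bool:
    """Only URLs whose prefix collides with the local list are ever sent
    (as a hash, not plaintext) to the server for confirmation; everything
    else is resolved entirely on-device with no network request."""
    return url_prefix(url) in local_prefixes

print(needs_server_check("http://evil.example/login"))     # True -> confirm with server
print(needs_server_check("https://news.ycombinator.com"))  # False -> stays local
```

So in this design the server never sees your full browsing history, only the rare URLs (hashed) whose prefixes happen to match the local blocklist. Browsers also offer an "enhanced" mode that does send more data in real time, which may be what the parent comment is reacting to.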
> I always take these vendor reports with a massive grain of salt.
Yeah. "Here's a blog post with some casually collected numbers about our product [...] It turns out that it's great!" is sorta boring.
But couple that with a headline framed as "Google [...] Bad" and straight to the top of the HN front page it goes!
These are fair points and I agree with a lot of them. GSB operates at a scale we don't, and the conservatism that comes with being the default for billions of users is a real constraint. The post tries to acknowledge that ("the takeaway from all of this is not that Google Safe Browsing is bad") and we're upfront about the timing caveat since these were checked at time of scan.
Where I'd push back is on what this means for the average person. Most people have no protection against phishing beyond what their email provider and browser give them. If that protection is fundamentally reactive, catching threats hours or days after they go live, that's a real limitation worth talking about honestly. The 84% number isn't meant to say GSB is broken. It's meant to say there's a gap, and that gap has consequences for real users regardless of the engineering reasons behind it.
On the marketing angle, we aren't currently selling anything. The extension is free and so is submitting URLs for verification. We recognize it would be disingenuous to say we never will, but at the very least the data and the ability to check URLs (similar to PhishTank before they closed registration) will always be free. The dataset is also sourced from public threat intelligence feeds, not a curated set designed to make our tool look good. We think publishing findings like this is valuable even if you set aside everything about our tools.
> We think publishing findings like this is valuable even if you set aside everything about our tools.
In what way is it valuable?
Their example is really dumb. Eventually you get a fake Microsoft login page, but they clip out the address bar, which clearly isn't a Microsoft address, so your autofilled password isn't going to be put into the form. You'd have to be pretty dumb to type it in by hand, or even to know your Microsoft password; it should be some random thing generated by Safari or whatever your password manager is. Not to mention two-factor authentication.