mmsc
4 months ago
> after having received a lukewarm and laconic response from the HackerOne triage team.
A slight digression but lol, this is my experience with all of the bug bounty platforms. Reports of issues which are actually complicated or require an in-depth understanding of the technology get brickwalled, because reports of difficult problems are written for... people who understand difficult problems and difficult technology. The runarounds are not worth the time for people who try to solve difficult problems, because they have better things to do.
At least Cloudflare has a competent security team that can step in and say "yeah, we can look into this because we actually understand our whole technology". It's sad that to get through to a human on these platforms you effectively have to write two reports: one for the triagers who don't understand the technology at all, and one for the competent people who actually know what they're doing.
poorman
4 months ago
There is definitely a misalignment of incentives with the bug bounty platforms. You get a very large number of useless reports, which tends to create a lot of noise. Then you have to sift through a ton of noise to once in a while get a serious report. So the platforms upsell you on using their people to sift through the reports for you. Only these people do not have the domain expertise to understand your software and dig into the vulnerabilities.
If you want the top-tier "hackers" on the platforms to see your bug bounty program then you have to pay the up-charge for that too, so again a misalignment of incentives.
The best thing you can do is have an extremely clear bug-bounty program detailing what is in scope and out of scope.
Lastly, I know it's difficult to manage, but open source projects should also have a private vulnerability reporting mechanism set up. If you are using GitHub you can set up your repo with: https://docs.github.com/en/code-security/security-advisories...
miohtama
4 months ago
The useless reports are because there are a lot of useless people.
saurik
4 months ago
One way to correct this misalignment is to give the bounty platform a cut of the bounty. This is how Immunefi works, and I've so far not heard anyone unhappy with communicating with them (though, I of course will not be at all shocked or surprised if a billion people reply to me saying I simply haven't talked to the right people and in fact everyone hates them ;P).
davidczech
4 months ago
AI generated bounty report spam is a huge problem now.
wslh
4 months ago
The best thing you can do is to include an exploit when it is possible, so the report can be validated automatically and cut through the noise.
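For instance, "validated automatically" can be as mechanical as a wrapper that runs the reproducer and checks the outcome. A minimal sketch under assumed conditions (POSIX only; the target binary and its crashing arguments are hypothetical placeholders, not any real program):

```cpp
// Self-checking PoC wrapper: run the reproducer in a child process and
// report mechanically whether the crash actually happened, so triage can
// verify the bug without having to understand the code.
#include <cstdio>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s <target-binary> [args...]\n", argv[0]);
        return 2;
    }
    pid_t pid = fork();
    if (pid < 0) {
        std::perror("fork");
        return 2;
    }
    if (pid == 0) {
        // Child: exec the target with the crafted arguments.
        execv(argv[1], &argv[1]);
        _exit(127); // exec itself failed
    }
    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status)) {
        // The target died from a signal (e.g. SIGSEGV): bug reproduced.
        std::printf("REPRODUCED: target killed by signal %d\n", WTERMSIG(status));
        return 0;
    }
    std::printf("NOT REPRODUCED: target exited with status %d\n",
                WEXITSTATUS(status));
    return 1;
}
```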
tptacek
4 months ago
The backstory here, of course, is that the overwhelming majority of reports on any HackerOne program are garbage, and that garbage definitely includes 1990s sci.crypt style amateur cryptanalyses.
CaptainOfCoit
4 months ago
> 1990s sci.crypt style amateur cryptanalyses
Just for fun, do you happen to have any links to public reports like that? Seems entertaining if nothing else.
CiPHPerCoder
4 months ago
Most people don't make their spam public, but I did when I ran this bounty program:
https://hackerone.com/paragonie/hacktivity?type=team
The policy was immediate full disclosure, until people decided to flood us with racist memes. Those didn't get published.
Some notable stinkers:
https://hackerone.com/reports/149369
https://hackerone.com/reports/244836
joatmon-snoo
4 months ago
This is great to see, much appreciated for the disclosure!
lvncelot
4 months ago
That last one has to be a troll, holy shit.
CaptainOfCoit
4 months ago
From another bogus report from the same actor: https://hackerone.com/reports/180393
> Please read it and let me know and I'm very sorry for the last report :) also please don't close it as N/A and please don't publish it without my confirm to do not harm my Reputation on hacker on community
I was 90% sure it was a troll too, but based on this second report I'm not so sure anymore.
nightpool
4 months ago
I like the bit where he tried to get paid by HackerOne for the bug you reported:
> i think there a bug here on your last comment. can i report it to hackerone ? they will reward me ?
cedws
4 months ago
IMO it's no wonder companies keep getting hacked when doing the right thing is made so painful and the rewards are so meagre. And that's assuming the company even has a responsible disclosure program; otherwise you risk putting your ass on the line.
I don’t like bounty programs. We need Good Samaritan laws that legally protect and reward white hats. Rewards that pay the bills and not whatever big tech companies have in their couch cushions.
lenerdenator
4 months ago
> IMO it’s no wonder companies keep getting hacked when doing the right thing is made so painful and the rewards are so meagre.
Show me the incentives, and I'll show you the outcomes.
We really need to make security liabilities just that: liabilities. If you are running 20+ year-old code and you get hacked, you need to be fined in a way that will make you reconsider security as a priority.
Also, you need to be liable for all of the disruption that the security breach caused for customers. No, free credit monitoring does not count as recompense.
dpoloncsak
4 months ago
I love this idea, but I feel like it just devolves into arguments over whether a specific exploit is or isn't technically a 0-day, so the company can or can't be held liable.
akerl_
4 months ago
Why?
Why is it inherently desirable that society penalize companies that get hacked above and beyond people choosing not to use their services, or selling off their shares, etc?
lenerdenator
4 months ago
Because they were placed in a position of trust and failed. Typically, the failure stems from a lack of willingness to expend the resources necessary to prevent the failure.
It'd be one thing if these were isolated incidents, but they're not.
Furthermore, the methods you mention simply aren't effective. Our economy is now so consolidated that many markets only have a handful of participants offering goods or services, and these players often all have data and computer security issues. As for divestiture, most people don't own shares, and those who do typically don't know they own shares of a specific company. Most shareholders in the US are retirement or pension funds, and they are run by people who would rather make it impossible for the average person to bring real consequences to their holdings for data breaches than make the company spend money on fixing the issues that allow the breaches to begin with. After all, it's "cheaper".
akerl_
4 months ago
I feel like this kind of justification comes up every time this topic is on HN: that the reason companies aren't being organically penalized for bad IT/infosec/privacy behavior is because the average person doesn't have leverage or alternatives.
It's never made sense to me.
I can see that being true in specific instances: many people in the US don't have great mobility for residential ISPs, or utility companies. And there's large network effects for social media platforms. But if any significant plurality of users cared about the impact of service breaches, or bad privacy policies, surely we'd see the impact somewhere in the market? We do in some related areas: Apple puts a ton of money into marketing about keeping people's data and messages private. WhatsApp does the same. But there are so many companies out there, lots of them have garbage security practices, lots of them get compromised, and I'm struggling to remember any example of a consumer company that had a breach and saw any significant impact.
To pick an example: in 2014 Home Depot had a breach of payment data. Basically everywhere that has Home Depots also has Lowes and other options that sell the same stuff. In most places, if you're pissed at Home Depot for losing your card information, you can literally drive across the street to Lowes. But it doesn't seem like that happened.
Is it possible that outside of tech circles where we care about The Principle Of The Thing, the market is actually correct in its assessment of the value for the average consumer business of putting more money into security?
lenerdenator
3 months ago
People give up on getting companies to be good actors because ultimately they're just a single person with a job and maybe a small savings account, looking at suing a company with absolutely no guarantee of ever recovering a cent on all of the trouble that their lax security policies cost them. Oh, and litigation is a rich man's sport.
> To pick an example: in 2014 Home Depot had a breach of payment data. Basically everywhere that has Home Depots also has Lowes and other options that sell the same stuff. In most places, if you're pissed at Home Depot for losing your card information, you can literally drive across the street to Lowes. But it doesn't seem like that happened.
No one considers these things when they're buying plumbing tape. Really, you shouldn't have to consider that. You should be able to do commerce without having to wonder if some guy on the other side of the transaction is going to get his yearly bonus by cutting the necessary resources to keep you from having to deal with identity theft.
> Is it possible that outside of tech circles where we care about The Principle Of The Thing, the market is actually correct in its assessment of the value for the average consumer business of putting more money into security?
Let's try with a company that has your data and see how correct "the market" is. Principles are the things you build a functioning society upon, not quarterly returns.
akerl_
3 months ago
> Let's try with a company that has your data and see how correct "the market" is.
What do you mean? Tons of companies with my data have been breached.
lan321
3 months ago
I think it's simpler in the Home Depot example. Even if you care about the breach, what are you gonna do? Home Depot got hacked, so they'll now probably get some more security staff; funding for the quarter is secured. Lowes has not been hacked. Does that mean they won't be hacked? Not really... For cheap smart home shit it doesn't even matter, since the company will go bankrupt and change hands 3 times in the next 5 years, and again, they are all garbage. Either they'll get hacked or they'll sell your data anyway.
Plenty of my normie friends don't want new cars, for example, due to all the tracking and subscription garbage, but realistically, what can you do when the old ones slowly get outlawed or become impossible to maintain due to part shortages?
bri3d
4 months ago
> We need Good Samaritan laws that legally protect and reward white hats.
What does this even mean? How is a government going to do a better job of valuing and scoring exploits than the existing market?
I'm genuinely curious about how you suggest we achieve:
> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.
So far, the industry has tried bounty programs. High-tier bugs are impossible to value and there is too much low-value noise, so the market converges to mediocrity, and I'm not sure how having a government run such a program (or set reward tiers, or something) would make this any different.
And, the industry and governments have tried punitive regulation - "if you didn't comply with XYZ standard, you're liable for getting owned." To some extent this works as it increases pay for in-house security and makes work for consulting firms. This notion might be worth expanding in some areas, but just like financial regulation, it is a double edged sword - it also leads to death-by-checkbox audit "security" and predatory nonsense "audit firms."
cedws
4 months ago
For the protections part: it means creating a legal framework in which white hats can ethically test systems without companies having a responsible disclosure program. The problem with responsible disclosure programs is that the companies with the worst security don't give a shit and won't have such a program. They may even threaten such Good Samaritans for reporting issues in good faith, there have been many such cases.
For the rewards part: again, the companies who don't give a shit won't incentivise white hat pentesting. If a company has a security hole that leads to disclosure of sensitive information, it should be fined, and such fines can be used for rewards.
This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate. It also puts companies legally on the hook for issues before a security disaster occurs, not after it's already happened.
bri3d
4 months ago
Sure, I'm all for protection for white hats, although I don't think it's at all relevant here, and I don't see this as a particularly prominent practical problem in the modern day.
> If a company has a security hole that leads to disclosure of sensitive information, it should be fined
What's a "security hole"? How do you determine the fines? Where do you draw the line for burden of responsibility? If someone discovers a giant global issue in a common industry standard library, like Heartbleed, or the Log4J vulnerability, and uses it against you first, were you responsible for not discovering that vulnerability and mitigating it ahead of time? Why?
> such fines can be used for rewards.
So we're back to the award allocation problem.
> This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate.
Yes, if you can figure out how to determine the value of a vulnerability, the value of a breach, and the value of a reward.
cedws
4 months ago
You have correctly identified that there is more complexity to this than is addressable in an HN comment. Are you asking me to write the laws and design a government-operated pentesting platform right here?
It's pretty clear whatever security 'strategy' we're using right now doesn't work. I'm subscribed to Troy Hunt's breach feed and it's basically weekly now that another 10M or 100M records are leaked. It seems foolish to continue like this. If governments want to take threats seriously, a new strategy is needed that mobilises security experts and dishes out proper penalties.
bri3d
4 months ago
> You have correctly identified there is more complexity to this than is addressable in a HN comment. Are you asking me to write the laws and design a government-operated pentesting platform right here?
My goal was to learn whether there was an insight beyond "we should take the thing that doesn't work and move it into the government where it can continue to not work," because I'd find that interesting.
tptacek
4 months ago
None of this has anything to do with the story we're commenting on; this kind of vulnerability research has never been legally risky.
akerl_
4 months ago
You're (thankfully) never going to get a legal framework that allows "white hats" to test another person's computer without their permission.
There's a reason Good Samaritan laws are built around rendering aid to injured humans: there is no equivalent for going down the street popping people's car hoods to refill their windshield wiper fluid.
jacquesm
4 months ago
Legal protections have absolutely nothing to do with 'the existing market'.
bri3d
4 months ago
Yes, and my question is both genuine and concrete:
What proposed regulation could address a current failure to value bugs in the existing market?
The parent post suggested regulation as a solution for:
> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.
I don't know how this would work and am interested in learning.
bongodongobob
4 months ago
Companies get hacked because Bob in finance doesn't have MFA and got a phishing email. In my experience working for MSPs it's always been phishing and social engineering. I have never seen a company compromised through some obscure bug in software. This may be different for super large organizations that are international targets, but for the average person or business, you're better off spending your time just MFAing everything you can and using common sense.
akerl_
4 months ago
Just to clarify: if Bob in Finance doesn't have phishing-resistant MFA, that's an organizational failure that's squarely homed in the IT and Infosec world.
bongodongobob
4 months ago
Absolutely. It's extremely common with small and midsize businesses that don't have any IT on staff.
quicksilver03
3 months ago
Having seen some of those cases, I'd say it's rather because Bob in Finance doesn't want to be bothered with MFA and has raised so much stink with the CFO that IT has been ordered to disable MFA for him.
andersa
4 months ago
Had the same experience last time I attempted to report an issue on HackerOne. Triage did not seem to actually understand the issue and insisted, for some reason, on a PoC they could run themselves that demonstrated the maximum impact, even though any developer familiar with the actual code at hand could see the problem in about ten seconds. Ended up writing to some old security email I found for the company, asking them to look at the report, and they took care of it one day later, so good ending I guess.
This was about an issue in a C++ RPC framework not validating that object references are of the correct type during deserialization of network messages, so the actual impact is kind of unbounded.
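To make the bug class concrete, a hypothetical sketch (all types and names here are invented, not taken from the actual framework): an RPC layer that resolves caller-supplied object IDs and casts to the expected type without ever checking it.

```cpp
// Type confusion via unchecked object references: the server keeps a table
// of live objects keyed by ID and trusts the peer that an ID names the
// expected type.
#include <cassert>
#include <cstdint>
#include <unordered_map>

struct RemoteObject { virtual ~RemoteObject() = default; };
struct FileHandle : RemoteObject { int fd = -1; };
struct Session    : RemoteObject { /* unrelated state */ };

std::unordered_map<uint64_t, RemoteObject*> g_objects; // id -> live object

// Vulnerable pattern: the wire-supplied ID is assumed to name a FileHandle.
// If the peer sends the ID of some other object, the static_cast is undefined
// behavior and the callee operates on the wrong type.
FileHandle* resolve_file(uint64_t id_from_wire) {
    auto it = g_objects.find(id_from_wire);
    return it == g_objects.end() ? nullptr
                                 : static_cast<FileHandle*>(it->second); // BUG
}

// Fixed pattern: verify the dynamic type before handing the object out.
FileHandle* resolve_file_checked(uint64_t id_from_wire) {
    auto it = g_objects.find(id_from_wire);
    if (it == g_objects.end()) return nullptr;
    return dynamic_cast<FileHandle*>(it->second); // nullptr on type mismatch
}

int main() {
    Session s;
    g_objects[42] = &s; // attacker passes a Session ID where a file is expected
    assert(resolve_file_checked(42) == nullptr); // checked resolve rejects it
    // resolve_file(42) would "succeed" and hand back a bogus FileHandle*,
    // which is exactly the unbounded-impact type confusion described above.
}
```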
baby
4 months ago
From what I understand, these days the triagers are AI, but the bug reports are AI as well :o)