hn_throwaway_99
10 months ago
At this point, I have to wonder what is even the point of missives like this. There are only two things that will solve the software quality problem:
1. Economic incentives. It's all just mindless blather unless you're actually talking about ways that software vendors will be held liable for bugs in their products. If you're not talking about that, what you're saying is basically a useless "ok, pretty please."
2. Reducing the complexity of making products secure in the first place. Making truly secure software products is incredibly hard in this day and age, which is one reason why demanding software product liability is so scary. Professional structural engineers, for example, are used to taking liability for their designs and buildings. But with software security the complexity is nearly infinitely higher, and making it secure is much harder to guarantee.
The other thing that people often ignore, or at least don't want to admit, is that the "move fast and break things" ethos has been phenomenally successful from a business perspective. The US software industry grew exponentially faster than anyplace else in the world, even places like India that doubled down on things like the "Software Capability Maturity Model" in the early 00s, and honestly have little to show for it.
gwd
10 months ago
> At this point, I have to wonder what is even the point of missives like this. ...It's all just mindless blather unless you're actually talking about ways that software vendors will be held liable for bugs in their products.
I think that liability for bugs is exactly where she's going with this. I'm not an expert, but it sounds from a few things I've heard on some Lawfare podcasts (e.g., [1][2]) like the idea of software liability has been discussed for quite a while now in government policy circles. This sort of public statement may be laying the groundwork for building the political will to make it happen.
[1] https://www.youtube.com/watch?v=9UneL5-Q98E&pp=ygUQbGF3ZmFyZ...
[2] https://www.youtube.com/watch?v=zyNft-IZm_A&pp=ygUQbGF3ZmFyZ...
EDIT:
> Making truly secure software products is incredibly hard in this day and age, which is one reason why demanding software product liability is so scary.
Loads of companies are already liable for bugs in software that runs on their products: this includes cars, airplanes, and I would presume medical devices, and so on. The response has been what's called "safety certification": as an industry, you define a process which, if followed, you can in court say "we were reasonably careful", and then you hire an evaluator to confirm that you have followed that process.
These processes don't prevent all bugs, naturally, but they certainly go a long way towards reducing them. Liability for companies who don't follow appropriate standard processes would essentially prevent cloud companies cutting security to get an edge in time-to-market or cost.
jand
10 months ago
> The response has been what's called "safety certification":
This is the scariest part for me. Certifications are mostly bureaucratic sugar, and at the same time very expensive. This seems like a sure way to strangle your startup culture.
If customers require certifications worth millions, nobody can bootstrap a small business without outside capital.
Eddy_Viscosity2
10 months ago
Assuming the level of certification will be proportionate to the potential risk/harm, then this is actually totally ok. Like would you want to fly in a plane built by a bootstrap startup that had no certifications? Or go in a submarine on extremely deep ocean tours of the Titanic? Or put in a heart device? Or transfer all of your savings to a startup's financial software that had no proof of being resilient to even the most basic of attacks?
For me, it's a hard no. The idea of risk/harm-based certification and liability is overdue.
mike_hearn
10 months ago
Problem is that it's rarely proportional.
There's a different thread on HN about the UK Foundations essay. It gives the example of the builders of a nuclear reactor being required to install hundreds of underwater megaphones to scare away fish that might otherwise be sucked into the reactor and, um, cooked. Yet cooking fish is clearly normal behavior that the government doesn't try to restrict otherwise.
This type of thing crops up all over the place where government certification gets involved. Not at first, but the ratchet only moves in one direction. After enough decades have passed you end up with silliness like that, nobody empowered to stop it and a stagnating or sliding economy.
> Like would you want to fly in a plane built by a bootstrap startup that had no certifications?
If plenty of other people had flown in it without problems, sure? How do you think commercial aviation got started? Plane makers were startups once. But comparing software and planes (or buildings) is a false equivalence. The vast majority of all software doesn't hurt anyone if it has bugs or even if it gets hacked. It's annoying, and potentially you lose money, but nobody dies.
Eddy_Viscosity2
9 months ago
Commercial aviation was regulated because planes were killing people, and when it came in, air travel became the safest form of transportation. That isn't a coincidence. If the vast majority of software doesn't hurt anyone if it has bugs, then it won't require any certifications. If you heard me arguing for that, then you heard wrong. I am advocating for risk/harm based certification/liability.
mike_hearn
9 months ago
Aren't you arguing for the status quo then? Only a small amount of software can cause physical harm, and that software is already regulated (medical devices etc).
Eddy_Viscosity2
9 months ago
Financial harm and harm via personal records being hacked should also be included. The Equifax leak for example should have resulted in much worse consequences for the executives and also new software compliance regulations to better safeguard that sort of record-keeping.
pas
9 months ago
Why aren't they installing grates on the intakes?
mike_hearn
9 months ago
There will be grates but fish are small and obviously grates must have holes in them.
generic92034
10 months ago
> The response has been what's called "safety certification": as an industry, you define a process which, if followed, you can in court say "we were reasonably careful", and then you hire an evaluator to confirm that you have followed that process.
Can you still call it "liability" when all you have to do is perform some kind of compliance theater to get rid of it?
Terr_
9 months ago
Technically yes, since it depends on the kind of liability.
In particular, liability for negligence depends on somebody doing stupid-things or stupidly-not-doing the right things, and there's an ongoing struggle to define where that envelope lies.
kurikuri
9 months ago
The ‘compliance theater’ often is filled with things which seem inane to your product, but the requirements themselves are often broad so that they cover many different types of products. Sure, we could add further granularity, but there is usually a ton of pushback and many people need to come to some compromise on what is relevant.
telgareith
10 months ago
Ask Boeing.
Buttons840
10 months ago
A third option is to empower security researchers and hope the good guys find the security holes before the bad guys.
Currently, we threaten the good guys, security researchers, with jail way too quickly. If someone presses F12 and finds a bunch of SSNs in the raw HTML of the State's web page, the Governor personally threatens to send them to jail[0]. The good security researchers tiptoe around, timidly asking permission to run pentests while the bad guys do whatever they want.
Protect security researchers, change the laws to empower them and give them the benefit of the doubt.
I think a big reason we don't do this is it would be a burden and an embarrassment to wealthy companies. It's literally a matter of national security, and we currently sacrifice national security for the convenience of companies.
As you say, security is hard, and putting liability for security issues on anyone is probably unreasonable. The problem is companies can have their cake and eat it too. The companies get full control over their software, and they get to pick and choose who tests the security of their systems, while at the same time having no liability. The companies are basically saying "trust me to take care of the security, also, I accept no liability"; that doesn't inspire confidence. If the liability is too much to bear, then the companies should give up control over who can test the security of their systems.
[0]: https://techcrunch.com/2021/10/15/f12-isnt-hacking-missouri-...
ezoe
10 months ago
It suggests that insecure software should simply be called a defective product, and so security audits should be called QA.
A product whose vendor doesn't spend a lot on QA is more profitable, unless there is a catastrophic incident.
Also, why haven't those so-called security researchers jointly criticized EDR yet? A third-party closed-source kernel driver that behaves practically the same as malware.
Thorrez
10 months ago
Software has tons of bugs. A fraction of those bugs are security vulnerabilities.
Any type of bug can be considered a defect, and thus can be considered to make the product defective. By using the terminology "defective" instead of "vulnerable" we lose the distinction between security bugs and other bugs. I don't think we want to lose that distinction.
irundebian
10 months ago
Security-related product defect, or simply security defect.
specialist
10 months ago
Empathic agreement.
Source: Was QA/test manager for a bit. Also, recovering election integrity activist.
TIL: The conversation is just easier when bugs, security holes, fraud, abuse, chicanery, etc are treated as kinds of defects.
Phrases like "fraud" and "exploit" are insta-stop conversation killers. Politicians and managers (director level and above) simply can't hear or see or reason about those kinds of things. (I can't even guess why not. CYA? Somebody else's problem? Hear no evil...?)
QA/Test rarely receives its needed attention and resources. Now less than ever. But advocating for "fixing bugs" isn't a total bust.
acdha
10 months ago
> Also, why haven't those so-called security researchers jointly criticized EDR yet? A third-party closed-source kernel driver that behaves practically the same as malware.
They have been, for years. Some very prominent voices in the security community have been criticizing the level of engineering prowess in security tools for ages - Tavis Ormandy ripped into the AV industry for Project Zero over a decade ago and found critical problems in things like FireEye, too. The problem is that without penalties, the companies just keep repeating the cycle of “trust us” without improving their levels of craft or transparency.
Terr_
9 months ago
In terms of shaming bad products, motivating executives, etc, maybe... However I don't think you can easily combine general QA and security stuff under one roof, because they demand different approaches and knowledge sets.
thayne
10 months ago
> But with software security the complexity is nearly infinitely higher, and making it secure is much harder to guarantee.
The other thing is that software, especially network-connected software (which these days is most software), has to have a much higher level of security than products in most other industries. When a structural engineer builds a bridge, they don't have to worry about a large number of criminals from all over the world trying to find weaknesses that can be exploited to cause the bridge to collapse.
But software engineers do have to worry about hackers, including state sponsored ones, constantly trying to find and exploit weaknesses in their software.
I think it absurd to put blame on software engineers for failing to make flawless software instead of the bad actors that exploit bugs that would never be noticed during normal operation, or the law enforcement that is ineffective at stopping such exploitation.
Now, I do think that there should be more liability if you don't take security seriously. But there is a delicate balance there. If a single simple mistake has the potential to cause devastating liability, that will seriously slow down the software industry and substantially increase the cost.
tastyfreeze
10 months ago
Lock makers aren't liable for making pickable locks. Punish the bad actors and leave the liability to civil suits.
Veserv
10 months ago
If a single simple mistake has the potential to cause devastating harm, then that is precisely the standard that should be demanded. If you do not want that, then you can work on something where your mistakes do not cause immense harm to others or reduce the scope of the system so such harm is not possible.
Imagine a mechanical engineer making industrial robots complaining about how unfair it is that they are held liable for single simple mistakes like pulping people. What injustice! If you want to make single simple mistakes, you can work on tasks where that is unlikely to cause harm like, I don't know, designing door handles? Nothing wrong with making door handles, the stakes are just different.
"But I want to work on problems that can kill people (or other devastating harm), but not be responsible for killing them (or other devastating harm)" is a utterly insane position that has taken hold in software and has no place in serious, critical systems. If you can not make systems fit for the level of harm they can cause, then you have no place making such systems. That is irrespective of whether anybody in the entire world can do so; systems inadequate to minimize harm to the degree necessary to provide a positive societal cost-benefit over their lifetime are unfit for usage.
Phiwise_
10 months ago
>Imagine a mechanical engineer making industrial robots complaining about how unfair it is that they are held liable for single simple mistakes like pulping people. What injustice! If you want to make single simple mistakes, you can work on tasks where that is unlikely to cause harm like, I don't know, designing door handles? Nothing wrong with making door handles, the stakes are just different.
This is a truly absurd comparison. In the first place, yes, it is in fact much easier to make physical products safer, as anyone who's ever seen a warning line or safety guard could tell you. The manufacturers of CNC mills don't accept liability by making it impossible to run a bad batch that could kill someone, they just put the whole machine in a steel box and call it a day. The software consumers want has no equivalent solution. What's worse, in the second place, these engineers aren't actually held responsible for the equivalent of most software breaches already. There pretty much is zero liability for tampering or misuse, thus the instruction booklet of 70% legal warnings that still comes with everything you buy even in this age of cutting costs. Arguing software should be held to the same standard as physical products, when software has no equivalent of safety lockouts, is just to argue it should include acceptable use sections in its terms and conditions, which is no real improvement to people's security in the face of malicious actors.
thayne
10 months ago
Do you think that the mechanical engineer should be held liable if, say, a criminal breaks into the factory and sabotages the robot to damage property or injure workers, because they didn't make the robot sufficiently resistant to sabotage attempts?
Veserv
10 months ago
Is that something that you expect to be attempted against a significant fraction of the products over the average expected lifetime of the product? If so, they should absolutely be required to make a product fit for its usage context that is robust against common and standard failure modes, like adversarial attacks.
That such requirements are relatively uncommon in physical products due to their usage contexts being fundamentally airgapped from global adversaries is a blessing. Their requirements are lower and thus their expected standards can be lower as well. What a concept!
You seem to think that standards are some kind of absolute line held in common across all product categories. Therefore it is unfair to hold some products to higher standards. That is nonsense. Expected standards are in relation to the problem and usage context and should minimize harm to the degree necessary to result in positive societal cost-benefit over their lifetime.
That is how literally everything else works, and it is a principle that can be applied in general. “But my problems are different, can I ignore them and inflict devastating harm on other humans” is asinine. If you are causing net harm to society, you should not get to do it. Period.
thayne
9 months ago
Ok let's try another analogy. Should a carmaker be held liable if someone breaks into a car and steals something valuable (laptop, purse, etc.)? Or how about if someone drives irresponsibly (possibly under the influence) and injures or kills someone in an accident?
Those are things that a significant number of cars will encounter.
And yes, if you don't put any effort into making your car secure or safe, there should be liability. But if you put significant effort into security, it is unreasonable to make you liable just because there is more you could have done, because there is always more you could have done, and there are significant tradeoffs between security, safety, usability, and cost.
Maybe you could build a car that is impossible to break into, but it would be a huge pain to use, and most people wouldn't be able to afford it.
> inflict devastating harm on other humans
I'm not sure where you are getting this "devastating harm" from.
The context of this is the large number of "urgent" security patches and vulnerability disclosures.
The thing is, the vast majority of those are probably not as bad as the terminology might lead a layperson to believe. Often, in order to exploit a vulnerability in the real world, you have to chain multiple vulnerabilities together.
For a lot of "critical" vulnerabilities in software I need to patch, I read through the description and determine that it would be extremely difficult, or impossible, to exploit the way I use the software, but the patch is still necessary because some (probably small) set of users might be using it in a way that could be exploited.
Veserv
9 months ago
You said: “If a single simple mistake has the potential to cause devastating liability”
Liability is in proportion to actual harm. A single simple mistake can only incur devastating liability if it causes devastating harm (as in harm actually occurred). If it can not and does not cause devastating harm, then you will not incur devastating liability and you are already protected.
Again, you are applying some sort of bizarre uniform standard for defect remediation that ignores all nuance around intended use, intended users, usage context, harm, potential for harm, etc. That is not how anything works. Toys have different standards than planes which have different standards than bridges. The world is not unary.
As for the questions of cars you posed. Those fall into slightly more nuanced principles.
One of those principles is that liability is in proportion to/limited by guarantees either explicit or implicit. The limitations of the physical security of car windows are clearly communicated, obvious, and well-understood to the average consumer. A company guaranteeing that their windows are immune to being broken would be taking on the liability for their guarantee that is in excess of normal expectations.
Software security and reliability is not just not clearly communicated, it is deceptively communicated. Do you believe that the average Crowdstrike customer believed that Crowdstrike could take out their business? Or were Crowdstrike’s sales and marketing continuously assuring their customers that Crowdstrike would work flawlessly despite the fine print disclaiming all liability? If the customer actually believed it could or would fuck up royally, then sure, no liability. Customer’s fault for using a product they believed to be a piece of shit. But would Crowdstrike have ever sold a single seat if their customers believed that? Fuck no. They made an implicit and likely explicit guarantee of operability and they are liable for it.
Liability around dangerous operation is again, about clear understanding of operation guarantees. It is obvious to the most casual observer the dangers of a chainsaw and how one might be safely used. It is the job of the manufacturer to make such boundaries clear, to mitigate unknowing unsafe operation, and to mitigate unsafe operation where feasible. To go back to Crowdstrike, you can not contend that it was used “dangerously”. The defect was in failure to perform correctly, not inappropriate usage. Dangerous usage might be enabling a feature that automatically deletes any program that starts sending on the network, but you forgot that your critical server starts sending notifications to your team when under heavy load. That is likely your damages to bear. But if it was actually a feature that deletes any “anomalous program”, then that depends on the guarantees provided.
Making careful tradeoffs about the distribution of damages and liability is a well-worn path. Throwing your hands up in the air because your problems are harder and more dangerous so you should be held to lower standards is absurd.
And please, you make it sound like software is built to some high standard comparable to your average physical product and that it is unfair to hold it to higher standards. You and I both know software is generally held to exactly no standard. Demanding it be held to even normal standards is not some sort of unfair imposition. It should probably be held to higher standards due to the risks of correlated failures due to network connectivity as you point out, but even just getting to normal standards would be a massive and qualitative improvement.
thayne
9 months ago
> You said: “If a single simple mistake has the potential to cause devastating liability”
I think you are misinterpreting my statement. I meant devastating to the software creator. Which is not necessarily the same thing as devastating to the victim of an attack.
> A single simple mistake can only incur devastating liability if it causes devastating harm (as in harm actually occurred)
This is absolutely not true. Even if you ignore any possible inequities that exist in the justice system (like say that civil courts tend to favor the party that spends more on lawyers), the same absolute amount in dollars has a very different impact on different parties.
For example, suppose a single developer or early bootstrapped startup makes a library or application (possibly open source), a big bank uses that library, and then a vulnerability in it is used as part of an attack that causes a few million in damages. That few million would be a relatively small amount to the big bank, but would likely bankrupt an individual or small startup, and would thus be "devastating".
> Again, you are applying some sort of bizarre uniform standard for defect remediation that ignores all nuance around intended use, intended users, usage context, harm, potential for harm, etc.
I don't know why you think that.
> One of those principles is that liability is in proportion to/limited by guarantees either explicit or implicit. The limitations of the physical security of car windows are clearly communicated, obvious, and well-understood to the average consumer. A company guaranteeing that their windows are immune to being broken would be taking on the liability for their guarantee that is in excess of normal expectations.
Right. And most software doesn't come with any guarantee that it is bug-free, or even free of security bugs. Afaik, software doesn't currently have some kind of exemption from liability. But it is very common to push as much liability onto the user of the software as possible with terms of service or licensing terms. In fact almost all open source licenses have disclaimers saying that the copyright holder makes no guarantees about the quality of the software.
> Software security and reliability is not just not clearly communicated, it is deceptively communicated.
That's a broad generalization, but I agree that there are many cases where that is true. But if what sales and marketing says differs from the actual terms of the contract or ToS, wouldn't that be more a problem of false advertising?
And security is an imprecise and relative term.
And there are cases where software contracts do have guarantees of reliability, and sometimes security, and promise to compensate damages if those guarantees aren't met. But you usually have to be willing to pay a lot of money for an enterprise contract to get that.
Could this situation be improved? Absolutely!
I think if your ToS push liability onto the customer, at the very least that should be made much more clear to the customer (and the same for many other things hidden in the fine print), and then maybe market forces would push more companies to make stronger guarantees to win customers.
But that problem is hardly unique to software. Lots of companies hide stuff like "you take full responsibility, and agree not to sue us" in their fine print.
> Do you believe that the average Crowdstrike customer believed that Crowdstrike could take out their business?
The Crowdstrike issue is unrelated. As you admitted later, there was no malicious actor involved. And there were significant problems with their deployment process. I absolutely do think they should be held liable.
But that is very different from something like, say, a buffer overflow due to a simple mistake that slipped through rigorous review, testing, fuzzing and maybe even a pen test.
> you can not contend that it was used “dangerously”.
I don't.
> Throwing your hands up in the air because your problems are harder and more dangerous so you should be held to lower standards is absurd.
That isn't at all what I am saying. I'm saying that developers shouldn't be held responsible for the actions of criminals just because those criminals used an unknown weakness of the software to commit the crime. Doing so is holding software up to a higher standard than most other products. Now just as with physical products, I think there are exceptions, like if your product is sold or marketed specifically to prevent specific actions of criminals, and fails to do so.
Ironically, blaming developers for cybercrime is throwing up your hands because your problems are hard. Specifically, stopping cybercrime with law enforcement is extremely difficult, in part because the criminals are often beyond the jurisdiction of the applicable law enforcement.
But maybe putting some government funding towards increasing security of critical components, especially open source ones, or initiatives to rewrite those components in memory safe languages, would be more effective than pointing fingers?
> You and I both know software is generally held to exactly no standard.
Um, just like with physical products, the standards for software vary widely. What I am opposed to is applying some kind of blanket liability to all software products, because a game shouldn't be held to the same standard as an enterprise firewall or anti-malware solution.
And there absolutely are standards and certifications for security and reliability in software. ISO 27001, SOC 2, PCI, FedRAMP, HIPAA, just to name a few. And if you sell to certain organizations, like governments, financial institutions, health care providers, etc. you will need one or more of those.
pjmlp
10 months ago
Moving goalposts. If the person was injured by something the mechanical engineer made themselves then, contrary to the software industry, they are indeed liable, and can even be the target of a lawsuit.
And since a mechanical engineer is a proper engineer, not someone who calls themselves an engineer after a six-week bootcamp, they can additionally lose their professional title.
mike_hearn
10 months ago
The goalpost isn't moving. Comparisons to physical products are wrong because the question is not "are there safety critical bugs" but "can this product survive a sustained assault by highly trained adversaries", which is a standard no other product is held to.
pjmlp
10 months ago
Silly me, thinking there are such things as product recalls, food poisoning, and use cases that insurers don't cover due to the high liability of hazardous goods.
Anything that brings proper engineering practices into computing, and liability for malpractices, gets my vote.
mike_hearn
9 months ago
Software bugs almost never poison you, so that isn't applicable.
Product recalls happen every day in the software industry, voluntarily. We call them software updates and the industry does them far better and more smoothly than any other industry.
Use cases that aren't covered due to liability can be found in any EULA. You aren't allowed to use Windows in nuclear power stations for example.
pjmlp
9 months ago
Actually they do, when faulty software powers something food related, anywhere on the delivery chain from the farmer to the point someone actually eats something.
EULAs are meaningless in many countries outside US law. Most folks have yet to bother suing companies because they have been badly educated to accept faulty software, whereas in the material world that is something they only accept from cheap knockoffs at street bazaars and 1-euro shops.
Maybe that is the measure for software then.
And even those require a permit for selling in many countries, or at least some interaction between sellers and law enforcement checking permits.
Veserv
9 months ago
Is a non-network-connected car model at risk of having all its cars taken over by an economical, sustained assault by highly trained adversaries in a foreign country? No.
Is a network connected car vulnerable to such attacks? Yes.
The product decision to make the car network connected introduced a new, previously non-existent risk. If you want to introduce that new risk, you (and anybody else who wants to introduce such risk) are responsible for the consequences. You can just not do it if that scares you.
“But other people get to profit off of endangering society and I want to do it too but I do not want to be responsible for the consequences” is not a very compelling argument for letting you do that.
That you need to reach higher standards for new functionality you want to add is the entire point. It is how it works in every other industry. Demanding lower standards because you want to do things that are harder and more dangerous is ass backwards.
thayne
9 months ago
Ok, now consider that there is a software component, say an open source library, that has a vulnerability in it, a car maker used that component in their network-connected car, and then a well-funded foreign adversary exploited that vulnerability, among others, to do something nefarious. That software component wasn't specifically designed to be used in cars, but it wasn't specifically designed not to be either; it was a general-purpose tool. Should the maker of that component be held liable? They weren't the ones who decided to connect a car to the internet. They certainly weren't the ones who decided not to fully isolate the network-connected components from anything that could control the car.
But if software doesn't have a way to push liability onto the user, then you can bet the big auto maker will sue that developer in nebraska[1] into oblivion, and the world might be worse off because of it.
I don't think that you should be able to do something like sell network connected cars, without putting any effort into security. But at the same time, I think a blanket requirement to be liable for any security vulnerabilities in your software could have a lot of negative consequences.
pjmlp
9 months ago
If it was an electronic component that would set the car on fire due to a short circuit, the maker of the electronic component would certainly be sued. And was the electronic component designed to be placed on car circuit boards in the first place?
juunpp
10 months ago
3. Legal incentives. When somebody dies or fails to receive critical care at a hospital because a Windows XP machine got owned, somebody should probably face something along the lines of criminal negligence.
rockskon
10 months ago
Will Microsoft face liability if someone dies or fails to receive critical care because some infrastructure system auto-rebooted to apply an update and lost state/data relating to that patient's care?
pasc1878
10 months ago
The person who should be fined, etc., is the person who said to put the machine on the internet.
soerxpso
10 months ago
> Professional structural engineers, for example, are used to taking liability for their designs and buildings. But with software security the complexity is nearly infinitely higher, and making it secure is much harder to guarantee.
I'm not sure about your claim that structural engineering is less complex, but there's another (arguably much more significant) difference: structural safety is against an indifferent adversary (the weather, and physics); software security is against a malicious adversary. If someone with resources wants to take down a building (with exception for certain expensive military installations), no amount of structural engineering is going to stop them. Software that isn't vulnerable to cyberattacks should be compared to a bunker that isn't vulnerable to coordinated artillery strikes, not to your average building.
impossiblefork
10 months ago
You can also choose to avoid complexity.
Often a shorter computer program that is easy to understand can do exactly what a more complicated program can. We can simplify interfaces between systems and ensure that their specifications are short, readable and implementable without allocation, buffers or other things that can be implemented incorrectly. We can ensure that program code is stored separately from data.
Now that LLMs are getting better we could probably have them go through all our code and introduce invariants, etc. to make sure it does exactly what it's supposed to, and if it can't then a human expert can intervene for the function for which the LLM can't find the proof.
I think hardware manufacturers could help too. More isolation, Harvard architectures etc. would be quite appealing.
transpute
10 months ago
> economic incentives
EU CRA incoming, https://ubuntu.com/blog/the-cyber-resilience-act-what-it-mea...
How will the US version differ?
pjmlp
10 months ago
Making vendors understand that the time of EULAs waiving responsibility is coming to an end, and that, like in any other grown-up industry, liability is coming.
nradov
10 months ago
If customers want their software vendors to take liability for software defects (including security vulnerabilities) then they can just negotiate that into licensing agreements. We don't need the federal government to get involved with new laws or regulations.
lelanthran
10 months ago
> ways that software vendors will be held liable for bugs in their products.
And if we do that, then software development would grind to a halt, because there is no software that is completely bug-free.[1]
Like it or not, the market has spoken on what it considers acceptable risk for general software. Software where human lives are at risk is already highly regulated, which is why so few human lives are lost to bugs, compared to lives lost to other engineering defects.
[1] I feel we are already at a point where the market has, economically anyway, hit the point of diminishing returns for investment into reliability in software.
croes
10 months ago
To get laws that punish certain behavior, that behavior must be considered a bad thing, in the sense of being morally wrong.
So it's a first step to liability.
jiggawatts
10 months ago
> Making truly secure software products is incredibly hard in this day and age
I politely disagree. Writing secure software is easier than ever.
For example, there are several mainstream and full-featured web frameworks that use managed virtual machine runtimes. Node.js and ASP.NET come to mind, but there are many other examples. These are largely immune to memory safety attacks and the like that plague older languages.
Most popular languages also have a wide variety of ORMs available that prevent SQL injection by default. Don't like heavyweight ORMs? No problem! There's like a dozen micro-ORMs like Dapper that do nothing to your SQL other than block injection vulnerabilities and eliminate the boilerplate.
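To make that concrete, here's a minimal sketch of the kind of parameterized query a micro-ORM like Dapper encourages; the User class, Users table, and connection string are made up for illustration:

    using System.Collections.Generic;
    using Dapper;
    using Microsoft.Data.SqlClient;

    public class User
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class UserQueries
    {
        // `name` is sent as a bound parameter (@Name) rather than concatenated
        // into the SQL text, so input like "'; DROP TABLE Users;--" stays inert data.
        public static IEnumerable<User> ByName(string connectionString, string name)
        {
            using var conn = new SqlConnection(connectionString);
            return conn.Query<User>(
                "SELECT Id, Name FROM Users WHERE Name = @Name",
                new { Name = name });
        }
    }

Getting injection wrong here actually takes more effort than doing it right.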
Similarly, web templating frameworks like Razor pages prevent script injection by default.
Cloud app platforms, containerisation, or even just virtual machines make it trivial to provide hardware enforced isolation on a per-app basis instead of relying on the much weaker process isolation within an OS with shared app hosting.
TLS 1.3 has essentially eliminated cryptographic vulnerabilities in network protocols. You no longer have to "think" about this concern in normal circumstances. What's even better is that back end protocols have also uniformly adopted TLS 1.3. Even Microsoft has started using it for the wire protocol of Microsoft SQL Server and for the SMB file sharing protocol! Most modern queues, databases, and the like use at least TLS 1.2 or the equivalent. It's now safe to have SQL[1] and SMB shares[2] exposed to the Internet. Try telling that to someone in sec-ops twenty years ago!
Most modern PaaS platforms such as cloud-native databases have very fine-grained RBAC, built-in auditing, read-only modes, and other security features on by default. Developers are spoiled with features such as SAS tokens that can be used to trivially generate signed URLs with the absolute bare minimum access required.
Speaking of PaaS platforms like Azure App Service, these have completely eliminated the OS management aspect of security. Developers never again need to think about operating system security updates or OS-level configuration. Just deploy your code and go.
Etc...
You have to be deliberately making bad choices to end up writing insecure software in 2025. Purposefully staring at the smörgåsbord of secure options and saying: "I really don't care about security, I'm doing something else instead... just because."
I know that might be controversial, but seriously, the writing has been on the wall for nearly a decade now for large swaths of the IT industry.
If you're picking C or even C++ in 2025 for anything you're almost certainly making a mistake. Rust is available now even in the Linux kernel, and I hear the Windows kernel is not far behind. Don't like Rust? Use Go. Don't like Go? Use .NET 9. Seriously! It's open-source, supports AOT compilation, works on Linux just fine, and is within spitting distance of C++ for performance!
[1] https://learn.microsoft.com/en-us/azure/azure-sql/database/n...
[2] https://learn.microsoft.com/en-us/azure/storage/files/files-...
AStonesThrow
10 months ago
This is a strange and myopic understanding of "application security". You seem quite focused on vulnerabilities that could threaten underlying platforms or connected databases, but you're ignoring things like (off the top of my head) authentication, access control, SaaS integrations, supply chains, and user/admin configuration.
Sure, write secure software where nobody signs in or changes a setting, or connects to Google Drive, and you have no dependencies... Truly mythical stuff in 2024.
jiggawatts
10 months ago
I wanted to avoid writing war & peace, but to be fair, some gaps remain at the highest levels of abstraction. For example, SPA apps are a security pit of doom because developers can easily forget that the code they're writing will be running on untrusted endpoints.
Similarly, high-level integration with third-party APIs via protocols such as OIDC have a lot of sharp edges and haven't yet settled on the equivalent of TLS 1.3 where the only possible configurations are all secure.
AStonesThrow
10 months ago
Still too narrow. Even I assumed "application security" when this isn't the point of the comments. We're talking about the gamut, from the very infrastructure AWS or VMWare is writing, mainframe OS, Kubernetes, to embedded and constrained systems like IoT, routers, security appliances, switches, medical devices, automobiles.
You don't just tell all those devs to throw them on Rust and a VM, whether it's 2024 or December 31, 1999.
mgh95
10 months ago
It's important to remember that even `npgsql` can have issues (see https://github.com/npgsql/npgsql/security/advisories/GHSA-x9...).
In your world, would the developer of a piece of software exploited via a vulnerability such as this be liable?
jiggawatts
10 months ago
The point is that secure software is easier to write, not that it's impossible to have security vulnerabilities.
Your specific example is a good one: interacting with Postgres is one of those things I said people choose despite it being riddled with security issues due to its age and choice of implementation language.
Postgres is written in C and uses a complicated and bespoke network protocol. This is the root cause of that vulnerability.
If Postgres was a modern RDBMS platform, it would use something like gRPC and there wouldn't be any need to hand-craft the code to perform binary encoding of its packet format.
The security issue stems from a choice to use a legacy protocol, which in turn stems from the use of an old system written in C.
Collectively, we need to start saying "no" to this legacy.
Meanwhile, I just saw a video clip of an auditorium full of Linux kernel developers berating the one guy trying to fix their security issues by switching to Rust, saying that Rust will be a second-class citizen for the foreseeable future.
mgh95
10 months ago
> Your specific example is a good one: interacting with Postgres is one of those things I said people choose despite it being riddled with security issues due to its age and choice of implementation language.
Ah there is the issue: protocol level bugs are language independent; even memory safe languages have issues. One example in the .NET sphere is F*, which is used to verify programs. I recommend you look at what the concepts of protocol safety actually involve.
> The security issue stems from a choice to use a legacy protocol, which in turn stems from the use of an old system written in C.
This defect in particular occurs in the c# portion of the stack, not in postgres. This could have occurred in rust if similar programming practices were used.
> If Postgres was a modern RDBMS platform, it would use something like gRPC and there wouldn't be any need to hand-craft the code to perform binary encoding of its packet format.
There is no guarantee a borked client implementation would be defect free.
This is a much harder problem than I think you think it is. Without resorting to a very different paradigm for programming (which, frankly, I don't think you have exposure to based upon your comments) I'm not sure it can be accomplished without rendering most commercial software non-viable.
> Meanwhile, I just saw a video clip of an auditorium full of Linux kernel developers berating the one guy trying to fix their security issues by switching to Rust saying that Rust will be a second class citizen for the foreseeable future.
Yeah, I mean, start your own OS in Rust from scratch. There is a very real issue that RIIR isn't always an improvement. Rewriting a Linux implementation from scratch in Rust, if it's a "must have right now" fix, is probably better.
jiggawatts
10 months ago
The counter to any argument is the lived experience of anyone that developed Internet-facing apps in the 90s.
Both PHP and ASP were riddled with security landmines. Developers had to be eternally vigilant, constantly making sure they were manually escaping HTML and JS safely. This is long before automatic and robust escaping such as provided by IHtmlString or modern JSON serializers.
Speaking of serialisation: I wrote several, by hand, because I had to. Believe me, XML was a welcome breath of fresh air because I no longer had to figure out security-critical quoting and escaping rules by trial and error.
I started in an era where there were export-grade cipher suites known to be compromised by the NSA and likely others.
I worked with SAML 1.0, which is one of the worst security protocols invented by man, outdone only by SAML 2.0. I was, again, forced to implement both, manually, because “those were the times”.
We are now spoiled for choice and choose poorly despite that.
ptx
10 months ago
> protocol level bugs are language independent; even memory safe languages have issues. [...] This defect in particular occurs in the c# portion of the stack, not in postgres. This could have occurred in rust if similar programming practices were used.
But it couldn't have occurred in Python, for example, and Swift also (says Wikipedia) doesn't allow integer overflow by default. So it's possible for languages to solve this safety problem as well, and some languages are safer than others by default.
C# apparently has a "checked" keyword [0] to enable overflow checking, which presumably would have prevented this as well. Java uses unsafe addition by default but, since version 8, has the "addExact" static method [1] which makes it inconvenient but at least possible to write safe code.
[0] https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...
[1] https://docs.oracle.com/en/java/javase/19/docs/api/java.base...
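For what it's worth, a minimal sketch of that checked/unchecked difference in C# (hypothetical values; the wrap-around and the OverflowException are the documented behaviours of the two contexts):

    using System;

    // A length field taken from an untrusted message (hypothetical value).
    int length = int.MaxValue;

    // Default (unchecked) integer arithmetic silently wraps around:
    int wrapped = unchecked(length + 1);
    Console.WriteLine(wrapped); // prints -2147483648, no error raised

    // In a checked context the same overflow throws instead of wrapping:
    try
    {
        int total = checked(length + 1); // throws OverflowException
        Console.WriteLine(total);
    }
    catch (OverflowException)
    {
        Console.WriteLine("rejecting oversized input instead of mis-sizing a buffer");
    }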
mgh95
9 months ago
> C# apparently has a "checked" keyword [0] to enable overflow checking, which presumably would have prevented this as well. Java uses unsafe addition by default but, since version 8, has the "addExact" static method [1] which makes it inconvenient but at least possible to write safe code.
This is the point I'm making: verifying a program is separate from writing it. Constantly going back and saying "but-if we just X" is a distraction. Secure software is verified software, not bugfixed.
And that's the point a lot of people don't like to acknowledge about software. It's not enough to remediate defects: you must prevent them in the first place. And that requires program verification, which is a dramatically different problem than the one OC thinks they're solving.
pjmlp
10 months ago
That is why other industries have bills of materials and supply chain validation.
A bill of materials is already a reality in highly critical computing deployments.
mgh95
9 months ago
Yes, but 0-days exist. A software BOM is really only remedial.
cratermoon
9 months ago
3. Regulations.
Yes I know the typical visitor to HN is deathly allergic to government regulations and regulatory bodies in general. That attitude in tech is how we got here.
EnigmaFlare
10 months ago
I agree with her about blaming developers, not hackers. Though not to the point of liability for all developers; maybe only for a few specialist professionals who take on that responsibility and are paid appropriately for it.
Hackers are essentially a force of nature that will always exist and always be unstoppable by law enforcement because they can be in whatever country doesn't enforce such laws. You wouldn't blame the wind for destroying a bridge - it's up to the engineer to expect the wind and make it secure against that. Viewing them this way makes it clear that the people responsible for hacks are the developers in the same way developers are responsible for non-security bugs. Blame is only useful if you can actually use it to alter people's behavior - which you can't for international hackers, or the wind.
Banging this drum could be effective if it leads to a culture change. We already see criticism of developers of software that has obvious vulnerabilities all the time on HN, so there's already some sense that developers shouldn't be extremely negligent/incompetent around security. You can't guarantee security 100% of course, but you can have a general awareness that it's wrong to make the stupid decisions that developers keep making and are generally known to be bad practice.
pmontra
10 months ago
Developers build insecure software in part because of themselves and in part because of the decisions made by their managers up to the CEO.
So when you write "developers" we must read "software development companies".
EnigmaFlare
9 months ago
Yes, that's what I meant too, sorry.
gljiva
10 months ago
> I agree with her about blaming developers, not hackers.
They are clearly called "villains".
Wind isn't a person capable of controlling their actions. There is no intention to do harm. They aren't senseless animals either. Yes, it's developers' fault if a product isn't secure enough, but it's also not wrong to put blame on those actively exploiting that. Let's not stop blaming those who do wrong, and that kind of hacker is doing wrong, not just the developers "making stupid decisions".
Those aren't mutually exclusive
acdha
10 months ago
> They are clearly called "villains".
As readers of the article know, they are not:
> The truth is: Technology vendors are the characters who are "building problems" into their products, which then "open the doors for villains to attack their victims."
She’s talking about companies, not individual developers, and she didn’t call them villains but rather creators of the problems actual villains exploit. The company focus is important: it’s always easy to find who committed a problematic line of code - say a kernel driver which doesn’t validate all 21 of its arguments properly - but the person who typed that in doesn’t work alone. The company sets their incentives, provides training (or not), and most importantly should be pairing the initial author of that code with reviewers, testers, and quality tools. When a company makes a $50 toaster, they don’t just ask the designer whether they think it’s safe, they actually test it in a variety of ways to get that UL certification, and we have far fewer fires than we had a hundred years ago.
One key to understanding this is to remember CISA’s scope and mission. They’re looking at a world where every week has new ransomware attacks shutting down important businesses, even things like hospitals, industrial espionage is on the rise and the industry has largely tried to stay in the cheaper reactive mode of shipping patches after problems are discovered rather than reducing the rate of creating them. This is fundamentally not a technical issue but an economic one and she’s trying to change the incentive structure to get out of the cycle which really isn’t working.
EnigmaFlare
9 months ago
> put blame on those actively exploiting that
To some extent hackers are like the wind. They're a nebulous cloud of unidentifiable possible-people that you can't influence through shaming or laws or anything else. I think we should see them that way to make it clear that it's primarily the developer's responsibility.
Of course blame hackers when they're within reach too.