hn_throwaway_99
8 hours ago
At this point, I have to wonder what is even the point of missives like this. There are only two things that will solve the software quality problem:
1. Economic incentives. It's all just mindless blather unless you're actually talking about ways that software vendors will be held liable for bugs in their products. If you're not talking about that, what you're saying amounts to a useless "ok, pretty please".
2. Reducing the complexity of making products secure in the first place. Making truly secure software products is incredibly hard in this day and age, which is one reason why demanding software product liability is so scary. Professional structural engineers, for example, are used to taking liability for their designs and buildings. But with software security the complexity is nearly infinitely higher, and making it secure is much harder to guarantee.
The other thing that people often ignore, or at least don't want to admit, is that the "move fast and break things" ethos has been phenomenally successful from a business perspective. The US software industry grew exponentially faster than anyplace else in the world, even places like India that doubled down on things like the "Software Capability Maturity Model" in the early 00s, and honestly have little to show for it.
gwd
an hour ago
> At this point, I have to wonder what is even the point of missives like this. ...It's all just mindless blather unless you're actually talking about ways that software vendors will be held liable for bugs in their products.
I think that liability for bugs is exactly where she's going with this. I'm not an expert, but from a few things I've heard on some Lawfare podcasts (e.g., [1][2]), it sounds like the idea of software liability has been discussed for quite a while now in government policy circles. This sort of public statement may be laying the groundwork for building the political will to make it happen.
[1] https://www.youtube.com/watch?v=9UneL5-Q98E&pp=ygUQbGF3ZmFyZ...
[2] https://www.youtube.com/watch?v=zyNft-IZm_A&pp=ygUQbGF3ZmFyZ...
EDIT:
> Making truly secure software products is incredibly hard in this day and age, which is one reason why demanding software product liability is so scary.
Loads of companies are already liable for bugs in software that runs on their products: this includes cars, airplanes, and I would presume medical devices, and so on. The response has been what's called "safety certification": as an industry, you define a process which, if followed, lets you say in court "we were reasonably careful", and then you hire an evaluator to confirm that you have followed that process.
These processes don't prevent all bugs, naturally, but they certainly go a long way towards reducing them. Liability for companies that don't follow appropriate standard processes would essentially prevent cloud companies from cutting security to get an edge in time-to-market or cost.
EnigmaFlare
5 hours ago
I agree with her about blaming developers, not hackers. Though not to the point of liability for all developers; maybe for a few specialist professionals who take on that responsibility and are paid appropriately for it.
Hackers are essentially a force of nature that will always exist and will always be unstoppable by law enforcement, because they can operate from whatever country doesn't enforce such laws. You wouldn't blame the wind for destroying a bridge - it's up to the engineer to expect the wind and design against it. Viewing hackers this way makes it clear that the people responsible for hacks are the developers, in the same way developers are responsible for non-security bugs. Blame is only useful if you can actually use it to alter people's behavior - which you can't for international hackers, or the wind.
Banging this drum could be effective if it leads to a culture change. We already see criticism on HN, all the time, of developers whose software has obvious vulnerabilities, so there's already some sense that developers shouldn't be extremely negligent or incompetent around security. You can't guarantee security 100%, of course, but you can have a general awareness that it's wrong to make the stupid decisions that developers keep making and that are generally known to be bad practice.
pmontra
3 hours ago
Developers build insecure software partly because of their own choices and partly because of decisions made by their managers, all the way up to the CEO.
So when you write "developers", we must read "software development companies".
gljiva
an hour ago
> I agree with her about blaming developers, not hackers.
They are clearly called "villains".
Wind isn't a person capable of controlling its actions; it has no intention to do harm. Hackers aren't senseless animals either. Yes, it's the developers' fault if a product isn't secure enough, but it's not wrong to also put blame on those actively exploiting that. Let's not stop blaming those who do wrong - and that kind of hacker is doing wrong, not just the developers "making stupid decisions".
Those aren't mutually exclusive
Buttons840
5 hours ago
A third option is to empower security researchers and hope the good guys find the security holes before the bad guys.
Currently, we threaten the good guys, security researchers, with jail way too quickly. If someone presses F12 and finds a bunch of SSNs in the raw HTML of the State's web page, the Governor personally threatens to send them to jail[0]. The good security researchers tiptoe around, timidly asking permission to run pentests, while the bad guys do whatever they want.
Protect security researchers, change the laws to empower them and give them the benefit of the doubt.
I think a big reason we don't do this is that it would be a burden and an embarrassment to wealthy companies. It's literally a matter of national security, and we currently sacrifice national security for the convenience of companies.
As you say, security is hard, and putting liability for security issues on anyone is probably unreasonable. The problem is companies can have their cake and eat it too. The companies get full control over their software, and they get to pick and choose who tests the security of their systems, while at the same time having no liability. The companies are basically saying "trust me to take care of the security, also, I accept no liability"; that doesn't inspire confidence. If the liability is too much to bear, then the companies should give up control over who can test the security of their systems.
[0]: https://techcrunch.com/2021/10/15/f12-isnt-hacking-missouri-...
ezoe
3 hours ago
It suggests that insecure software should simply be called a defective product. So the security audit should be called QA.
A product whose maker skimps on QA is more profitable, unless there's a catastrophic incident.
Also, why haven't those so-called security researchers jointly criticized EDR yet? A third-party closed-source kernel driver that behaves practically the same as malware.
Thorrez
2 hours ago
Software has tons of bugs. A fraction of those bugs are security vulnerabilities.
Any type of bug can be considered a defect, and thus can be considered to make the product defective. By using the terminology "defective" instead of "vulnerable" we lose the distinction between security bugs and other bugs. I don't think we want to lose that distinction.
irundebian
an hour ago
Security-related product defect, or simply security defect.
thayne
6 hours ago
> But with software security the complexity is nearly infinitely higher, and making it secure is much harder to guarantee.
The other thing is that software, especially internet-connected software (which these days is most software), has to have a much higher level of security than most other industries' products. When a structural engineer builds a bridge, they don't have to worry about a large number of criminals from all over the world trying to find weaknesses that can be exploited to cause the bridge to collapse.
But software engineers do have to worry about hackers, including state sponsored ones, constantly trying to find and exploit weaknesses in their software.
I think it absurd to put blame on software engineers for failing to make flawless software instead of the bad actors that exploit bugs that would never be noticed during normal operation, or the law enforcement that is ineffective at stopping such exploitation.
Now, I do think that there should be more liability if you don't take security seriously. But there is a delicate balance there. If a single simple mistake has the potential to cause devastating liability, that will seriously slow down the software industry and substantially increase the cost.
Veserv
4 hours ago
If a single simple mistake has the potential to cause devastating harm, then that is precisely the standard that should be demanded. If you do not want that, then you can work on something where your mistakes do not cause immense harm to others or reduce the scope of the system so such harm is not possible.
Imagine a mechanical engineer making industrial robots complaining about how unfair it is that they are held liable for single simple mistakes like pulping people. What injustice! If you want to make single simple mistakes, you can work on tasks where that is unlikely to cause harm like, I don't know, designing door handles? Nothing wrong with making door handles, the stakes are just different.
"But I want to work on problems that can kill people (or other devastating harm), but not be responsible for killing them (or other devastating harm)" is a utterly insane position that has taken hold in software and has no place in serious, critical systems. If you can not make systems fit for the level of harm they can cause, then you have no place making such systems. That is irrespective of whether anybody in the entire world can do so; systems inadequate to minimize harm to the degree necessary to provide a positive societal cost-benefit over their lifetime are unfit for usage.
thayne
37 minutes ago
Do you think that the mechanical engineer should be held liable if, say, a criminal breaks into the factory and sabotages the robot to damage property or injure workers, because the engineer didn't make the robot sufficiently resistant to sabotage attempts?
tastyfreeze
5 hours ago
Lock makers aren't liable for making pickable locks. Punish the bad actors and leave the liability to civil suits.
impossiblefork
an hour ago
You can also choose to avoid complexity.
Often a shorter computer program that is easy to understand can do exactly what a more complicated program can. We can simplify interfaces between systems and ensure that their specifications are short, readable and implementable without allocation, buffers or other things that can be implemented incorrectly. We can ensure that program code is stored separately from data.
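As a minimal sketch of that idea (TypeScript, with a made-up fixed-layout message format): when the specification is just "12 bytes, three fixed fields", parsing needs no variable-length buffers or allocation-heavy logic that could be implemented incorrectly.

    // Hypothetical fixed-layout status message: exactly 12 bytes, no variable-length fields.
    // The whole "protocol" fits in a few lines and there is nothing to buffer or resize.
    function parseStatus(buf: ArrayBuffer) {
      if (buf.byteLength !== 12) throw new Error("malformed message");
      const view = new DataView(buf);
      return {
        deviceId: view.getUint32(0),     // bytes 0-3
        temperature: view.getFloat32(4), // bytes 4-7
        flags: view.getUint32(8),        // bytes 8-11
      };
    }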
Now that LLMs are getting better, we could probably have them go through all our code and introduce invariants, etc., to make sure it does exactly what it's supposed to; where the LLM can't find a proof for a function, a human expert can intervene.
I think hardware manufacturers could help too. More isolation, Harvard architectures etc. would be quite appealing.
juunpp
7 hours ago
3. Legal incentives. When somebody dies or fails to receive critical care at a hospital because a Windows XP machine got owned, somebody should probably face something along the lines of criminal negligence.
transpute
7 hours ago
> economic incentives
EU CRA incoming, https://ubuntu.com/blog/the-cyber-resilience-act-what-it-mea...
How will the US version differ?
jiggawatts
3 hours ago
> Making truly secure software products is incredibly hard in this day and age
I politely disagree. Writing secure software is easier than ever.
For example, there are several mainstream and full-featured web frameworks that use managed virtual machine runtimes. Node.js and ASP.NET come to mind, but there are many other examples. These are largely immune to memory safety attacks and the like that plague older languages.
Most popular languages also have a wide variety of ORMs available that prevent SQL injection by default. Don't like heavyweight ORMs? No problem! There's like a dozen micro-ORMs like Dapper that do nothing to your SQL other than block injection vulnerabilities and eliminate the boilerplate.
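As a rough sketch of the same idea in Node.js terms (using the node-postgres client; the table and column names here are invented), the parameterized form keeps user input out of the SQL text entirely:

    import { Pool } from "pg";

    const pool = new Pool(); // connection settings come from the environment

    // The user-supplied value travels separately from the SQL text,
    // so it can never be reinterpreted as SQL.
    async function findUser(email: string) {
      // Never: pool.query(`SELECT * FROM users WHERE email = '${email}'`)
      const result = await pool.query("SELECT * FROM users WHERE email = $1", [email]);
      return result.rows[0];
    }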
Similarly, web templating frameworks like Razor pages prevent script injection by default.
Cloud app platforms, containerisation, or even just virtual machines make it trivial to provide hardware enforced isolation on a per-app basis instead of relying on the much weaker process isolation within an OS with shared app hosting.
TLS 1.3 has essentially eliminated cryptographic vulnerabilities in network protocols. You no longer have to "think" about this concern in normal circumstances. What's even better is that back-end protocols have also uniformly adopted TLS 1.3. Even Microsoft Windows has started using it for the wire protocol of Microsoft SQL Server and for the SMB file sharing protocol! Most modern queues, databases, and the like use at least TLS 1.2 or the equivalent. It's now safe to have SQL[1] and SMB shares[2] exposed to the Internet. Try telling that to someone in sec-ops twenty years ago!
Most modern PaaS platforms such as cloud-native databases have very fine-grained RBAC, built-in auditing, read-only modes, and other security features on by default. Developers are spoiled with features such as SAS tokens that can be used to trivially generate signed URLs with the absolute bare minimum access required.
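For instance, a read-only, short-lived SAS URL for a single blob takes only a few lines (a sketch using the @azure/storage-blob SDK; the account, container, and blob names are placeholders):

    import {
      StorageSharedKeyCredential,
      generateBlobSASQueryParameters,
      BlobSASPermissions,
    } from "@azure/storage-blob";

    const accountName = "<storage-account>"; // placeholder
    const accountKey = "<account-key>";      // placeholder

    // Sign a URL granting read-only access to one blob for 15 minutes.
    const credential = new StorageSharedKeyCredential(accountName, accountKey);
    const sas = generateBlobSASQueryParameters(
      {
        containerName: "reports",
        blobName: "q3.pdf",
        permissions: BlobSASPermissions.parse("r"),        // read only
        expiresOn: new Date(Date.now() + 15 * 60 * 1000),  // short-lived
      },
      credential,
    ).toString();
    const url = `https://${accountName}.blob.core.windows.net/reports/q3.pdf?${sas}`;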
Speaking of PaaS platforms like Azure App Service, these have completely eliminated the OS management aspect of security. Developers never again need to think about operating system security updates or OS-level configuration. Just deploy your code and go.
Etc...
You have to be deliberately making bad choices to end up writing insecure software in 2025. Purposefully staring at the smörgåsbord of secure options and saying: "I really don't care about security, I'm doing something else instead... just because."
I know that might be controversial, but seriously, the writing has been on the wall for nearly a decade now for large swaths of the IT industry.
If you're picking C or even C++ in 2025 for anything you're almost certainly making a mistake. Rust is available now even in the Linux kernel, and I hear the Windows kernel is not far behind. Don't like Rust? Use Go. Don't like Go? Use .NET 9. Seriously! It's open-source, supports AOT compilation, works on Linux just fine, and is within spitting distance of C++ for performance!
[1] https://learn.microsoft.com/en-us/azure/azure-sql/database/n...
[2] https://learn.microsoft.com/en-us/azure/storage/files/files-...
mgh95
3 hours ago
It's important to remember that even `npgsql` can have issues (see https://github.com/npgsql/npgsql/security/advisories/GHSA-x9...).
In your world, would the developer of a piece of software exploited via a vulnerability such as this be liable?
jiggawatts
2 hours ago
The point is that secure software is easier to write, not that it's impossible to have security vulnerabilities.
Your specific example is a good one: interacting with Postgres is one of those things I said people choose despite it being riddled with security issues due to its age and choice of implementation language.
Postgres is written in C and uses a complicated and bespoke network protocol. This is the root cause of that vulnerability.
If Postgres was a modern RDBMS platform, it would use something like gRPC and there wouldn't be any need to hand-craft the code to perform binary encoding of its packet format.
The security issue stems from a choice to use a legacy protocol, which in turn stems from the use of an old system written in C.
Collectively, we need to start saying "no" to this legacy.
Meanwhile, I just saw a video clip of an auditorium full of Linux kernel developers berating the one guy trying to fix their security issues by switching to Rust saying that Rust will be a second class citizen for the foreseeable future.
mgh95
an hour ago
> Your specific example is a good one: interacting with Postgres is one of those things I said people choose despite it being riddled with security issues due to its age and choice of implementation language.
Ah, there is the issue: protocol-level bugs are language independent; even memory-safe languages have issues. One example in the .NET sphere is F*, which is used to verify programs. I recommend you look at what the concept of protocol safety actually involves.
> The security issue stems from a choice to use a legacy protocol, which in turn stems from the use of an old system written in C.
This defect in particular occurs in the C# portion of the stack, not in Postgres. It could have occurred in Rust if similar programming practices were used.
> If Postgres was a modern RDBMS platform, it would use something like gRPC and there wouldn't be any need to hand-craft the code to perform binary encoding of its packet format.
There is no guarantee a gRPC-based client implementation would be defect-free either.
This is a much harder problem than I think you think it is. Without resorting to a very different programming paradigm (which, frankly, I don't think you have exposure to, based on your comments), I'm not sure it can be accomplished without rendering most commercial software non-viable.
> Meanwhile, I just saw a video clip of an auditorium full of Linux kernel developers berating the one guy trying to fix their security issues by switching to Rust saying that Rust will be a second class citizen for the foreseeable future.
Yeah, I mean, start your own OS in Rust from scratch. There is a very real issue that RIIR isn't always an improvement. If it's a "must have right now" fix, rewriting a Linux implementation from scratch in Rust is probably better.
jiggawatts
13 minutes ago
The counter to that argument is the lived experience of anyone who developed Internet-facing apps in the 90s.
Both PHP and ASP were riddled with security landmines. Developers had to be eternally vigilant, constantly making sure they were manually escaping HTML and JS safely. This is long before automatic and robust escaping such as provided by IHtmlString or modern JSON serializers.
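For anyone who wasn't there: the kind of helper every app ended up hand-rolling looked roughly like this (a from-memory TypeScript sketch, not any particular framework's code), and forgetting to call it at a single interpolation point was an XSS hole:

    // Hand-rolled HTML escaping, pre-framework era.
    function escapeHtml(s: string): string {
      return s
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#39;");
    }

    // Every single output point had to remember it, by hand:
    const userName = "<untrusted input>";
    const html = "<p>Hello, " + escapeHtml(userName) + "</p>";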
Speaking of serialisation: I wrote several, by hand, because I had to. Believe me, XML was a welcome breath of fresh air because I no longer had to figure out security-critical quoting and escaping rules by trial and error.
I started in an era where there were export-grade cipher suites known to be compromised by the NSA and likely others.
I worked with SAML 1.0, which is one of the worst security protocols invented by man, outdone only by SAML 2.0. I was, again, forced to implement both, manually, because “those were the times”.
We are now spoiled for choice and choose poorly despite that.
AStonesThrow
2 hours ago
This is a strange and myopic understanding of "application security". You seem quite focused on vulnerabilities that could threaten underlying platforms or connected databases, but you're ignoring things like (off the top of my head) authentication, access control, SaaS integrations, supply chains, and user/admin configuration.
Sure, write secure software where nobody signs in or changes a setting, or connects to Google Drive, and you have no dependencies... Truly mythical stuff in 2024.
jiggawatts
2 hours ago
I wanted to avoid writing war & peace, but to be fair, some gaps remain at the highest levels of abstraction. For example, SPA apps are a security pit of doom because developers can easily forget that the code they're writing will be running on untrusted endpoints.
Similarly, high-level integration with third-party APIs via protocols such as OIDC have a lot of sharp edges and haven't yet settled on the equivalent of TLS 1.3 where the only possible configurations are all secure.
AStonesThrow
2 hours ago
Still too narrow. Even I assumed "application security", and that isn't the point of the comments. We're talking about the gamut: from the very infrastructure AWS or VMware is writing, mainframe OSes, and Kubernetes, to embedded and constrained systems like IoT devices, routers, security appliances, switches, medical devices, and automobiles.
You don't just tell all those devs to throw them on Rust and a VM, whether it's 2024 or December 31, 1999.