CISA boss: Makers of insecure software are the real cyber villains

111 points | posted 9 hours ago
by tsujamin

79 Comments

hn_throwaway_99

8 hours ago

At this point, I have to wonder what is even the point of missives like this. There are only two things that will solve the software quality problem:

1. Economic incentives. It's all just mindless blather unless you're actually talking about ways that software vendors will be held liable for bugs in their products. If you're not talking about that, what you're saying basically amounts to a useless "ok, pretty please".

2. Reducing the complexity of making products secure in the first place. Making truly secure software products is incredibly hard in this day and age, which is one reason why demanding software product liability is so scary. Professional structural engineers, for example, are used to taking liability for their designs and buildings. But with software security the complexity is nearly infinitely higher, and making it secure is much harder to guarantee.

The other thing that people often ignore, or at least don't want to admit, is that the "move fast and break things" ethos has been phenomenally successful from a business perspective. The US software industry grew exponentially faster than anyplace else in the world, even places like India that doubled down on things like the "Software Capability Maturity Model" in the early 00s, and honestly have little to show for it.

gwd

an hour ago

> At this point, I have to wonder what is even the point of missives like this. ...It's all just mindless blather unless you're actually talking about ways that software vendors will be held liable for bugs in their products.

I think that liability for bugs is exactly where she's going with this. I'm not an expert, but it sounds from a few things I've heard on some Lawfare podcasts (e.g., [1][2]) like the idea of software liability has been discussed for quite a while now in government policy circles. This sort of public statement may be laying the groundwork for building the political will to make it happen.

[1] https://www.youtube.com/watch?v=9UneL5-Q98E&pp=ygUQbGF3ZmFyZ...

[2] https://www.youtube.com/watch?v=zyNft-IZm_A&pp=ygUQbGF3ZmFyZ...

EDIT:

> Making truly secure software products is incredibly hard in this day and age, which is one reason why demanding software product liability is so scary.

Loads of companies are already liable for bugs in software that runs on their products: this includes cars, airplanes, and I would presume medical devices, and so on. The response has been what's called "safety certification": as an industry, you define a process which, if followed, lets you say in court "we were reasonably careful", and then you hire an evaluator to confirm that you have followed that process.

These processes don't prevent all bugs, naturally, but they certainly go a long way towards reducing them. Liability for companies who don't follow appropriate standard processes would essentially prevent cloud companies cutting security to get an edge in time-to-market or cost.

EnigmaFlare

5 hours ago

I agree with her about blaming developers, not hackers. Though not to the point of liability for all developers; maybe just for a few specialist professionals who take on that responsibility and are paid appropriately for it.

Hackers are essentially a force of nature that will always exist and always be unstoppable by law enforcement because they can be in whatever country doesn't enforce such laws. You wouldn't blame the wind for destroying a bridge - it's up to the engineer to expect the wind and make it secure against that. Viewing them this way makes it clear that the people responsible for hacks are the developers in the same way developers are responsible for non-security bugs. Blame is only useful if you can actually use it to alter people's behavior - which you can't for international hackers, or the wind.

Banging this drum could be effective if it leads to a culture change. We already see criticism of developers of software that has obvious vulnerabilities all the time on HN, so there's already some sense that developers shouldn't be extremely negligent/incompetent around security. You can't guarantee security 100% of course, but you can have a general awareness that it's wrong to make the stupid decisions that developers keep making and are generally known to be bad practice.

pmontra

3 hours ago

Developers build insecure software in part because of themselves and in part because of the decisions made by their managers, all the way up to the CEO.

So when you write "developers" we must read "software development companies".

gljiva

an hour ago

> I agree with her about blaming developers, not hackers.

They are clearly called "villains".

Wind isn't a person capable of controlling its actions; there is no intention to do harm. Hackers aren't senseless animals either. Yes, it's the developers' fault if a product isn't secure enough, but it's also not wrong to put blame on those actively exploiting that. Let's not stop blaming those who do wrong, and that kind of hacker is doing wrong, not just the developers "making stupid decisions".

Those aren't mutually exclusive.

Buttons840

5 hours ago

A third option is to empower security researchers and hope the good guys find the security holes before the bad guys.

Currently, we threaten the good guys, security researchers, with jail way too quickly. If someone presses F12 and finds a bunch of SSNs in the raw HTML of the state's web page, the Governor personally threatens to send them to jail[0]. The good security researchers tiptoe around, timidly asking permission to run pentests, while the bad guys do whatever they want.

Protect security researchers, change the laws to empower them and give them the benefit of the doubt.

I think a big reason we don't do this is that it would be a burden and an embarrassment to wealthy companies. It's literally a matter of national security, and we currently sacrifice national security for the convenience of companies.

As you say, security is hard, and putting liability for security issues on anyone is probably unreasonable. The problem is companies can have their cake and eat it too. The companies get full control over their software, and they get to pick and choose who tests the security of their systems, while at the same time having no liability. The companies are basically saying "trust me to take care of the security, also, I accept no liability"; that doesn't inspire confidence. If the liability is too much to bear, then the companies should give up control over who can test the security of their systems.

[0]: https://techcrunch.com/2021/10/15/f12-isnt-hacking-missouri-...

ezoe

3 hours ago

It suggests that insecure software should simply be called a defective product, so security audits should be called QA.

A product that doesn't spend a lot on QA profits more, unless there's a catastrophic incident.

Also, why haven't those so-called security researchers jointly criticized EDR yet? A third-party closed-source kernel driver that behaves practically the same as malware.

Thorrez

2 hours ago

Software has tons of bugs. A fraction of those bugs are security vulnerabilities.

Any type of bug can be considered a defect, and thus can be considered to make the product defective. By using the terminology "defective" instead of "vulnerable" we lose the distinction between security bugs and other bugs. I don't think we want to lose that distinction.

irundebian

an hour ago

Security-related product defect, or simply security defect.

thayne

6 hours ago

> But with software security the complexity is nearly infinitely higher, and making it secure is much harder to guarantee.

The other thing is that software, especially network-connected software (which these days is most software), has to have a much higher level of security than products in most other industries. When a structural engineer builds a bridge, they don't have to worry about a large number of criminals from all over the world trying to find weaknesses that can be exploited to cause the bridge to collapse.

But software engineers do have to worry about hackers, including state sponsored ones, constantly trying to find and exploit weaknesses in their software.

I think it's absurd to put the blame on software engineers for failing to make flawless software instead of on the bad actors that exploit bugs that would never be noticed during normal operation, or on the law enforcement that is ineffective at stopping such exploitation.

Now, I do think that there should be more liability if you don't take security seriously. But there is a delicate balance there. If a single simple mistake has the potential to cause devastating liability, that will seriously slow down the software industry and substantially increase the cost.

Veserv

4 hours ago

If a single simple mistake has the potential to cause devastating harm, then that is precisely the standard that should be demanded. If you do not want that, then you can work on something where your mistakes do not cause immense harm to others or reduce the scope of the system so such harm is not possible.

Imagine a mechanical engineer making industrial robots complaining about how unfair it is that they are held liable for single simple mistakes like pulping people. What injustice! If you want to make single simple mistakes, you can work on tasks where that is unlikely to cause harm like, I don't know, designing door handles? Nothing wrong with making door handles, the stakes are just different.

"But I want to work on problems that can kill people (or other devastating harm), but not be responsible for killing them (or other devastating harm)" is a utterly insane position that has taken hold in software and has no place in serious, critical systems. If you can not make systems fit for the level of harm they can cause, then you have no place making such systems. That is irrespective of whether anybody in the entire world can do so; systems inadequate to minimize harm to the degree necessary to provide a positive societal cost-benefit over their lifetime are unfit for usage.

thayne

37 minutes ago

Do you think that the mechanical engineer should be held liable if, say, a criminal breaks into the factory and sabotages the robot to damage property or injure workers, because the engineer didn't make the robot sufficiently resistant to sabotage attempts?

tastyfreeze

5 hours ago

Lock makers aren't liable for making pickable locks. Punish the bad actors and leave the liability to civil suits.

impossiblefork

an hour ago

You can also choose to avoid complexity.

Often a shorter computer program that is easy to understand can do exactly what a more complicated program can. We can simplify interfaces between systems and ensure that their specifications are short, readable and implementable without allocation, buffers or other things that can be implemented incorrectly. We can ensure that program code is stored separately from data.

Now that LLMs are getting better, we could probably have them go through all our code and introduce invariants, etc., to make sure it does exactly what it's supposed to, and where the LLM can't find a proof for a function, a human expert can intervene.
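
To make the invariants idea concrete, here is a small sketch (TypeScript; the readLengthPrefixedString function is made up for illustration, not taken from any real codebase). The preconditions are written down and checked before any data is touched, which is exactly the kind of property a human expert or an LLM-assisted pass could then try to prove:

  import assert from "node:assert";

  // Read a 2-byte big-endian length followed by that many bytes of UTF-8.
  // The invariants are checked up front, so a malformed length can never
  // cause an out-of-bounds read.
  function readLengthPrefixedString(buf: Uint8Array, offset: number): { value: string; next: number } {
    // Precondition: the length field itself must lie inside the buffer.
    assert(offset >= 0 && offset + 2 <= buf.length, "length field out of range");

    const len = (buf[offset] << 8) | buf[offset + 1];
    const start = offset + 2;

    // Invariant: the declared payload must fit entirely inside the buffer.
    assert(start + len <= buf.length, "declared length exceeds buffer");

    const value = new TextDecoder().decode(buf.subarray(start, start + len));
    return { value, next: start + len };
  }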

I think hardware manufacturers could help too. More isolation, Harvard architectures etc. would be quite appealing.

juunpp

7 hours ago

3. Legal incentives. When somebody dies or fails to receive critical care at a hospital because a Windows XP machine got owned, somebody should probably face something along the lines of criminal negligence.

rockskon

7 hours ago

Will Microsoft face liability if someone dies or fails to receive critical care because some infrastructure system auto-rebooted to apply an update and lost state/data relating to that patient's care?

mkoubaa

7 hours ago

That's the only thing that'll make Microsoft chill with the reboots.

jiggawatts

3 hours ago

> Making truly secure software products is incredibly hard in this day and age

I politely disagree. Writing secure software is easier than ever.

For example, there are several mainstream and full-featured web frameworks that use managed virtual machine runtimes. Node.js and ASP.NET come to mind, but there are many other examples. These are largely immune to memory safety attacks and the like that plague older languages.

Most popular languages also have a wide variety of ORMs available that prevent SQL injection by default. Don't like heavyweight ORMs? No problem! There's like a dozen micro-ORMs like Dapper that do nothing to your SQL other than block injection vulnerabilities and eliminate the boilerplate.
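
To illustrate on the Node.js side (a rough sketch using the node-postgres client; the table and function names are made up), the whole trick is that values travel as bound parameters rather than being spliced into the SQL text:

  import { Client } from "pg";

  async function findUserByEmail(client: Client, email: string) {
    // Unsafe: string concatenation is what invites SQL injection.
    // const res = await client.query(`SELECT id, email FROM users WHERE email = '${email}'`);

    // Safe: the value is sent as a bound parameter, never as SQL text.
    const res = await client.query("SELECT id, email FROM users WHERE email = $1", [email]);
    return res.rows[0];
  }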

Similarly, web templating frameworks like Razor pages prevent script injection by default.

Cloud app platforms, containerisation, or even just virtual machines make it trivial to provide hardware enforced isolation on a per-app basis instead of relying on the much weaker process isolation within an OS with shared app hosting.

TLS 1.3 has essentially eliminated cryptographic vulnerabilities in network protocols. You no longer have to "think" about this concern in normal circumstances. What's even better is that back end protocols have also uniformly adopted TLS 1.3. Even Microsoft Windows has started using it for the wire protocol of Microsoft SQL Server and for the SMB file sharing protocol! Most modern queues, databases, and the like use at least TLS 1.2 or the equivalent. It's now safe to have SQL[1] and SMB shares[2] exposed to the Internet. Try telling that to someone in sec-ops twenty years ago!
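
For instance, on a Node.js back end you can pin the minimum protocol version in one place (the file paths and handler below are just placeholders):

  import https from "node:https";
  import { readFileSync } from "node:fs";

  // Refuse to negotiate anything older than TLS 1.3.
  const server = https.createServer(
    {
      key: readFileSync("server-key.pem"),
      cert: readFileSync("server-cert.pem"),
      minVersion: "TLSv1.3",
    },
    (req, res) => res.end("ok")
  );

  server.listen(8443);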

Most modern PaaS platforms such as cloud-native databases have very fine-grained RBAC, built-in auditing, read-only modes, and other security features on by default. Developers are spoiled with features such as SAS tokens that can be used to trivially generate signed URLs with the absolute bare minimum access required.

Speaking of PaaS platforms like Azure App Service, these have completely eliminated the OS management aspect of security. Developers never again need to think about operating system security updates or OS-level configuration. Just deploy your code and go.

Etc...

You have to be deliberately making bad choices to end up writing insecure software in 2025. Purposefully staring at the smörgåsbord of secure options and saying: "I really don't care about security, I'm doing something else instead... just because."

I know that might be controversial, but seriously, the writing has been on the wall for nearly a decade now for large swaths of the IT industry.

If you're picking C or even C++ in 2025 for anything you're almost certainly making a mistake. Rust is available now even in the Linux kernel, and I hear the Windows kernel is not far behind. Don't like Rust? Use Go. Don't like Go? Use .NET 9. Seriously! It's open-source, supports AOT compilation, works on Linux just fine, and is within spitting distance of C++ for performance!

[1] https://learn.microsoft.com/en-us/azure/azure-sql/database/n...

[2] https://learn.microsoft.com/en-us/azure/storage/files/files-...

mgh95

3 hours ago

It's important to remember that even `npgsql` can have issues (see https://github.com/npgsql/npgsql/security/advisories/GHSA-x9...).

In your world, would the developer of a piece of software exploited by a vulnerability such as this be liable?

jiggawatts

2 hours ago

The point is that secure software is easier to write, not that it's impossible to have security vulnerabilities.

Your specific example is a good one: interacting with Postgres is one of those things I said people choose despite it being riddled with security issues due to its age and choice of implementation language.

Postgres is written in C and uses a complicated and bespoke network protocol. This is the root cause of that vulnerability.

If Postgres was a modern RDBMS platform, it would use something like gRPC and there wouldn't be any need to hand-craft the code to perform binary encoding of its packet format.

The security issue stems from a choice to use a legacy protocol, which in turn stems from the use of an old system written in C.

Collectively, we need to start saying "no" to this legacy.

Meanwhile, I just saw a video clip of an auditorium full of Linux kernel developers berating the one guy trying to fix their security issues by switching to Rust saying that Rust will be a second class citizen for the foreseeable future.

mgh95

an hour ago

> Your specific example is a good one: interacting with Postgres is one of those things I said people choose despite it being riddled with security issues due to its age and choice of implementation language.

Ah, there is the issue: protocol-level bugs are language independent; even memory-safe languages have issues. One example in the .NET sphere is F*, which is used to verify programs. I recommend you look at what the concepts of protocol safety actually look like.

> The security issue stems from a choice to use a legacy protocol, which in turn stems from the use of an old system written in C.

This defect in particular occurs in the C# portion of the stack, not in Postgres. This could have occurred in Rust if similar programming practices were used.

> If Postgres was a modern RDBMS platform, it would use something like gRPC and there wouldn't be any need to hand-craft the code to perform binary encoding of its packet format.

There is no guarantee a gRPC-based client implementation would be defect-free, either.

This is a much harder problem than I think you think it is. Without resorting to a very different paradigm for programming (which, frankly, I don't think you have exposure to based upon your comments) I'm not sure it can be accomplished without rendering most commercial software non-viable.

> Meanwhile, I just saw a video clip of an auditorium full of Linux kernel developers berating the one guy trying to fix their security issues by switching to Rust saying that Rust will be a second class citizen for the foreseeable future.

Yeah, I mean, start your own OS in Rust from scratch. There is a very real issue that RIIR isn't always an improvement. Rewriting a Linux implementation from scratch in Rust, if it's a "must have right now" fix, is probably better.

jiggawatts

13 minutes ago

The counter to that argument is the lived experience of anyone who developed Internet-facing apps in the 90s.

Both PHP and ASP were riddled with security landmines. Developers had to be eternally vigilant, constantly making sure they were manually escaping HTML and JS safely. This is long before automatic and robust escaping such as provided by IHtmlString or modern JSON serializers.

Speaking of serialisation: I wrote several, by hand, because I had to. Believe me, XML was a welcome breath of fresh air because I no longer had to figure out security-critical quoting and escaping rules by trial and error.

I started in an era where there were export-grade cipher suites known to be compromised by the NSA and likely others.

I worked with SAML 1.0, which is one of the worst security protocols invented by man, outdone only by SAML 2.0. I was, again, forced to implement both, manually, because "those were the times".

We are now spoiled for choice and choose poorly despite that.

AStonesThrow

2 hours ago

This is a strange and myopic understanding of "application security". You seem quite focused on vulnerabilities that could threaten underlying platforms or connected databases, but you're ignoring things like (off the top of my head) authentication, access control, SaaS integrations, supply chains, and user/admin configuration.

Sure, write secure software where nobody signs in or changes a setting, or connects to Google Drive, and you have no dependencies... Truly mythical stuff in 2024.

jiggawatts

2 hours ago

I wanted to avoid writing war & peace, but to be fair, some gaps remain at the highest levels of abstraction. For example, SPA apps are a security pit of doom because developers can easily forget that the code they're writing will be running on untrusted endpoints.

Similarly, high-level integration with third-party APIs via protocols such as OIDC have a lot of sharp edges and haven't yet settled on the equivalent of TLS 1.3 where the only possible configurations are all secure.

AStonesThrow

2 hours ago

Still too narrow. Even I assumed "application security" when this isn't the point of the comments. We're talking about the gamut, from the very infrastructure AWS or VMware is writing, mainframe OSes, Kubernetes, to embedded and constrained systems like IoT, routers, security appliances, switches, medical devices, automobiles.

You don't just tell all those devs to throw them on Rust and a VM, whether it's 2024 or December 31, 1999.

tonetegeatinst

2 hours ago

I think sometimes cyber is still seen as an unnecessary cost. Plenty of places do the bare minimum for security, and most of the time it's only after an incident that budgets suddenly get raised.

Software, hardware, policy, and employee training are all things one must focus on. You can't just start making RDX or fireworks without the proper paperwork, permits, licenses, fees, and a lawyer around to navigate everything. If you run a business without investing anything into IT and cybersecurity, you just make it easier for an incident to occur. And remember, just because your product isn't IT or cybersecurity doesn't mean that spending is losing you money; it's a cost of doing business in our regulated market. If you mishandle HIPAA, PII, or sensitive info, and the customers realize you didn't take basic steps to stop this, you open yourself up to a lawsuit. Think about it like this: investing in it every day means you're lowering that risk, however much you think is reasonable to pay, and every day it's paying for itself.

boomboomsubban

8 hours ago

That is rich coming from a former NSA Tailored Access Operations agent. She had no problems paying companies to release insecure software, including some that have signed the "secure by design" pledge.

davisr

8 hours ago

That is important context, but I still agree with what she's said in this article. It's also rich that Cisco especially -- a company known for hard-coding backdoors into their products for decades -- is "taking a pledge" to do better.

boomboomsubban

2 hours ago

I agree that software should often get more tests to improve security.

I don't think supporting companies that sign a meaningless pledge improves anything and I question her motives in trying to shame people who use companies that have not signed this pledge.

keepamovin

4 hours ago

I see it as the opposite: as ex Deputy Head of TAO, Easterly is no fool.

And there's a difference between defective software that leads to vulns exploited by crime gangs and NOBUS backdoors that the good guys use to keep you safe. Sounds bullshit, right?

That's how far the public discourse on cyber has diverged from the reality, which is part of the issue. Easterly's push for renaming cyber actors and flaws is smart. Bad quality comes from mindset, attitude. And names are important, as programmers should know! :)

I would prefer it if she had a GitHub profile tho. Always cool if you do that.

rockskon

7 hours ago

So I'm aware she worked for the NSA but this is the first I'm hearing of her working for TAO.

I had thought she worked at the NSA's IAD (defensive side) pre-merge of the offensive and defensive sides.

transpute

7 hours ago

Dan Geer on prioritizing MTTR over MTBF (2022):

  Metrics as Policy Driver: Do we steer by Mean Time Between Failure (MTBF) or Mean Time To Repair (MTTR) in Cybersecurity?

  Choosing Mean Time Between Failure (MTBF) as the core driver of cybersecurity assumes that vulnerabilities are sparse, not dense.  If they are sparse, then the treasure spent finding them is well-spent so long as we are not deploying new vulnerabilities faster than we are eliminating old ones. If they are dense, then any treasure spent finding them is more than wasted; it is disinformation. 

  Suppose we cannot answer whether vulnerabilities are sparse or dense. In that case, a Mean Time To Repair (MTTR) of zero (instant recovery) is more consistent with planning for maximal damage scenarios. The lesson under these circumstances is that the paramount security engineering design goal becomes no silent failure – not no failure but no silent failure – one cannot mitigate what one does not recognize is happening.

userbinator

8 hours ago

Why does software require so many urgent patches?

Conspiracy theory: creating new bugs they can always fix later is a good source of continued employment.

Of course there's also the counterargument that insecurity is freedom: if it weren't for some insecurity, the population would be digitally enslaved even more by companies who prioritise their own interests. Stallman's infamous "Right to Read" is a good reminder of that dystopia. This also ties in with right-to-repair.

The optimum amount of cybercrime is nonzero.

tonetegeatinst

2 hours ago

One thing is that software development is not really focused on secure software. If Microsoft can manage to call Recall an engineered product feature, where those M$ folks get paid stacks to work on it, yet can't even be bothered to secure the data at rest, and then decide it's a great idea to use cloud computing for Recall, then I can totally see "security software engineer" becoming a separate field.

Securing software seems to be hard, especially in web development. You've got to worry about regular development, then all these crazy different exploit methods like XSS, SQL injection, data sanitization, etc., and then you've got to get the site working in multiple browsers; you need to juggle all of this. And if an API or some 3rd-party tool gets compromised, how do you prevent that except with a crystal ball, a bottle of bourbon, and a clairvoyant chant?

Also, there were, IIRC, people submitting bogus CVE reports to increase the mental drain on maintainers and overwhelm the human link, which is always the weakest.

LinuxBender

an hour ago

> Conspiracy theory: creating new bugs they can always fix later is a good source of continued employment.

That is absolutely a thing [1]. There are hardware devices that can only be fixed, illegally, using 3rd-party software for this reason. The people making a workaround to the scam then get sued. The video is worth a watch in my opinion. It was created 3 years ago and the problem is still ongoing.

[1] - https://www.youtube.com/watch?v=SrDEtSlqJC4 [video][29min]

wredue

6 hours ago

It isn't malice dude. Unfortunately, it really is just that 70% of developers are utterly incompetent.

tsimionescu

3 hours ago

If having security vulnerabilities in code you wrote or reviewed is a sign of incompetence, then there has probably never been a competent developer in the history of the industry.

forgetfreeman

4 hours ago

Pretty sure I'm going to burn some karma on this one but what the hell.

To the best of my knowledge there is no evidence in over four decades of commercial software development that supports the assertion that software can be truly secure. So to my mind this suggests the primary villains are the individuals and organizations that have pushed software into increasingly sensitive areas of our lives and vital institutions.

gavinhoward

4 hours ago

As someone who thinks the industry should professionalize, I actually agree with you somewhat; we pushed too far, too fast, and into territory that software has no business being in.

irundebian

43 minutes ago

What's the definition of truly secure?

waihtis

4 hours ago

I hope she starts the crackdown with easily the biggest impact offender here, Microsoft

jdougan

7 hours ago

I dunno. "Evil Ferret" and "Scrawny Nuisance" sound pretty good in our irony filled world.

sublinear

8 hours ago

What a joke. This role deserves way better, but I understand it's only been around since 2018.

acdha

8 hours ago

This seems like a very pat dismissal of what upon reading further seems like a very reasonable critique of the industry. Vendors have been seriously exploiting their ability to decline responsibility for their product development decisions and that often has significant negatives for affected users.

Animats

7 hours ago

A previous head of cyber security was fired when he said something like that.

dhx

5 hours ago

Where does CISA/NIST recommend (for software developers) or require (for government agencies integrating software) specific software/operating system hardening controls?

* Where do they require software developers to provide and enforce seccomp-bpf rules to ensure software is sandboxed from making syscalls it doesn't need to? For example, where is the standard that says software should be restricted from using the 'ptrace' syscall on Linux if the software is not in the category of [debugging tool, reverse engineering tool, ...]?

* Where do they require government agencies using Kubernetes to use a "restricted" pod security standard? Or what configuration do they require or recommend for systemd units to sandbox services? Better yet, how much government funding is spent on sharing improved application hardening configuration upstream to open source projects that the government then relies upon (either directly or indirectly via their SaaS/PaaS suppliers)?

* Where do they provide a recommended Kconfig for compiling a Linux kernel with recommended hardening configuration applied?

* Where do they require reproducible software builds and what distributed ledger (or even central database) do they point people to for cryptographic checksums from multiple independent parties confirming they all reproduced the build exactly?

* Where do they require source code repositories being built to have 100% inspectable, explainable and reproducible data? As xz-utils showed, a software developer would need to show how test images, test archives, magic constants and other binary data in a source code repository came to be, and that they are not hiding something nefarious up the sleeve.

* Where do they require proprietary software suppliers to have source code repositories kept in escrow with another company/organisation which can reproduce software builds, making supply chain hacks harder to accomplish?

* ... (similar for SaaS, PaaS, proprietary software, Android, iOS, Windows, etc)

All that the Application Security and Development STIG Ver 6 Rel 1[1] and NIST SP 800-53 Rev 5[2] offer up are vague statements of "Application hardening should be considered", which results in approximately nothing being done.

[1] https://dl.dod.cyber.mil/wp-content/uploads/stigs/zip/U_ASD_...

[2] https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.S...

tptacek

5 hours ago

I don't know how meaningful those countermeasures really are. Like, you're basically looking at the space of Linux kernel LPEs, and you're bringing it down to maybe 1/3rd the "native" rate of vulnerabilities --- but how does that change your planning and the promises you can make to customers?

jart

5 hours ago

Government agencies have got their hands tied just getting people to use MFA and not click on phishing emails. You're decades ahead of the herd if you're thinking about coding in SECCOMP-BPF.

dhx

2 hours ago

Software sandboxing has a relatively good cost-to-benefit ratio at reducing the consequences of software bugs, which is why it's already implemented in a lot of software we all use every day. For example, it exists in Android apps, iOS apps, Flatpak apps (Linux), Firefox[1][2], Chromium browsers[3][4][5], SELinux-enabled distributions such as Fedora and Hardened Gentoo[6], OpenSSH (Linux)[7], postfix's multi-process architecture with use of ACLs, etc.

Kubernetes, Docker and systemd folk will be familiar with the idea of sandboxing for containers too, and they're able to do so using much higher level controls, e.g. turn on Kubernetes "Restricted pod security standard" for much stricter sandboxing defaults. Even if containerisation and daemon sandboxing aren't used, architects will understand the concept of sandboxing by just specifying more servers, each one ideally performing a separate job with the number of external interfaces minimised as much as possible. In all of these situations, the use of more granular controls such as detailed seccomp-bpf filters is most useful to mitigate the risks introduced by (ironically) security agent software that is typically installed alongside a database server daemon, web server daemon, etc within the same container.

Tweaking some Kubernetes, Docker or systemd config is _much_ cheaper and quicker to implement rather than waiting to rewrite software in a safer language such as Rust (a noble end goal). Even if software were rewritten in Rust, you'd _still_ want to implement some form of external sandboxing (e.g. systemd-nspawn applying seccomp-bpf filters based on some basic systemd service configuration) to mitigate supply chain attacks against Rust software which cause the software to perform functions it shouldn't be doing.
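
As a flavour of how cheap that config-level sandboxing is, here is a rough sketch of a hardened systemd unit for a hypothetical example-app service (the directive names are real systemd options; the specific profile is only illustrative and would need tuning per workload):

  # /etc/systemd/system/example-app.service (illustrative)
  [Service]
  ExecStart=/usr/bin/example-app
  NoNewPrivileges=yes
  ProtectSystem=strict
  ProtectHome=yes
  PrivateTmp=yes
  PrivateDevices=yes
  RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
  # seccomp-bpf via systemd: allow a baseline service profile, then explicitly deny ptrace
  SystemCallFilter=@system-service
  SystemCallFilter=~ptrace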

[1] Firefox Linux: https://searchfox.org/mozilla-central/source/security/sandbo...

[2] Firefox Windows: https://searchfox.org/mozilla-central/source/security/sandbo...

[3] Chromium multi-platform: https://github.com/chromium/chromium/blob/main/docs/security...

[4] Chromium Linux: https://chromium.googlesource.com/chromium/src/+/0e94f26e8/d... (seemingly Linux sandboxing is experiencing significant changes as this document or one similar to it does not appear to exist anymore)

[5] Chromium Windows: https://chromium.googlesource.com/chromium/src/+/HEAD/docs/d...

[6] https://gitweb.gentoo.org/proj/hardened-refpolicy.git/tree/p...

[7] https://github.com/openssh/openssh-portable/blob/master/sand...

notepad0x90

8 hours ago

"Technology vendors are the characters who are building problems..."

there are vendors building problems into their products? isn't that a crime?

What a silly take.

"House developers that build weak doors into houses are the real problem, not the burglars"

Security is an age-old problem, it is not a new concept. What is different with information security is the complexities and power dynamics changed drastically.

I mean, really! She should know better, the #1 attack vector for initial access is still phishing or social-engineering of some kind. Not a specific vulnerability in some software.

EnigmaFlare

5 hours ago

Search MasterLock on YouTube and you'll see that people blame the lock maker rather than the burglars. That's because the lock maker is notorious for making insecure locks.

Social engineering is also something that can be protected against by developers, at least to some extent. Yubikey type 2FA is more resistant to that than user-input TOTP codes, for example. Nothing's bulletproof but it could certainly be improved in many cases. Wasn't some company that experienced a credential stuffing attack recently sued for not requiring 2FA for its users?

We don't tolerate house builders who say "well, there's always some way water might get into the framing, so we don't need to bother installing flashings properly - a bit of caulking will do instead."

notepad0x90

3 hours ago

Even a Yubikey isn't resistant to cookie theft, and plenty of TOTP-code-theft phishing kits exist. Users literally enable third-party APK side-loading and install malicious APKs on their Android phones because of social engineering. It's not new to security. If you wear a high-vis vest and carry a clipboard, you'd be let into even government buildings; that also is social engineering.

For your house builders statement, that is not a fair analogy. Is there evidence of a trend where software devs are saying "well there's always some other way of getting hacked so let's not bother doing things properly"?

Your MasterLock analogy is on-point though, because of "threat model": the purpose of most doors and locks in the US for residential use is as a lightweight deterrent. Burglars can just break the glass window and walk past the no-fence perimeter. You can and probably should get a more secure lock, but it is only as strong as the door frame and windows, and whatever alarm system you're using. In other words, for that analogy (and for what the CISA boss is saying) to be valid, there needs to be evidence that burglars give up and go home when the lock is secure. I would even go further and ask for a proper root cause analysis. Does MasterLock know how insecure their lock is? If they are indeed making it weak because of cost, then are they really to blame? They're a business after all. Where is the regulation for proper secure locks? As a government agency, CISA shouldn't be blaming vendors; that's a cheap cop-out. They should be lobbying for regulation and laws, and then enforcing them. So, in the end, even if the CISA boss is right, ultimately she shouldn't be blaming vendors but explaining what she's been doing to pass regulations.

acdha

5 hours ago

Hint: she didn’t say that they were building them intentionally. Skimping is building a problem even if nobody had a Jira ticket saying they had to leave out bounds checking.

notepad0x90

3 hours ago

Most vulnerabilities are not there because someone was deliberately negligent either. I see no evidence of a trend where vendors are "skimping" on anything.

johndhi

6 hours ago

What she is asking for is a radical economic restructuring of a free market.

It's attitudes like hers that lead to the federal government having the worst software imaginable. It just doesn't work unless everyone agrees to do it .. so good luck

acdha

5 hours ago

Liability for a defective product is a “radical restructuring”? It’s something we have in almost every other category of business - not perfectly but pervasive enough that software is really conspicuous as an outlier.

EnigmaFlare

5 hours ago

Absolutely. OEM car parts suppliers have liability not just for their own product but for whatever consequences happen downstream, like the cost of recalls, etc. And that makes sense because liability is on the companies that are in the best position to ensure their product is correct.

Vendors of engineering software used by car makers, on the other hand, have no such liability. It's software, so it's the user's responsibility.

consumer451

6 hours ago

I used to be an IT guy at a structural and civil engineering firm. Those were real professional engineers with stamps and liability.

As long as "SWEs" do not have stamps and legal liability, they are not real (professional) engineers, IMHO.

My point is that I believe to earn the title of "Software Engineer," you should have a stamp and legal liability.

We done effed up. This breach of standards might be the great filter.

edit: Thanks to the conversation down-thread, the possibly obvious solution is a Software Professional Engineer, with a stamp. This means full-stack is actually full effing stack, not any bullshit. This means that ~1% to ~5% of SWE would be SWPE, as it is in other engineering domains. A SWPE would need to sign off on anything actually important. What is important? Well we figured that out in other engineering domains. It's time for software to catch the f up.

tonetegeatinst

2 hours ago

I think in certain ways you are right: we need those standards and stamps. But colleges also need to be very clear and cease calling it software engineering, as it implies the expertise of an engineer.

Computer engineering degrees can sorta get away with this, as they have quite a bit of engineering classes they have to take, but I'm not sure if they can call themselves engineers legally.

It's hard because I believe certain titles are protected, like doctor, lawyer, engineer, but last I checked it varied by state in the USA at least.

Sidenote: sometimes people get a computer science degree, or they have a cybersecurity degree, or even a cyber warfare degree. All three seem to be used interchangeably in the security field.

On one hand I get how formal protected titles help uphold standards and create trust, but it also enforces the grip colleges have and creates more barriers to entering a field. Some states require 3 years of experience to get a state permit for certain activities when a federal permit is already needed, so in that case it's just more paperwork.

Imagine if we started demanding that the people who actually build houses, not just the person who designed them, be engineers, so we can trust their work and their ability to do the job in the correct manner.

At what point do we sacrifice and pass legislation in the name of reliability and safety?

bruce511

6 hours ago

Ok, so overnight all programmers stop calling themselves Engineers. [1] What problem does that solve? I fix bugs all day, but I don't call myself a Software Doctor.

Frankly whether software people call themselves engineers or not matters to pretty much no-one (except actual engineers who have stamps and liabilities.)

Creating a bunch of requirements and liability won't suddenly result in more programmers getting certified and taking on liability. It'll just mean they stop using that title. I'm not sure that achieves anything useful. We'd still have the exact same software coming from the exact same people.

[1] for the record I think 'software engineer' is a daft title anyway, and I don't use it. I don't have an engineering degree. On the other hand I have a science degree and I don't go around calling myself a data scientist either.

consumer451

6 hours ago

That's fine, it just means that devs without stamps can't sign off on anything actually important. In real engineering, there is a difference between an Engineer and a Professional Engineer. The latter has a stamp.

I realize that this is the nearly the opposite of our current environment. It is a lot more regulation, a lot more professional standards... but it worked for civil and structural, and those standards were written in blood.

Maybe what I am asking for is a PE for SWE, those people have stamps, and it would be really hard to get a SW PE. Anything deemed critical (like security), by regulation, would require a SW PE stamp. [0]

Software did in-fact eat the world. Why shouldn't it have any legal/professional liability like civil and structural engineering?

[0] In this case, full stack, actually means full freaking stack = SWPE

bruce511

5 hours ago

>> That's fine, it just means that devs without stamps can't sign off on anything actually important

For some definition of important.

But let's follow your thought through. Who decides what is important? You? Me? The developer? The end-user? Some regulatory body?

Is Tetris important? OpenSSL? Notepad ++? My side project on github?

If my OSS project becomes important, and I now need to find one of your expensive engineers to sign off on it and take liability for it, do you think I can afford her? How could they be remotely sure that the code is OK? How would they begin to determine if it's safe or not?

>> Software did in-fact eat the world. Why shouldn't it have any legal/professional liability like civil and structural engineering?

Because those professions have shown us why that model doesn't scale. How many bridges, dams etc are built by engineers every year? How does that compare to the millions of software projects started every year?

In the last 30 years we've pretty much written all the code, on all the platforms, in use today. Linux, Windows, the web, phones, it's all less than 35 years old. What civil engineering projects have been completed in the same time scale? A handful of new skyscrapers?

You are basically suggesting we throw away all software ever written and rebuild the world based on individuals prepared to take on responsibility and legal liability for any bugs they create along the way?

I would suggest that not only would this be impossible, not only would it be meaningless, but it would take centuries to get to where we are right now. With just as many bugs as there are now. But, yay, we can bankrupt an engineer every time a bug is exploited.

consumer451

5 hours ago

This has all been done before in mechanical, structural, and civil engineering. People die and then regulatory and industry standards fix the problems.

We do not need to re-invent the concepts of train engine, bridge, and dam standards again.

I mean, I guess we actually do. The issue is that software has not yet killed enough people for those lessons to be learned. We are now at that cliff's edge [0], [1].

Another problem might be that software influence is on a far more hockey-stick-ish growth curve than what we dealt with in mechanical, civil, and structural engineering.

Meanwhile, our tolerance for professional and governmental standards seems to be diminishing.

[0] https://news.ycombinator.com/item?id=39918245

[1] https://news.ycombinator.com/item?id=24513820

... https://hn.algolia.com/?q=hospital+ransomware

tsimionescu

3 hours ago

No, the world's infrastructure has never been rebuilt from scratch to higher standards, not in the last few thousand years. We have always built on what already exists, grandfathered in anything that seemed ok, or was important enough even if not ok, etc.

We often live in buildings that far predate any building code, or even the state that enacted that code. We still use bits of infrastructure here and there that are much older than any modern state at all (though, to be fair, if a bridge has been around for the last thousand years, the risk it goes down tomorrow because it doesn't respect certain best practices is not exactly huge).

dhx

5 hours ago

There have long been multiple forms of professional software engineering in the aerospace, rail, medical instrumentation and national security industries, governed by standards such as ISO 26262, DO-178B/DO-178C/ED-12C, IEC 61508, EN 50128, FDA-1997-D-0029 and CC EAL/PP.

DO-178B DAL A (software whose failure would result in a plane crashing) was estimated in [1] to be writable at 3 SLOC/day for a newbie and 12 SLOC/day for a professional software engineer with experience writing code to this standard. Writing software to DO-178B standards was estimated in [1] to double project costs. DO-178C (the newer standard from 2012) is much more onerous and costly.

I pick DO-178 deliberately because the level of engineering effort required in security terms is probably closest to that applied to seL4, which is stated to have cost ~USD$500/SLOC (adjusted for inflation to 2024).[2] This is a level higher than CC EAL7 as CC EAL7 only requires formal verification of design, not the actual implementation.[3] DO-178C goes as far as requiring every software tool used to verify software automatically has been formally verified otherwise one must rely upon manual (human) verification. Naturally, formally verified systems such as flight computer software and seL4 are deliberately quite small. Scaling of costs to much larger software projects would most likely be prohibitive as complexity of a code base and fault tree (all possible ways the software could fail) would obviously not scale in a friendly way.

[1] https://web.archive.org/web/20131030061433/http://www.euroco...

[2] https://en.wikipedia.org/wiki/L4_microkernel_family#High_ass...

[3] https://cgi.cse.unsw.edu.au/~cs9242/21/lectures/10a-sel4.pdf

consumer451

4 hours ago

With much humility, may I ask, have you been exposed to the world of PEs with stamps and liability?

Do you see the need for anything like this in the software world, in the future?

dhx

4 hours ago

Professional engineers have been stamping and signing off on safety-critical software for decades, particularly in the aviation, space, rail and medical instrumentation sectors. Whilst less likely to be regulated under a "professional association" scheme, there have also been two decades of similar stamping of security-critical software under the Common Criteria EAL scheme.

The question is whether formal software engineering practices (and associated costs) expand to other sectors in the future. I think yes, but at a very slow pace, mainly due to high costs. CrowdStrike's buggy driver bricking Windows computers around the world is estimated to have caused worldwide damages of some USD$10bn+.[1] There will be cheaper ways to limit a buggy driver bricking Windows computers in the future than requiring every Windows driver be built to seL4-like (~USD$500/SLOC) standards.

If formal software engineering practices are implemented more as years go by, it'll be the simplest/easiest software touched first, with the highest consequences of failure, such as Internet nameservers.

[1] https://en.wikipedia.org/wiki/2024_CrowdStrike_incident

olliej

4 hours ago

As a header, there's clearly a liability problem in modern software, which I'll get to later.

[pre-posting comment: I've moved the semi-rant portion to the bottom, because I realized I should start with the more direct issues first, lest the ranty-ness cause you to not read the less ranty portion :D ] <snip and paste below> Now getting to the point: there is a real problem in that companies can advertise products to do a certain thing, and they can then have a license agreement that says "we're not liable if it fails to do what we said it would do", but generally despite those licenses (which to be clear are a requirement for open source to exist as a concept), the law has found companies are liable for unreasonable losses.

So the question is just how liable a company should be for a bug in their software (or hardware, I guess, depending on where you place the hardware vs firmware vs software lines) that can be exploited. Despite the "we have no liability" EULAs, plenty of companies have already ended up with penalties for bugs in their software causing a variety of awful outcomes.

But you're going a step further: you're now saying that, in addition to liability for your errors, you're also liable for other people causing failures due to those errors, either accidentally or intentionally.

I am curious, and I would be interested in the responses from your Real Engineer coworkers.

Who is responsible if a bridge collapses when a ship crashes into it? An engineer can reasonably predict that that would happen, and should design to defend against it.

Let's say an engineer designs a building, and a person is able to bomb the building and cause it to collapse with only a small amount of fertilizer. What happens to the liability if the only reason the plot succeeded was because they were able to break past a security barrier?

Because here is the thing: we are not talking about liability if a product/project/construction fails to do what it says it will do (despite EULAs, companies generally lose in court when their product causes harm even if there's a license precluding liability). The question is who is liable if a product fails to stand up to a malicious actor.

At its heart, the problem we're discussing is not about liability for "the engine failed in normal use", it's "the engine failed after someone poured sugar into the gas tank", not "the wall collapsed in the wind" it's "the wall collapsed after someone intentionally drove their truck into it", not "the plane crashed when landing due to the tires bursting", it's "the plane crashed when landing as someone had slashed the tires".

What you're saying is that a Professional Engineer signing off on a design is saying not only "this product will work as intended", but also "this product will work even under active attack outside of its design window".

That's an argument that seems to go either way: I don't recall ever hearing about a lawsuit against door manufacturers due to burglars being able to break through the doors or locks, but on the other hand Kia is being sued due to the triviality of stealing their cars - and even then the liability claims seem to be due to the cost of handling the increased theft, not the damage from any individual theft.

[begin ranty portion: this is all I think fairly reasonable, but it's much more opinionated than the now initial point and I'm loathe to just discard it]

I'm curious what/who you think is signing off and what they are signing off on.

* First off, software is more complex than more or less any physical product, the only solution is to reduce that complexity down to something manageable to the same extent that, say, a car is. How many parts total are in your car? Cool that's how many expressions or statements your program can have. And because it's not governed by direct physical laws and similar interactions, then that's still more complex than a car.

* Second: no more open source code in commercial products - you can't use it in a product, because doing so requires sign off by your magical software engineers who can understand products more complex, again, than any single physical product

* Third: no more free development in what open source remains - signing off on a review now makes you legally liable for it. You might say that's great, I say that means no one is going to maintain anything for zero compensation and infinite liability.

* Fourth: no more learning development through open source contributions; as a variation of the above, now every newbie who submits a change brings liability, so you're not accepting any changes from anyone you don't know and don't have reason to believe is competent.

* Fifth: OSS licenses are out - they all explicitly state that there's no warranty or fitness for purpose, but you've just said the engineer that signs off on them is liable for it, which necessarily means that your desire for liability trumps the license.

* Sixth: Free development tools are out - miscompilation is now a legal liability issue, so now a GCC bug opens whoever signed off on that code to liability.

The world you're describing is one in which the model of all software dev, including free dev, now comes with liability that matches designing cars or constructing buildings, both of which are much less complex and much more predictable than even modest OSS projects, and those fields all come with significant cost-based barriers to entry. The reason there are more software developers than there are civil or mechanical engineers is not because one is easier than the other; it's because you cannot learn civil or mechanical engineering (or most other engineering disciplines) as cheaply as software. The reason that software generally pays more than those positions is because the employer is taking on a bunch of the financial, legal, and insurance expenses required to do anything - the capital expenditure required to start a new civil or mechanical engineering company is vastly higher than for software, and the basic overhead for just existing is higher, which means employers don't have to compete with employees deciding to start their own companies. A bunch of this is straight-up capital costs, but the bulk of it is being able to have sufficient liability protection before even the smallest project is started. At that point, the company has insurance to cover the costs so the owners are generally fine, but the engineer that missed something - not the people forcing time crunches, shortcuts, etc - is the one that will end up in jail. You've just said the same should apply to software: essentially the same as today, company screws up and pays a fine/settlement, but now with lowered pay rates for the developers, and they can go to jail.

All because you have decided to apply a liability model that is appropriate to an industry where the things that are signed off on have entirely self-contained and mostly static behavior to a different industry where _by design_ the state changes constantly, so there is not, and cannot be, any equivalent "safety" model or system. Even in the industries that you're talking about, where analysis is overwhelmingly simpler and more predictable, products and construction fail. Yet now you're saying software development should just be held to the same standard. When developing a building, you can be paranoid in your design and make it more expensive by overbuilding - civil engineering price competition is basically just a matter of "how much can I strip down the materials of construction" without it collapsing (noting that you can exactly model the entire behavior of anything in your design). Again, the software your new standards require is the same as that required by the space and airline industries, which people already routinely berate for being "overpriced".

You've made a point that there are engineers, and professional engineers, and the latter are the only ones who sign off on things. So it sounds like you're saying patches can only be reviewed by specific employees who take on liability for those changes, so now an OSS project has to employ a Professional Engineer to review contributions, who becomes liable for any errors they miss (out of curiosity, if a bug requires two different patches working together to create a vulnerability, which reviewer is now legally liable?). Those professional engineers now have to sign off on all software, so who is signing off on Linux? You want to decode images? Hopefully you can find someone to sign off on that. Actually it's probably best to have a smaller in-house product, or a commercial product from another company who have had their own professional engineers sign off, and who have sufficient insurance. Remember that you need to remind your employees not to contribute to any OSS projects, and don't release anything as OSS, because you would be liable if it goes wrong in a different product that you don't make money from (remember, your own Professional Engineers have signed off on the safety of the code you released; if they were wrong, you're now liable if someone downstream relied on that sign-off).

This misses a few core details:

Physical engineering is *not* software engineering (which yes, as a title, "software engineer" is not accurate in many/most cases, as it does imply a degree of rigour absent in most software projects). Physical engineering does not employ the same degree of reuse and intermingling as occurs in software - the closest I can really think of is engine swaps in cars, but that's only really doable because the engine is essentially the most complex part of the car anyway (at least in an ICE), and even then the interaction with the rest of the car is extremely constrained, predictable, and can be physically minimized. For civil/structural engineering it's even more extreme: large construction (e.g. the complex cases) is not simply made by mashing together arbitrary parts of other projects - buildings fall into basically two categories: the exact same building with different dimensions and a different paint job, or entirely custom.

Physical engineering has basically an entirely predictable state from which to work. The overwhelming majority of any physical design is completely static, the things that are dynamic have a functionally finite and directly modelable set of states and behaviors, and those states and behaviors vary in predictable ways in response to constrained options. Despite this, many (if not most, though quantifying this would be hard because there's also vastly more static engineering than dynamic) failures are in the dynamic parts of these projects rather than the static portions. Software is definitionally free of more or less any static behavior; the entire reason for software is the desire for constantly changing state.

A lot of failures in physical engineering are the result of failing to correctly model dynamic behavior, and again, software is entirely the part that Real Engineers have a tendency to model incorrectly.

petre

4 hours ago

> As long as "SWEs" do not have stamps and legal liability, they are not real (professional) engineers

When and if that happens I'll move to carpentry. Good luck. Tech is already full of *it. The only thing short of making it even worse is stamps and a mafia-like org issuing the stamps and asking for contributions in return, as happens in the fields of medical care, law, bookkeeping, architecture or civil engineering.

The companies should certify the products which require certification instead and get liability insurance.

rawgabbit

8 hours ago

>"We don't have a cyber security problem – we have a software quality problem. We don't need more security products – we need more secure products."

Uhmmm. The foundation of a lot of the modern economy is built on Windows, and as the CrowdStrike fiasco has shown, Windows requires security software running at the kernel level to save it from itself. If we truly want secure products, we should shutdown all Windows machines?

lmz

8 hours ago

Crowdstrike also provided software for Linux systems. It's something you install to satisfy auditors and they would demand the same of any OS unless such functionality was built-in.

bruce511

6 hours ago

But wait... most Windows runs on Intel, we should shutdown Intel.

Of course shutting down Windows will have zero effect on security. There are plenty of exploits for Linux, and most data leaks are hosted on Linux (but have nothing to do with the OS.)

To the degree to which software is a failure at the coding level (like every SQL injection, phishing, social engineering, SIM swap, PHP attack ever), the OS is irrelevant.

In truth, almost all security has to do with people: programmers and users. The OS might be a nice scapegoat, but it's not the root of the problem. Blaming it helps to deflect attention away from ourselves though.

davisr

8 hours ago

> If we truly want secure products, we should shutdown all Windows machines?

There should be a period where you put a question mark.

idle_zealot

7 hours ago

Running the world's critical infrastructure on verified, small, readable codebases rather than a diaspora of unvetted closed-source programs sprinkled across Windows and Linux systems sounds like a good start.

Aerroon

7 hours ago

It won't matter unless the code is written specifically for the hardware and will only run on that hardware.

jdougan

7 hours ago

We would need to dump POSIX/WinXX and replace/upgrade them with something better, probably using an object-capabilities approach: WASI, Capsicum, etc.