twistedpair
3 days ago
ProTip: use PNPM, not NPM. PNPM 10.x shut down a lot of these attack vectors.
1. Does not default to running post-install scripts (must manually approve each)
2. Lets you set a minimum age for new releases before `pnpm install` will pull them in - e.g. 4 days - so publishers have time to clean up (see the sketch after this list).
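Both settings live in `pnpm-workspace.yaml`. A minimal sketch - setting names as I read the pnpm 10.x docs, so double-check against your version; `minimumReleaseAge` is in minutes:

    # pnpm-workspace.yaml (sketch)
    minimumReleaseAge: 5760      # minutes; ~4 days of cooldown before new releases install
    onlyBuiltDependencies:       # explicit allowlist of packages allowed to run install scripts
      - esbuild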
NPM is too insecure for production CLI usage.
And of course make a very limited-scope publisher key, bind it to specific packages (e.g. workflow A can only publish pkg A), and IP-bind it to your self-hosted CI/CD runners. No one should have publish keys on their local machine, and even if they got the publish keys, they couldn't publish from local. (Granted, GHA fans can use OIDC Trusted Publishers as well, but tokens done well are just as secure.)
hinkley
3 days ago
Npm is what happens when you let tech debt stack up for too many years. It took them five attempts to get lock files to actually behave the way lock files are supposed to behave (lockfile version 3, plus at least two unversioned attempts before that).
It’s clear from the structure and commit history they’ve been working their asses off to make it better, but when you’re standing at the bottom of a well of suck it takes that much work just to see daylight.
The last time I chimed in on this I hypothesized that there must have been a change in management on the npm team, but someone countered that several of the maintainers were the originals. So I'm not sure what sort of Come to Jesus they had to realize their giant pile of sins needed some redemption, but they're trying. There's just too much stupid there to make it easy.
I'm pretty sure it still cannot detect a premature EOF during file transfer. It keeps the incomplete file in the cache, where the SHA hash check fails until you wipe your entire cache. Which means people with shit internet connections and large projects basically waste hours several times a week on updates that fail.
ornornor
3 days ago
> cannot detect a premature EOF during file transfer. It keeps the incomplete file in the cache, where the SHA hash check fails until you wipe your entire cache.
I wonder what circumstances led to saying “this is okay we’ll ship it like that”
hinkley
3 days ago
I think we can blame the IO streaming API in NodeJS for this. It's a callback API and you just know you got another chunk. My guess is chunked mode plus not checking whether the bytes expected and the bytes received matched.
Not to diminish the facepalm but the blame can be at least partially shared.
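A minimal sketch of that guess (hypothetical download code, not npm's actual implementation; `url` and `cachePath` are made up): with Node's event-based streams, a truncated body can look exactly like a normal `end` unless you compare bytes received against Content-Length.

    const http = require('http');
    const fs = require('fs');

    const url = 'http://registry.example/pkg.tgz'; // hypothetical tarball URL
    const cachePath = '/tmp/pkg.tgz';              // hypothetical cache entry

    http.get(url, (res) => {
      const expected = Number(res.headers['content-length']); // NaN in chunked mode
      let received = 0;
      res.on('data', (chunk) => { received += chunk.length; });
      res.pipe(fs.createWriteStream(cachePath));
      res.on('end', () => {
        // Without this comparison, a truncated file lands in the cache and
        // every later install fails its hash check until the cache is wiped.
        if (!Number.isNaN(expected) && received !== expected) {
          fs.unlink(cachePath, () => {}); // discard the partial download
        }
      });
    });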
Our UI lead was getting the worst of this during Covid. I set up an nginx forward proxy mostly for him to tone this down a notch (fixed a separate issue but helped here a bit as well) so he could get work done on his shitty ISP.
cyanydeez
3 days ago
Ignorance. Most programmers in open source operate on the "works on my machine" principle.
xp84
2 days ago
True, and things that manifest only on old/slow hardware or on bad internet are the worst kind for this, since 100% of developers who have any say in the matter would never accept such circumstances at all, so they’re always approaching every issue with multi-gigabit speeds, zero latency, and this year’s $3,000 Mac. “What do you mean the page loads slowly?”
cxr
3 days ago
> I’m not sure what sort of Come to Jesus they had to realize their giant pile of sins needed some redemption but they’re trying.
If they were trying, they'd stop doubling down on sunk costs and instead publicly concede that lock files, and how npm-the-tool uses them to attempt to ensure the integrity of packages fetched from npm-the-registry, are just a poor substitute for the content-based addressing that ye olde DVCS would otherwise be doing when told to fetch designated shared objects from the code repo. That concession should come with a formal deprecation of npm-install for use in build pipelines, i.e. in all the associated user guides, documentation, and everything else pushing it as best practice.
npm-install has exactly one good use case: probing the registry to look up a package by name, to be fetched by the author (not by collaborators or people downstream who are repackaging, e.g. for a particular distribution) at the time of development (i.e. neither run time nor build time, but the moment that author introduces the dependency into their codebase). Every other aspect of version control should be left to the underlying SCM/VCS.
calvinmorrison
3 days ago
but this stuff is basically solved. We have enough history with languages and distribution of packages, repositories, Linux, public trust, signing, maintainers, etc.
One key shift is there is no packager anymore. It's just: trust the publisher.
Any language as big as Node should hire a handful of old Unix wizards to teach them the way, the truth, and the life.
smackeyacky
3 days ago
Likely they wouldn’t listen. Modern languages and environments seem intent on reinventing bad solutions to solved problems. I get it if it’s a bunch of kids that have never seen anything better but there is no excuse these days not to have at least a passing knowledge of older systems if you’ve been around a while.
calvinmorrison
3 days ago
there's certainly a piece of that. Also, most seasoned people are not very interested in new languages and environments, and most languages are not "spec built" by experts - like Rob Pike building Go, who explicitly set out to solve a lot of his problems - but are born and grow more organically.
zahlman
3 days ago
> One key shift is there is no packager anymore. It's just: trust the publisher.
Repositories like NPM's, and PyPI, contain many more packages than any Linux distro. And the Linux Foundation actually gets funded.
calvinmorrison
3 days ago
NPM isn't a package repository; it's more akin to a code repository.
There's a reason why most distributions don't ship upstream directly (except, basically, Arch).
tetha
3 days ago
> And of course make a very limited-scope publisher key, bind it to specific packages (e.g. workflow A can only publish pkg A), and IP-bind it to your self-hosted CI/CD runners. No one should have publish keys on their local machine, and even if they got the publish keys, they couldn't publish from local.
I've by now grown to like HashiCorp Vault's/OpenBao's dynamic secret management for this. It's a bit complicated to understand and get working at first, but it's powerful:
You mirror/model the lifetime of a secret user as a lease. For example, a Nomad allocation/Kubernetes pod gets a lease when it is started, and the lease gets revoked immediately after it is stopped. We're kinda discussing if we could have this in CI as well - create a lease for a build, destroy the lease once the build is over. Leases also support TTLs, TTL refreshes, and enforced max TTLs.
With that in place, you can tie dynamically issued secrets to this lease, and the secrets are revoked as soon as the lease is terminated or expires. This has confused developers with questionable practices a lot: you can print database credentials in your production job and paste them into a local database client, but as soon as you deploy a new version, those secrets are deleted. It also gives you automated, forced database credential rotation for free through the max_ttl, including a full audit log of all credential accesses and refreshes.
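Concretely, with the database secrets engine the CLI flow looks roughly like this (assuming an engine already mounted at `database/` and a role named `ci-build`):

    vault read database/creds/ci-build   # issues fresh DB creds tied to a new lease_id
    vault lease renew <lease_id>         # extend the TTL, up to the enforced max_ttl
    vault lease revoke <lease_id>        # the creds are deleted the moment this runs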
I know that would be a lot of infrastructure for a FOSS project by Bob from Novi Zagreb. But with some plugin work, for a company, it should be possible to hide long-term access credentials in Vault and supply CI builds only with enforced, short-lived tokens.
As much as I hate running after these attacks, they are spurring interesting security discussions at work, which can create actual security -- not just checkbox-theatre.
LelouBil
3 days ago
I would love to use this (for homelab stuff currently), but I'd want a way to have Vault/OpenBao be fully configuration-as-code and version controlled, with only the actual secret values (those that would not be dynamic) in persistent storage.
mikestorrent
3 days ago
Definitely curious whether you've come up with a way to give each build a short-lived Vault AppRole in any CI system.
acedTrex
3 days ago
We do that in our GitHub runners with OIDC integration. Works well.
benoau
3 days ago
Or just 'npm ci', so you install exactly what's in your package-lock.json instead of the latest version bumps of those packages. This "automatic updating" is a big factor in why these attacks work in the first place. Make package updating deliberate instead of instant or on an arbitrary lag.
dachris
2 days ago
You'd be surprised how many people run 'npm i' in their CI. I've seen this on multiple occasions.
'npm ci' is some mitigation, but it doesn't protect against getting hit when running 'npm i(nstall)' during development.
zahlman
3 days ago
For Python ecosystem people:
> Does not default to running post-install scripts (must manually approve each)
To get equivalent protection, use `--only-binary=:all:` when running `pip install` (or `uv pip install`). This prevents installing source distributions entirely, using exclusively pre-built wheels. (Note that this may limit version availability or even make your installation impossible.) Python source packages are built by following instructions provided with the package (specifying a build system which may then in turn be configured in an idiosyncratic way; the default Setuptools is configured using a Python script). As such, they effectively run a post-install script.
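For example:

    pip install --only-binary=:all: -r requirements.txt
    # or equivalently via uv's pip interface:
    uv pip install --only-binary :all: -r requirements.txt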
(For PAPER, long-term I intend to design a radically different UI, where you can choose a named "source" for each package or use the default; and sources are described in config files that explain the entire strategy for whether to use source packages, which indexes to check etc.)
> Lets you set a minimum age for new releases before `pnpm install` will pull them in - e.g. 4 days - so publishers have time to clean up
Pip does not support this; with uv, use `--exclude-newer`. This appears to require a timestamp, so if you always want things up to X days old you'll have to recalculate it.
acdha
3 days ago
> Pip does not support this; with uv, use `--exclude-newer`. This appears to require a timestamp, so if you always want things up to X days old you'll have to recalculate it.
I handle this by having my shell init run:

    # GNU date; emits an RFC 3339 timestamp (UTC), which uv accepts
    export UV_EXCLUDE_NEWER=$(date -u -Iseconds -d "14 days ago")
That's easy to override if you need to but otherwise seamless.
zahlman
3 days ago
FWIW, I'd like it if these tools had an option to prefer the oldest version satisfying the given constraints (rather than the newest, as it is now - probably still a better default).
collinmanderson
2 days ago
> prefer the oldest version satisfying the given constraints
The problem is there's no metadata for which versions fix security bugs, and therefore which previous versions are now insecure.
jjice
3 days ago
There were some recent posts I saw about "dependency cooldowns", which seem to be what you're referring to in item 2. The idea really resonated with me.
That said, I hard pin all our dependencies and get dependabot alerts and then look into updates manually. Not sure if I'm a rube or if that's good practice.
jaapz
3 days ago
That's good practice. God knows how many times I've been bitten by npm packages breaking on minor or even patch version changes, even when proudly proclaiming to use semver
madeofpalk
3 days ago
You shouldn't have any keys anywhere at all. Use OIDC https://docs.npmjs.com/trusted-publishers
Unfortunately you need to `npm login` with username and password in order to publish the very first version of a package to set up OIDC.
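For reference, the workflow side is small once Trusted Publishing is configured on npmjs.com - a sketch (trigger and action versions are illustrative; needs a recent npm CLI):

    # .github/workflows/publish.yml (sketch)
    on:
      release:
        types: [published]
    permissions:
      id-token: write            # lets the job mint an OIDC token for npm
    jobs:
      publish:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              registry-url: https://registry.npmjs.org
          - run: npm publish     # no NODE_AUTH_TOKEN secret required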
twistedpair
3 days ago
I'm struggling to understand why Trusted Publishers is any better.
Let's say you have a limited-life, package-scoped, IP-CIDR-bound publishing key, used on a private GH workflow runner. That key only exists in a trusted cloud's secret store (i.e. no one can access it from their laptop).
Now let's say you're a "trusted" publisher, running on a specific GitHub workflow and GitHub org that has been configured with OIDC on the NPM side. By virtue of simply existing in that workflow, you're now an NPM publisher (you can run any publish commands you like). No need to have a secret passed into your workflow scope.
If someone is taking over GitHub CI/CD workflows by running `npm i` at the start of their workflow, how does the "Trusted Publisher" find themselves any more secure than the secure, very limited scope token?
c-hendricks
3 days ago
A whole single supported CI partner outside their own corporate family. They really planned this out well.
blktiger
3 days ago
Both NPM and Yarn have a way to disable install scripts which everyone should do if at all possible.
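For npm it's a one-liner in .npmrc; Yarn Berry has an equivalent in .yarnrc.yml:

    # .npmrc
    ignore-scripts=true

    # .yarnrc.yml (Yarn 2+)
    enableScripts: false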
twistedpair
3 days ago
Good point, but until many popular packages stop requiring install scripts to operate, you'll still need to allowlist some of them. That allowlisting is built into the PNPM tooling, luckily :)
dschofie
3 days ago
Reading through the post, it looks like this infects via preinstall?
> The new versions of these packages published to the NPM registry falsely purported to introduce the Bun runtime, adding the script preinstall: node setup_bun.js along with an obfuscated bun_environment.js file.
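i.e. the infected versions carry a package.json stanza along these lines, so the payload runs before any of the package's code is even imported:

    {
      "scripts": {
        "preinstall": "node setup_bun.js"
      }
    }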
twistedpair
3 days ago
You're right. PNPM disables all install scripts by default. I was just noting one example.
lrvick
2 days ago
Pnpm cannot be built without an existing pnpm binary, meaning there is no way to bootstrap it from audited source code. A perfect trusting-trust attack situation.
Full-source-bootstrapped NPM with manually reviewed dependencies is the only reasonably secure way to use NodeJS right now.
tripplyons
3 days ago
Is there a way to set a minimum release age globally for my pnpm installation? I was only able to find a way to set it for each individual project.
Operyl
3 days ago
Did you try putting it in your global config file?
Windows: ~/AppData/Local/pnpm/config/rc
macOS: ~/Library/Preferences/pnpm/rc
Linux: ~/.config/pnpm/rc
twistedpair
3 days ago
I think it's a `pnpm-workspace.yaml` setting for now, but PNPM has been pretty aggressive about expanding this feature set [1].
latchkey
3 days ago
ProTip: `use bun`
Funny that this is getting downvoted, but it installs dependencies super fast, and has the same approval feature as pnpm, all in a simple binary.
benjifri
3 days ago
This is like saying "use MacOS and you won't get viruses" in the 2000s
koito17
3 days ago
Bun disables post-install scripts by default, and one can explicitly opt in to trusting dependencies in the package.json file. One can also delay installing updated dependencies through keys like `minimumReleaseAge`. Bun is a drop-in replacement for the npm CLI and, unlike pnpm, has goals beyond performance and storage efficiency.
Not sure what your analogy is trying to imply.
salomonk_mur
3 days ago
Which was for the most part true.
latchkey
3 days ago
The suggestion was to use pnpm, and I'm suggesting something I prefer more than pnpm.
hiccuphippo
3 days ago
Except trying it out takes a minute and costs nothing.
hexasquid
2 days ago
"Rewrite it in rust"
halflife
3 days ago
What does it do with packages that download binaries for a specific architecture in the post-install script?
madeofpalk
3 days ago
You don't need post-install scripts for this. Use optionalDependencies instead https://github.com/nrwl/nx/blob/master/packages/nx/package.j...
Each of those deps contains a constraint installing only for the relevant platform.
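Trimmed down, the pattern looks like this (package names hypothetical): the parent lists one package per platform under optionalDependencies, and each platform package restricts itself via the standard os/cpu fields, so npm installs only the matching one.

    // parent package.json (trimmed sketch)
    {
      "optionalDependencies": {
        "mytool-linux-x64": "1.0.0",
        "mytool-darwin-arm64": "1.0.0"
      }
    }

    // package.json of mytool-linux-x64
    {
      "name": "mytool-linux-x64",
      "os": ["linux"],
      "cpu": ["x64"]
    }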
mook
3 days ago
As far as I can understand from the documentation, that doesn't actually specify in that config that one of them is required, does it? That is, if they _all_ fail to install as far as the system is concerned there's nothing wrong? There will be runtime errors of course, but that's sort of disappointing…
halflife
3 days ago
That's cool. Now I wish all libraries that need binaries would opt to use that instead of post-install scripts.
zahlman
3 days ago
Do keep in mind that the binaries are still binaries. Even if your installation process doesn't run any untrusted code from the package, you can't audit the binaries like you might the .js files prior to first run.
edoceo
3 days ago
As stated, you manually approve them.
poetril
3 days ago
How does bun compare? Does it have similar features as well?
flanbiscuit
3 days ago
Yes, bun does both of the things mentioned in the parent comment:
> Unlike other npm clients, Bun does not execute arbitrary lifecycle scripts like postinstall for installed dependencies. Executing arbitrary scripts represents a potential security risk.
https://bun.com/docs/pm/cli/install#lifecycle-scripts
> To protect against supply chain attacks where malicious packages are quickly published, you can configure a minimum age requirement for npm packages. Package versions published more recently than the specified threshold (in seconds) will be filtered out during installation.
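Per those docs, the cooldown is a bunfig.toml setting (a sketch; 345600 seconds is ~4 days):

    # bunfig.toml (sketch)
    [install]
    minimumReleaseAge = 345600

Lifecycle scripts can then be re-enabled per package via `trustedDependencies` in package.json.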
dzonga
3 days ago
pnpm on the backend; on the frontend, use nobuild.
embedding-shape
3 days ago
> NPM is too insecure for production CLI usage.
NPM was never "too insecure" and remains not "too insecure" today.
This is not an issue with npm, JavaScript, NodeJS, the NodeJS foundation or anything else, but with the consumers of these libraries pulling in code from 3rd parties and pushing it to production environments without a single review. How this still flies today, as it has since the inception of public "easy to publish" repositories, remains a mystery to me.
If you're maintaining a platform like Zapier, which gets hacked because none of your software engineers actually review the code that ends up in your production environment (yes, that includes 3rd party dependencies, no matter where they come from), I'm not sure you even have any business writing software.
The internet has been a hostile place for so long that most of us "web masters" are used to it by now. Yet it seems developers of all ages fall into the "what's the worst that can happen?" trap when pulling in either one dependency with 10K LoC without any review, or thousands of dependencies with 10 lines each.
Until you fix your processes and workflows, this will continue to happen, even if you use pnpm. You NEED to be responsible for the code you ship, regardless of who wrote it.
jkrems
3 days ago
They didn't deploy the code. That's not how this exploit works. They _downloaded_ the code to their machine. And npm's behavior is to implicitly run arbitrary code as part of the download - including, in this case, a script to harvest credentials and propagate the worm. That part has everything to do with npm behavior and nothing to do with how much anybody reviewed 3P deps. For all we know they downloaded the new version of the affected package to review it!
Ajedi32
3 days ago
If people stop running install scripts, isn't Shai-Hulud 3: Electric Boogaloo just going to be designed to run its obfuscated malware at runtime rather than install time? Who manually reviews new versions of their project dependencies after installing them but before running them?
GP is correct. This is a workflow issue. Without a review process for dependencies, literally every package manager I know of is vulnerable to this. (Yes, even Maven.)
zahlman
3 days ago
> If people stop running install scripts, isn't Shai-Hulud 3: Electric Boogaloo just going to be designed to run its obfuscated malware at runtime rather than install time?
Many such forms of malware have already been published and detected.
> Who manually reviews new versions of their project dependencies after installing them but before running them?
One person putting in this effort can protect everyone thereafter.
The PyPI website has a "Report project as malware" button on each project page for this purpose.
But yes, this is the world we live in. Without this particular form of insecurity, there is no "ecosystem" at all.
locallost
3 days ago
Thank you.
bpavuk
3 days ago
wait, I short-circuited here. wasn't the very concept of "libraries" created to *not* have to think about what exactly the code does?
imagine reviewing every React update. yes, some do that (Obsidian claims to review every dependency, whether new or an update), but that's due to flaws of the ecosystem.
take a look at Maven Central. it's harder to get into, but that's the price of security. you have to verify the namespace, so no one can publish under e.g. the `io.gitlab.bpavuk.` namespace unless they have access to the `bpavuk` GitLab group or user, or under `org.jetbrains.` unless they prove ownership of the jetbrains.com domain.
Go is also nice in that regard - you depend on Git repositories directly, so an attacker would have to hijack the Git repo permissions and tamper with the source code there.
tadfisher
3 days ago
> Go is also nice in that regard - you depend on Git repositories directly, so an attacker would have to hijack the Git repo permissions and tamper with the source code there.
That in itself is scary because Git refs are mutable. Even with compromised credentials, no one can replace artifacts already deployed to Maven Central, because they simply don't allow it. There is nothing stopping someone from replacing a Git tag with one that points to compromised code.
The surface area is smaller because Go does locking via go.sum, but I could certainly see a tired developer regenerating it over the most strenuous of on-screen objections from the go CLI.
fpoling
3 days ago
Go also includes a database of known package hashes (the checksum database), so altering a git tag to point to another commit will be detected.
JodieBenitez
3 days ago
I don't know if it's common or even good practice, but I like to run `go mod vendor` and commit the result to my repo.
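That workflow is just the following; once a vendor/ directory exists, modern Go builds use it automatically:

    go mod vendor                        # copy all module dependencies into ./vendor
    git add vendor
    git commit -m "vendor dependencies"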
zahlman
3 days ago
> wasn't the very concept of "libraries" created to not have to think about what exactly the code does?
If you care about security, you only have to care once, during the audit. And you can find a pretty high percentage of malware in practice without actually having a detailed understanding of the non-malicious workings of the code.
Libraries allow you to not think about what the code does at development time, which in general is much more significant than audit time. Also, importantly, they allow you not to have to design and write that part of the code.
Ajedi32
3 days ago
Are GitHub creds any harder for malware to steal than NPM creds? I don't see how that helps at all.
jama211
3 days ago
“Personally, I never wear a seatbelt because all drivers on the road should just follow the road rules instead and drive carefully.”
I don’t control all the drivers on the road, and a company can’t magically turn all employees into perfect developers. Get off your high horse and accept practical solutions.
embedding-shape
3 days ago
> and a company can’t magically turn all employees into perfect developers
Sure, agree, that's why professionals have processes and workflows, everyone working together to build the greatest stuff you can.
But when not a single person in the entire company reviews the code that gets deployed and run by users, you have to start asking what kind of culture the company has; it's borderline irresponsible, I'd say.
jama211
2 days ago
Or they have different priority structures where this isn't the developers' job. Also, things get missed sometimes no matter how good your processes are. But improving those processes could involve something like preferring pnpm over npm.