Axios compromised on NPM – Malicious versions drop remote access trojan

927 points | posted 9 hours ago
by mtud

319 Comments

postalcoder

7 hours ago

PSA: npm/bun/pnpm/uv now all support setting a minimum release age for packages.

I also have `ignore-scripts=true` in my ~/.npmrc. Based on the analysis, that alone would have mitigated the vulnerability. bun and pnpm do not execute lifecycle scripts by default.

Here's how to set global configs for a minimum release age of 7 days:

  ~/.config/uv/uv.toml
  exclude-newer = "7 days"

  ~/.npmrc
  min-release-age=7 # days
  ignore-scripts=true
  
  ~/Library/Preferences/pnpm/rc
  minimum-release-age=10080 # minutes
  
  ~/.bunfig.toml
  [install]
  minimumReleaseAge = 604800 # seconds
(Side note, it's wild that npm, bun, and pnpm have all decided to use different time units for this configuration.)

If you're developing with LLM agents, you should also update your AGENTS.md/CLAUDE.md file with some guidance on how to handle failures stemming from this config as they will cause the agent to unproductively spin its wheels.
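To illustrate the parent's suggestion, a hypothetical AGENTS.md entry might look something like this (wording and threshold are just examples):

```markdown
## Dependency install failures

This machine enforces a minimum release age for packages (7 days).
If `npm install` (or bun/pnpm/uv) rejects a version as too new:

- Do NOT remove or edit ~/.npmrc, bunfig.toml, or the pnpm/uv config.
- Pin the newest version that is at least 7 days old instead.
- If the task truly needs the newest version, stop and ask the user.
```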

friendzis

6 hours ago

> (Side note, it's wild that npm, bun, and pnpm have all decided to use different time units for this configuration.)

First day with javascript?

notpushkin

6 hours ago

You mean first 86,400 seconds?

x0x0

5 hours ago

You have to admire the person who designed the flexibility to have 87239 seconds not be old enough, but 87240 to be fine.

zelphirkalt

2 hours ago

I actually think it is not too bad a design, because seconds are the SI base unit for time. Putting something like "x days" requires additional parsing steps and therefore complexity in the implementation. Either knowing or calculating how many seconds there are in a day can be expected of anyone touching a project or configuration at this level of detail.

wongarsu

an hour ago

Seconds are also unambiguous. Depending on your chosen definition, "X days" may or may not be influenced by leap seconds and DST changes.

I doubt anyone cares about an hour more or less in this context. But if you want multiple implementations to agree, talking about seconds on a monotonic timer is a lot simpler.

raverbashing

4 hours ago

This is the difference between thinking about the user experience and thinking just about the technical aspect

gib444

5 hours ago

OP should be glad a new time unit wasn't invented

friendzis

4 hours ago

Workdays! Think about it, if you set the delay in regular days/seconds the updated dependency can get pulled in on a weekend with only someone maybe on-call.

(Hope your timezones and tzdata correctly identifies Easter bank holiday as non-workdays)

berkes

3 hours ago

> Workdays!

This is javascript, not Java.

In JavaScript something entirely new would be invented, to solve a problem that has long been solved and is documented in 20+ year old books on common design patterns. So we can all copy-paste `{ or: [{ days: 42, months: 2, hours: "DEFAULT", minutes: "IGNORE", seconds: null, timezone: "defer-by-ip" }, { timestamp: 17749453211*1000, unit: "ms"}]` without any clue as to what we are defining.

In Java, a 6000LoC+ ecosystem of classes, abstractions, dependency-injectables and probably a new DSL would be invented so we can all say "over 4 Malaysian workdays"

whatisthiseven

3 hours ago

But you know that Java solution will continue working even after we no longer use the Gregorian Calendar, the collapse and annexation of Malaysia to some foreign power, and then us finally switching to a 4-day work week; so it'd be worth it.

nesarkvechnep

2 hours ago

It probably won’t work correctly from the get go. But it can be debugged everywhere so that’s good.

berkes

an hour ago

... and since it was architectured to allow runtime injection-patching of events before they hit the enterprise-service-bus, everyone using this library must first set fourteen ENV vars in their profile, and provide a /etc/java/springtime/enterprise-workday-handling/parse-event-mismatch.jar.patch. Which should fix the bug for you.

You can find the patch files for your OSs by registering at Oracle with a J3EE8.4-PatchLibID (note, the older J3EE16-PatchLib-ids aren't compatible), attainable from your regional Oracle account-manager.

yohannesk

3 hours ago

And we also need localization. Each country can have their own holidays

wongarsu

an hour ago

Don't forget about regional holidays, which might follow arbitrary borders that don't match any of the official subdivisions of the country. Or may even depend on the chosen faith of the worker

rolandog

3 hours ago

And we need groups of locales for teams that are split across multiple locations; e.g.:

  new_date = add_workdays(
    workdays=1.5,
    start=datetime.now(),
    regions=["es", "mx", "nl", "us"],
  )

mewpmewp2

3 hours ago

Hopefully "es" will have Siesta support too.

zdc1

3 hours ago

Might be better to calculate them separately for each locale and then tie-break with your own approach (min/max/avg/median/etc.)

ghurtado

3 hours ago

If we're taking suggestions, I'd like to propose "parsec" (not to be confused with the unit of distance of the same name)

That way Han Solo can make sense in the infamous quote.

EDIT: even Gemini gets this wrong:

> In Star Wars, a parsec is a unit of distance, not time, representing approximately 3.26 light-years

cyrusmg

4 hours ago

N multiplications of dozen-second

vasco

2 hours ago

To me it sounds safer to have different big infra providers with different delays, otherwise you still hit everyone at the same time when something does inevitably go undetected.

And the chances of staying undetected are higher if nobody is installing until the delay time elapses.

It's the same as not scheduling all cronjobs to midnight.

superjan

6 hours ago

About the use of different units: next time you choose a property name in a config file, include the unit in the name. So not “timeout” but “timeoutMinutes”.

layer8

5 hours ago

Or require the value to specify a unit.

mort96

4 hours ago

At that point, you're making all your configuration fields strings and adding another parsing step after the json/toml/yaml parser is done with it. That's not ideal either; either you write a bunch of parsing code (not terribly difficult but not something I wanna do when I can just not), or you use some time library to parse a duration string, in which case the programming language and time library you happen to use suddenly becomes part of your config file specification and you have to exactly re-implement your old time handling library's duration parser if you ever want to switch to a new one or re-implement the tool in another language.

I don't think there are great solutions here. Arguably, units should be supported by the config file format, but existing config file formats don't do that.
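To make the parent's point concrete, the in-house parsing step is short but then becomes part of your config format forever; a minimal sketch (hypothetical, not any tool's actual implementation):

```javascript
// Hypothetical duration parser a tool might ship to accept
// human-friendly config values like "7d" or "90s"; returns seconds.
function parseDurationSeconds(input) {
  const match = /^(\d+)\s*(s|m|h|d)$/.exec(input.trim());
  if (!match) {
    throw new Error(`invalid duration: ${input}`);
  }
  const multipliers = { s: 1, m: 60, h: 3600, d: 86400 };
  return Number(match[1]) * multipliers[match[2]];
}
```

The catch is exactly the lock-in described above: any reimplementation in another language now has to reproduce this regex and unit table bit-for-bit.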

layer8

8 minutes ago

Another parsing step is the common case. Few parameters represent untyped strings where all characters and values are valid. For numbers as well, you often have a limited admissible range that you have to validate for. In the present case, you wouldn’t allow negative numbers, and maybe wouldn’t allow fractional numbers. Checking for a valid number isn’t inherently different from checking for a regex match.

notpushkin

4 hours ago

TOML has a datetime type (both with or without tz), as well as plain date and plain time:

  start_at = 2026-05-27T07:32:00Z  # RFC 3339
  start_at = 2026-05-27 07:32:00Z  # readable

We should extend it with durations:

  timeout = PT15S  # ISO 8601 duration

And like for datetimes, we should have a readable variant:

  timeout = 15s   # can omit "P" and "T" if not ambiguous, can use lowercase specifiers

Edit: discussed in detail here: https://github.com/toml-lang/toml/issues/514

dxdm

an hour ago

> adding another parsing step after the json/toml/yaml parser is done with it. That's not ideal either

I'd argue that it is ideal, in the sense that it's the sweet spot for a general config file format to limit itself to simple, widely reusable building blocks. Supporting more advanced types can get in the way of this.

Programs need their own validation and/or parsing anyway, since correctness depends on program-specific semantics and usually only a subset of the values of a more simply expressed type is valid. That same logic applies across inputs: config may come from files, CLI args, legacy formats, or databases, often in different shapes. A single normalization and validation path simplifies this.

General formats must also work across many languages with different type systems. More complex types introduce more possible representations and therefore trade-offs. Even if a file parser implements them correctly (and consistently with other such parsers), it must choose an internal form that may not match what a program needs, forcing extra, less standard transformation and adding complexity on both sides for little gain.

Because acceptable values are defined by the program, not the file, a general format cannot fully specify them and shouldn’t try. Its role is to be a medium and provide simple, human-usable (for textual formats), widely supported types, avoid forcing unnecessary choices, and get out of the way.

All in all, I think it can be more appropriate for a program to pick a parsing library for a more complex type, than to add one consistently to all parsers of a given file format.

weird-eye-issue

6 hours ago

timeoutMs is shorter ;)

You guys can't appreciate a bad joke

cozzyd

5 hours ago

Megaseconds are about the right timescale anyway

kace91

2 hours ago

What megaseconds? They clearly meant the Microsoft-defined timeout.

withinboredom

2 hours ago

timoutμs is even better. People will learn how to type great symbols.

sayamqazi

5 hours ago

not timeout at all is even shorter.

powerpixel

2 hours ago

Is there a way to do that per repo for these tools? We all know how user-side configuration works for users (they usually clean it whenever it goes against what they want to do instead of wondering why it blocks their changes :))

ZeWaka

2 hours ago

At least with npm, you can have a .npmrc per-repo
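For example, a project-level `.npmrc` checked into the repo root, using the same keys as the global config quoted upthread:

```ini
# ./.npmrc (project root)
min-release-age=7
ignore-scripts=true
```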

cowl

2 hours ago

Setting min release age to 7 days for patch releases exposes you to the other side of the coin: you have an open 7-day window on zero-day exploits that might be fixed in a security release.

ksnssjsjsj

2 hours ago

Out of the frying pan and into the fryer.....

sspiff

2 hours ago

It's wild that none of these are set by default.

I know 90% of people I've worked with will never know these options exist.

zelphirkalt

2 hours ago

If everyone or a majority of people sets these options, then I think issues will simply be discovered later. So if other people run into them first, better for us, because then the issues have a chance of being fixed once our acceptable package/version age is reached.

po1nt

2 hours ago

That would likely mean the same number of people get the vulnerability, just 7 days later.

XYen0n

7 hours ago

If everyone avoids using packages released within the last 7 days, malicious code is more likely to remain dormant for 7 days.

otterley

7 hours ago

What do you base that on? Threat researchers (and their automated agents) will still keep analyzing new releases as soon as they’re published.

mike_hearn

3 hours ago

Their analysis was triggered by open source projects upgrading en-masse and revealing a new anomalous endpoint, so, it does require some pioneers to take the arrows. They didn't spot the problem entirely via static analysis, although with hindsight they could have done (missing GitHub attestation).

narrator

2 hours ago

A security company could set up a honeypot machine that installs new releases of everything automatically and have a separate machine scan its network traffic for suspicious outbound connections.

staticassertion

29 minutes ago

> What do you base that on?

The entire history of malware lol

cozzyd

7 hours ago

that's why people are telling others to use 7 days but using 8 days themselves :)

wongarsu

an hour ago

brb, switching everything to 9 days

shreyssh

2 hours ago

Worth noting this attack was caught because people noticed anomalous network traffic to a new endpoint. The 7-day delay doesn't just give scanners time, it gives the community time to notice weird behavior from early adopters who didn't have the delay set.

It's herd immunity, not personal protection. You benefit from the people who DO install immediately and raise the alarm

sersi

2 hours ago

But wouldn't the type of people who notice anomalous network activity be exactly the type of people who add a 7-day delay because they're security conscious?

jmward01

7 hours ago

I suspect most packages will keep a mix of people at 7 days and those with no limit. That being said, adding jitter by default would be a good addition to these features.

Barbing

5 hours ago

>adding jitter by default would be good

This became evident, what, perhaps a few years ago? Probably since childhood for some users here but just wondering what the holdup is. Lots of bad press could be avoided, or at least a little.

DimmieMan

7 hours ago

They’re usually picked up by scanners by then.

Aurornis

7 hours ago

Most people won’t.

7 days gives ample time for security scanning, too.

3abiton

6 hours ago

This highly depends on the detection mechanism.

bakugo

7 hours ago

> If everyone avoids using packages released within the last 7 days

Which will never even come close to happening, unless npm decides to make it the default, which they won't.

mhio

7 hours ago

and for yarn berry

    ~/.yarnrc.yml
    npmMinimalAgeGate: "3d"

cvak

3 hours ago

I think npm doesn't support end-of-line comments, so

  ~/.npmrc
  min-release-age=7 # days

actually doesn't set it at all, please edit your comment.

EDIT: Actually maybe it does? But it's weird because

`npm config list -l` shows: `min-release-age = null` with, and without the comment. so who knows ¯\_(ツ)_/¯

cvak

3 hours ago

ok, it works, only the list function shows it as null...

ashishb

5 hours ago

Run npm/pnpm/bun/uv inside a sandbox.

There is no reason to let random packages have full access to your machine

WD-42

6 hours ago

Props to uv for actually using the correct config path jfc what is “bunfig”

abustamam

2 hours ago

Silly portmanteau of "bun" and "config"

antihero

2 hours ago

npm is claiming this doesn’t exist

imhoguy

3 hours ago

Good luck with any `npm audit` in a pipeline. Sometimes you have to pull the latest release because the previous one had a critical vulnerability.

umko21

5 hours ago

The config for uv won't work. uv only supports a full timestamp for this config, and no rolling window day option afaik. Am I crazy or is this llm slop?

ad3xyz

5 hours ago

https://docs.astral.sh/uv/concepts/resolution/#dependency-co...

> Define a dependency cooldown by specifying a duration instead of an absolute value. Either a "friendly" duration (e.g., 24 hours, 1 week, 30 days) or an ISO 8601 duration (e.g., PT24H, P7D, P30D) can be used.

umko21

5 hours ago

My bad. This works for per project configuration, but not for global user configuration.

js2

4 hours ago

I think it should work at the user config level too:

> If project-, user-, and system-level configuration files are found, the settings will be merged, with project-level configuration taking precedence over the user-level configuration, and user-level configuration taking precedence over the system-level configuration.

https://docs.astral.sh/uv/concepts/configuration-files/

bob1029

3 hours ago

"Batteries included" ecosystems are the only persistent solution to the package manager problem.

If your first party tooling contains all the functionality you typically need, it's possible you can be productive with zero 3rd party dependencies. In practice you will tend to have a few, but you won't be vendoring out critical things like HTTP, TCP, JSON, string sanitation, cryptography. These are beacons for attackers. Everything depends on this stuff so the motivation for attacking these common surfaces is high.

I can literally count on one hand the number of 3rd party dependencies I've used in the last year. Dapper is the only regular thing I can come up with. Sometimes ScottPlot. Both of my SQL providers (MSSQL and SQLite) are first party as well. This is a major reason why they're the only sql providers I use.

Maybe I am just so traumatized from compliance and auditing in regulated software business, but this feels like a happier way to build software too. My tools tend to stay right where I left them the previous day. I don't have to worry about my hammer or screw drivers stealing all my bitcoin in the middle of the night.

wongarsu

an hour ago

> In practice you will tend to have a few, but you won't be vendoring out critical things like HTTP, TCP, JSON, string sanitation, cryptography

Unless you are Python, where the standard library includes multiple HTTP libraries and everyone installs the requests package anyways.

Few languages have good models for evolving their standard library, so you end up with lots of bad designs sticking around forever. Libraries are much easier to evolve, giving them the advantage in terms of developer UX and performance.

ptx

37 minutes ago

I'm pretty sure it's really one HTTP library: urllib.request is built on top of http.client. But the very Java-inspired API for the former is awful.

troad

16 minutes ago

What are some examples of batteries-included languages that folk around here really feel productive in and/or love? What makes them so great, in your opinion?

(Leaving aside thoughts on language syntax, compile times, tooling etc - just interested in people's experiences with / thoughts on healthy stdlibs)

zdc1

2 hours ago

The other thing that keeps coming up is the github-code-is-fine-but-the-release-artifact-is-a-trojan issue. It really makes me question if "packages" should even exist in JavaScript, or if we could just be importing standard plain source code from a git repo.

I understand why this doesn't work well with legacy projects, but it's something that the language could strive towards.

embedding-shape

29 minutes ago

> I understand why this doesn't work well with legacy projects, but it's something that the language could strive towards.

Why wouldn't that work well with legacy projects? In fact, the projects I was a part of that I'd call legacy nowadays were built by copy-and-pasting .js libraries into a "vendor/" directory, and that's how we shipped them as well. This was in the days before Bower (which was the npm of frontend development at the time), when vendoring JS libs was standard practice, before package managers became used in frontend development too.

Not sure why it wouldn't work, JavaScript is a very moldable language, you can make most things work one way or another :)

invaliduser

25 minutes ago

For a lot of code, I switched to generating code rather than using 3rd party libraries. Things like PEG parsers, path finding algorithms, string sanitizers, data type conversion, etc are very conveniently generated by LLMs. It's fast, reduces dependencies, and feels safer to me.

troad

21 minutes ago

Ah, so you've traded the possibility of bad dependencies for certainty.

mrsmrtss

2 hours ago

Fully agree with this! I think today .NET is probably the most batteries included platform you can get. This means that even if you use third-party libraries, these typically depend only on first-party dependencies, making it much less likely for something shady to sneak in.

Imustaskforhelp

2 hours ago

To me, I really like Golang's batteries included platform. I am not sure about .NET though

junon

2 hours ago

This is a rather superlative, tunnel-vision, "everything is a nail because I'm a hammer" approach. The truth is this is an exceedingly difficult problem nobody has adequately solved yet.

christophilus

2 hours ago

Honestly, you can get pretty far with just Bun and a very small number of dependencies. It’s what I love most about Bun. But, I do agree with you generally. .NET is about as good as I’ve ever seen for being batteries included. I just hate the enterprisey culture that always seems to pervade .NET shops.

bob1029

an hour ago

I agree about the culture. If I take my eye off the dev team for too long, I'll come back and we'll be using entity framework and a 20 page document about configuring code cleanup rules in visual studio.

vips7L

an hour ago

Entity framework is pretty good.

gib444

2 hours ago

What kind of apps do you build / industry etc?

h4ch1

8 hours ago

I can't even imagine the scale of the impact with Axios being compromised, nearly every other project uses it for some reason instead of fetch (I never understood why).

Also from the report:

> Neither malicious version contains a single line of malicious code inside axios itself. Instead, both inject a fake dependency, plain-crypto-js@4.2.1, a package that is never imported anywhere in the axios source, whose only purpose is to run a postinstall script that deploys a cross-platform remote access trojan (RAT)

Good news for pnpm/bun users who have to manually approve postinstall scripts.

beart

8 hours ago

> nearly every other project uses it for some reason instead of fetch (I never understood why).

Fetch wasn't added to Node.js as a core package until version 18, and wasn't considered stable until version 21. Axios has been around much longer and was made part of popular frameworks and tutorials, which helps continue to propagate its usage.

seer

8 hours ago

Also it has interceptors, which allow you to build easily reusable pieces of code - loggers, oauth, retriers, execution time trackers etc.

These are so much better than the interface fetch offers you, unfortunately.

reactordev

8 hours ago

You can do all of that in fetch really easily with the init object.

    fetch('https://api.example.com/data', {
      headers: {
        'Authorization': 'Bearer ' + accessToken
      }
    })

zdragnar

6 hours ago

There are pretty much two usage patterns that come up all the time:

1- automatically add bearer tokens to requests rather than manually specifying them every single time

2- automatically dispatch some event or function when a 401 response is returned to clear the stale user session and return them to a login page.

There's no reason to repeat this logic in every single place you make an API call.

Likewise, every response I get is JSON. There's no reason to manually unwrap the response into JSON every time.

Finally, there's some nice mocking utilities for axios for unit testing different responses and error codes.

You're either going to copy/paste code everywhere, or you will write your own helper functions and never touch fetch directly. Axios... just works. No need to reinvent anything, and there's a ton of other handy features the GP mentioned as well you may or may not find yourself needing.
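The patterns above (auto auth, 401 handling, JSON unwrapping) can be sketched as a single helper over fetch; the names and hooks here are illustrative, not axios's API:

```javascript
// Sketch of the "write your own helper and never touch fetch directly"
// approach. getToken and onUnauthorized are hypothetical app hooks,
// injected as dependencies so the helper stays testable.
async function apiFetch(url, options = {}, deps = {}) {
  const { fetchImpl = globalThis.fetch, getToken, onUnauthorized } = deps;
  const headers = { ...(options.headers || {}) };
  if (getToken) {
    headers.Authorization = `Bearer ${getToken()}`; // 1: auto bearer token
  }
  const res = await fetchImpl(url, { ...options, headers });
  if (res.status === 401) {
    if (onUnauthorized) onUnauthorized(); // 2: clear stale session
    throw new Error(`unauthorized: ${url}`);
  }
  return res.json(); // every response is JSON, so unwrap it once, here
}
```

Whether a helper like this or a library like axios is the better trade-off is exactly the disagreement in this subthread.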

arghwhat

3 hours ago

Interceptors are just wrappers in disguise.

    const myfetch = async (req, options) => {
        options = options || {};
        options.headers = options.headers || {};
        options.headers['Authorization'] = token;

        let res = await fetch(new Request(req, options));
        if (res.status == 401) {
            // do your thing
            throw new Error("oh no");
        }
        return res;
    }

Convenience is a thing, but it doesn't require a massive library.

nailer

2 hours ago

That fetch requires so many users to rewrite the same code - code that was already handled well by every existing node HTTP client - says something about the standards process.

arghwhat

2 hours ago

It could also be trivially written for XMLHttpRequest or any node client if needed. Would be nice if they had always been the same, but oh well - having a server and client version isn't that bad.

Because it is so few lines it is much more sensible to have everyone duplicate that little snippet manually than import a library and write interceptors for that...

(Not only because the integration with the library would likely be more lines of code, but also because a library is a significant liability on several levels that must be justified by significant, not minor, recurring savings.)

abluecloud

an hour ago

that's such a weak argument. you can write about 20 lines of code to do exactly this without requiring a third party library.

anon7000

5 hours ago

Helper functions seem trivial and not like you’re reimplementing much.

creshal

3 hours ago

Don't be silly, this is the JS ecosystem. Why use your brain for a minute and come up with a 50 byte helper function, if you can instead import a library with 3912726 dependencies and let the compiler spend 90 seconds on every build to tree shake 3912723 out again and give you a highly optimized bundle that's only 3 megabytes small?

sayamqazi

5 hours ago

> usage patterns

IMO interceptors are bad. They hide, at the call site, what might get transformed along with the API call.

> Likewise, every response I get is JSON. There's no reason to manually unwrap the response into JSON every time.

This is only true if you're interfacing solely with your own backends. Even then, why not just make a helper that unwraps as JSON by default but can be passed an arg to parse as something else?

mhio

7 hours ago

What does an interceptor in the RequestInit look like?

meekins

7 hours ago

It also supports proxies which is important to some corporate back-end scenarios

eviks

7 hours ago

> Good news for pnpm/bun users who have to manually approve postinstall scripts.

Would they not have approved it for earlier versions? But also, wouldn't the chance of automatically approving the addition be high (for such a widely used project)?

arcfour

6 hours ago

The prompt would be to approve the new malicious package (plain-crypto-js)'s scripts, too, which could tip users off that something was fishy. If they were used to approving one for axios and the attackers had just overwritten axios's own instead of making a new package, it would probably catch people out.

bpev

6 hours ago

Assuming axios didn't have a postinstall script before, it wouldn't have been approved for a previous version. If you ignore it, you ignore it, but postinstall scripts are relatively rare in npm deps, so it would seem a bit out of place when the warning pops up.

h4ch1

6 hours ago

Can't speak for other devs but I like to read postinstall scripts or at least put them through an LLM if they're too hard to grok.

It's also a little context dependent, for example if I was using Axios and I see a prompt to run the plain-crypto-js postinstall script, alarm bells would instantly ring, which would at least make me look up the changelog to see why this is happening.

In most cases I don't even let them run unless something breaks/doesn't work as expected.

martmulx

7 hours ago

Does pnpm block postinstall on transitive deps too or just top-level? We have it configured at work but I've never actually tested whether it catches scripts from packages that get pulled in as sub-dependencies.

arcfour

6 hours ago

It prompts for transitive dependencies, too. I have never had workerd as a direct dependency of any project of mine but I get prompted to approve its postinstall script whenever I install cloudflare's wrangler package (since workerd needs to download the appropriate Workers runtime for your platform).

dawnerd

7 hours ago

From what I can tell, it blocks it everywhere.

martmulx

3 hours ago

That's solid, really helps lock down the supply chain attack surface. Do you ever end up having to whitelist anything that legitimately needs to run on install?

homebrewer

25 minutes ago

After using pnpm for years (at least 5, don't remember exactly), I've only ever had to whitelist one library that uses a postinstall script to download a native executable for your system. And even this is not necessary, it's just poorly designed.

For example, esbuild and typescript 7 split binaries for different systems and architectures into separate packages, and rely on your package manager to pull the correct one.
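In recent pnpm versions that whitelist can live in `pnpm-workspace.yaml` (or under the `pnpm` key in package.json) as `onlyBuiltDependencies`; a sketch with an example package name:

```yaml
# pnpm-workspace.yaml -- only packages listed here are allowed to run
# their build/postinstall scripts; everything else is blocked.
onlyBuiltDependencies:
  - sharp
```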

nananana9

6 hours ago

Package managers are a failed experiment.

We have libraries like SQLite, which is a single .c file that you drag into your project and it immediately does a ton of incredibly useful, non-trivial work for you, while barely increasing your executable's size.

The issue is not dependencies themselves, it's transitive ones. Nobody installs left-pad or is-even-number directly, and "libraries" like these are the vast majority of the attack surface. If you get rid of transitive dependencies, you get rid of the need of a package manager, as installing a package becomes unzipping a few files into a vendor/ folder.

There's so many C libraries like this. Off the top of my head, SQLite, FreeType, OpenSSL, libcurl, libpng/jpeg, stb everything, zlib, lua, SDL, GLFW... I do game development so I'm most familiar with the ones commonly used in game engines, but I'm sure other fields have similarly high quality C libraries.

They also have bindings for every language under the sun. Rust libraries are very rarely used outside of Rust, and C#/Java/JS/Python libraries are never used outside their respective language (aside from Java ones in other JVM langs).

pjc50

5 hours ago

Package managers are now basically a requirement for language adoption. Doing it manually is not a solution, in an automated world.

What is a problem is library quality. Which is downstream of nobody getting paid for it, combined with an optimistic but unrealistic "all packages are equal" philosophy.

> High quality C libraries

> OpenSSL

OpenSSL is one of the ones where there's a ground up rewrite happening because the code quality is so terrible while being security critical.

On the other end, javascript is uniquely bad because of the deployment model and difficulty of adding things to the standard library, so everything is littered with polyfills.

hresvelgr

3 hours ago

> Package managers are now basically a requirement for language adoption. Doing it manually is not a solution, in an automated world.

Absolute nonsense. What does automated world even mean? Even if one could infer reasonably, it's no justification. Appealing to "the real world" in lieu of any further consideration is exactly the kind of mindlessness that has led to the present state of affairs.

Automation of dependency versions was never something we needed; it was always a convenience, and even that's a stretch given that dependency hell is abundant in all of these systems, and now we have supply chain attacks. While everyone is welcome to do as they please, I'm going to stick to vendoring my dependencies, statically compiling, and not blindly trusting code I haven't seen before.

nailer

2 hours ago

> Automation of dependency versions was never something we needed

How do you handle updating dependencies then?

pjc50

an hour ago

> What does automated world even mean?

People are trying to automate the act of programming itself, with AI, let alone all the bits and pieces of build processes and maintenance.

PedroBatista

2 hours ago

Relax. While mentioning the real world without any criticism of the soundness of the solution is absolute nonsense, some would say idiotic, thinking only of the absolute best solution given your narrow world view is not any better.

hresvelgr

4 minutes ago

While I agree that my view is narrow, the "best solution" in question is what we used to do, and it was fine. There are still many places that manually manage dependencies. Fundamentally automatic software versioning is an under-developed area in need of attention, and technologies like semantic versioning which are ubiquitous are closer to suggestions, and not true indicators of breaking changes. My personal view is that fully automatic dependency version management is an ongoing experiment and should be treated as such.

doginasuit

an hour ago

> We have libraries like SQLite, which is a single .c file that you drag into your project and it immediately does a ton of incredibly useful, non-trivial work for you, while barely increasing your executable's size.

I'm not sure why you believe this is more secure than a package manager. At least with a package manager there is an opportunity for vetting. It's also not true that it barely increases your executable's size: if your executable depends on it, it increases its effective size.

hvb2

5 hours ago

If you're developing for the web your attack surface is quite a bit bigger. Your proposed solution of copying a few files might work but how do you keep track of updates? You might be vulnerable to a published exploit fixed a few months ago. A package manager might tell you a new version is available. I don't know how that would work in your scenario.

layer8

5 hours ago

For some reason, NPM is the only ecosystem with substantial issues with supply-chain attacks.

techterrier

5 hours ago

apart from that python one the other day

indy

4 hours ago

The culture within the npm/js community has mainly been one of using the package manager rather than "re-inventing the wheel", as such the blast radius of a compromised package is much greater

progmetaldev

4 minutes ago

It's more to do with the standard library being so barren of common application needs, and looking for a solution that the community has gotten behind. Axios has been a common dependency in many codebases, because it is a solid solution that many have already used. Every developer could try building all the libraries that they would reach for themselves, but then each company has now taken on the task of ensuring their own (much larger) codebase is free from security issues, on top of taking care of their own issues and bugs.

christophilus

2 hours ago

It’s not just NPM, though. Every Rails project and every Rust project I’ve seen ended up with massive numbers of dependencies vs what an equivalent project in Go or C# would have needed.

allreduce

4 hours ago

I don't think this community of professionals is going to come around to a solution which requires marginally more effort.

If no one checks their dependencies, the solution is to centralize this responsibility at the package repository. Something like left-pad should simply not be admitted to npm. Enforce a set of stricter rules which only allow non-trivial packages maintained by someone who is clearly accountable.

Another change one could make is develop bigger standard libraries with all the utilities which are useful. For example in Rust there are a few de facto standard packages one needs very often, which then also force you to pull in a bunch of transitive dependencies. Those could also be part of the standard library.

This all amounts to increasing the minimal scope of useful functionality a package has to have to be admitted and increasing accountability of the people maintaining them. This obviously comes with more effort on the maintainers part, but hey maybe we could even pay them for their labor.

voidfunc

5 hours ago

I'd really like to see package managers organized around rings where a very small core of incredibly important stuff is kept in ring 0, ring 1 gets a slightly wider amount of stuff and can only depend on ring 0 dependencies and then ring 2+ is the crapware libraries that infect most ecosystems.

But maybe that's not the right fit either. The world where package managers are just open to whatever needs to die. It's no longer a safe model.

regularfry

3 hours ago

The OS distro model is actually the right one here. Upstream authors hate it, but having a layer that's responsible for picking versions out of the ecosystem and compiling an internally consistent grouping of known mutually-compatible versions that you can subscribe to means that a lot of the random churn just falls away. Once you've got that layer, you only need to be aware of security problems in the specific versions you care about, you can specifically patch only them, and you've got a distribution channel for the fixes where it's far more feasible to say "just auto-apply anything that comes via this route".

That model effectively becomes your ring 1. Ring 0 is the stdlib and the package manager itself, and - because you would always need to be able to step outside the distribution for either freshness or "that's not been picked up by the distro yet" reasons - the ecosystem package repositories are the wild west ring 2.

In the language ecosystems I'm only aware of Quicklisp/Ultralisp and Haskell's Stackage that work like this. Everything else is effectively a rolling distro that hasn't realised that's what it is yet.

swiftcoder

5 hours ago

In practice, "ring 0" is whatever gets merged into your language's standard library. Node and python both have pretty expansive standard libraries at this point, stepping outside of those is a choice

anakaine

5 hours ago

Malicious actor KPI: affect a Ring 0 package.

pie_flavor

4 hours ago

Rust libraries are infrequently used outside of Rust because if you have the option, you'd just use Rust, not the ancient featureless language intrinsically responsible for 70% of all security issues. C libraries are infrequently used in Rust outside of system libc, for the same reason; I go and toggle the reqwest switch to use rustls every time, because OpenSSL is horrendous. This is also why you say 'rarely' instead of 'never', when a few years ago it was 'never'; a few years from now you'll say 'uncommonly', and so on. The reason C libraries are used is because you don't feel like reimplementing it yourself, and they are there; but that doesn't apply more to C libraries than Rust libraries, and the vast majority of crates.io wouldn't be usefully represented in C anyway, or would take longer to bind to than to rewrite. (No, nobody uses libcurl.) Finally, this only happens in NPM, and the Rust libraries you pull in are all high-quality. So this sounds like a bunch of handwaving about nonsense.

physicsguy

3 hours ago

Rust is terrible for pulling in hundreds of dependencies though. Add tokio as a dependency and you'll get well over 100 packages added to your project.

pie_flavor

2 hours ago

pin-project-lite is the only base dependency, which itself has no dependencies. If you enable the "full" feature, ie all optional doodads turned on (which you likely don't need), it's 17: bytes, cfg-if, errno, libc, mio, parking_lot+parking_lot_core+lock_api, pin-project-lite, proc_macro2+quote+syn+unicode-ident, scopeguard, signal-hook-registry, smallvec, and socket2. You let me know which ones you think are bloat that it should reimplement or bind to a C library about, and without the blatant fabrication this time.

vincnetas

5 hours ago

no no, please we don't want to get back to dragging files to your project to make them work.

Tarraq

2 hours ago

And manual FTP uploads, while we're at it.

victorbjorklund

2 hours ago

I think you can do copy paste in most languages. But it will be a pain to update when there are improvements / security fixes.

You got a project with 1-2 dependencies? Sure. But if you need to bring in 100 different libs (because you bring in 10 libs which in turn bring in 10 libs each), good luck.

jonkoops

2 hours ago

> We have libraries like SQLite, which is a single .c file that you drag into your project

You are just swapping a package manager for security by obscurity by copy-pasting code into your project. It is arguably a much worse way of handling supply chain security, as now there is no way to audit your dependencies.

> If you get rid of transitive dependencies, you get rid of the need of a package manager

This argument makes no sense. Obviously reducing the amount of transitive dependencies is almost always a good thing, but it doesn't change the fundamental benefits of a package manager.

> There's so many C libraries like this

The language with the most fundamental and dangerous ways of handling memory, the language that is constantly in the news for numerous security problems even in massively popular libraries such as OpenSSL? Yes, definitely copy-paste that code in, surely nothing can go wrong.

> They also bindings for every language under the sun. Rust libraries are very rarely used outside of Rust

This is a WILD assumption, doing C-style bindings is actually quite common. You will of course then also be exposing a memory unsafe interface, as that is what you get with C.

What exactly is your argument here? It feels like what you are trying to say is that we should just stop doing JS and instead all make C programs that copy-paste massive libraries, because that is somehow 'high quality'.

This seems like a massively uninformed, one-sided and frankly ridiculous take.

nananana9

2 hours ago

> You are just swapping a package manager with security by obscurity by copy pasting code into your project

You should try writing code, and not relying on libraries for everything, it may change how you look at programming and actually ground your opinions in reality. I'm staring at my company's vendor/ folder. It has ~15 libraries, all but one of which operate on trusted input (game assets).

> fundamental benefits of a package manager.

I literally told you why they don't matter if you write code in a sane way.

> doing C-style bindings is actually quite common

I know bindings for Rust libraries exist. Read the literal words you quoted. "Rust libraries are very rarely used outside of Rust". Got some counterexamples?

vsgherzi

7 hours ago

Not to beat a dead horse but I see this again and again with dependencies. Each time I get more worried that the same will happen with rust. I understand the fat std library approach won’t work but I really still want a good solution where I can trust packages to be safe and high quality.

pier25

6 hours ago

If the fat std library is not viable you can only increase security requirements.

Axios has like 100M downloads per week. A couple of people with MFA should have to approve changes before it gets published.

cromka

6 hours ago

This is the actual answer: stupid cost saving creating an operational risk.

Barbing

5 hours ago

At least then they would have to pay off a dev or something, which changes their economic calculus and is additionally illegal

rectang

7 hours ago

Hosting curated dependencies is a commercially valuable service. Eventually an economy arises where people pay vendors to vet packages.

goodpoint

4 hours ago

It's what linux distributions do.

consp

an hour ago

Cue appimage or other packed binaries, and there go your fine-tuned packages.

tankenmate

6 hours ago

It already exists: Cloudsmith.

a-french-anon

2 hours ago

Why wouldn't the "fat std" thing work? Yes it's hard to design properly, both in scope and actual design (especially for an unstandardized language still moving fast), but throwing the towel and punting the problem to the "free market" of uncurated public repos is even worse.

It's what we call in France "la fête du slip".

PS: that's one reason I try to use git submodules in my Common Lisp projects instead of QuickLisp, because I really see the size of my deptree this way.

junon

2 hours ago

Because fat std is rigid, impractical, and annoying.

hypeatei

an hour ago

Fat std library mistakes/warts would likely result in third party packages being used anyway.

Joeri

an hour ago

NPM should have a curation mechanism, via staff review or crowdsourcing, where versions of popular packages are promoted to a stable set, like linux distros do. I would only use curated versions if they had such a thing.

brigandish

6 hours ago

An alternative:

- copy the dependencies' tests into your own tests

- copy the code in to your codebase as a library using the same review process you would for code from your own team

- treat updates to the library in the same way you would for updates to your own code

Apparently, this extra work will now not be a problem, because we have AI making us 10x more efficient. To be honest, even without AI, we should've been doing this from the start, even if I understand why we haven't. The excuses are starting to wear thin though.

pjc50

5 hours ago

Just going to put features on hold for a month while I review the latest changes to ffmpeg.

tick_tock_tick

6 hours ago

I don't know where you've worked but a hostile and intelligent actor or internal red team would succeed under each of those cases at every job I've worked at.

Hackbraten

4 hours ago

Defending against a targeted attack is difficult, yes. But these recent campaigns were all directed at everyone. Auditing and inspecting your dependencies does absolutely help thwart that because there will always be people who don't.

bitwank

5 hours ago

Good to know. Where were the places you worked at?

himata4113

7 hours ago

I recommend everyone to use bwrap if you're on linux and alias all package managers / anything that has post build logic with it.

I have bwrap configured to override: npm, pip, cargo, mvn, gradle, everything you can think of and I only give it the access it needs, strip anything that is useless to it anyway, deny dbus, sockets, everything. SSH is forwarded via socket (ssh-add).

This limits the blast radius to your CWD and package manager caches and often won't even work since the malware usually expects some things to be available which are not in a permissionless sandbox.

You can think of it as running a docker container, but without the requirement of having to have an image. It is the same thing flatpak is based on.

As for server deployments, container hardening is your friend. Most supply chain attacks target build scripts so as long as you treat your CI/CD as an untrusted environment you should be good - there's quite a few resources on this so won't go into detail.

Bonus points: use the same sandbox for AI.

Stay safe out there.

captn3m0

5 hours ago

This only works for post-install script attacks. When the package is compromised, just running require somewhere in your code will be enough, and that runs with node/java/python and no bwrap.

himata4113

5 hours ago

node is also sandboxed within bwrap. I have sandbox -p node if I have to give node access to other folders, sandbox -m to define custom mountpoints if necessary, and UNSAFE=1 as a last resort which just runs unsandboxed.

mxmlnkn

3 hours ago

I like the idea of bubblewrap, but my pain point is that it is work to set it up correctly with bind mounts and forwarding necessary environment variables to make the program actually work usefully. Could you share your pip bwrap configuration? It sounds useful.

himata4113

2 hours ago

can't really share a file here, feel free to email me

kanbankaren

4 hours ago

I think firejail is a much more flexible security sandbox than bwrap. It also comes with pre-defined profiles

himata4113

2 hours ago

bwrap is as secure as you want it to be which I think is the primary advantage over anything else.

micw

5 hours ago

> SSH is forwarded via socket

Maybe I misunderstood this point. But the ssh socket also gives access to your private keys, so I see no security gain in that point. Better to have a password protected key.

himata4113

5 hours ago

It's so your private key is not stolen, but you're right passphrase protected keys win anyway. I use hardware keys so this isn't a problem for me to begin with.

johntash

5 hours ago

Do you have a recommendation for something like bwrap but for macos? I've been trying to use bwrap more on my servers when I remember.

himata4113

5 hours ago

unfortunately not, but there is work being done to support overlays properly I think?

vips7L

6 hours ago

AFAIK maven doesn’t support post install logic like npm does. You have to explicitly optin with build plugins. It doesn’t let any arbitrary dependency run code on your machine.

himata4113

5 hours ago

some post processors have chains to execution (ex: lombok)

vips7L

an hour ago

You explicitly opt in by using a compiler plugin. Merely having it as a dependency, like in npm, doesn’t mean it can run code at build time.
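
Concretely, the opt-in usually looks like this in a pom.xml sketch (Lombok as the example, since it was mentioned upthread; the version is a placeholder): only processors listed under the compiler plugin run at compile time, not arbitrary dependencies.

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <annotationProcessorPaths>
      <!-- Only processors explicitly listed here get to run code at build time -->
      <path>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.18.30</version>
      </path>
    </annotationProcessorPaths>
  </configuration>
</plugin>
```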

red_admiral

2 hours ago

There's a package manager discussion, but the bit that stands out to me is that this started with a credential compromise. At some point when a project gets big enough like axios, maybe the community could chip in to buy the authors a couple of YubiHSM or similar. I wish that _important keys live in hardware_ becomes more standard given the stakes.

Dealing with dependencies is another question; if it's stupid stuff like leftpad then it should be either vendored in or promoted to be a language feature anyway (as it has been).

embedding-shape

24 minutes ago

> At some point when a project gets big enough like axios, maybe the community could chip in to buy the authors a couple of YubiHSM or similar

I kind of feel like the authors here should want that for themselves, before the community would even realize it's needed. I can't say I've worked on packages that are as popular as axios, but once some packages we were publishing hit 10K downloads or so, we all agreed that we needed to up our security posture, and we all got hardware keys for 2FA and spent 1-2 weeks on making sure it was as bullet-proof as we could make it.

To be fair, most FOSS is developed by volunteers so I understand not wanting to spend any money on something you provide for free, but on the other hand, I personally wouldn't feel comfortable being responsible for something that popular without hardening my own setup as much as I could, even if it means stopping everything for a week.

filleokus

35 minutes ago

Totally agree.

Also, considering how prevalent TPM/Secure Enclaves are on modern devices, I would guess most package maintainers already have hardware capable of generating/using signing keys that never leave hardware.

I think it is mostly a devex/workflow question.

Considering the recent ci/cd-pipeline compromises, I think it would make sense to make a two phase commit process required for popular packages. Build and upload to the registry from a pipeline, but require a signature from a hardware resident key before making the package available.

rjmunro

25 minutes ago

Most of axios' functionality has effectively been promoted to a language feature as `fetch`, but the problem is people don't bother to migrate. I've migrated our direct usage of it but it's still pulled in transitively in several parts of our codebase.

Even left-pad is still getting 1.6 million weekly downloads.
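
For a plain JSON GET, the migration is mostly mechanical; a minimal sketch (note that fetch, unlike axios, does not reject on HTTP error statuses, so that check has to be added back by hand):

```javascript
// Minimal axios-like GET on top of built-in fetch (Node 18+ / browsers).
async function get(url, options = {}) {
  const response = await fetch(url, options);
  // axios rejects on 4xx/5xx; fetch only rejects on network failure,
  // so we replicate that behavior here.
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return { data: await response.json(), status: response.status };
}
```

`const { data } = await get(url)` then mirrors axios's `{ data }` shape for simple cases; interceptors and request progress are where a migration takes real work.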

embedding-shape

17 minutes ago

Annoyingly, the times I reach for axios and similar is when I need to keep track of upload progress, which I could only do with XMLHttpRequest, not fetch, unless I've missed some recent browser changes, and the API of XMLHttpRequest remains as poor as the first times I had to use it. Download progress been supported by fetch since you can track chunks yourself, but somehow they didn't think to do that for requests for some reason, only responses.

pamcake

an hour ago

Or those people can (fund) separate repackaging and redistribution with more stringent and formalized review process.

Maybe not all users should pull all packages straight from what devs are pushing.

There's no reason we can't have "node package distributions" like we have Linux distributions. Maybe we should stop expecting devs and maintainers and Microsoft to take responsibility for our supply-chain.

wps

7 hours ago

Genuinely how are you supposed to make sure that none of the software you have on your system pulls this in?

It’s things like this that make me want to swap to Qubes permanently, simply as to not have my password manager in the same context as compiling software ever.

semi-extrinsic

4 hours ago

We run everything NPM related inside Apple containers, and are looking to do the same with Python and Rust soon. Bwrap on Linux does the same.

I like to think of it like working with dangerous chemicals in the lab. Back in the days, people were sloppy and eventually got cancer. Then dangers were recognized and PPE was developed and became a requirement.

We are now at the stage in software development where we are beginning to recognize the hazards and developing + mandating use of proper PPE.

A couple of years ago, pip started refusing to install packages outside of a virtualenv. I'm guessing/hoping package managers will start to have an opt-in flag you can set in a system-wide config file, such that they refuse to run outside of a sandbox.

mike_hearn

3 hours ago

The problem is that package managers are a distraction. You have to sandbox everything or else it doesn't work. These attacks use post-install hooks for convenience but nothing would have stopped them patching axios itself and just waiting for devs to run the app on their local workstation. So you end up needing to develop in a fully sandboxed environment.

PhilipRoman

5 hours ago

This sounds like satire but isn't - I just make sure the nodejs/npm packages don't exist on my system. I've yet to find a crucial piece of software that requires it. As much as I love that cute utility that turns maps into ascii art, it's not exactly sqlite in terms of usefulness.

whywhywhywhy

2 hours ago

Bit ridiculous to dismiss the most popular programming languages packaging repo as silly toys.

PhilipRoman

25 minutes ago

I don't deny that node/npm is useful for building servers, devtools for JS development itself, etc. but as an end user I haven't encountered anything useful which requires having it on my machine.

jadar

8 hours ago

How much do you want to bet me that the credential was stolen during the previous LiteLLM incident? At what point are we going to have to stop using these package managers because it's not secure? I've got to admit, it's got me nervous to use Python or Node.js these days, but it's really a universal problem.

rybosome

8 hours ago

> it’s got me nervous to use Python or Node.js these days

My feelings precisely. Min package age (supported in uv and all JS package managers) is nice but I still feel extremely hesitant to upgrade my deps or start a new project at the moment.

I don’t think this is going to stabilize any time soon, so figuring out how to handle potentially compromised deps is something we will all need to think about.

Tazerenix

7 hours ago

NPM only gained minimum package age in February of this year, and still doesn't support package exclusions for internal packages.

https://github.com/npm/cli/pull/8965

https://github.com/npm/cli/issues/8994

It's good that they finally got there, but...

I would be avoiding npm itself on principle in the JS ecosystem. Use a package manager that has a history of actually caring about these issues in a timely manner.

arcfour

7 hours ago

PNPM makes you approve postinstall scripts instead of running them by default, which helps a lot. Whenever I see a prompt to run a postinstall script, unless I know the package normally has one & what it does, I go look it up before approving it.

(Of course I could still get bitten if one of the packages I trust has its postinstall script replaced.)
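
Assuming pnpm 10+, the approvals can also be pinned in package.json so they're reviewable in git rather than living in someone's interactive session (package names here are just examples):

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild", "sharp"]
  }
}
```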

crimsonnoodle58

6 hours ago

More like the Trivy incident (which led to the compromise of LiteLLM).

supernes

5 hours ago

There are ways to limit the blast radius, like running them in ephemeral rootless containers with only the project files mounted.

tkel

6 hours ago

JS package managers (pnpm, bun) now will ignore postinstall scripts by default. Except for npm, it still runs them for legacy reasons.

You should probably set your default to not run those scripts. They are mostly unnecessary.

  ~/.npmrc :
  ignore-scripts=true

83M weekly downloads!
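
For context, the hook those settings disable is just a lifecycle entry in a dependency's package.json; a hypothetical compromised package only needs something like:

```json
{
  "name": "some-compromised-package",
  "version": "1.0.1",
  "scripts": {
    "postinstall": "node setup.js"
  }
}
```

That script runs with your full user permissions the moment the install finishes, which is why disabling scripts by default cuts off the most common delivery mechanism.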

strogonoff

5 hours ago

Essential steps to minimise your exposure to NPM supply chain attacks:

— Run Yarn in zero-installs mode (or equivalent for your package manager). Every new or changed dependency gets checked in.

— Disable post-install scripts. If you don’t, at least make sure your package manager prompts for scripts during install, in which case you stop and look at what it’s going to run.

— If third-party code runs in development, including post-install scripts, try your best to make sure it happens in a VM/container.

— Vet every package you add. Popularity is a plus, recent commit time is a minus: if you have this but not that, keep your eyes peeled. Skim through the code on NPM (they will probably never stop labelling it as “beta”), commit history and changelog.

— Vet its dependency tree. Dependencies is a vector for attack on you and your users, and any new developer in the tree is another person you’re trusting to not be malicious and to take all of the above measures, too.

inbx0

3 hours ago

> Run Yarn in zero-installs mode (or equivalent for your package manager). Every new or changed dependency gets checked in.

Idk, lockfiles provide almost as good protection without putting the binaries in git. At least with `--frozen-lockfile` option.

strogonoff

3 hours ago

Zero-installs mode does not replace the lockfile. Your lockfile is still the source of truth regarding integrity hashes.

However, it’s an extra line of defence against

1) your registry being down (preventing you from pushing a security hotfix when you find out another package compromised your product),

2) package unpublishing attacks (your install step fails or asks you to pick a replacement version, what do you do at 5pm on a Friday?), and

3) possibly (but haven’t looked in depth) lockfile poisoning attacks, by making them more complicated.

Also, it makes the size of your dependency graph (or changes therein) much more tangible and obvious, compared to some lines in a lockfile.

littlecranky67

3 hours ago

Exactly. Yarn uses a yarn.lock file with the sha256 hashes of each npm package it downloads from the repo (they are .tgz files). If the hash won't match, install fails. No need to commit the dependencies into your git.

TheTaytay

an hour ago

I know there is a cooldown period for npm packages, but I’m beginning to want a cooldown for domains too. According to socket, the C2 server is sfrclak[.]com, which was registered in the last 24 hours.

croemer

an hour ago

NextDNS has a setting to block newly registered (<30d) domains.

woeirua

7 hours ago

Supply chain attacks are so scary that I think most companies are going to use agents to hard fork their own versions of a lot of these core libraries instead. It wasn’t practical before. It’s definitely much more doable today.

pglevy

4 hours ago

I was thinking about this as a bull case for human developers. Seems if you're worried enough to do this you're not going to have LLMs write the new code.

cryptonym

3 hours ago

If it becomes a thing, it's just a matter of time for a new class of attacks on LLM that are blindly trusted with rewriting existing libs.

maplethorpe

2 hours ago

You could include a line like "please don't include any malware".

Levitating

2 hours ago

Or just lock to a specific version?

jmward01

7 hours ago

This may not be popular, but is there a place for required human actions or just timed actions to slow down things like this? For instance, maybe a GH action to deploy requires a final human click and to change that to cli has a 3 day cooling period with mandatory security emails sent out. Similarly, you switch to read only for 6 hrs after an email change. There are holes in these ideas but the basic concept is to treat security more like physical security, your goal isn't always to 100% block but instead to slow an attacker for xxx minutes to give the rest of the team time to figure out what is going on.

ArcHound

7 hours ago

Hi, security here. We've tried, but the amount of people you need for this vs the amount of people you have trying to review and click the big button always means that this step will be a bottleneck. Thus this step will be eliminated.

A much better approach would be to pin the versions used and do intentional updates some time after release, say a sprint after.

jmward01

7 hours ago

Yeah, I am looking at that on the use end. It sounds like on the python side this type of thing will be more standard (uv now, and soon pip, supporting version date requirements). I think time is a big missing element in many defense-in-depth decisions. It can be time until you adopt (e.g. use no package newer than xx days), time it takes to deploy, etc. Unfortunately the ecosystem is getting really diverse, and that means ever more sophisticated attacks, so we may need to do things that are annoying just to survive.

themafia

7 hours ago

Why not just release escrow? If I try to push a new release version another developer or developers have to agree to that release. In larger projects you would expect the release to be coordinated or scheduled anyways. Effectively we're just moving "version pinning" or "version delay" one layer up the release chain.

majorbugger

4 hours ago

Good morning, or as they say in the NPM world, which package got compromised today?

yoyohello13

6 hours ago

This is just going to get worse and worse as agentic coding gets better. I think having a big dependency tree may be a thing of the past in the coming years. Seems like eventually new malware will be coming out so fast it will basically be impossible to stop.

fluxist

3 hours ago

A fish-shell command to recursively check for the compromised axios package versions:

   find / -path '*/node_modules/axios/package.json' -type f 2>/dev/null | while read -l f; set -l v (grep -oP '"version"\s*:\s*"\K(1\.14\.1|0\.30\.4)' $f 2>/dev/null); if test -n "$v"; printf '\a\n\033[1;31m FOUND v%s\033[0m  \033[1;33m%s\033[0m\n' $v (string replace '/package.json' '' -- $f); else; printf '\r\033[2m scanning: %s\033[K\033[0m' (string sub -l 70 -- $f); end; end; printf '\r\033[K\n\033[1;32m scan complete\033[0m\n'

hk__2

3 hours ago

Or more simply:

    find / -type f -path '*/node_modules/axios/package.json' \
        -exec grep -Pl '"version"\s*:\s*"(1\.14\.1|0\.30\.4)"' {} + 2>/dev/null

Let’s not encourage people to respond to security incidents by… copy/pasting random commands they don’t understand.

acheong08

7 hours ago

There are so many scanners these days these things get caught pretty quick. I think we need either npm or someone else to have a registry that only lets through packages that pass these scanners. Can even do the virustotal thing of aggregating reports by multiple scanners. NPM publishes attestation for trusted build environments. Google has oss-rebuild.

All it takes is an `npm config set` to switch registries anyways. The hard part is having a central party that is able to convince all the various security companies to collaborate, rather than ending up with dozens of different registries, one from each company.

Rather than just a hard-coded delay, I think having policies on what checks must pass first makes sense with overrides for when CVEs show up.

(WIP)

pamcake

an hour ago

Sounds great until trivy images get compromised, like last week.

drum55

6 hours ago

The ones you hear about are caught quickly, I’m more worried about the non obvious ones. So far none of these have been as simple as changing a true to a false and bypassing all auth for all products or something, and would that be caught by an automated scanner?

zar1048576

4 hours ago

In case it helps, we open-sourced a tool to audit dependencies for this kind of supply-chain issue. The motivation was that there is a real gap between classic “known vulnerability” scanning and packages whose behavior has simply turned suspicious or malicious. We also use AI to analyze code and dependency changes for more novel or generic malicious behavior that traditional scanners often miss.

Project: https://point-wild.github.io/who-touched-my-packages/

riteshkew1001

5 hours ago

Ran npm ci --ignore-scripts in our CI for months but never thought about local dev. Turns out that's the gap: your CI is safe, but your laptop runs postinstall on every npm install.

The anti-forensics here are much more complicated than I had imagined. Sharing after getting my hands burned.

After the RAT deploys, setup.js deletes itself and swaps package.json with a clean stub. Your node_modules looks fine. Only way to know is checking for artifacts: /Library/Caches/com.apple.act.mond on mac, %PROGRAMDATA%\wt.exe on windows, /tmp/ld.py on linux. Or grep network logs for sfrclak.com.
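Checking for those artifacts can be scripted; a minimal sketch assuming a POSIX shell, with the paths taken verbatim from the analysis (the `check_artifacts` function name is made up):

```shell
# Report any of the known dropper artifacts present on this machine.
# Paths come from the published analysis and may change in future variants.
check_artifacts() {
  found=1
  for f in \
    "/Library/Caches/com.apple.act.mond" \
    "${PROGRAMDATA:-/nonexistent}/wt.exe" \
    "/tmp/ld.py"
  do
    if [ -e "$f" ]; then
      echo "SUSPICIOUS: $f exists"
      found=0
    fi
  done
  return $found
}
```

On Windows the `%PROGRAMDATA%` path would need a native shell; the fallback here just keeps the loop harmless elsewhere. Network logs can still be grepped separately for sfrclak.com.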

Somehow nobody is worried about how agentic coding tools run npm install autonomously. No human in the loop to notice a weird new transitive dep. That attack surface is just getting worse day by day.

mcintyre1994

4 hours ago

The frustrating thing here is that axios versions display on npmjs with verified provenance. But they don’t use trusted publishing: https://github.com/axios/axios/issues/7055 - meaning the publish token can be stolen.

I wrongly thought that the verified provenance UI showed a package has a trusted publishing pipeline, but seems it’s orthogonal.

NPM really needs to move away from these secrets that can be stolen.

wolvesechoes

4 hours ago

I am glad I don't need to touch JS or web dev at all.

Now I tend to use Python, Rust, and Julia. With Python I am constantly using the same few packages, like numpy and matplotlib. With Rust and Julia, I try as much as possible not to use any packages at all, because it always scares me when something that should be pretty simple downloads half of the Internet to my PC.

Julia is even worse than Rust in that regard - for even rudimentary stuff like static arrays or properly namespaced enums people download 3rd party packages.

someguyornotidk

2 hours ago

Isn't Rust just as susceptible to this issue? For example, how do you deal with Rust's lack of support for HTTP in the standard library? Importing hyper pulls in a couple dozen transitive libraries which exposes you to the exact same kind of threats that compromised axios.

Given how HTTP is now what TCP was in the 90s, with almost all modern networked applications needing to speak it one way or another, most Rust projects come with an inherent security risk.

These days, I score the usability of programming languages by how complete their standard library is. By that measure, Rust and Javascript get an automatic F.

wolvesechoes

11 minutes ago

It is, which is why I stated that I avoid any dependencies while writing Rust unless they are self-contained. And as I said, I am glad I don't do web, so I have no need for HTTP implementations.

hu3

3 hours ago

It's mind boggling when a simple Rust app pulls in Serde and with it half a black hole worth of packages to serialize some mundane JSON.

_pdp_

2 hours ago

I am not saying this is the reason for this compromise, but the sudden explosion of coding assistants like Claude Code, and tools like openclaw, is teaching an entire crop of developers (and users) that it is OK to keep sensitive credentials in .env files.

ptx

34 minutes ago

Where would you suggest putting the sensitive credentials?

Surac

6 hours ago

All these supply chain attacks make me nervous about the apps I use. It would be valuable to know whether an app uses such dependencies, but on the other hand, programmers would cut their own sales if they gave you this info.

jruohonen

3 hours ago

So the root cause was, again, a developer's opsec. I haven't seen many new initiatives for improving that side (beyond 2FA, and even that seems unenforced in these repositories, I reckon).

Ciantic

2 hours ago

NPM should learn from Linux distribution package managers.

Have a branch called testing, where packages stay for a few weeks before moving to stable. That is how many Linux distributions handle packages. It would have prevented many of these incidents.

Advising every npm/pnpm user to change their settings and set their own cooldown periods is not a real solution.

Levitating

2 hours ago

Not all distributions work with a staging repository, and it's not really intended for this purpose either.

Besides there's always a way to immediately push a new version to stable repositories. You have to in order to deal with regressions and security fixes.

Ciantic

2 hours ago

I know not all, but Debian/Ubuntu/Fedora do, and while the intended purpose of multi-stage releases is stability rather than security, it helps with security too, because third parties can inspect and scan dependencies while they are not yet in stable.

Most of the supply chain vulnerabilities that ended up in npm would have been mitigated by mandatory testing/stable branches. Of course there needs to be some way to skip testing, but that should be rare, cumbersome, and audited, as it is in Linux distributions too.

ivanjermakov

2 hours ago

NPM is one big AUR, where anyone can submit arbitrary unverified code. The difference is that AUR is intentionally harder to use to prevent catastrophic one-line installs.

Levitating

2 hours ago

Is "AUR" now just what we call any unaudited software repository?

Just to note, if we're talking about Linux distributions: there's also COPR in Fedora and OBS for openSUSE (and a bunch of other stuff; OBS is awesome), and Ubuntu has PPAs. And I am sure there are many more similar solutions.

bluepeter

7 hours ago

Min release age sucks, but we’ve been here before. Email attachments used to just run wild too, then everyone added quarantine delays and file blocking and other frictions... and it eventually kinda/sorta worked. This does feel worse, though, with fewer chokepoints and execution as a natural part of the expectation.

Edit: bottom line is installs are gonna get SOOO much more complicated. You can already see the solution surface... Cooling periods, maintainer profiling, sandbox detonation, lockfile diffing, weird publish path checks. All adds up to one giant PITA for fast easy dev.

mayama

7 hours ago

Min release age might just postpone the vulnerability by a few days in non-trivial cases like this. The more I think about it, the more the Odin lang approach of no package manager makes sense. But that approach won't work for JavaScript, which needs an npm package even for trivial things. Even a vendoring approach like Go's won't work for JavaScript, given the amount of churn and dependencies.

tisc

3 hours ago

It does not _need_ it, that’s the thing. It has become a custom to import a dependency for a lot of things. Especially for JavaScript.

OlivOnTech

5 hours ago

The attacker went through the hassle of compromising a very widely used package, but used a non-standard port (8000) on their C2... If you plan to do something like that, at least use 443; many corporate networks do not filter that one ;)

cleansy

3 hours ago

To get an initial smoke test, why not run a diff between version upgrades and potentially let an LLM summarize the changes? It's baffling that a lot of developers just blindly trust code repos to uphold security standards. Last time I installed some npm package (in a container), it pulled in 521 dependencies and my heart rate jumped a bit.
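A rough sketch of that diff idea, assuming a POSIX shell and that the two tarballs were fetched beforehand with `npm pack` (the function name and the version numbers in the usage line are illustrative):

```shell
# Unpack two published tarballs of a package and diff them before upgrading.
# A brand-new dependency or postinstall script in the diff is a red flag.
diff_pkg_versions() {
  workdir=$(mktemp -d)
  mkdir -p "$workdir/old" "$workdir/new"
  tar -xzf "$1" -C "$workdir/old"
  tar -xzf "$2" -C "$workdir/new"
  diff -ru "$workdir/old" "$workdir/new"
}
# Usage: npm pack axios@1.13.1 axios@1.13.2
#        diff_pkg_versions axios-1.13.1.tgz axios-1.13.2.tgz | less
```

The raw diff (or just the package.json hunk) is also a reasonable thing to hand to an LLM for a summary, as long as the human still reads the red-flag parts.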

dj_mc_merlin

3 hours ago

Is this the first time you have ever thought about the idea of supply chain attacks? This is the first thought 90% of people have and it doesn't work. Too much work to manually verify diffs and LLMs aren't good enough at this yet.

pjmlp

6 hours ago

The amount of people still using this instead of fetch. Nonetheless, if it wasn't axios, it would be something else.

This is why corporations doing it right don't allow installing the Internet into dev machines.

Yet everyone gets to throw in their joke about PC viruses, while having learned nothing from them.

tgv

4 hours ago

Axios has a long history, and is included in a lot of code, also in indirect dependencies. Just check its npm page: it has 174025 dependents as of this moment, including a lot of new packages (I see openclaw and mcp related packages in the list).

And with LLMs generating more and more code, the risk of copying old setups increases.

shevy-java

5 hours ago

> The amount of people still using this instead of fetch.

People are lazy. And sometimes they find old stuff via a google search and use that.

aizk

5 hours ago

In light of these nonstop supply chain attacks: Tonight I created /supply-chain-audit -- A simple claude code skill that fetches info on the latest major package vulnerability, then scans your entire ~/ and gives you a report on all your projects.

https://github.com/IsaacGemal/claude-skills

It's a bit janky right now but I'd be interested to hear what people think about it.

mirekrusin

5 hours ago

Skills are great attack vector as well.

Hackbraten

4 hours ago

I am now migrating all my unencrypted secrets on my machines to encrypted ones. If a tool supports scripted credential providers (e.g. aws-cli or Ansible), I use that feature. Otherwise, I wrap the executable with a script that runs gpg --decrypt and injects an environment variable.

That way, I can at least limit the blast radius when (not if) I catch an infostealer.
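The wrapper pattern described above might look like this minimal sketch; the file path, environment variable, and wrapped command are placeholders, and it assumes `gpg` can decrypt the file non-interactively:

```shell
# Decrypt a secret with gpg at invocation time and hand it to a single
# child process via its environment, so it never sits in a plaintext file.
run_with_secret() {
  secret_file="$1"; shift
  AWS_SECRET_ACCESS_KEY="$(gpg --quiet --batch --decrypt "$secret_file")" "$@"
}
# Usage: run_with_secret ~/.secrets/aws.gpg aws s3 ls
```

An infostealer scraping the filesystem then only finds ciphertext; it would have to hook the running process or the gpg agent to get the value.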

hyperadvanced

5 hours ago

Just sanity checking - if I only ever install axios in a container that has no secrets mounted in to its env, is there any real way I can get pwned by this kind of thing?

monarchwadia

3 hours ago

Yes. Docker breakout is a class of vulnerabilities into itself.

lepuski

4 hours ago

I believe compartmentalized operating systems like Qubes are the future for defending against these kinds of attacks.

Storing your sensitive data on a single bare-metal OS that constantly downloads and runs packages from unknown maintainers is like handing your house key out to a million people and hoping none of them misuse it.

mtud

9 hours ago

Supply chain woes continue

rvz

an hour ago

Called it yesterday.

kdavis01

8 hours ago

One more reason to use Fetch

marjipan200

8 hours ago

until Node is compromised

avaer

8 hours ago

Harder to do. Also node is not updated at the rate of npm deps.

koolba

8 hours ago

> Both versions were published using the compromised npm credentials of a lead axios maintainer, bypassing the project's normal GitHub Actions CI/CD pipeline.

Doesn’t npm mandate 2FA as of some time last year? How was that bypassed?

webprofusion

3 hours ago

My first thought was does VS Code Insiders use it (or anything it relies on, or do any extensions etc). Made me think.

neya

5 hours ago

I wonder if this has any connection with the recent string of attacks including the FBI director getting hacked. The attack surface is large, executed extremely cleanly - almost as if done by a high profile state sponsored actor, just like in Hollywood movies.

dinakernel

3 hours ago

Defaulting to `latest` should be caught by every static code scanner. How many times has this issue been raised?
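A tiny sketch of the check being suggested, flagging `latest` or `*` ranges in a package.json before they reach CI (assumes a POSIX shell and grep; the function name is made up):

```shell
# Flag unpinned "latest" or "*" version ranges in a package.json.
check_loose_ranges() {
  if grep -qE '"[^"]+"[[:space:]]*:[[:space:]]*"(latest|\*)"' "$1"; then
    echo "Loose version ranges found in $1"
  else
    echo "No latest/* ranges in $1"
  fi
}
```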

sgt

4 hours ago

Is this an issue for those only using axios on the frontend side like in a VueJS app?

dfreire

2 hours ago

Absolutely. If you ever ran an npm install on a project using one of the affected axios versions, your entire system may be compromised.

> The malicious versions inject a new dependency, plain-crypto-js@4.2.1, which is never imported anywhere in the axios source code. Its sole purpose is to execute a postinstall script that acts as a cross platform remote access trojan (RAT) dropper, targeting macOS, Windows, and Linux. The dropper contacts a live command and control server and delivers platform specific second stage payloads. After execution, the malware deletes itself and replaces its own package.json with a clean version to evade forensic detection.

I strongly recommend you read the entire article.
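If you want to check existing projects, here's a rough sketch that greps common lockfiles for the injected package named in the write-up (function name and directory layout are assumptions; a clean lockfile doesn't prove a clean machine, since the malware rewrites package.json after running):

```shell
# Look for the injected plain-crypto-js dependency in a project's lockfiles.
scan_lockfiles() {
  hit=1
  for lock in "$1"/package-lock.json "$1"/pnpm-lock.yaml "$1"/yarn.lock; do
    if [ -f "$lock" ] && grep -q 'plain-crypto-js' "$lock"; then
      echo "Possible compromise: plain-crypto-js referenced in $lock"
      hit=0
    fi
  done
  return $hit
}
# Usage: scan_lockfiles ~/projects/my-app
```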

kush3434

an hour ago

first day at hacker news and this is the first post i saw

silverwind

3 hours ago

npm really needs to provide an option to make individual packages publishable only via trusted publishing.

maelito

3 hours ago

Glad to be using native fetch.

leventhan

6 hours ago

PSA: Make sure to set a minimum release age and pin versions where possible.

0x500x79

7 hours ago

Pin your dependencies folks! Audit and don't upgrade to every brand new version.
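For what pinning can look like in practice: `npm config set save-exact true` makes npm record exact versions instead of `^` ranges, and npm's `overrides` field can pin a transitive dependency too (the versions below are purely illustrative):

```json
{
  "dependencies": {
    "axios": "1.13.2"
  },
  "overrides": {
    "axios": "1.13.2"
  }
}
```

The lockfile already pins transitive versions for `npm ci`, but an override also stops a plain `npm install` or a lockfile regeneration from silently floating to a newer release.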

onion2k

7 hours ago

But also review your dependencies regularly to update them when necessary, because as bad as compromised packages may be, things do have vulnerabilities occasionally, and upgrading things that are a long way out of date can be quite hard.

croemer

3 hours ago

I lost respect for Axios when they made a breaking change in a patch release. Digging into the root cause, I found the maintainer had approved an outside PR with an obvious AI slop PR description: https://github.com/axios/axios/issues/7059

Looks like the maintainer wasn't just careless when reviewing PRs.

antiloper

3 hours ago

That maintainer (also the one whose creds got stolen) also has an obvious chatgpt slop profile picture on github.

rtpg

7 hours ago

Please can we just have a 2FA step on publishing? Do we really need a release to be entirely and fully automated?

It won't stop all attacks but definitely would stop some of these

Kinrany

5 hours ago

Running almost anything via npx will trigger this

rvz

an hour ago

npx is just a bad command to use.

ksk23

5 hours ago

One paragraph is written twice??

charcircuit

4 hours ago

Hopefully desktop Linux users will start to understand that malware actually does exist for Linux and that their operating system is doing nothing to protect them from getting RATed.

hu3

3 hours ago

What do you mean?

Linux has the most powerful native process isolation arsenal at the user disposal.

And some distros use even more isolation mechanisms on top of the ones provided by the kernel like snap and flatpak.

And then you can recreate the entire thing like a spellbook with nix.

Docker works natively in it. Do I need to say more?

Linux is a decade ahead here with regards for security options available to the user.

charcircuit

3 hours ago

Yet npm isn't using them, which is what allowed this RAT to work. Desktop Linux is not secure by default; it requires every app to manually opt in to being secure. That opt-in approach puts it decades behind on security, not ahead.

hu3

2 hours ago

Linux is not making anything less secure than other OSs.

In fact, it even gives the user more security tools.

So I fail to see why you're singling out Linux here.

charcircuit

an hour ago

Take for example iOS and Android. All apps are sandboxed by default. You can't make a program that just steals all of your credentials like you can on desktop Linux. Having security tools means nothing if they aren't being used.

aa-jv

4 hours ago

I have a few projects which rely on npm (and React), and every few months I have to revisit them to do an update and make sure they still build. I am basically done with npm and the entire ecosystem at this point.

Sure, it's convenient to have so much code available for basic functionality, but the technical debt of maintaining these projects is just too damn high.

At this point, if I am forced to use JavaScript or Node for a project, I reconsider my involvement in that project. Its ecosystem is just so bonkers that I can't justify the effort much longer.

There has to be some kind of "code-review-as-a-service" that can be turned on here to catch these things. It's just so unproductive, every single time.

est

2 hours ago

Compiled JS solves a problem that no longer exists. IE6 is dead, RIP.

Now we have a 20MB main.min.js problem

8cvor6j844qw_d6

8 hours ago

Should increase the delay to dependency updates.

tonymet

8 hours ago

Slow Russian roulette is still a losing strategy

btown

8 hours ago

It’s only a losing strategy if you assume everyone universally adopts the slow strategy, and no research teams spot it in the interim. For things with large splash radius, that’s unrealistic, so defenders have an information advantage.

Makes actual security patches tougher to roll out though - you need to be vigilant to bypass the slowdown when you’re actually fixing a critical flaw. But nobody said this would be easy!

esseph

7 hours ago

> Makes actual security patches tougher to roll out though

Yeah. 7 days in 2026 is a LONG TIME for security patches, especially for anything public facing.

Stuck between a rock (dependency compromise) and a hard place (legitimate security vulnerabilities).

Doesn't seem like a viable long-term solution.

neko_ranger

8 hours ago

But wouldn't it work in this case? Sure, if a package was compromised for months or years it wouldn't save you.

But tell dependabot to delay a week, and you'd sleep easy through this nonsense.

shevy-java

5 hours ago

NPM is getting worse than Russian roulette. Perhaps we have to rename Russian roulette to node roulette: noulette.

neya

5 hours ago

The NPM ecosystem is a joke. I don't even want anything to do with it, because my stack is fully Elixir. But, just because of this one dependency that is used in some interfaces within my codebase, I need to go back to all my apps and fix it. Sigh.

JavaScript, its entire ecosystem is just a pack of cards, I swear. What a fucking joke.

tonymet

8 hours ago

Has anyone tested general-purpose malware detection on supply chains? Like clamscan. I tried to test the LiteLLM hack, but the affected packages had been pulled. Windows Defender AV has an inference-based detector that may work when signatures have not yet been published.

jesse_dot_id

7 hours ago

I second this question. I usually scan our containers with snyk and guarddog, and have wondered about guarddog in particular because it adds so much build time.

esseph

7 hours ago

> Has anyone tested general purpose malware detection on supply chains ? Like clamscan

You could use Trivy! /s

0x1ceb00da

7 hours ago

Coded has zero npm dependencies. Neat!

slopinthebag

8 hours ago

It's reasons like this why I refuse to download Node or use anything NPM. Thankfully other languages are better anyways.

hrmtst93837

6 hours ago

Skipping Node sounds nice. But PyPI and RubyGems have had the same mess; npm gets more headlines because it is huge and churns fast, so you see more fresh landmines and more people stepping on them. Unless you plan to audit every dep and pin versions yourself, you're mostly trading one supply chain mess for another, with a tiny bit of luck and a different logo.

slopinthebag

6 hours ago

Cargo is a great package manager and hasn't suffered from the same problems. I'll take it.

cozzyd

5 hours ago

Yet.

Does cargo contain any mitigations to prevent a similar attack?

Now hopefully no distro signing keys have been compromised in the latest attacks...

waterTanuki

7 hours ago

pianoben

6 hours ago

Log4Shell was hardly a supply-chain attack - just a latent bug in a widely-used library. That can happen anywhere.

Maven to this day represents my ideal of package distribution. Immutable versions save so much trouble and I really don't understand why, in the age of left-pad, other people looked at that and said, "nah, I'm good with this."

imInGoodCompany

5 hours ago

Completely agree. NPM has the only registry where massive supply chain attacks happen several times a year. Mainly the fault lies with NPM itself, but much of it is just a terrible opsec culture in the community.

Most package.jsons I see have semver operators on every dependency, so patches spread incredibly quickly. Package namespacing is not enforced, so there is no way of knowing who the maintainer is without looking it up on the registry first; for this reason many of the most popular packages are basically side projects maintained by a single developer*. Post-install scripts are enabled by default unless you use pnpm or bun.

When you combine all these factors, you get the absolute disaster of an ecosystem that NPM is.

*Not really the case for Axios as they are at least somewhat organized and financed via sponsors.

waterTanuki

5 hours ago

The semantics are irrelevant. The effect is what's important: Hijacking widely used software to exploit systems. The OC is somehow under the illusion that avoiding JS altogether is a silver bullet for avoiding this.

Forest > Trees

skydhash

7 hours ago

Other languages have package managers too (Perl), and there are package managers in existence that are not so vulnerable to this issue. IMO, it stems from one place: transitive dependencies and the general opaqueness of the problem.

In package managers like pacman, apt, apk, etc., it's easier to catch such issues. They do have postinstall scripts, but those are part of the submission to the repo, not part of the project. Whatever comes from the project is hashed, and that hash is also visible as part of the submission. That makes it difficult to sneak something in. You don't push a change; they pull yours.

slopinthebag

6 hours ago

Come on dude. The issue is the frequency and magnitude of these attacks. Log4Shell was also not a supply chain attack.

I looked at the Rust one for example, which is literally just a malicious crate someone uploaded with a similar name as a popular one:

> The crate had less than 500 downloads since its first release on 2022-03-25, and no crates on the crates.io registry depended on it.

Compared to Axios, which gets 83 million downloads and was directly compromised.

What an extremely disingenuous argument lol

waterTanuki

3 hours ago

What exactly do you think the argument is?

The issues have everything to do with npm as a platform and nothing to do with JS as a language. You can use JS without npm. Saying you'll escape supply chain attacks by not using JS is like saying you'll be saved from a car crash with a parachute.


stevenmh

7 hours ago

This is why Node.js is completely unsuitable as a backend. Until recently, there wasn't even a standard Promise-based HTTP client. Why should we need to download a library just to make a simple HTTP request? It's because Node.js's standard library is too limited, leading to explosive growth in third-party libraries. As a result, it's vulnerable to security attacks like this one, and maintaining it in an enterprise environment becomes a major challenge. Let's use .NET or Go. Why use JavaScript outside the browser when there are excellent backend environments out there?