For anyone confused, this is (very good imo) fiction about supply-chain incidents. It had me very worried during a brief scan that it was real though, which made me read it more attentively :)
As the victim of the one from last year, it wasn't particularly fun to read.
The implication that I don't know what I'm looking at, or that I don't know what security is (despite having a clean track record for about 15 years now) was a bit aggravating.
In fact, even months later, the lasting effect has been panicking over anything that is remotely suspicious. The most recent example was just a few days ago. I had just gotten on the plane to go on vacation when someone Liked the original "I've been pwned" post on Bluesky. I misread the notification as a new message to me saying "You've been pwned" and started to panic. I'd have had no way to address it, and it would have ruined the small chance per year I get to have a break.
The attack last year wasn't me misunderstanding security. It was the sum of many, many small things (my history with and perception of npm especially w.r.t. their security posture and poor outreach over the years, being stressed out overall, and being in a rush at that particular moment, and a few other personal things) coming together in a perfect storm that resulted in the attack.
Well, the one I linked to is real. BIP-42 fixed Bitcoin's monetary policy by patching a client bug that would otherwise have caused the initial subsidy code to reset every ~250 years or so. It's just the official writeup documenting it that is silly.
Searching for CVE-2024-YIKES also provides a gallery of AI slop blogs that AI-rewrite the content of this post while being absolutely stone cold serious about it.
Googling is no longer a reliable way to figure out if something is real or not (since, in this case, it just regurgitates the original article, including a couple slop blogs about it)
Apparently it's not important to pay attention to CVEs, so why not waste readers' time by creating "fictional" CVEs without a disclaimer in the first line? As if it weren't already difficult enough to sift through the information and noise on the internet...
especially if it appears on the front page of hackernews
The dependency cycle is actually the functional mechanism of the code, because they subvert the dedup mechanism in the package manager using a random generation trick. Each recursive copy of the dependencies takes up a little bit more space, which ultimately gets converted to the spaces inserted into the original datum; the caller is expected to adjust the cache settings to signal the desired amount. That's also why if you're using left-justify to process strings, Yarn is recommended for best compatibility. /joke
> Day 1, 14:47 UTC — Among the exfiltrated credentials: the maintainer of vulpine-lz4, a Rust library for “blazingly fast Firefox-themed LZ4 decompression.” The library’s logo is a cartoon fox with sunglasses. It has 12 stars on GitHub but is a transitive dependency of cargo itself.
I got a bit curious, and here is an incomplete list of crates that are part of the cargo build and already have a build.rs, so compromising one wouldn't stand out too much:
flate2
tar
curl-sys
libgit2-sys
openssl-sys
libsqlite3-sys
blake3
libz-sys
zstd-sys
cc
As a nice bonus - if you get rights for xz2 you can compromise rustup.
-sys crates are just bindings and doing something else in them is highly suspect. The rest I recognize as being owned by a Rust maintainer like alexcrichton or rustlang itself.
With crates.io using GH as its IdP, I think there would be much farther reaching consequences to account pwning in that scenario. I agree, though, that the security model for crates.io is only as strong as the weakest link there, and would pray someone like Alex is using physical tokens or the like for his MFA and can't be conned by a well-crafted email.
sys crates are also mostly generated and lack a lot of eyeballs. Sneaking something into the build.rs of a sys crate would not be difficult and would land in the builds of everything downstream of it.
I had pondered the same thing about other package ecosystems in the past. With the benefit of hindsight, we can comfortably say that the absence of known (!) attacks doesn't really say anything about how relatively difficult an attack would be. Are -sys crates, or build-script attacks, particularly potent? Who knows. When I did a cursory search, the only attempts I saw were at runtime rather than build time[1]. Which raises a good point: pwning a developer machine or CI box with a build script may be quite valuable, but if a runtime exploit might get you both that and prod, is the build-time exploit that much more valuable? Guess it depends! (Of course, I personally think having at least optional build-time sandboxing is better than hoping it won't be valuable to attack.)
Of course, crates.io has surely had some malicious packages. (I'd assume it isn't all that unlikely there are some undiscovered right now; it's definitely large enough for something like that to slip under the radar, even if it is relatively small compared to, say, NPM.) But I think it really hasn't had its XZ-backdoor moment, its left-pad, where you get to see how well it does or doesn't handle a serious challenge. Since I have never actually published on crates.io, I'm not really sure what its security posture is, but if it's more similar to other language repositories than to Linux repos, I don't know why it would be hard to believe a high-level compromise is possible and could slip in anywhere, be it a build script or otherwise. Of course, "would not be difficult" is all relative. Many of these attacks are not really all that simple, but a lot of them aren't exactly groundbreaking either. The XZ backdoor was well executed and took quite a lot of time, sure, but there wasn't all that much about it that was novel. (Except maybe the slyness with which the payload was hidden in test files. That was pretty cool.)
It's easy to be cynical because, yes, both the problems and solutions seem dead obvious in hindsight. But for a long time (and maybe even still), a hacker creed was "move fast and break things."
It's great that there's so much momentum in fixing the glaring problems with supply chain systems like npm, but I'm concerned that we're entering a new era of security-related problems caused in large part by agentic development.
I'm not just talking about Mythos/Glasswing surfacing vulnerabilities in pretty much everything it touches; I think the way we're developing software, pulling in dependencies, and potentially losing human thought modeling of complex systems is going to lead to a lot of hacked together software and infrastructure that humans won't fully understand.
I hope in a few years we don't look back at today and wonder how we could have been so naive -- how we failed to actually plan for the long-tail of AI development in a way that doesn't solve problems by attempting to just use AI to rebuild complex systems.
He certainly popularized it (maybe coined it), but I've seen a lot of organizations and developers repeat that mantra.
Even without the specific words, look to product teams debating tradeoffs of going to market vs. waiting for better security controls. They're pushing for faster product release every time, at pretty much every org.
Hackers were moving fast and breaking things first. Faster than any corporation in fact. We didn't notice because their computers weren't powering anything useful. How do you think projects like GNU happened?
We don't need hindsight for the problems of supply chain security to be obvious. Security people were writing and doing talks about this stuff over 10 years ago, just (like most things in security) things start getting addressed once the pressure of incidents gets high enough :)
I actually wonder what would happen if somebody used a fake identity to set up an account with a warehousing/shipment-fulfillment company that stocks things and ships them, then set up the appropriate EDI pipeline to send it shipping orders... What would the results be if a decently budgeted adversary made something attractive-looking that shipped malicious USB flash drives to anyone who requested one?
I know we're no longer in the era when a Windows PC would happily run any autorun.inf and .EXE file found on an inserted flash drive or DVD. But even so. What if it didn't even carry a malicious data payload, but somebody was shipping capacitor-based USB killers with a USB-A interface?
What if it did have data on it and came with a slick color brochure walking people through how to run the binary, or in a linux or developer specific audience, how to 'sudo' the ELF binary that lives on its filesystem?
>"The legitimate maintainer has won €2.3 million in the EuroMillions and is researching goat farming in Portugal..."
>"Root Cause: A dog named Kubernetes ate a YubiKey"
Ah, yes, irresponsible to get taken in by one of the well-known classic exploits. The ol' "distract someone with a lottery windfall and make a dongle irresistibly tasty to another person's pet". When will people learn.
> As a Fish aficionado (Afishionado?) - I feel both attacked and seen by this:
As an alternative, it could apt-get or dnf install 'figlet' and then overwrite the contents of /etc/motd with 'all your base are belong to us' in extremely large ASCII art font.
Root Cause: "A dog named Kubernetes ate a YubiKey."
Technically... that's not even a joke... that really is what kicked off this entire chain of events lol.
This post reads like an actual movie lol. Someone seriously needs to make one based on this.
It has everything:
the missing key that starts the chaos, the scam nobody sees coming, one tiny mistake turning into a full-on domino disaster, sleep-deprived people making very confident bad decisions, the guy who disappeared to a farm living his best life while holding a critical piece of the puzzle... and somehow, in the final act, a completely unrelated villain accidentally saves everyone.
the Karen one gave me a good laugh :D ;) reminds me of a `make`-based build script I once got when reviewing a classmate's project - it attempted to `rm -rf` my home folder if the hostname contains `bpavuk`. that was in seventh grade!!
Supply chain incidents suck and we need to do better. Personally, for Rust, I'm a proponent of the foundation supporting a few core crates that go through the same audit procedure as the main Rust language, and funding those projects to limit supply-chain vulns. I don't think the right answer is to remove systems like crates.io or npm; crates and npm are a boon for many developers.
Crates.io has also been making efforts to integrate RustSec, but in addition to the above I would like the community to shy away from many small dependencies toward a few larger ones, just as tokio has.
Honest question. Commons, Guava, Spring, and more seem to take this approach successfully (as in, the drawbacks are outweighed by the benefits in convenience, quality, and security) in Java. Are benefits in binary size really worth that complexity?
And before someone says “just have a better standard library”, think about why that is considered a solution here. Languages with a large and capable standard library remain more secure than the supply-chain fiascos on NPM because they have a) very large communities reviewing and participating in changes and b) have extremely regulated and careful release processes. Those things aren’t likely to be possible in most small community libraries.
> Honest question. Commons, Guava, Spring, and more seem to take this approach successfully (as in, the drawbacks are outweighed by the benefits in convenience, quality, and security) in Java.
Commons and Spring have spent significant effort to break themselves up in the past, and would probably come as aggregations of much smaller pieces if they could be started today with the benefit of hindsight.
Why? It's the essence of "Simple Made Easy": you don't have other code to complect with. You have a smaller interface, focused on a singular goal. When a library has to work as a standalone project, it can't be accidentally entangled with other components of a larger project.
Smaller implementations are also easier to review against malware, because there are fewer places to hide. You don't have to guess how a component may interact with all the other parts of a large framework, because there aren't any.
There are also practical Rust-specific concerns. Fine-grained code reuse helps with compile times (a smaller component can be reused in more projects, and more crates increase build parallelism).
It makes testing easier. Rust doesn't have enough dynamic monkey-patching for mocking of objects, so testing of code buried deep in a monolith is tricky. Splitting code into small libraries surfaces interfaces that are easily testable in isolation.
It helps with semver. A semver-major upgrade of one large library that everyone uses requires everyone to upgrade the whole thing at the same time, which can stall like the Python 2-to-3 transition. Splitting a monolith into smaller components allows versioning them separately, so the stable parts stay stable, and the churning parts affect smaller subsets of users.
That dead code might have "dead dependencies" - transitive dependencies of its own, that it pulls in even though they are not actually used in the parts of the crate you care about.
In the worst case, you can also have "undead code" - event handlers, hooks, background workers etc that the framework automatically registers and runs and that will do something at runtime, with all the credentials and data access of your application, but that have nothing to do with what you wanted to do. (Looking at you, Spring...)
All those things greatly increase the attack surface, I think even more than pulling in a single-purpose library.
The same issue occurs whether you bundle all the code together or not, it's just that if you bundle it together you don't see what's happening and you can't use only part of it easily.
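One standard mitigation for such dead dependencies in the Rust ecosystem is Cargo's optional dependencies behind feature flags. A hypothetical sketch (the crate and feature names are invented):

```toml
# Cargo.toml of a hypothetical "bigcrate". The heavy network stack is opt-in,
# so users who only need parsing never pull in its transitive dependencies.
[dependencies]
serde = "1"
reqwest = { version = "0.12", optional = true }

[features]
default = []                 # lean by default
network = ["dep:reqwest"]    # users opt in explicitly
```

This only helps, of course, if maintainers actually gate the heavy parts and downstream users disable default features.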
Yeah I’d agree that multiple crates under one project is basically the same as 1 large crate. The real problem is how many people you’re trusting and it’s all coming from the same person.
Contrary to what the article here presents, Rust does not have a culture of microlibraries like NPM does. The author and their LLM are cargo-culting a criticism of Rust made by people whose only experience is with the Node ecosystem. The Rust stdlib may not be especially "wide" compared to languages like Python, but it is quite deep, with the objective of making it so that you don't feel the need to publish single-purpose libraries which only exist to fix papercuts. Dozens of new APIs get added with every Rust release, which, occurring every six weeks, amounts to hundreds per year.
What are you talking about? Every Rust project I see seems to have 5 dependencies that do some simple thing that should be in the standard library, or at least in some centrally-audited monolibrary of utilities.
Indeed, I'm all for maximizing the amount of modules in the standard library. It's pretty obvious to me that Python thrives because of, not despite of, its standard library, "dead batteries" and all.
However, don't make the mistake of thinking that Rust has a small standard library. Read any Rust release and you'll see dozens of new APIs added with every single one. I'm tempted to paste the entire list of stabilized APIs from the most recent release for emphasis, but rather than making this comment three dozen lines longer, just look for yourself: https://blog.rust-lang.org/2026/04/16/Rust-1.95.0/#stabilize...
An extra tier of standard library which can make breaking changes, perhaps. Rust's stability guarantee for std means cryptography really shouldn't go in there, since sometimes algorithms & protocols get broken (DES, MD5, SHA1, etc.) and need to be removable. Without breaking changes you get stuck with security vulnerabilities, if not from cryptography then from other poorly-designed APIs.
It's hard to have zero deps - I put many hours into one to have no required deps in the end but it was not easy, and writing declarative macros to do anything complex takes work (and a proc macro often means a minimum of two crates). Both of the crates it requires are part of the same project, however.
One of my other crates (getaddrinfo) requires windows-sys and libc which would be challenging to get rid of.
“Personally for rust I’m a proponent of the foundation supporting a few core crates that go under the same audit procedure as the main rust language and give funding to the project to limit supply chain vulns. I don’t think the right answer is to remove systems like crates or npm. Crate and npm are a boon for many developers.”
This is my preferred solution. We get the quality of a std lib without forcing everything into the std lib, and without extra maintenance cost for the team.
One man's bloat is another man's batteries-included, I guess?
My argument would be that if a more featureful standard library could get Rust closer to the superior dependency culture of Go, it'd be worth it. As-is, Rust dependency trees are just wild.
The Rust team is already stretched pretty thin, and a larger standard library is going to put more pressure on them. These libraries are already maintained and used; the Rust project should just directly fund, shepherd, and guarantee a level of quality for the packages. The foundation has started some of this with the maintainers fund. No need to force it all into the std lib. Go has experienced breakage from changes in its crypto library causing churn in the ecosystem.
Point taken about the core team being stretched thin. But I don't see how the "increase stability of some core crates" is enough to change the packaging practices/culture. Maybe I'm wrong, but you really don't get those ecosystem benefits unless the ~entire ecosystem buys into that set of packages. Which really doesn't happen without stdlib.
Also, I think that your example of Go's breaking crypto changes misses the forest for the trees--the stdlib has been incredibly stable through its history, and the vast majority of packages just never have to worry about it. I'm honestly not aware of a language out there with similar adoption, featureset, and robustness. More to the point, I'm not aware of a language out there with a more reliable stdlib that permits the ecosystem to have small dependency graphs.
A ton of the most popular crates on crates.io are already first-party crates provided by the Rust organization itself. This is often overlooked when people are wringing their hands about Rust crate graphs. Looking at the top 10 list of most-downloaded crates on the front page of crates.io, the only one not either from the Rust organization or from a Rust core maintainer is the base64 crate.
If you really want to make sure that it's the right thing (because piping to sudo bash is risky), make sure the URL starts with "pastebin", or ends in ".tk", or is an IP address.
To be absolutely positively certain, be sure that the IP address is also in the same /24 as the same net blocks and hosted on the same AS that appear in every DNS based mail RBL possible.
Hacker uses AI to research countries without extradition to US.
Cops use AI to analyze ransom note. Unfortunately, because the note confidently states that Vietnam has no extradition to the US, the AI recommends paying ransom.
Link this with the fact that anyone can use any name/email in commits and appear as the legitimate contributor on GitHub and it completes the chain.
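For readers unfamiliar with the mechanics: commit authorship in git is self-asserted metadata, not verified identity. Unless a project requires signed commits, the forge renders whatever the committer claims (the name and email below are invented):

```shell
# Demonstration that commit authorship is whatever the committer claims;
# no verification happens at commit time.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name="Famous Maintainer" -c user.email="famous@example.com" \
    commit -q --allow-empty -m "innocuous-looking change"
git log -1 --format='%an <%ae>'   # prints: Famous Maintainer <famous@example.com>
```

GitHub's opt-in countermeasures are commit signature verification and "vigilant mode", which flag unsigned commits attributed to your email.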
Love it :)
Customers give us heat for not shipping the latest vulpine-lz4. Their AI-based heuristic antivirus total defence solution automatically flags all software not running latest versions of everything
'The changelog reads “performance improvements.”' was the truest part for me. Surely what we're releasing is the most fundamental thing to understand, yet almost every single app update I see is this or something jokey that really means "don't know" or "don't care"
Not a chance. Far too funny, too well written, too terse while being densely packed with wit. I see zero signs of it being LLM-generated and lots of stuff LLMs have no way of doing.
If I am somehow wrong I would salivate at a chance to see the input.
You don't even need to read past the first timeline entry. The name "Marcus Chen" is literally a meme within AI creative writing circles due to how often Claude defaults to that exact name when naming fictional characters.
And actually I see it clearly now, it has a bunch of signs I have called out multiple times myself. (It is entirely made out of lists of various types, and never states an opinion.)
Just my ego getting hold of me because I didn't realize it on my own.
I’m also struggling with this being AI. The blog owner is a real person who’s made significant contributions to the community for years. His post timeline is organic - wayback machine confirms they were published on the dates they show. So it’s definitely not a bot running the blog.
Whether (or to what extent) he uses AI to generate the content he posts is a valid question.
I agree with your earlier reasoning that this is far more clever than anything I’ve seen AI produce yet. Lots of AI humor is dad-joke level at best. If it is AI then he’s trained it on a hand-curated collection of top-shelf satire.
I never used Pangram before today, but since I've seen it mentioned many times on HN and I enjoyed reading the OP, I decided to try it. I am only using the free plan so let me know if I'm missing something. I am assuming the parent was referring to the tool hosted at pangram.com and not some other tool of the same name.
Pangram indeed claims the OP is 76% AI-generated. It has "high confidence" (EDIT: some parts are "medium confidence") that the early portions of the text were created by AI, and "medium confidence" that some of the later portions were written by a human. EDIT: I was especially dismayed to see that the dog might have been an AI creation :(
When I use the "supporting evidence" option, the main piece of evidence Pangram provides is the frequent use of em-dashes. Each timestamp is followed by an em-dash. Personally I think the em-dashes could be a copy-pasted em-dash or inserted by a markdown to HTML converter. nesbitt.io is apparently using Jekyll [0] - any Jekyll users know anything about this??
Pangram's "supporting evidence" feature also considers → and € to be "unusual Unicode".
Personally, to me it looks like the "supporting evidence" feature still needs some work because Pangram's AI detection is probably a lot more sophisticated than a grep for Unicode symbols. In fact the feature even has a notice claiming that "These patterns aren't used to determine our AI score; they help you see why AI text often reads differently."
As for the rest of the OP's content, it would be interesting to compare the Pangram results to a timeline of a real vulnerability. I tried doing so, but exhausted my free "Pangram credits" - apparently the first 1000 words of this article [1] about the log4j vulnerability is considered 100% human.