More, and More Extensive, Supply Chain Attacks
April 2nd, 2026
airisk, tech
Earlier attacks were generally compromises of single projects, [1] but sometime around Shai-Hulud in 2025-11 there started to be a lot more ecosystem propagation: things like the Trivy compromise leading to the LiteLLM compromise and (likely, since it was three days later and by the same attackers) Telnyx. I only counted the first compromise in each chain in the chart, but if we counted each one the increase would be much more dramatic. Similarly, I only counted glassworm for 2025, when it came out, even though it's still going.
In January I told a friend something like: "I'm surprised we're not seeing more AI-enabled cyberattacks. It seems like AIs have gotten to the point that they'd really be helping bad actors here, but it all still feels pretty normal and I don't understand why." While it's always hard to call the departure of an exponential from a noisy baseline, if this is AI helping with attacks we should expect this rate of increase to continue.
Other data points that have me expecting security to get worse before it gets better:
- Linux is seeing a large increase in real security reports:
    We were between 2 and 3 per week maybe two years ago, then reached probably 10 a week over the last year with the only difference being AI slop, and now since the beginning of the year we're around 5-10 per day depending on the days (Fridays and Tuesdays seem the worst). Now most of these reports are correct, to the point that we had to bring in more maintainers to help us.
We're seeing the defender side, but attackers can use the same tooling.
- Claude Opus 4.6 seems to be actually good at finding and exploiting holes:
    When we pointed Opus 4.6 at some of the most well-tested codebases (projects that have had fuzzers running against them for years, accumulating millions of hours of CPU time), Opus 4.6 found high-severity vulnerabilities, some that had gone undetected for decades.
- AI agents eagerly pull in unvetted dependencies if they seem like they'd solve the problem at hand, and while humans do this too, the agents massively speed up this process.
But I do think it will get better. While I'm not an expert here, I see many factors that favor defenders:
- I think it's pretty likely that security bugs in major software are, for the first time, being identified faster than they're being written.
- Checking package updates for vulnerabilities was never something most people did, but automated systems could plausibly do it well.
- Most programmers are pretty terrible at reviewing code in enough detail to notice something underhanded, but LLMs excel at this kind of attention to detail.
- Developer education is hard; model education is much less so. I remember how long it took for SQL injection to go from a known attack to something most programmers knew not to do; it's way easier to keep LLMs from doing this.
- Dependency cooldowns are very simple, but would help a lot (see the sketch below).
- Migration to more robust systems is more automatable: automated conversion from C to Rust, switching to Trusted Types, etc.
I wish defenders in biology had the same structural advantages!
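To illustrate the dependency-cooldown point above: the idea is just "don't pick up a new version until it has been public for a while," which gives the ecosystem time to catch and yank a compromised release. Here's a minimal sketch in Python, not any particular tool's implementation; the two-week threshold and the script are purely illustrative, though the public npm registry really does expose per-version publish timestamps, and several dependency-update tools offer cooldowns as a built-in setting.

```python
#!/usr/bin/env python3
"""Minimal sketch of a dependency cooldown check for npm packages.

Illustrative only: refuse a version that was published to the registry
less than COOLDOWN_DAYS ago, on the theory that compromised releases
are often caught within days of publication.
"""
import json
import sys
import urllib.request
from datetime import datetime, timedelta, timezone

COOLDOWN_DAYS = 14  # arbitrary example threshold

def published_at(package: str, version: str) -> datetime:
    # The public npm registry exposes publish timestamps under "time".
    url = f"https://registry.npmjs.org/{package}"
    with urllib.request.urlopen(url) as resp:
        meta = json.load(resp)
    # Timestamps look like "2025-11-24T18:02:31.000Z".
    return datetime.fromisoformat(meta["time"][version].replace("Z", "+00:00"))

def main() -> None:
    if len(sys.argv) != 3:
        sys.exit("usage: cooldown_check.py <package> <version>")
    package, version = sys.argv[1], sys.argv[2]
    age = datetime.now(timezone.utc) - published_at(package, version)
    if age < timedelta(days=COOLDOWN_DAYS):
        sys.exit(f"{package}@{version} is only {age.days} days old; waiting out the cooldown")
    print(f"{package}@{version} is {age.days} days old; ok to install")

if __name__ == "__main__":
    main()
```

The point is less the script than how cheap the mitigation is: simply lagging behind the newest release avoids installing a compromised version during the window before it's discovered.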
[1] Here's my attempt at earlier years, all with a bar of "compromise of a widely used open-source trust path that forced action well beyond the directly compromised maintainer or project":
- 2024: polyfill.io, xz
- 2022: pytorch
- 2021: ua-parser-js
- 2018: event-stream
- 2016: ke-ranger, linux mint
- 2011: vsftpd
