Supply Chain Thoughts

So there was a malicious package. No surprise there, malicious packages have been a thing for years and it was only a matter of time before one was discovered targeting rust.

So what can be done about that? A number of things. (tl;dr at bottom, although tbh this is a short post)

Reduce viable crate names.

The vast majority of typosquatting falls under two categories.

  1. “I thought decimal was spelled decimel, or I otherwise mistyped it”
  2. “I expected there to be a package called ‘collections’ so I just tried adding it”

(1) is trivial to solve. Enforce a minimum edit distance between crate names. Obviously all current crates would be ‘grandfathered’ in and not have to deal with this, but it means that someone has to typo twice instead of once, in exactly the right way, which is radically less likely. The team can also monitor for new crates within a slightly larger edit-distance threshold, publishing a log of “this crate was published with an edit distance of N” - a feed like that could be ingested and monitored by the public.

(2) is a bit harder to solve. It basically requires reserving some names. “Thankfully” users have been… graciously reserving every common name on crates.io anyway, in protest of the lack of namespaces. So, uh, thanks?

Anyway, I suggest that crate authors go ahead and reserve names. Everyone always calls my company ‘graphl’ by accident, so I went ahead and created a ‘graphl’ repo to redirect users - I’d suggest this sort of thing as good hygiene.

Reduce Impact of Malicious Builds

It’s been discussed before. A crate should have to state its requirements, and cargo should enforce them. This is what browser extensions do, this is what every app does. You expose capabilities, you require a manifest, you lock that manifest, and you alert users to changes in that manifest.

This is RFC worthy and likely requires lots of discussion on what it would look like, but I would suggest looking at how browsers have been doing this.
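To make that concrete, a capability manifest might look something like the following. To be clear, none of these keys exist in Cargo today - the section name and fields are purely hypothetical:

```toml
# Hypothetical Cargo.toml section - not an existing Cargo feature.
[package.capabilities]
network = false        # build scripts and proc macros get no socket access
filesystem = ["read"]  # read-only, no writes outside the build output dir
process = false        # may not spawn subprocesses
```

Cargo could then lock these declarations alongside versions, and surface a diff to the user when an update to a dependency requests new capabilities.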

Of course, the obvious caveat is “but if an attacker can modify your code, the attacker is in prod!”. Yes. Counter-intuitively, that can be far less impactful than an attacker in my build system.

First of all, we have great tools for restricting production services. If an attacker owns a production service via a compromised package, chances are they can’t even get a reverse shell - most services require no egress traffic to the internet. With containers being so easy to deploy, services are also bound to a set of namespaces and likely can’t access things like keys on the host.

In a CI/CD environment things are not so simple. Builds often reach out to the internet, and setting up mirrors is not always straightforward. Further, build environments will often have a lot of credentials.

I’m not happy about an attacker who can mess with my production binaries, but that’s a threat I already have to consider since I already assume RCE in these services - it frankly adds very little to my threat model. Obviously I have to care about code execution in my build environment too - my tests executing dependency code, for example - but again, I have way more tools for dealing with that sort of thing.

I guess the short version is that it’s really easy to sandbox runtime code because I control the vast majority of behaviors, and it’s really hard for me to sandbox build code because I control almost none of the behaviors.

The Update Framework

Lastly, we should have package signing. There are a number of ways this improves things. Most obviously, if crates.io gets owned, the attacker can’t just modify crates and own everyone else.

TUF has great properties like multiple signing parties, which means I can also have my CI/CD pipeline sign packages, which means even if my laptop is owned I can leverage all of my various branch controls as well - this is great, it gives me a way to compose all of my security controls.

I don’t really feel like digging into the virtues of package signing, it’s been discussed a million times.

OK but what do I do now?

Yeah, good question. I guess there are a few things.

  1. Sandbox your runtime services. One of the best wins you can get is removing access to the public internet for them - highly recommend.
  2. Run your builds in stages. So like, first vendor dependencies, then disable networking to the public internet. Run builds in docker. Run tests in a separate, limited environment.
  3. Limit exposure to secrets, only run CI/CD tasks that include secrets on code that has already been reviewed, that has passed tests, etc.
  4. Maybe consider cargo-crev? Honestly, I have looked at it, and I want to use it but haven’t had time.
  5. Advocate for the mitigations above.
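As a sketch of (2), a staged build might look like the following; the image tag and paths are illustrative only:

```shell
# Stage 1: vendor dependencies while networking is still available.
cargo vendor vendor/

# cargo vendor prints a snippet to put in .cargo/config.toml, roughly:
#   [source.crates-io]
#   replace-with = "vendored-sources"
#   [source.vendored-sources]
#   directory = "vendor"

# Stage 2: build with no network access at all.
docker run --rm --network none -v "$PWD":/src -w /src rust:latest \
    cargo build --release --offline
```

Tests would then run in a third stage, in a separate container with no secrets mounted.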


tl;dr

  • We can kill typosquatting with no breakage, no complex systems, etc, with a basic edit distance check - please do this
  • We should start the process of figuring out how to sandbox builds
  • We should get The Update Framework implemented

Please at least do the typosquatting thing. Happy to chat more about it, or even discuss funding, or whatever - I’ve been asking for the typosquatting thing for years, idk where I’m supposed to suggest these things.



10 May 2022