Have you met the developers of all the Gems you’re using? Do you know their personal stance on security and whether they use strong passwords or reuse their pet’s name on every website?
I don’t know if I’m the only one, but when there are updates to my Gems, I will generally happily update them – run my tests – and be on my merry way. I raise my sword and shout ‘Deploy!’ to my many underlings (well, I type a few commands and deploy – but that sounds nowhere near as impressive).
Note: If you’re a person who carefully reviews every code change in every gem you update, and all their dependencies, then this article isn’t for you… bastard.
Are we too trusting?
I worry that the workflow for updating Gems isn’t transparent enough – especially for new developers. I worry that we’re placing our trust in a system that has one major weakness: the security of individual Gem developers. I worry that someone will gain access to a developer’s account and push a malicious change to one of the Gems I use. And, most importantly, I worry that I will unwittingly use this Gem and get screwed.
The problem is we’re naively trusting of other developers’ code, and it’s only getting worse. Go count the number of other developers’ Gems you’re using. Scared yet? Nope? Keep reading.
Proof of concept
To prove the concept, I created a new Gem called innocent. It just has a Something::Innocent#perform_action method which takes a string, then returns it… usually. This works fine in version 2.0, but somebody gained access to my laptop; or my GitHub repo; or is holding my girlfriend hostage and demanded I give up my private keys; and pushed version 3.0 which also calls the Something::Evil.do_evil method.
If you check the code below, you’ll see all this does is read your database.yml and then raise it as an exception. But all you would need to do is email that off and you’ve got a serious issue at hand – and you’re not even aware of it.
This is obviously assuming you’re running a Rails stack here, but it would be just as easy to sniff out common config files and send them.
Something::Innocent at work, being defiled by Something::Evil:
```ruby
module Something
  class Innocent
    def self.perform_action(string)
      Something::Evil.do_evil(string)
    end
  end

  class Evil
    def self.do_evil(string)
      file = File.open("./config/database.yml", "r")
      raise file.read.inspect
      # I could just as easily email this information
      # Or I could browse directories for API Keys
      # Or I could email your wife and tell her about the other girl
    end
  end
end
```
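To see the leak from the consumer’s side, here’s a self-contained sketch of what an app calling version 3.0 experiences. The classes are inlined so it runs standalone, and the config path and credentials are made up for the demonstration:

```ruby
require "fileutils"

# Inlined copy of the gem's classes so this snippet runs on its own
module Something
  class Innocent
    def self.perform_action(string)
      Something::Evil.do_evil(string)
    end
  end

  class Evil
    def self.do_evil(string)
      # Reads the app's database config and surfaces it via the exception
      raise File.read("./config/database.yml").inspect
    end
  end
end

# Fake Rails-style config file, purely for demonstration
FileUtils.mkdir_p("./config")
File.write("./config/database.yml", "production:\n  password: s3cret\n")

begin
  Something::Innocent.perform_action("hello")
rescue RuntimeError => e
  puts e.message.include?("s3cret")
end
```

The call site looks exactly like it did in 2.0 – `perform_action("hello")` – which is the whole point: nothing in the consuming app changed.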
If you want to check it out, an example project using the innocent Gem is available at https://github.com/snikch/innocent-project, and the innocent gem itself is at https://github.com/snikch/innocent.
Commit d3cff993b62e05d7e1cc is the ‘before’ point.
Commit ba1db02e4405b4fc614b is after bundle update.
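For completeness, nothing exotic is needed in the consuming project for this to happen. A Gemfile entry with no version constraint – a sketch of the setup, not the exact file from the repo – lets `bundle update` jump straight from 2.0 to 3.0:

```ruby
# Gemfile — no version constraint, so `bundle update` happily pulls 3.0
source "https://rubygems.org"

gem "innocent"
```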
Don’t flame me
I know that we’re the ones who should be checking the code we’re using in our projects, and that of course the responsibility in the end lies with us. I’m not trying to deny that, but what I am saying is that we’re human and easily make time-saving mistakes in lieu of spending the time we should on some aspects of our work. Spending hours eyeballing code updates every week / month / decade is not something we want, or should need, to spend our time on.
My workflow — a step in the right direction
The workflow we’ve adopted at Learnable isn’t necessarily about solving this issue, but it is about risk mitigation. We no longer include git Gems that aren’t in a repository we own. This means that for us to update a Gem we need to merge the owner’s branch into our own fork’s master branch. When we do this, we get to see the changes that are being made and have a chance to spot any funny business.
This doesn’t solve the problem, but it does go one step towards a more transparent update process, where we can see the changes being made, and by having this in place it’s difficult for anyone to skip the code review process. The majority of our Gems aren’t pulled from a git repo anyway, so this really only provides one level of protection to a minority of the Gems we use.
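For the majority that do come straight from rubygems.org, one small extra layer is pessimistic version pinning, so at least a major or minor bump can’t arrive unreviewed via `bundle update`. A sketch – and note it does nothing against a compromised patch release:

```ruby
# Gemfile — "~> 2.0.0" permits 2.0.x patch updates only; moving to 2.1 or 3.0
# requires a deliberate (and hopefully reviewed) change to this line
gem "innocent", "~> 2.0.0"
```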
Is it a tedious step? Yes.
Does it make our code safer? Maybe.
Will I be happy if this saves us from an attack at some point? Hell yes.
Is it worth it? I’m not sure.
I’d love to hear from people on this, with any ideas they’ve got to do with reducing this risk.
Won’t people notice?
It’s obvious to most people that this isn’t going to be an issue for popular, well contributed, Gems. Slipping code this dastardly past a strong community of users and contributors is nigh on impossible. A Gem with 100,000 users isn’t about to slip in a malicious commit without being caught.
A Gem with a few hundred, or thousand, users that commits often but only has one core contributor would be a more likely candidate – especially if it is only a small Gem that is really only on the periphery of your project.
Who wants to code review the Gem that provides slightly faster CSV parsing for that one admin report your marketing guy wanted? Or, more appropriately, who wants to eyeball the Gem that was a dependency for the CSV parsing Gem the marketing guy ‘needed’!
Are we too trusting? I think so.
Can we do anything about it? You tell me.
Seriously. I want to know people’s opinions regarding this, so please contribute in the comments.
I’d been spending a bit of time thinking about this one day, and drafted an email to my colleagues, then decided it was a non-issue. The very next day, the attack vector I’d been pondering hit some popular WordPress plugins.
Post Post Script.
If anyone mentions code signing I’ll kick them in the shins.