Fake S3 – Save time, money, and develop offline

Amazon S3 is an extremely powerful service at the core of Amazon Web Services. Outside of a production environment, however, S3 can be challenging to work with: it requires passing keys around, provisioning user accounts, and maintaining a reliable network connection, and it costs money.

At Spool, we built Fake S3 to make working with S3 in development and testing environments much easier. Our goal was a self-contained executable that mimics the majority of the S3 REST API with few external dependencies.

For development, each engineer runs her own instance of Fake S3, where she can put gigabytes of images and video to develop and test against; because everything is local, her setup works offline. We also have a continuous integration setup that runs tests 24/7, often against large video files. Fake S3 saves us $1000 a month in bandwidth alone for our tests. In both development and testing, the time saved by not waiting for assets, especially larger media files, to travel back and forth to AWS makes Fake S3 very useful.

We’re releasing Fake S3 as a gem on GitHub. It’s an early release, and we’ll keep improving it. If you have ideas or issues, please contribute to the project!


To install the gem:

gem install fakes3

To start a Fake S3 server, specify a root directory for storing files and a port to listen on:

fakes3 -r ~/fakes3_root -p 10001

Example Client Code

require 'rubygems'
require 'aws/s3'

include AWS::S3
AWS::S3::Base.establish_connection!(:access_key_id => "123",
                                    :secret_access_key => "abc",
                                    :server => "localhost",
                                    :port => "10001")

# Create the bucket before storing objects in it
Bucket.create('mystuff')

# Store 26 small objects, keyed 'a' through 'z'
('a'..'z').each do |filename|
  S3Object.store(filename, 'Hello World', 'mystuff')
end

# List every object in the bucket
bucket = Bucket.find('mystuff')
bucket.objects.each do |s3_obj|
  puts "#{s3_obj.key}:#{s3_obj.value}"
end

Bucket.delete("mystuff", :force => true) # Delete your bucket and all its keys

In general, clients will work if you can specify the host and port they connect to, and force path-style requests (instead of subdomain-style requests). Subdomain-style S3 requests can work too, but they require adding your bucket names to /etc/hosts (e.g. s3.localhost or mybucket.localhost), or using dnsmasq if you have a large number of buckets.
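As a sketch of the /etc/hosts approach, entries like the following map the endpoint and per-bucket hostnames back to the local machine (the bucket names mybucket and media here are just placeholders):

```
127.0.0.1   s3.localhost
127.0.0.1   mybucket.localhost
127.0.0.1   media.localhost
```

One line per bucket is needed, which is why dnsmasq becomes attractive once the bucket count grows.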

Simulating Network Conditions

Another useful feature is the support for simulating network conditions. You can run fakes3 with bandwidth limiting enabled like so:

fakes3 -r ~/fakes3_root -p 10001 --limit=50K

This limits your GET request bandwidth to 50 KB/s per request, instead of serving files instantly from your local machine. This is very convenient for simulating, for example, how mobile devices will behave in the real world.
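To get a feel for what a given limit means in practice, here is a quick back-of-the-envelope sketch in plain Ruby (no Fake S3 required) estimating transfer times at a fixed rate; the sizes below are just illustrative:

```ruby
# Estimate how long a transfer takes at a fixed bandwidth limit.
# Sizes and rates are in bytes; the 50K limit above is 50 * 1024 bytes/sec.
def transfer_seconds(size_bytes, rate_bytes_per_sec)
  size_bytes.to_f / rate_bytes_per_sec
end

rate  = 50 * 1024          # fakes3 --limit=50K
photo = 2 * 1024 * 1024    # a 2 MB photo
video = 100 * 1024 * 1024  # a 100 MB video clip

puts "2 MB photo:  #{transfer_seconds(photo, rate).round(1)} s"           # ~41 s
puts "100 MB clip: #{(transfer_seconds(video, rate) / 60).round(1)} min"  # ~34 min
```

Numbers like these make it obvious why a mobile client needs progress indicators and resumable downloads, which is exactly the kind of behavior the --limit flag lets you exercise locally.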

Related Tools

Fake S3 is great for development and testing because of its simplicity, but it is not intended to replace S3 in production. If you want to replace S3, there are other tools you can check out, such as Ceph, ParkPlace (which supports BitTorrent), Boardwalk (an S3 interface in front of MongoDB), and Riak CS.

This article has been republished from the Spool Blog with permission.
