S3fs, or, 256TB of Storage on the Cheap
There’s something pretty satisfying about seeing 256TB of storage available on a machine and knowing that you’re only paying pennies for what you’re using:
> df -h /cloud/hrc/src/
Filesystem  Size  Used Avail Use% Mounted on
s3fs-1.35   256T     0  256T   0% /cloud/hrc/src
In the words of its authors, “s3fs is a FUSE filesystem that allows you to mount an Amazon S3 bucket as a local filesystem. It stores files natively and transparently in S3 (i.e., you can use other programs to access the same files).”
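Getting from an empty bucket to the df output above takes only a few commands. Here’s a rough sketch, assuming a hypothetical bucket named my-archive-bucket and a credentials file in the format s3fs expects (passwd_file is a standard s3fs option; allow_other is a generic FUSE option):

    # Credentials, in ACCESS_KEY_ID:SECRET_ACCESS_KEY form, readable only by the owner.
    echo 'AKIAEXAMPLEKEY:examplesecret' > ~/.passwd-s3fs
    chmod 600 ~/.passwd-s3fs

    # Mount the bucket; the bucket name and mount point are placeholders.
    mkdir -p /cloud/hrc/src
    s3fs my-archive-bucket /cloud/hrc/src -o passwd_file=$HOME/.passwd-s3fs,allow_other

    # Unmount when finished.
    fusermount -u /cloud/hrc/src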
Now, make no mistake about it – since s3fs is backed by object storage in a remote data center, this is not for high- or even moderate-IOPS workloads. Routine tasks like expanding tarballs containing many small files or compiling code on an s3fs file system can be painful. But for “colder” storage applications – think online archives, or possibly some backup applications – it shines.
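For instance, streaming one large tarball onto the mount plays to s3fs’s strengths (a single sequential write) in a way that copying thousands of small files does not. A sketch of a nightly cron entry, with /srv/archive standing in for whatever you want to back up:

    # Nightly at 03:00: write one big archive straight onto the s3fs mount.
    # Note the escaped % signs, which cron would otherwise treat as newlines.
    0 3 * * * tar czf /cloud/hrc/src/backup-$(date +\%F).tar.gz /srv/archive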
The installation procedure for s3fs is straightforward. I’ve also put a Puppet module on GitHub for installing s3fs and managing its mounts, although you may want to adapt it to distribute your own package of s3fs instead of building it locally on each machine.
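For the curious, building from source looks roughly like this on a Debian-flavored system (the package names are mine and vary by distribution; this assumes you’ve already downloaded the s3fs-1.35 tarball from the project site):

    # Build dependencies (Debian/Ubuntu names; adjust for your distro).
    sudo apt-get install build-essential libfuse-dev libcurl4-openssl-dev \
        libxml2-dev libssl-dev pkg-config

    # Standard autotools build from the release tarball.
    tar xzf s3fs-1.35.tar.gz
    cd s3fs-1.35
    ./configure --prefix=/usr
    make
    sudo make install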
S3fs is licensed under the GPL, as is my Puppet module.