I need to be able to update my EC2 instance from a tag in Mercurial when it resets, so my application is always at the right revision.

It'd be great to push my changes to a Mercurial host and have my instances automatically update across the EC2 network when they are reset!

I really don't want to host Mercurial on the same instance (or even on a dedicated instance).
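
Concretely, what I have in mind is something like this in each instance's boot script; the paths, remote, and tag name here are placeholders, not working config:

    #!/bin/sh
    # Run at instance boot: bring the working copy to the tagged revision.
    cd /srv/myapp                 # hypothetical checkout directory
    hg pull                       # fetch new changesets from the default remote
    hg update -C production       # -C discards local edits; "production" is the tag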

A: 

What you want is for Mercurial to natively support S3 as a backend for data storage, and no such code exists. You could maybe find an S3 bridge to run in FUSE (or an S3->DAV bridge that you could mount as a filesystem) and then tell Hg to push and pull from that virtual filesystem, but otherwise you would need a dedicated EC2 instance to actually serve the data (you could launch it on demand, but the latency of that is pretty bad, as you probably well know).
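
For illustration only, here is roughly what that bridge approach would look like with an S3 FUSE driver; the bucket, mount point, and repository path are all hypothetical, and concurrent pushes over such a mount are risky since S3 offers no locking:

    # Mount the bucket as a local filesystem via a FUSE S3 bridge (s3fs shown).
    s3fs my-hg-bucket /mnt/s3 -o passwd_file=/etc/passwd-s3fs

    # Then treat the mounted path as an ordinary local repository.
    hg clone /mnt/s3/myrepo /srv/myapp        # initial checkout on the instance
    hg -R /srv/myapp pull /mnt/s3/myrepo      # subsequent updates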

There is a FUSE-based filesystem for S3 called s3fs, but it looks like it's mainly a driver for a commercial offering.

(As a separate aside, depending on your EC2 architecture, and assuming you have overlapping uptimes on multiple instances, you could theoretically leverage the distributed nature of Hg and use your existing instances to pass changes around amongst themselves, without a "root" repository. If you only have one instance, of course, this is a non-starter.)
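
That peer-to-peer arrangement could be as simple as each freshly reset instance pulling from whichever sibling is still up; the hostname, user, and paths below are placeholders:

    # On a new/reset instance: clone from a running sibling, no central host needed.
    hg clone ssh://ec2-user@peer-instance//srv/myapp /srv/myapp
    hg -R /srv/myapp update -C production     # sync to the "production" tag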

Nick Bastin
Nick is on the right track. I suspect one could do read-only hosting off of S3 using the static-http:// fallback method that Mercurial provides. You'd push to a non-EC2 repository, then use something like s3sync to put the .hg directory on S3, from which your EC2 nodes could clone efficiently (a sketch follows below).
Ry4an
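
A minimal sketch of Ry4an's suggestion, assuming a publicly readable bucket; the host and bucket names are hypothetical, and the AWS CLI's sync command stands in for s3sync, whose exact flags I won't vouch for:

    # 1. Push from your workstation to a repository hosted outside EC2.
    hg push ssh://you@build-host//srv/hg/myrepo

    # 2. On that host, mirror the repo's .hg directory into the bucket.
    aws s3 sync /srv/hg/myrepo/.hg s3://my-hg-bucket/myrepo/.hg --acl public-read

    # 3. On each EC2 node, clone read-only over plain HTTP via static-http://.
    hg clone static-http://my-hg-bucket.s3.amazonaws.com/myrepo /srv/myapp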
I think your only realistic option is s3fs, but you have to be concerned about overlapping writes: S3 doesn't provide locking. I looked over a few other S3 filesystem libraries, but they're almost all implemented like iSCSI: they emulate a block device in S3 that can be mounted as a disk on a local machine, but you can't mount it on more than one machine at a time.
Nick Bastin