Much has been written about deploying data-crunching applications on EC2/S3, but I would like to know: what is the typical workflow for developing such applications?

Let's say I have 1 TB of time-series data to begin with, and I have managed to store it on S3. How would I write applications and do interactive data analysis to build machine learning models, and then write large programs to test them? In other words, how does one go about setting up a dev environment in such a situation? Do I boot up an EC2 instance, develop software on it, save my changes, and shut down every time I want to do some work?

Typically, I fire up R or Pylab, read data from my local drives, and do my analysis. Then I create applications based on that analysis and let them loose on the data.
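
In Python terms, my current local workflow is roughly the following (a minimal sketch; the file path and column names are made up):

    # Minimal sketch of the local workflow described above; the file
    # path and column names are hypothetical.
    import pandas as pd

    # Read a slice of the time series from a local drive.
    ts = pd.read_csv("/data/series_2009.csv",
                     parse_dates=["timestamp"], index_col="timestamp")

    # Interactive exploration, e.g. daily means as a quick sanity check.
    print(ts["value"].resample("D").mean().head())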

On EC2, I am not sure if I can do that. Do people keep data locally for analysis and only use EC2 when they have large simulation jobs to run?

I am very curious to know what other people are doing, especially startups whose entire infrastructure is based on EC2/S3.

+2  A: 

We create a baseline custom AMI that already includes all the programs we know we'll always need.

The software we develop (and update constantly) is stored on external storage (we use a Maven repository, but you could use anything that works well with your environment).

We then fire up our custom AMI with everything we need on it, deploy the latest version of our software from Maven, and we're good to go.

So the workflow is:

Setup

Create a custom AMI with stuff we'll always need

Ongoing

Develop software locally
Deploy binaries to external storage (a Maven repository in our case)
Fire up multiple instances of the custom AMI as needed (see the sketch below)
Copy binaries from external storage to each instance
Run on each instance
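
Something along these lines (a sketch only, not our actual deploy code: the AMI ID, instance type, and artifact URL are placeholders, and this uses the boto3 Python client, which is just one way to drive the EC2 API):

    # Sketch of the "Ongoing" steps with boto3; the AMI ID, instance
    # type, and artifact URL are hypothetical placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # User-data script each instance runs at boot: copy the latest
    # binary from external storage (a Maven repository here), then run it.
    user_data = """#!/bin/bash
    curl -o /opt/app/app.jar https://maven.example.com/releases/app-1.0.jar
    java -jar /opt/app/app.jar
    """

    # Fire up multiple instances of the custom AMI as needed.
    resp = ec2.run_instances(ImageId="ami-12345678",  # baseline custom AMI
                             InstanceType="m3.large",
                             MinCount=4, MaxCount=4,
                             UserData=user_data)
    print([i["InstanceId"] for i in resp["Instances"]])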

Eric J.
Thanks for sharing this. So basically, you keep a local (off-Amazon) copy of the data and also develop locally (off-Amazon), but run experiments on Amazon?
signalseeker
Our business isn't experiments, but essentially that's what we do. Part of our application includes very large tax tables and rules. We maintain them in our own network and push updates out to Amazon whenever rates or rules change (usually at midnight at the end of each month).
Eric J.
A: 

We have some experience doing the kind of thing you are trying to do. What Eric J. said basically sums it up, but allow me to reiterate:

  1. Set up a code repository on a server which is always up. We use Subversion. This server need not be an EC2 machine, though it very well could be one. Your choice.

  2. Build a custom AMI by: (a) checking out your code base on an EC2 machine; (b) installing all code dependencies on that machine; (c) saving the image to S3.

  3. Next time, boot with the AMI saved in step 2. Do your experiments, change code as you wish, and check your changes back into Subversion so that when you come back you have them saved there.

  4. An alternative is to use an EBS volume. Every time you start an EC2 instance, attach your EBS volume to it. This volume can hold your code and anything else that you need to persist on the cloud! (See the sketch after this list.)
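
For instance, here is a sketch of steps 3 and 4 using the boto3 Python client (the AMI ID, volume ID, instance type, and availability zone are all placeholders; any EC2 API client would do):

    # Sketch of steps 3-4: boot from the saved AMI, then attach a
    # persistent EBS volume. All IDs and the zone are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Boot with the AMI saved in step 2; the instance must be in the
    # same availability zone as the EBS volume.
    run = ec2.run_instances(ImageId="ami-12345678",
                            InstanceType="m3.large",
                            MinCount=1, MaxCount=1,
                            Placement={"AvailabilityZone": "us-east-1a"})
    instance_id = run["Instances"][0]["InstanceId"]

    # Wait for the instance, then attach the volume holding code/data.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    ec2.attach_volume(VolumeId="vol-0123456789abcdef0",
                      InstanceId=instance_id,
                      Device="/dev/sdf")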

shrijeet
Thanks. So once again, the important point is that all development/analysis happens off Amazon, and once you are ready to run experiments, you set them up on a custom EC2 cluster.
signalseeker