Looking at http://www.nearmap.com/,

Just wondering if you can approximate how much storage is needed to store the images? (NearMap’s monthly city PhotoMaps are captured at 3cm, 5cm, 7.5cm, or 10cm resolution)

And what kind of systems/architecture is suitable to deliver that data/those images? (Say you are not Google and want to implement this from scratch, what would you do?)

i.e. would you store the images in Hadoop and use Apache/PHP/memcache to deliver them, etc.?

A: 

It's pretty hard to estimate how much space is required without knowing the compression ratio. Simply put, how well aerial photographs of houses compress can significantly change how much data needs to be stored.

But, in the interest of doing the math, we can try to figure out what is required.

So, if each pixel measures 3 cm by 3 cm, it covers 9 cm^2. A quick Wikipedia search tells us that London is about 1700 km^2, and at 10 billion cm^2 per km^2 that is 17,000,000,000,000 cm^2. This means we need about 1,888,888,888,888 pixels to cover London at a resolution of 3 cm. Putting this into bytes, at 4 bytes per pixel, is about 7000 GiB. If you get 50% compression, that drops down to about 3500 GiB for London. Multiply this out by every city you want to cover to get an idea of what kind of data storage you will need.
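Here is a minimal Python sketch of that arithmetic. The area, resolution, bytes-per-pixel and compression figures are the same assumptions as above, so treat the output as a rough estimate rather than a real sizing exercise.

    # Back-of-envelope storage estimate; all inputs are the assumptions above.
    def storage_gib(area_km2, pixel_cm=3.0, bytes_per_pixel=4, compression=0.5):
        """Estimated GiB to store imagery covering area_km2 at pixel_cm ground resolution."""
        area_cm2 = area_km2 * 1e10                 # 1 km^2 = 10^10 cm^2
        pixels = area_cm2 / (pixel_cm ** 2)        # each pixel covers pixel_cm^2 of ground
        raw_bytes = pixels * bytes_per_pixel
        return raw_bytes * compression / 2**30     # convert bytes to GiB

    # London at ~1700 km^2, 3 cm pixels, 4 bytes/pixel, 50% compression
    print(f"{storage_gib(1700):.0f} GiB")          # ~3518 GiB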

Delivering the content is simple compared to gathering it. Since this is an embarrassingly parallel problem, a shared-nothing cluster with an appropriate front end to route traffic to the right nodes would probably be the easiest way to implement it, because the nodes don't have to maintain shared state or communicate with each other. The ideal method depends on how much data you are pushing through; if you push enough, it might be worthwhile to implement your own web server that just responds to HTTP GETs.
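As a rough sketch of that idea (the tile scheme, hostnames and URL layout below are made-up placeholders, not anything NearMap or Google actually use), the front end only needs a deterministic mapping from tile coordinates to nodes:

    import hashlib

    # Hypothetical node list; in practice this would be your tile servers.
    NODES = ["tiles-01.example.com", "tiles-02.example.com", "tiles-03.example.com"]

    def node_for_tile(zoom: int, x: int, y: int) -> str:
        """Deterministically map a tile to a node, so the front end keeps no shared state."""
        digest = hashlib.md5(f"{zoom}/{x}/{y}".encode()).hexdigest()
        return NODES[int(digest, 16) % len(NODES)]

    def tile_url(zoom: int, x: int, y: int) -> str:
        """URL the front end would proxy or redirect the client to."""
        return f"http://{node_for_tile(zoom, x, y)}/tiles/{zoom}/{x}/{y}.jpg"

    print(tile_url(18, 131072, 87341))

Plain modulo hashing remaps most tiles whenever you add a node; consistent hashing, or partitioning by city/region as suggested in the comments below, avoids that.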

I'm not sure a distributed FS would be the best way to distribute things, since you'd spend a significant amount of time pulling data from somewhere else in the cluster instead of serving it from local disk.

Mark Robinson
Compression ratio can be up to 90% (lossy) using the ECW format. OK, a shared-nothing cluster makes sense, where each node handles a city or region, for example? Changing the web server to just respond to HTTP GETs sounds good; how much do you reckon it would improve the speed?
portoalet
I was thinking that a shared-nothing cluster would be the most resilient to failures and would produce the best performance. An HTTP server optimized for just GETs could improve performance, but how much I can't say. Even if you reduce processing time by 90%, if the total response time is dominated by simply sending the data, you won't gain much (see the rough numbers sketched below). For Google it might make sense, since they serve huge amounts of traffic, but it's impossible to say without benchmarks.
Mark Robinson
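To put rough numbers on that last point (the tile size, bandwidth and processing time below are assumed purely for illustration, not measured):

    # Assumed numbers, purely for illustration.
    tile_kb = 30           # compressed tile size
    link_mbps = 10         # effective bandwidth to one client
    processing_ms = 1.0    # per-request server processing time

    transfer_ms = tile_kb * 8 / (link_mbps * 1000) * 1000    # kbit / (kbit/s) -> ms
    before = processing_ms + transfer_ms
    after = 0.1 * processing_ms + transfer_ms                # 90% faster processing
    print(f"{before:.1f} ms -> {after:.1f} ms")              # 25.0 ms -> 24.1 ms, ~4% gain

With these assumptions, cutting server processing by 90% improves end-to-end response time by only about 4%, because the transfer time dominates.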