I've been doing some thinking about data redundancy, and I wanted to put everything down in writing before going any further (and to double-check whether this idea has already been put into practice).

Alright, so here goes.

The internet is filled with redundant data, including text, images, videos, etc. A lot of effort has gone into gzip and bzip2 on-the-fly compression and decompression over HTTP as a result. Large sites like Google and Facebook have entire teams that devote their time to making their pages load more quickly.

My 'question' relates to the fact that compression is done solely on a per-file basis (gzip file.txt yields file.txt.gz). Without a doubt there are many commonalities between seemingly unrelated data scattered around the Internet. What if you could store these common chunks and combine them, either client-side or server-side, to dynamically generate content?

To be able to do this, one would have to find the most common 'chunks' of data on the Internet. These chunks could be any size (there's probably an optimal choice here) and, in combination, would need to be capable of expressing any data imaginable.
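To make that first step a bit more concrete, here is a minimal sketch (in Python) of how one might survey a small corpus for common chunks. It assumes fixed-size chunks and a couple of local sample files; the chunk size, the file names, and the whole fixed-size approach are arbitrary choices for illustration, not a proposal for the real thing.

# A toy survey of common chunks: split sample files into fixed-size
# pieces and count how often each piece occurs. CHUNK_SIZE and the
# sample file list are arbitrary; a real survey would tune both.
from collections import Counter
from pathlib import Path

CHUNK_SIZE = 64  # bytes; the "optimal choice" above is exactly this knob

def chunks_of(data: bytes, size: int):
    """Yield consecutive fixed-size pieces of data (the last may be short)."""
    for i in range(0, len(data), size):
        yield data[i:i + size]

def survey(paths):
    """Count how often each chunk appears across all the given files."""
    counts = Counter()
    for p in paths:
        counts.update(chunks_of(Path(p).read_bytes(), CHUNK_SIZE))
    return counts

if __name__ == "__main__":
    sample_files = ["gettysburg.txt", "test.txt"]  # hypothetical corpus
    for piece, n in survey(sample_files).most_common(5):
        print(n, piece[:16])

A real survey would need content-aware chunk boundaries and a corpus far bigger than a couple of files, but it shows the shape of the problem: you are essentially ranking substrings of the Internet by frequency.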

For illustrative purposes, let's say we have the following five chunks of common data: a, b, c, d, and e. We have two files that contain only these chunks, and two programs called chunk and combine. chunk takes data, compresses it with bzip2, gzip, or some other compression algorithm, and outputs the chunks that make up the compressed data. combine expands the chunks and decompresses the concatenated result. Here's how they might be used:

$ cat gettysburg.txt
"Four score and seven years ago...cont'd"
$ cat test.txt
"This is a test"
$ chunk gettysburg.txt test.txt
$ cat gettysburg.txt.ck
abdbdeabcbdbe
$ cat test.txt.ck
abdeacccde
$ combine gettysburg.txt.ck test.txt.ck
$ cat gettysburg.txt
"Four score and seven years ago...cont'd"
$ cat test.txt
"This is a test"

When sending a file over HTTP, for instance, the server could chunk the data and send it to the client, which could then combine the chunks and render the result.

Has anyone attempted this before? If not, I would like to know why; if so, please post how you might make this work. A nice first step would be to detail how you might figure out what these chunks are. Once we've figured out how to get the chunks, then we can figure out how the two programs, chunk and combine, might work.

I'll probably put a bounty on this (depending upon reception) because I think this is a very interesting problem with real-world implications.

+1  A: 

You don't really have to analyze it for the most common chunks - in fact, such distributed decision making could really be quite hard. How about something like this:

Let's take the case of HTTP data transfer. Chunk each file into 10MiB blocks (or whatever size you care to; I'm sure there are performance implications either way) and compute their SHA-256 (or some hash you're fairly sure is safe against collisions).

For example, you have file F1 with blocks B1..Bn and checksums C1..Cn. Now the HTTP server can respond to a request for file F1 with simply the list C1..Cn.

To make this actually useful, the client has to keep a registry of known blocks - if a checksum is already there, just fetch the block locally. Done. If it's not known, either grab it from a nearby cache or fetch the block from the remote HTTP server you just got the checksum list from.

If you ever download another file from any server (even a totally different one) which happens to share a block, you already have it downloaded and it's as secure as the hash algorithm you chose.
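Here's a rough sketch of the bookkeeping I mean, with much smaller blocks than 10MiB so it's easy to play with; the in-memory registry and the plain function call stand in for the client's persistent block store and the real HTTP request.

# Sketch of the checksum-list scheme: the server describes a file as a
# list of SHA-256 digests; the client rebuilds it from blocks it already
# has, fetching only the missing ones. BLOCK_SIZE, the in-memory stores
# and the plain function call are stand-ins for a real implementation.
import hashlib

BLOCK_SIZE = 4096  # far smaller than 10MiB, just to make the demo cheap

def split_blocks(data: bytes):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def describe(data: bytes):
    """Server side: the checksum list C1..Cn for a file."""
    return [hashlib.sha256(b).hexdigest() for b in split_blocks(data)]

class Client:
    def __init__(self):
        self.registry = {}  # digest -> block, shared across all downloads

    def download(self, checksums, fetch_block):
        """Rebuild a file from its checksum list, fetching only unknown blocks."""
        blocks = []
        for c in checksums:
            if c not in self.registry:
                block = fetch_block(c)  # stand-in for an HTTP request for one block
                assert hashlib.sha256(block).hexdigest() == c  # integrity check
                self.registry[c] = block
            blocks.append(self.registry[c])
        return b"".join(blocks)

# Demo: two "files" on the server that share most of their blocks.
f1 = b"A" * (3 * BLOCK_SIZE)
f2 = b"A" * (3 * BLOCK_SIZE) + b"B" * BLOCK_SIZE
server = {c: b for data in (f1, f2)
          for c, b in zip(describe(data), split_blocks(data))}

def fetch_from_server(digest):
    return server[digest]

client = Client()
assert client.download(describe(f1), fetch_from_server) == f1
assert client.download(describe(f2), fetch_from_server) == f2

In the demo, the second download only has to fetch the single block that the first file didn't already supply.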

Now this doesn't address the case where there are offsets - e.g. one file is

AAAAAAAA

and the other is

BAAAAAAAA

- something a compression algorithm could probably deal with. But maybe if you compressed the blocks themselves, you'd find that you get most of the savings anyway...
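One known way around the offset problem - not something I'm claiming is new, and it's essentially what the RDC work mentioned in another answer does - is content-defined chunking: cut a block wherever a hash of the last few bytes hits a magic value, so boundaries follow the content instead of fixed offsets. A rough sketch, with the window size and boundary condition picked arbitrarily:

# Content-defined chunking sketch: declare a boundary wherever the hash
# of the last WINDOW bytes is divisible by TARGET, so cut points follow
# the content rather than fixed offsets. WINDOW and TARGET are arbitrary,
# and a real implementation would use a rolling hash instead of
# re-hashing every window.
import hashlib
import random

WINDOW = 32
TARGET = 256  # average chunk length is roughly TARGET bytes

def cdc_chunks(data: bytes):
    chunks, start = [], 0
    for i in range(WINDOW, len(data) + 1):
        h = int.from_bytes(hashlib.sha256(data[i - WINDOW:i]).digest()[:4], "big")
        if h % TARGET == 0 and i - start >= WINDOW:
            chunks.append(data[start:i])
            start = i
    if start < len(data):
        chunks.append(data[start:])
    return chunks

random.seed(0)
original = bytes(random.getrandbits(8) for _ in range(20_000))
shifted = b"B" + original  # the offset case from above

a, b = set(cdc_chunks(original)), set(cdc_chunks(shifted))
print(len(a & b), "of", len(a), "chunks survive the one-byte insertion")

Because the cut points depend only on nearby bytes, a one-byte insertion only disturbs the chunks around it; everything downstream lines up again.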

Thoughts?

Steven Schlansker
+2  A: 

You asked if someone had done something similar before and what the chunk size ought to be, and I thought I'd point you to the two papers that came to my mind:

  • (A team at) Google is trying to speed up web requests by exploiting data that is shared between documents. The server communicates a pre-computed dictionary to the client, which contains data that is common between documents and is referenced in later requests. This only works for a single domain at a time, and -- currently -- only with Google Chrome: A Proposal for Shared Dictionary Compression Over HTTP. (A small sketch of the shared-dictionary idea follows this list.)

  • (A team at) Microsoft determined in their work Optimizing File Replication over Limited-Bandwidth Networks using Remote Differential Compression that for their case of filesystem synchronization a chunk size of about 2KiB works well. They use a level of indirection, so that the list of chunks needed to recreate a file is itself split into chunks -- the paper is fascinating to read, and might give you new ideas about how things might be done.
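To give a feel for the shared-dictionary idea behind SDCH: the proposal itself is based on VCDIFF, but zlib's preset-dictionary support shows the same principle with nothing but the standard library, so here's a small sketch with an invented dictionary and page.

# Shared-dictionary compression sketch: both ends hold the same preset
# dictionary, so boilerplate that a page shares with the dictionary
# compresses to back-references instead of being resent. SDCH itself is
# based on VCDIFF; zlib's zdict parameter is used here only to
# illustrate the principle, and the dictionary/page are made up.
import zlib

shared_dict = (b'<html><head><title></title></head><body>'
               b'<div class="header"></div><div class="content"></div>')

page = (b'<html><head><title>Hi</title></head><body>'
        b'<div class="content">hello world</div></body></html>')

def compress(data: bytes, zdict: bytes = b"") -> bytes:
    co = zlib.compressobj(zdict=zdict) if zdict else zlib.compressobj()
    return co.compress(data) + co.flush()

def decompress(data: bytes, zdict: bytes) -> bytes:
    do = zlib.decompressobj(zdict=zdict)
    return do.decompress(data) + do.flush()

plain = compress(page)
primed = compress(page, shared_dict)
assert decompress(primed, shared_dict) == page
print(len(page), "raw /", len(plain), "deflate /", len(primed), "deflate + shared dictionary")

The more of the page that already sits in the shared dictionary, the more of it compresses down to back-references instead of being resent.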

Not sure if it helps you, but here it is in case it does. :-)

DataWraith
A: 

Not exactly related to your question, but you already see this in practice: Microsoft (and others) provide edge networks that host the jQuery libraries. You can refer to these same URIs and get the benefit of the user having fetched the file from a different site, with their browser caching it.

However, how much of the content you serve has someone else also referred to in the past 20 minutes (an arbitrary number)? You might see some benefit at a large company where lots of employees share an application, but otherwise I think you'd have a hard time DETERMINING the chunk you want, and that cost would outweigh any benefit from sharing it.

No Refunds No Returns
+1  A: 

There is an easier way to deal with textual data. Currently we store text as streams of letters, which represent sounds. However, the unit of language is the word, not the sound. Therefore, if we have a dictionary of all the words and then store "pointers" to those words in files, we can dynamically re-constitute the text by following the pointers and looking up the word list.

This should reduce the size of things by a factor of 3 or 4 right away. In this method, words play the role of the chunks you have in mind. The next step is common word groups such as "this is", "i am", "full moon", "seriously dude", "oh baby", etc.
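As a toy illustration of the word-pointer idea - the word list, the 16-bit pointers, and the hand-waving about punctuation and case are all just for the example:

# Word-pointer sketch: a shared word list plus a stream of 16-bit
# indices in place of the words themselves. Punctuation, casing and
# out-of-vocabulary words are glossed over to keep it short.
import struct

WORD_LIST = ["this", "is", "a", "test", "four", "score", "and", "seven",
             "years", "ago", "full", "moon"]  # the shared dictionary
INDEX = {w: i for i, w in enumerate(WORD_LIST)}

def encode(text: str) -> bytes:
    """Replace each known word with a 2-byte pointer into WORD_LIST."""
    return b"".join(struct.pack(">H", INDEX[w]) for w in text.lower().split())

def decode(blob: bytes) -> str:
    """Follow the pointers back into the word list."""
    ids = struct.unpack(">" + "H" * (len(blob) // 2), blob)
    return " ".join(WORD_LIST[i] for i in ids)

text = "four score and seven years ago"
packed = encode(text)
assert decode(packed) == text
print(len(text), "bytes of text ->", len(packed), "bytes of pointers")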

A word list would also help with spell checking and ought to be provided by the operating system. Is there any reason why spell checkers are not part of the operating system?

Square Rig Master