Hello,

In an application I've been working on, I have to send a 256 x 256 matrix over a socket. I'm developing a visualization client for an offshore system simulator that runs on a cluster, and this matrix is a heightmap representing the current state of the ocean surface.

This is a real-time application, so speed is a must. Using a 256 x 256 matrix of floats (256 x 256 x 4 bytes = 256 KB), I have to send 256 KB of data every second, for a bandwidth requirement of 256 KB/s.

That's a lot, at least for my application.

So, my question is: is there some good method for compressing this matrix before sending it over the socket? And if there is such a method, how much of a reduction can I expect?

As my matrix represents a continuous surface, lossy compression methods are not a problem for me. I'm mostly concerned with the compression ratio, the time the compression takes, and, finally, whether there is already a C++ implementation of the method.

Thank you in advance for your time and help, and sorry for any bad English, as it is not my native language.

+1  A: 

Well, a matrix is just a 2D signal, so there are a lot of different compression methods. I would first try the easy solution: go for inflate/deflate without a container (basically zlib's DEFLATE, without the ZIP wrapper). http://en.wikipedia.org/wiki/DEFLATE The compression ratio will depend on the data, so I cannot say what you'll get; you must try it yourself.
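
To illustrate, here is a minimal sketch of zlib's one-shot API (compress2/compressBound), which implements DEFLATE; error handling is kept to a minimum:

    #include <zlib.h>
    #include <vector>

    // Compress the raw 256x256 float heightmap with zlib (DEFLATE).
    // Returns the compressed bytes; the ratio depends entirely on the data.
    std::vector<unsigned char> deflateHeightmap(const float* grid, size_t count)
    {
        const Bytef* src = reinterpret_cast<const Bytef*>(grid);
        uLong srcLen = static_cast<uLong>(count * sizeof(float));

        uLongf dstLen = compressBound(srcLen);      // worst-case output size
        std::vector<unsigned char> out(dstLen);

        // Z_BEST_SPEED keeps latency low for a real-time stream.
        if (compress2(out.data(), &dstLen, src, srcLen, Z_BEST_SPEED) != Z_OK)
            out.clear();                            // compression failed
        else
            out.resize(dstLen);                     // shrink to actual size
        return out;
    }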

Otherwise, the smarter way is to send only the changes. If you have access to the server-side code, you can send just the few bytes of the heightmap that changed each second. That would be the ideal solution, and if you wish, you can even compress those changed bytes with a deflater.
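
A hypothetical sketch of that change-only idea, diffing against the previous frame and emitting (index, height) pairs for cells that moved more than some threshold (all names here are illustrative):

    #include <cmath>
    #include <cstdint>
    #include <vector>

    // One cell that changed; 16 bits is enough to index 256*256 = 65536 cells.
    struct CellUpdate { uint16_t index; float height; };

    std::vector<CellUpdate> diffFrame(const float* prev, const float* curr,
                                      size_t count, float threshold)
    {
        std::vector<CellUpdate> changes;
        for (size_t i = 0; i < count; ++i) {
            if (std::fabs(curr[i] - prev[i]) > threshold)
                changes.push_back({static_cast<uint16_t>(i), curr[i]});
        }
        // Send this list; fall back to a full frame if it grows too large.
        return changes;
    }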

Kel
DEFLATE and LZO are always worth testing for this kind of thing. They are normally fast and might yield a significant reduction with only a single library call on your side!
Matthieu M.
+3  A: 

If you are far enough offshore and/or in calm sea states, breaking waves are not likely to be a big problem. If this is the case, then the surface will be nicely continuous, and will likely look a lot like the superposition of multiple sine/cosine waves in X and Y.

2-D FFTs of the surface might give you some insight. You might be able to represent the surface as a band-limited set of 2-D FFT coefficients, and discard the data for higher spatial frequencies.
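
As a rough illustration, assuming the single-precision FFTW library (the fftwf_* API, linked with -lfftw3f), here is a sketch that computes the forward transform and keeps only a small low-frequency block:

    #include <fftw3.h>
    #include <complex>
    #include <vector>

    // Forward real-to-complex 2-D FFT of the NxN surface, keeping only a
    // keep x keep block of low-frequency coefficients.
    // For N = 256 the r2c output is N x (N/2 + 1) complex values.
    std::vector<std::complex<float>> lowpassCoefficients(float* surface,
                                                         int N, int keep)
    {
        fftwf_complex* spectrum = static_cast<fftwf_complex*>(
            fftwf_malloc(sizeof(fftwf_complex) * N * (N / 2 + 1)));
        fftwf_plan plan = fftwf_plan_dft_r2c_2d(N, N, surface, spectrum,
                                                FFTW_ESTIMATE);
        fftwf_execute(plan);

        // Transmit the keep x keep lowest-frequency corner; the receiver
        // zero-pads the rest and runs the inverse (c2r) transform.
        // NOTE: negative row frequencies wrap to the last rows of the
        // spectrum; a full implementation would keep that corner too.
        std::vector<std::complex<float>> kept;
        kept.reserve(keep * keep);
        for (int row = 0; row < keep; ++row)
            for (int col = 0; col < keep; ++col) {
                const fftwf_complex& c = spectrum[row * (N / 2 + 1) + col];
                kept.emplace_back(c[0], c[1]);
            }

        fftwf_destroy_plan(plan);
        fftwf_free(spectrum);
        return kept;
    }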

John R. Strohm
+1  A: 

First, I'd figure out whether you can change the basic encoding from 32-bit floating point to some sort of fixed point. Assuming your values all fall within a fairly specific range (which seems likely), this may well be enough to cut your bandwidth in half. Depending on the range (and precision) needed, you might want an exponential encoding, so you capture a wide range of magnitudes reasonably well while mostly ignoring small differences.

I'd guess you don't expect huge height changes from one sample to the next, and you probably fairly often see slopes that continue across a number of samples.

If that's the case, a predictive delta compression will probably work well. The basic idea is that for every (non-edge) point, you predict the value of that point based on surrounding points, and then encode only the difference between that prediction and the actual value. Depending on how much precision you can lose, you might well be able to encode that delta into a single byte (or maybe even two deltas per byte).
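
A sketch of one way to do this, using the common left + above - upper-left planar predictor (borrowed from lossless image coders; the predictor choice is my assumption, not prescribed above) and quantizing each residual into a signed byte:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Predict each sample from its decoded neighbours and store the
    // residual as one byte. 'step' is the height one delta unit
    // represents (the precision you accept losing, e.g. 0.002f = 2 mm).
    std::vector<int8_t> encodePredictive(const float* grid, int N, float step)
    {
        std::vector<int8_t> out(N * N);
        std::vector<float> recon(N * N, 0.0f);  // what the decoder will see

        for (int y = 0; y < N; ++y) {
            for (int x = 0; x < N; ++x) {
                float left = (x > 0)          ? recon[y * N + x - 1]       : 0.0f;
                float up   = (y > 0)          ? recon[(y - 1) * N + x]     : 0.0f;
                float upl  = (x > 0 && y > 0) ? recon[(y - 1) * N + x - 1] : 0.0f;
                float pred = left + up - upl;            // planar predictor

                float scaled = (grid[y * N + x] - pred) / step;
                int q = static_cast<int>(scaled + (scaled >= 0 ? 0.5f : -0.5f));
                q = std::clamp(q, -128, 127);            // one byte per delta
                out[y * N + x] = static_cast<int8_t>(q);
                recon[y * N + x] = pred + q * step;      // mirror the decoder
            }
        }
        // A real coder would transmit the first row/column (or a frame base
        // height) exactly, so clamping doesn't distort the frame edges.
        return out;
    }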

Once you've done that, you can consider using Huffman compression or even arithmetic compression, but either one will slow your compression a fair amount.

Jerry Coffin
+1  A: 

First, look at your data: how many bits of information in those floats do you really need to send? Play with chopping off the least significant bits and see whether the result is still accurate enough (a sketch of this follows below).

Next, get a lossless baseline: compress with the LZ family of lossless methods (LZ78, LZW, ...) for a baseline ratio with fast decompression, then try BZip2 and the like for possibly better compression at a slower decompression speed. You now have your lossless limit.

Now try some lossy algorithms. JPEG and the like have tunable loss ratios and still decompress really fast.

Finally, add some filters. Your data would probably compress very well after a simple differential pass along the X or Y axis (or try both and use one bit to record which axis you used). That should make your data even more compressible.
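
A minimal sketch of that bit-chopping step, masking off low mantissa bits in place so the values still decode as plain floats but compress much better:

    #include <cstdint>
    #include <cstring>

    // Zero the low 'dropBits' (0..23) mantissa bits of each float.
    // The data stays valid IEEE 754, just with less precision.
    void chopMantissa(float* grid, size_t count, int dropBits)
    {
        const uint32_t mask = ~((1u << dropBits) - 1u);
        for (size_t i = 0; i < count; ++i) {
            uint32_t bits;
            std::memcpy(&bits, &grid[i], sizeof bits);  // type-pun safely
            bits &= mask;
            std::memcpy(&grid[i], &bits, sizeof bits);
        }
    }

Run this before the lossless pass; the repeated zero bits give the LZ stage far more redundancy to exploit.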

All told, I'd guess you could get at least a 3x reduction losslessly, and 10x with a little loss.

Michael Dorgan
A: 

I would try the following:

Because the height change is expected to be quite small from one second to the next, try sending the differences in height between two consecutive transfers. Multiply these numbers by 10^n so you can send them as integers instead of floats. Next, use zero-compressed encoding (a run-length scheme that collapses runs of zero values; please Google it), which can reduce the number of bytes to send significantly. After that, pack the result with some general compression algorithm.
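
A hypothetical sketch of that pipeline, using millimetre scaling (10^3) and a simple zero-run encoding; a real implementation should track the decoder's reconstructed state to avoid rounding drift across frames:

    #include <cstdint>
    #include <vector>

    // Frame-to-frame differences scaled to integers (1000 = mm resolution),
    // with runs of zero differences collapsed into (0, runLength) pairs.
    std::vector<int16_t> encodeDeltas(const float* prev, const float* curr,
                                      size_t count, float scale = 1000.0f)
    {
        std::vector<int16_t> out;
        for (size_t i = 0; i < count; ) {
            int d = static_cast<int>((curr[i] - prev[i]) * scale);
            if (d == 0) {
                int16_t run = 0;              // run of unchanged cells
                while (i < count && run < 32767 &&
                       static_cast<int>((curr[i] - prev[i]) * scale) == 0) {
                    ++i; ++run;
                }
                out.push_back(0);             // zero marker
                out.push_back(run);           // followed by run length
            } else {
                out.push_back(static_cast<int16_t>(d));  // assumes |d| <= 32767
                ++i;
            }
        }
        return out;   // then pack with DEFLATE/LZO for a further pass
    }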

I would think it can be reduced by about 50% (unless the differences are big enough to need 3-4 bytes each).

instcode
+1  A: 

First off: Numeric representation

I assume the physical range of the ocean height is limited (say, -50.0 to 50.0 metre waves), if I understand your description correctly. A typical IEEE 754-2008 32-bit floating point value (i.e. float in C/C++) uses 8 bits for its exponent (range -126 to 127), 23 bits for the fraction, and one bit for the sign. Note, that's base 2.

If your minimal measured (or computed) variance is 1 mm (0.001 metres), then you can reduce the floating point size to 16 bits. IEEE 754 does define a 16-bit floating point value, for use as an interchange format: 5 bits for the exponent, 10 bits for the fraction, and 1 bit for the sign. I believe that would be suitable, and it immediately halves your requirement to 128 KB/s (1024 Kbps).

After I originally wrote this, I realized that if you want a uniform representation with a very small representation error (<= 2 mm), you can convert to a 16-bit signed integer where one unit represents 2 mm of physical height. That gives a uniform representation with 2 mm resolution and values ranging from -32768 (= -65536 mm, approximately -65.5 metres) to 32767 (= 65534 mm, approximately 65.5 metres).

That's a very simple alternative representation, based on the simple assumptions that (a) the valid range of values is within +/- 65.5 metres, and (b) 2 mm resolution is acceptable for transmission.
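
A sketch of that fixed-point conversion, assuming the 2 mm unit and +/- 65.5 m range described above:

    #include <cmath>
    #include <cstdint>

    // One unit == 2 mm; valid height range roughly +/- 65.5 metres.
    int16_t packHeight(float metres)
    {
        float units = metres / 0.002f;             // 2 mm per unit
        if (units >  32767.0f) units =  32767.0f;  // clamp to int16 range
        if (units < -32768.0f) units = -32768.0f;
        return static_cast<int16_t>(std::lround(units));
    }

    float unpackHeight(int16_t units)
    {
        return units * 0.002f;                     // back to metres
    }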

Second: Modifying (filtering) the data

I don't know whether a Discrete Cosine Transform (DCT), similar to what is used in JPEG compression, might be useful as a lossy compression technique. Basically, it lets you quantize the data so that nearly equal neighbouring values are smoothed, which then lets lossless methods compress the result much better.

Third: Traditional Lossless Compression

Otherwise, consider reasonably fast lossless compression techniques such as the Lempel-Ziv based methods (LZ, LZH, LZW, etc.), and perhaps the fast LZO method.

mctylr
+1  A: 

If I understand you right, the surface is measured every second. So if the changes within one second are not too large, why not treat the data as video and try a video compression algorithm? Video compression also performs motion compensation, which, among the other parts of the algorithm, is important for the high compression rates video achieves.

Christian Ammer
A: 

Well, first off: how many levels of height do you need? What's the maximum difference in height from wave peak to trough? I bet you could represent it with only 256 or 65536 possible height values, which immediately cuts your data to 1/4 or 1/2 respectively, without you having to modify your data structure. You can send the min/max values as floats with each update as well, so the 256 levels are always fully used for the best possible accuracy; as the sea gets rougher, you lose accuracy.
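
A sketch of that adaptive-range quantization, sending per-frame min/max floats plus one byte per sample (names are illustrative):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct QuantizedFrame {
        float minH, maxH;             // transmitted once per frame
        std::vector<uint8_t> levels;  // one byte per cell (256x256 bytes)
    };

    QuantizedFrame quantizeFrame(const float* grid, size_t count)
    {
        QuantizedFrame f;
        const auto [lo, hi] = std::minmax_element(grid, grid + count);
        f.minH = *lo;
        f.maxH = *hi;
        float range = f.maxH - f.minH;
        float scale = (range > 0.0f) ? 255.0f / range : 0.0f;

        f.levels.resize(count);
        for (size_t i = 0; i < count; ++i)
            f.levels[i] = static_cast<uint8_t>((grid[i] - f.minH) * scale + 0.5f);
        return f;   // decoder: height = minH + level * range / 255
    }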

You can also save the 256x256 grid using standard image algorithms. It's not quite a standard-format bitmap, but you could treat it as grayscale: if each vertex V is scaled to a value 0-255, you can build a colour (V,V,V) and use an existing JPG library for free. Or you could probably find a DDS file format with a single channel of 8/16/32-bit data, too.

The first part of this I have done in the past, successfully. For the second part, I'd avoid writing your own algorithm; instead, get your data into a form that existing libraries can use, D3DX for example.

John