I have a large array with a range of integers that are mostly continuous, eg 1-100, 110-160, etc. All integers are positive. What would be the best algorithm to compress this?

I tried the deflate algorithm but that gives me only 50% compression. Note that the algorithm cannot be lossy.

All numbers are unique and progressively increasing.

Also if you can point me to the java implementation of such algorithm that would be great.

A: 

If you have series of repeated values, RLE is the easiest to implement and could give you a good result. Nonetheless, other more advanced algorithms that take entropy into account, such as LZW (which is now patent-free), can usually achieve much better compression.

You can take a look at these and other lossless algorithms here.

Fernando Miguélez
+1  A: 

Compress the string "1-100, 110-160", or store the string in some binary representation and parse it to restore the array.

Ray Tayek
+11  A: 

Well, I'm voting for the smarter way. All you have to store is [int: start number][int/byte/whatever: number of iterations]; in this case, you'll turn your example array into four int values. After that you can compress as you want :)
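A minimal sketch of that idea in Java (the class and method names are my own), storing each run as a (start, length) pair:

```java
import java.util.ArrayList;
import java.util.List;

public class RangeEncoder {
    // Turns a sorted array of unique ints into (start, run length) pairs.
    public static List<int[]> encode(int[] values) {
        List<int[]> runs = new ArrayList<>();
        int i = 0;
        while (i < values.length) {
            int start = values[i];
            int len = 1;
            // Extend the run while the values stay consecutive.
            while (i + len < values.length && values[i + len] == start + len) {
                len++;
            }
            runs.add(new int[] { start, len });
            i += len;
        }
        return runs;
    }
}
```

The example array (1-100, 110-160) collapses to the two pairs (1, 100) and (110, 51), which can then be fed to a general-purpose compressor.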

Tamir
.. and *then* use deflate for another 50%
peterchen
... and store not the start number, but the difference from the last integer instead.
Denes Tarjan
+12  A: 

First, preprocess your list of values by taking the difference between each value and the previous one (for the first value, assume the previous one was zero). This should in your case give mostly a sequence of ones, which can be compressed much more easily by most compression algorithms.

This is what the PNG format does to improve its compression (it applies one of several difference filters, followed by the same compression algorithm used by gzip).
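A sketch of that preprocessing step in Java (names are my own); the decode direction is just a running sum:

```java
public class DeltaCodec {
    // Replace each value with its difference from the previous one
    // (the first value is differenced against zero).
    public static int[] encode(int[] values) {
        int[] deltas = new int[values.length];
        int prev = 0;
        for (int i = 0; i < values.length; i++) {
            deltas[i] = values[i] - prev;
            prev = values[i];
        }
        return deltas;
    }

    // Undo the transform by accumulating the deltas.
    public static int[] decode(int[] deltas) {
        int[] values = new int[deltas.length];
        int prev = 0;
        for (int i = 0; i < deltas.length; i++) {
            prev += deltas[i];
            values[i] = prev;
        }
        return values;
    }
}
```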

CesarB
+2  A: 

I'd combine the answers given by CesarB and Fernando Miguélez.

First, store the differences between each value and the previous one. As CesarB pointed out, this will give you a sequence of mostly ones.

Then, use a Run Length Encoding compression algorithm on this sequence. It will compress very nicely due to the large number of repeated values.
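A sketch of the combined approach in Java (names are my own), emitting (delta, count) pairs:

```java
import java.util.ArrayList;
import java.util.List;

public class DeltaRle {
    // Delta-encode the sorted input, then run-length encode the deltas
    // as (delta value, repeat count) pairs.
    public static List<int[]> compress(int[] values) {
        List<int[]> pairs = new ArrayList<>();
        int prev = 0;
        int runValue = 0, runCount = 0;
        for (int v : values) {
            int delta = v - prev;
            prev = v;
            if (runCount > 0 && delta == runValue) {
                runCount++;           // extend the current run
            } else {
                if (runCount > 0) pairs.add(new int[] { runValue, runCount });
                runValue = delta;     // start a new run
                runCount = 1;
            }
        }
        if (runCount > 0) pairs.add(new int[] { runValue, runCount });
        return pairs;
    }
}
```

For 1-100, 110-160 this produces just three pairs: (1, 100), (10, 1), (1, 51).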

Michael Dorfman
...and then apply yet another compression layer on top, for even greater gains. (If you have a binary representation "100:1;1:10;50:1" after the previous steps, another compression method could do something about the leftover redundancy.)
CesarB
A: 

As they are all unique and sorted, what you have is a set of integers. The code below is my attempt at the compression Tamir suggested, assuming already-allocated, null-terminated arrays:


void
compress(int *out, int *in)
{
    while (*in) {
        *out++ = *in;               /* store the start of the run */
        while (in[0] + 1 == in[1])  /* skip over consecutive values */
            ++in;
        *out++ = *in++;             /* store the end of the run */
    }
    *out++ = 0;                     /* terminate the output */
}

And decompression:

void
decompress(int *out, int *in)
{
    while ((*out = *in++)) {        /* read the start of a run; 0 terminates */
        while (*out < *in) {        /* expand up to the run's end value */
            out[1] = out[0] + 1;
            ++out;
        }
        ++out;
        ++in;                       /* move past the run's end value */
    }
}
A: 

I'd suggest taking a look at Huffman Coding, a special case of Arithmetic Coding. In both cases you analyse your starting sequence to determine the relative frequencies of different values. More-frequently-occurring values are encoded with fewer bits than the less-frequently-occurring ones.

Martin
As I mentioned, all values in the array are unique, so there are no repetitions.
pdeva
Sorry, I guess I should have been explicit: you would of course have to pre-process your set in the same way as for the RLE suggestions.
Martin
+6  A: 

While you could design a custom algorithm specific to your stream of data, it's probably easier to use an off-the-shelf encoding algorithm. I ran a few tests of compression algorithms available in Java and found the following compression rates for a sequence of one million consecutive integers:

None        1.0
Deflate     0.50
Filtered    0.34
BZip2       0.11
Lzma        0.06
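The Deflate and Filtered rows can be measured with java.util.zip's Deflater and its DEFAULT_STRATEGY/FILTERED strategies (BZip2 and LZMA need third-party libraries). A sketch of such a measurement, with my own class and method names:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.zip.Deflater;

public class DeflateTest {
    // Serializes the ints as big-endian 4-byte values, deflates them with the
    // given strategy, and returns the compressed size in bytes.
    public static int compressedSize(int[] values, int strategy) {
        ByteBuffer buf = ByteBuffer.allocate(values.length * 4);
        for (int v : values) buf.putInt(v);

        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setStrategy(strategy);
        deflater.setInput(buf.array());
        deflater.finish();

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        while (!deflater.finished()) {
            out.write(chunk, 0, deflater.deflate(chunk));
        }
        deflater.end();
        return out.size();
    }
}
```

Comparing compressedSize(data, Deflater.DEFAULT_STRATEGY) against compressedSize(data, Deflater.FILTERED) on a million consecutive integers should roughly reproduce the first three rows.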
brianegge
+2  A: 

What size are the numbers? In addition to the other answers, you could consider base-128 variable-length encoding, which lets you store smaller numbers in single bytes while still allowing larger numbers. The MSB means "there is another byte" - this is described here.

Combine this with the other techniques so you are storing "skip size", "take size", "skip size", "take size" - but noting that neither "skip" nor "take" will ever be zero, so we'll subtract one from each (which lets you save an extra byte for a handful of values)

So:

1-100, 110-160

is "skip 1" (assume start at zero as it makes things easier), "take 100", "skip 9", "take 51"; subtract 1 from each, giving (as decimals)

0,99,8,50

which encodes as (hex):

00 63 08 32

If we wanted to skip/take a larger number - 300, for example; we subtract 1 giving 299 - but that goes over 7 bits; starting with the little end, we encode blocks of 7 bits and an MSB to indicate continuation:

299 = 100101011 = (in blocks of 7): 0000010 0101011

so starting with the little end:

1 0101011 (leading one since continuation)
0 0000010 (leading zero as no more)

giving:

AB 02

So we can encode large numbers easily, but small numbers (which sound typical for skip/take) take less space.

You could try running this through "deflate", but it might not help much more...
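A Java rendering of that 7-bit scheme might look like this (method names are my own):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class Varint {
    // Little-endian base-128: low 7 bits per byte; a set MSB means more bytes follow.
    public static void writeUInt32(OutputStream out, int value) throws IOException {
        while ((value & ~0x7F) != 0) {
            out.write((value & 0x7F) | 0x80); // emit 7 bits, continuation bit set
            value >>>= 7;
        }
        out.write(value); // final byte, MSB clear
    }

    public static int readUInt32(InputStream in) throws IOException {
        int value = 0, shift = 0, b;
        while ((b = in.read()) >= 0) {
            value |= (b & 0x7F) << shift;
            if ((b & 0x80) == 0) return value; // MSB clear: last byte
            shift += 7;
        }
        throw new IOException("truncated varint");
    }
}
```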


If you don't want to deal with all that messy encoding cruft yourself... if you can create the integer array of the values (0,99,8,50) - you could use protocol buffers with a packed repeated uint32/uint64 - and it'll do all the work for you ;-p

I don't "do" Java, but here's a full C# implementation (borrowing some of the encoding bits from my protobuf-net project):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
static class Program
{
    static void Main()
    {
        var data = new List<int>();
        data.AddRange(Enumerable.Range(1, 100));
        data.AddRange(Enumerable.Range(110, 51));
        int[] arr = data.ToArray(), arr2;

        using (MemoryStream ms = new MemoryStream())
        {
            Encode(ms, arr);
            ShowRaw(ms.GetBuffer(), (int)ms.Length);
            ms.Position = 0; // rewind to read it...
            arr2 = Decode(ms);
        }
    }
    static void ShowRaw(byte[] buffer, int len)
    {
        for (int i = 0; i < len; i++)
        {
            Console.Write(buffer[i].ToString("X2"));
        }
        Console.WriteLine();
    }
    static int[] Decode(Stream stream)
    {
        var list = new List<int>();
        uint skip, take;
        int last = 0;
        while (TryDecodeUInt32(stream, out skip)
            && TryDecodeUInt32(stream, out take))
        {
            last += (int)skip+1;
            for(uint i = 0 ; i <= take ; i++) {
                list.Add(last++);
            }
        }
        return list.ToArray();
    }
    static int Encode(Stream stream, int[] data)
    {
        if (data.Length == 0) return 0;
        byte[] buffer = new byte[10];
        int last = -1, len = 0;
        for (int i = 0; i < data.Length; )
        {
            int gap = data[i] - 2 - last, size = 0;
            while (++i < data.Length && data[i] == data[i - 1] + 1) size++;
            last = data[i - 1];
            len += EncodeUInt32((uint)gap, buffer, stream)
                + EncodeUInt32((uint)size, buffer, stream);
        }
        return len;
    }
    public static int EncodeUInt32(uint value, byte[] buffer, Stream stream)
    {
        int count = 0, index = 0;
        do
        {
            buffer[index++] = (byte)((value & 0x7F) | 0x80);
            value >>= 7;
            count++;
        } while (value != 0);
        buffer[index - 1] &= 0x7F;
        stream.Write(buffer, 0, count);
        return count;
    }
    public static bool TryDecodeUInt32(Stream source, out uint value)
    {
        int b = source.ReadByte();
        if (b < 0)
        {
            value = 0;
            return false;
        }

        if ((b & 0x80) == 0)
        {
            // single-byte
            value = (uint)b;
            return true;
        }

        int shift = 7;

        value = (uint)(b & 0x7F);
        bool keepGoing;
        int i = 0;
        do
        {
            b = source.ReadByte();
            if (b < 0) throw new EndOfStreamException();
            i++;
            keepGoing = (b & 0x80) != 0;
            value |= ((uint)(b & 0x7F)) << shift;
            shift += 7;
        } while (keepGoing && i < 4);
        if (keepGoing && i == 4)
        {
            throw new OverflowException();
        }
        return true;
    }
}
Marc Gravell
@marc "mostly continuous" could be taken to mean 51% continuous in which case, bitmaps are good... :p
Sam Saffron
+1  A: 

In addition to the other solutions:

You could find "dense" areas and use a bitmap to store them.

So for example:

If you have 1000 numbers in 400 ranges between 1000-3000, you could use a single bit to denote the existence of each number and two short ints to denote the range. Total storage for this range is 2000 bits + 2 short ints, so you can store that info in 254 bytes, which is pretty good: even short integers take up two bytes each, so for this example you get a 7x saving.

The denser the areas the better this algorithm will do, but at some point just storing start and finish will be cheaper.
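A sketch of one such dense block in Java, built on java.util.BitSet (the class and its names are my own invention):

```java
import java.util.BitSet;

public class DenseBlock {
    public final int base;     // first value the bitmap can cover
    public final BitSet bits;  // bit i set => (base + i) is present

    public DenseBlock(int base, int span) {
        this.base = base;
        this.bits = new BitSet(span);
    }

    public void add(int value) {
        bits.set(value - base);
    }

    public boolean contains(int value) {
        return value >= base && bits.get(value - base);
    }

    // Rough serialized size: two 4-byte ints for the range plus the bitmap.
    public int sizeInBytes() {
        return 8 + (bits.length() + 7) / 8;
    }
}
```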

Sam Saffron
If they are "mostly continuous", then start/end (or similar) will probably be the optimum from the outset; bitmap would be good for blocks of more random data
Marc Gravell
I agree; the only way to write an optimal algorithm here would be to have some sample data.
Sam Saffron
Agree 100% re needing sample data... but since we've arrived 6 months late, I'm not holding my breath ;-p
Marc Gravell
+1  A: 

The basic idea you should probably use is, for each range of consecutive integers (I will call these "ranges"), to store the starting number and the size of the range. For example, if you have a list of 1000 integers, but there are only 10 separate ranges, you can store a mere 20 integers (1 start number and 1 size for each range) to represent this data, which would be a compression rate of 98%. Fortunately, there are some more optimizations you can make which will help with cases where the number of ranges is larger.

  1. Store the offset of the starting number relative to the previous starting number, rather than the starting number itself. The advantage here is that the stored numbers will generally require fewer bits (this comes in handy in the next suggestion). Additionally, if you stored only the starting numbers, these would all be unique, while storing the offsets gives a chance that the numbers are close or even repeat, which may allow even further compression with another method applied afterwards.

  2. Use the minimum number of bits possible for both types of integers. You can iterate over the numbers to obtain the largest offset of a starting integer as well as the size of the largest range. You can then use a datatype that most efficiently stores these integers and simply specify the datatype or number of bits at the start of the compressed data. For example, if the largest offset of a starting integer is only 12,000, and the largest range is 9,000 long, then you can use a 2 byte unsigned integer for all of these. You could then cram the pair 2,2 at the start of the compressed data to show that 2 bytes is used for both integers. Of course you can fit this information into a single byte using a little bit of bit manipulation. If you are comfortable with doing a lot of heavy bit manipulation you could store each number as the minimum possible amount of bits rather than conforming to 1, 2, 4, or 8 byte representations.

With those two optimizations, let's look at a couple of examples (each is 4,000 bytes uncompressed):

  1. 1,000 integers, biggest offset is 250, 10 ranges
  2. 1,000 integers, biggest offset is 100, 50 ranges
  3. 1,000 integers, biggest offset is 50, 100 ranges

WITHOUT OPTIMIZATIONS

  1. 20 integers, 4 bytes each = 80 bytes. COMPRESSION = 98%
  2. 100 integers, 4 bytes each = 400 bytes. COMPRESSION = 90%
  3. 200 integers, 4 bytes each = 800 bytes. COMPRESSION = 80%

WITH OPTIMIZATIONS

  1. 1 byte header + 20 numbers, 1 byte each = 21 bytes. COMPRESSION = 99.475%
  2. 1 byte header + 100 numbers, 1 byte each = 101 bytes. COMPRESSION = 97.475%
  3. 1 byte header + 200 numbers, 1 byte each = 201 bytes. COMPRESSION = 94.975%
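A sketch of optimization #2 in Java (class and method names are my own; widths limited to 1, 2, or 4 bytes rather than arbitrary bit counts):

```java
import java.io.ByteArrayOutputStream;
import java.util.List;

public class RangePacker {
    // Packs (offset, size) pairs using the smallest byte width that fits the
    // largest value of each kind; a one-byte header records the two widths.
    public static byte[] pack(List<int[]> pairs) {
        int maxOffset = 0, maxSize = 0;
        for (int[] p : pairs) {
            maxOffset = Math.max(maxOffset, p[0]);
            maxSize = Math.max(maxSize, p[1]);
        }
        int wOff = widthFor(maxOffset), wSize = widthFor(maxSize);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write((wOff << 4) | wSize);   // header: both widths in one byte
        for (int[] p : pairs) {
            writeLE(out, p[0], wOff);
            writeLE(out, p[1], wSize);
        }
        return out.toByteArray();
    }

    static int widthFor(int v) {
        return v < 0x100 ? 1 : (v < 0x10000 ? 2 : 4);
    }

    static void writeLE(ByteArrayOutputStream out, int v, int width) {
        for (int i = 0; i < width; i++) out.write((v >>> (8 * i)) & 0xFF);
    }
}
```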
MahlerFive
For info, variable-length encoding would allow you to have occasional large values without having to make everything wide.
Marc Gravell