+64  Q: 

In-Place Radix Sort

This is a long text. Please bear with me. Boiled down, the question is: does someone know a workable in-place radix sort algorithm?


Preliminary

I've got a huge number of small fixed-length strings that only use the letters “A”, “C”, “G” and “T” (yes, you've guessed it: DNA) that I want to sort.

At the moment, I use std::sort which uses introsort in all common implementations of the STL. This works quite well. However, I'm convinced that radix sort fits my problem set perfectly and should work much better in practice.

Details

I've tested this assumption with a very naive implementation, and for relatively small inputs (on the order of 10,000) this was true (well, at least more than twice as fast). However, runtime degrades abysmally when the problem size becomes larger (N > 5,000,000).

The reason is obvious: radix sort requires copying the whole data (more than once in my naive implementation, actually). This means that I've put ~4 GiB into my main memory, which obviously kills performance. Even if it didn't, I can't afford to use this much memory since the problem sizes actually become even larger.

Use Cases

Ideally, this algorithm should work with any string length between 2 and 100, for DNA as well as DNA5 (which allows an additional wildcard character “N”), or even DNA with IUPAC ambiguity codes (resulting in 16 distinct values). However, I realize that all these cases cannot be covered so I'm happy with any speed improvement I get. The code can decide dynamically which algorithm to dispatch to.

Research

Unfortunately, the Wikipedia article on radix sort is useless. The section about an in-place variant is complete rubbish. The NIST-DADS section on radix sort is next to nonexistent. There's a promising-sounding paper called Efficient Adaptive In-Place Radix Sorting which describes the algorithm “MSL”. Unfortunately, this paper, too, is disappointing.

In particular, I found the following problems.

First, the algorithm contains several mistakes and leaves a lot unexplained. In particular, it doesn't detail the recursion call (I simply assume that it increments or reduces some pointer to calculate the current shift and mask values). It also uses the functions dest_group and dest_address without giving definitions. I fail to see how to implement these efficiently (i.e. in O(1); at least dest_address isn't trivial).

Last but not least, the algorithm achieves in-place-ness by swapping array indices with elements inside the input array. This obviously only works on numerical arrays. I need to use it on strings. Of course, I could just screw strong typing and go ahead assuming that the memory will tolerate my storing an index where it doesn't belong. But this only works as long as I can squeeze my strings into 32 bits of memory (assuming 32-bit integers). That's only 16 characters (let's ignore for the moment that 16 > log(5,000,000)).

Another paper by one of the authors gives no accurate description at all, but it gives MSL's runtime as sub-linear, which is flat-out wrong.

To recap: Is there any hope of finding a working reference implementation or at least a good pseudocode/description of a working in-place radix sort that works on DNA strings?

+20  A: 

Well, here's a simple implementation of an MSD radix sort for DNA. It's written in D because that's the language that I use most and therefore am least likely to make silly mistakes in, but it could easily be translated to some other language. It's in-place but requires 2 * seq.length passes through the array.

import std.algorithm : swap;

void radixSort(string[] seqs, size_t base = 0) {
    if(seqs.length == 0)
        return;

    // First pass: move 'A's to the front and 'T's to the back;
    // 'C' and 'G' are left in the middle.
    size_t TPos = seqs.length, APos = 0;
    size_t i = 0;
    while(i < TPos) {
        if(seqs[i][base] == 'A') {
            swap(seqs[i], seqs[APos++]);
            i++;
        }
        else if(seqs[i][base] == 'T') {
            swap(seqs[i], seqs[--TPos]);
        } else i++;
    }

    // Second pass: within the middle section, move 'C's before 'G's.
    i = APos;
    size_t CPos = APos;
    while(i < TPos) {
        if(seqs[i][base] == 'C') {
            swap(seqs[i], seqs[CPos++]);
        }
        i++;
    }

    // Recurse into each of the four buckets on the next character.
    if(base < seqs[0].length - 1) {
        radixSort(seqs[0..APos], base + 1);
        radixSort(seqs[APos..CPos], base + 1);
        radixSort(seqs[CPos..TPos], base + 1);
        radixSort(seqs[TPos..seqs.length], base + 1);
    }
}

Obviously, this is kind of specific to DNA, as opposed to being general, but it should be fast.

Edit: I got curious whether this code actually works, so I tested/debugged it while waiting for my own bioinformatics code to run. The version above now is actually tested and works. For 10 million sequences of 5 bases each, it's about 3x faster than an optimized introsort.

dsimcha
Don't worry about generality – this can be done thanks to metaprogramming. Concerning correctness: your loop conditions look fishy because the index `i` remains unchanged (even though the other bounds change). But the basic idea looks very good.
Konrad Rudolph
I think you need an `else i++;` as the last line in both while loops
AShelly
Yeah, I didn't actually test this code. That's a mistake. Corrected.
dsimcha
Well, I'll just go ahead and accept this answer since it seems to do the trick. I'll need more time to evaluate all the answers fairly, though.
Konrad Rudolph
If you can live with a two-pass approach, this extends to radix-N: pass 1 = just go through and count how many there are of each of the N digits. Then, if you are partitioning the array, this tells you where each digit starts. Pass 2 does swaps to the appropriate position in the array.
Jason S
(e.g. for N=4, if there are 90000 A, 80000 G, 100 C, 100000 T, then make an array initialized to the cumulative sums = [0, 90000, 170000, 170100] which is used in place of your APos, CPos, etc. as a cursor for where the next element for each digit should be swapped to.)
Jason S
I'm not sure what the relation between the binary representation and this string representation is going to be, apart from using at least 4 times as much memory as needed.
Stephan Eggermont
How is the speed with longer sequences? You don't have enough different ones with a length of 5.
Stephan Eggermont
This radix sort looks to be a special case of the American Flag sort – a well-known in-place radix sort variant.
Edward Kmett
Completely unrelated, but what IDE do you use for D? Been trying to find a nice one...
Mark
CodeBlocks. It sucks (it's basically just an editor plus basic build automation) and I've been looking for something better, but there are no good IDEs for D2 (the bleeding edge version of the language) yet, and D has enough language level features to eliminate boilerplate code that you don't need a good IDE for it as much as you do for some other languages.
dsimcha
+4  A: 

If your data set is so big, then I would think that a disk-based buffer approach would be best:

sort(List<string> elements, int prefix)
    if (elements.Count < THRESHOLD)
         return InMemoryRadixSort(elements, prefix)
    else
         return DiskBackedRadixSort(elements, prefix)

DiskBackedRadixSort(elements, prefix)
    DiskBackedBuffer<string>[] buckets
    foreach (element in elements)
        buckets[element.MSB(prefix)].Add(element);

    List<string> ret
    foreach (bucket in buckets)
        ret.Add(sort(bucket, prefix + 1))

    return ret

I would also experiment with grouping into a larger number of buckets; for instance, if your string was:

GATTACA

the first MSB call would return the bucket for GATT (256 total buckets); that way you make fewer branches of the disk-based buffer. This may or may not improve performance, so experiment with it.
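
To make the bucketing concrete, here is a minimal C++ sketch of the MSB(prefix) idea (baseValue and bucketIndex are hypothetical names, not part of any existing API, and clean A/C/G/T input is assumed):

#include <cstddef>

// Map one nucleotide to a 2-bit value.
static int baseValue(char c)
{
    switch (c)
    {
        case 'A': return 0;
        case 'C': return 1;
        case 'G': return 2;
        case 'T': return 3;
        default:  return 0; // 'N' or IUPAC codes would need a wider radix
    }
}

// Bucket index from the first prefixLen characters; prefixLen = 4
// over {A, C, G, T} yields 4^4 = 256 buckets.
int bucketIndex(const char *s, std::size_t prefixLen)
{
    int index = 0;
    for (std::size_t i = 0; i < prefixLen; ++i)
        index = index * 4 + baseValue(s[i]);
    return index;
}

For "GATTACA" with prefixLen = 4, this returns the index of the GATT bucket.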

FryGuy
We use memory-mapped files for some applications. However, in general we work under the assumption that the machine provides just barely enough RAM to not require explicit disk backing (of course, swapping still takes place). But we are already developing a mechanism for automatic disk-backed arrays.
Konrad Rudolph
+8  A: 

I've never seen an in-place radix sort, and from the nature of radix sort I doubt that it is much faster than an out-of-place sort as long as the temporary array fits into memory.

Reason:

The sorting does a linear read on the input array, but all writes will be nearly random. From a certain N upwards this boils down to a cache miss per write. This cache miss is what slows down your algorithm. Whether it's in place or not will not change this effect.

I know that this will not answer your question directly, but if sorting is a bottleneck you may want to have a look at near sorting algorithms as a preprocessing step (the wiki-page on the soft-heap may get you started).

That could give a very nice cache locality boost. A text-book out-of-place radix sort will then perform better. The writes will still be nearly random but at least they will cluster around the same chunks of memory and as such increase the cache hit ratio.

I have no idea if it works out in practice though.

Btw: If you're dealing with DNA strings only: You can compress a char into two bits and pack your data quite a lot. This will cut down the memory requirement by a factor of four over a naive representation. Addressing becomes more complex, but the ALU of your CPU has lots of time to spend during all the cache misses anyway.
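
As an illustration of this packing (a sketch only; packDna is a hypothetical helper, and the input is assumed to contain nothing but A/C/G/T):

#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Pack a DNA string at 2 bits per nucleotide: four bases per byte.
std::vector<std::uint8_t> packDna(const std::string& seq)
{
    std::vector<std::uint8_t> packed((seq.size() + 3) / 4, 0);
    for (std::size_t i = 0; i < seq.size(); ++i)
    {
        std::uint8_t code = 0;
        switch (seq[i])
        {
            case 'A': code = 0; break;
            case 'C': code = 1; break;
            case 'G': code = 2; break;
            case 'T': code = 3; break;
        }
        packed[i / 4] |= code << ((i % 4) * 2);
    }
    return packed;
}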

Nils Pipenbrinck
Two good points; near sorting is a new concept to me, I'll have to read about that. Cache misses are another consideration that haunts my dreams. ;-) I'll have to see about this.
Konrad Rudolph
It's new for me as well (a couple of months), but once you get the concept you start to see performance improvement opportunities.
Nils Pipenbrinck
+4  A: 

I'm going to go out on a limb and suggest you switch to a heap/heap-sort implementation. This suggestion comes with some assumptions:

1) You control the reading of the data.
2) You can do something meaningful with the sorted data as soon as you 'start' getting it sorted.

The beauty of the heap/heap-sort is that you can build the heap while you read the data, and you can start getting results the moment you have built the heap.

Let's step back. If you are so fortunate that you can read the data asynchronously (i.e. you can post some kind of read request and be notified when some data is ready), then you can build a chunk of the heap while you are waiting for the next chunk of data to come in - even from disk. Often, this approach can hide roughly half the cost of your sorting behind the time spent getting the data.

Once you have the data read, the first element is already available. Depending on where you are sending the data, this can be great. If you are sending it to another async reader, or some parallel 'event' model, or UI, you can send chunks as you go.

That said - if you have no control over how the data is read, and it is read synchronously, and you have no use for the sorted data until it is entirely written out - ignore all this. :(

http://en.wikipedia.org/wiki/Heapsort http://en.wikipedia.org/wiki/Binary_heap

Joe
Good suggestion. However, I've already tried this and in my particular case the overhead of maintaining a heap is larger than just accumulating the data in a vector and sorting once all the data has arrived.
Konrad Rudolph
+1  A: 

First, think about the coding of your problem. Get rid of the strings and replace them with a binary representation. Use the first byte to indicate length+encoding. Alternatively, use a fixed-length representation at a 4-byte boundary. Then the radix sort becomes much easier. For a radix sort, the most important thing is to not have exception handling at the hot spot of the inner loop.

[edit] Ok, I thought a bit more about the 4-nary problem. You want a solution like a Judy tree for this. The next solution can handle variable-length strings; for fixed length, just remove the length bits, which actually makes it easier.

Allocate blocks of 16 pointers. The least significant bit of the pointers can be reused, as your blocks will always be aligned. You might want a special storage allocator for it (breaking up large storage into smaller blocks). There are a number of different kinds of blocks:

  • encoding with 7 length bits of variable-length strings. As they fill up, you replace them by:
  • position encodes the next two characters, you have 16 pointers to the next blocks, ending with:
  • bitmap encoding of the last three characters of a string.

For each kind of block you need to store different information in the LSBs. As you have variable length strings you need to store end-of-string too, and the last kind of block can only be used for the longest strings. The 7 length bits should be replaced by less as you get deeper into the structure.

This provides you with a reasonably fast and very memory efficient storage of sorted strings. It will behave somewhat like a trie. To get this working, make sure to build enough unit tests. You want coverage of all block transitions. You want to start with only the second kind of block.

For even more performance, you might want to add different block types and larger block sizes. If the blocks are always the same size and large enough, you can use even fewer bits for the pointers. With a block size of 16 pointers you already have a byte free in a 32-bit address space. Take a look at the Judy tree documentation for interesting block types. Basically, you add code and engineering time for a space (and runtime) trade-off.

[edit2] You probably want to start with a 256 wide direct radix for the first 4 characters. That provides a decent space/time tradeoff. In this implementation, you get much less memory overhead than with a simple trie, it is approximately three times smaller (haven't measured). O(n) is no problem if the constant is low enough, as you noticed when comparing with the O(n log n) quicksort.

[edit3] Are you interested in handling duplicates? With short sequences, there are going to be some. Adapting the blocks to handle counts is tricky, but it can be very space-efficient.

Stephan Eggermont
I don't see how radix sort becomes easier in my case if I use a bit-packed representation. By the way, the framework I use actually provides the possibility of using a bit-packed representation but this is completely transparent for me as a user of the interface.
Konrad Rudolph
Not when you look at your stopwatch :)
Stephan Eggermont
I'll definitely have a look at Judy trees. Vanilla tries don't really bring much to the table though because they behave basically like a normal MSD radix sort with fewer passes over the elements but require extra storage.
Konrad Rudolph
+4  A: 

You can certainly drop the memory requirements by encoding the sequence in bits. You are looking at permutations, so for length 2, with "ACGT", that's 16 states, or 4 bits. For length 3, that's 64 states, which can be encoded in 6 bits. So it looks like 2 bits for each letter in the sequence, or about 32 bits for 16 characters, like you said.

If there is a way to reduce the number of valid 'words', further compression may be possible.

So for sequences of length 3, one could create 64 buckets, maybe sized uint32 or uint64. Initialize them to zero. Iterate through your very, very large list of 3-char sequences, and encode them as above. Use the encoding as a subscript, and increment that bucket. Repeat this until all of your sequences have been processed.

Next, regenerate your list.

Iterate through the 64 buckets in order; for the count found in each bucket, generate that many instances of the sequence represented by that bucket. When all of the buckets have been iterated, you have your sorted array.

A sequence of 4, adds 2 bits, so there would be 256 buckets. A sequence of 5, adds 2 bits, so there would be 1024 buckets.

At some point the number of buckets will approach your limits. If you read the sequences from a file, instead of keeping them in memory, more memory would be available for buckets.

I think this would be faster than doing the sort in situ as the buckets are likely to fit within your working set.

Here is a hack that shows the technique:

#include <iostream>
#include <iomanip>

#include <math.h>
#include <string.h>   // memset, strlen

using namespace std;

const int width = 3;
const int bucketCount = exp(width * log(4)) + 1;  // 4^width buckets, plus one for slack
      int *bucket = NULL;

const char charMap[4] = {'A', 'C', 'G', 'T'};

void setup
(
    void
)
{
    bucket = new int[bucketCount];
    memset(bucket, '\0', bucketCount * sizeof(bucket[0]));
}

void teardown
(
    void
)
{
    delete[] bucket;
}

void show
(
    int encoded
)
{
    int z;
    int y;
    int j;
    for (z = width - 1; z >= 0; z--)
    {
        int n = 1;
        for (y = 0; y < z; y++)
            n *= 4;

        j = encoded % n;
        encoded -= j;
        encoded /= n;
        cout << charMap[encoded];
        encoded = j;
    }

    cout << endl;
}

int main(void)
{
    // Sort this sequence
    const char *testSequence = "CAGCCCAAAGGGTTTAGACTTGGTGCGCAGCAGTTAAGATTGTTT";

    size_t testSequenceLength = strlen(testSequence);

    setup();


    // load the sequences into the buckets
    size_t z;
    for (z = 0; z < testSequenceLength; z += width)
    {
        int encoding = 0;

        size_t y;
        for (y = 0; y < width; y++)
        {
            encoding *= 4;

            switch (*(testSequence + z + y))
            {
                case 'A' : encoding += 0; break;
                case 'C' : encoding += 1; break;
                case 'G' : encoding += 2; break;
                case 'T' : encoding += 3; break;
                default  : abort();
            };
        }

        bucket[encoding]++;
    }

    /* show the sorted sequences */ 
    for (z = 0; z < bucketCount; z++)
    {
        while (bucket[z] > 0)
        {
            show(z);
            bucket[z]--;
        }
    }

    teardown();

    return 0;
}
EvilTeach
Why compare when you can hash, eh?
wowest
Damn straight. Performance is generally an issue with any DNA processing.
EvilTeach
+1  A: 

You might try using a trie. Sorting the data is simply iterating through the dataset and inserting it; the structure is naturally sorted, and you can think of it as similar to a B-Tree (except instead of making comparisons, you always use pointer indirections).

Caching behavior will favor all of the internal nodes, so you probably won't improve upon that; but you can fiddle with the branching factor of your trie as well (ensure that every node fits into a single cache line, allocate trie nodes similar to a heap, as a contiguous array that represents a level-order traversal). Since tries are also digital structures (O(k) insert/find/delete for elements of length k), you should have performance competitive with a radix sort.
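
As a rough sketch of such a trie in C++ (node layout and names are mine, and it is not tuned for cache lines or custom allocation as suggested above):

#include <array>
#include <cstddef>
#include <memory>
#include <string>

struct TrieNode
{
    std::array<std::unique_ptr<TrieNode>, 4> child; // A, C, G, T
    std::size_t count = 0;  // sequences ending at this node (handles duplicates)
};

// Inserting is the whole "sort": a depth-first traversal that visits
// children in A, C, G, T order afterwards emits the sequences sorted.
void insert(TrieNode& root, const std::string& seq)
{
    TrieNode* node = &root;
    for (std::size_t i = 0; i < seq.size(); ++i)
    {
        char c = seq[i];
        std::size_t idx = (c == 'A') ? 0 : (c == 'C') ? 1 : (c == 'G') ? 2 : 3;
        if (!node->child[idx])
            node->child[idx] = std::make_unique<TrieNode>();
        node = node->child[idx].get();
    }
    ++node->count;
}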

Tom
The trie has the same problem as my naive implementation: it requires O(n) additional memory which is simply too much.
Konrad Rudolph
+1  A: 

dsimcha's MSB radix sort looks nice, but Nils gets closer to the heart of the problem with the observation that cache locality is what's killing you at large problem sizes.

I suggest a very simple approach:

  1. Empirically estimate the largest size m for which a radix sort is efficient.
  2. Read blocks of m elements at a time, radix sort them, and write them out (to a memory buffer if you have enough memory, but otherwise to file), until you exhaust your input.
  3. Mergesort the resulting sorted blocks.

Mergesort is the most cache-friendly sorting algorithm I'm aware of: "Read the next item from either array A or B, then write an item to the output buffer." It runs efficiently on tape drives. It does require 2n space to sort n items, but my bet is that the much-improved cache locality you'll see will make that unimportant -- and if you were using a non-in-place radix sort, you needed that extra space anyway.

Note, finally, that mergesort can be implemented without recursion, and in fact doing it this way makes clear the true linear memory access pattern.
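
An in-memory C++ sketch of the scheme (blockSortMerge is a hypothetical name; std::sort stands in for the empirically tuned per-block radix sort, and the disk-backed variant is omitted):

#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

void blockSortMerge(std::vector<std::string>& data, std::size_t m)
{
    // Step 2 of the recipe: sort each block of m elements independently.
    for (std::size_t i = 0; i < data.size(); i += m)
        std::sort(data.begin() + i,
                  data.begin() + std::min(i + m, data.size()));

    // Step 3: bottom-up mergesort of the sorted runs, doubling the run
    // length each round; this is the linear access pattern described above.
    for (std::size_t width = m; width < data.size(); width *= 2)
        for (std::size_t i = 0; i + width < data.size(); i += 2 * width)
            std::inplace_merge(data.begin() + i,
                               data.begin() + i + width,
                               data.begin() + std::min(i + 2 * width, data.size()));
}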

j_random_hacker
+5  A: 

Based on dsimcha's code, I've implemented a more generic version that fits well into the framework we use (SeqAn). Actually, porting the code was completely straightforward. Only afterwards did I find that there are actually publications concerning this very topic. The great thing is: they basically say the same as you guys. A paper by Andersson and Nilsson on Implementing Radixsort is definitely worth the read. If you happen to know German, be sure to also read David Weese's diploma thesis where he implements a generic substring index. Most of the thesis is devoted to a detailed analysis of the cost of building the index, considering secondary memory and extremely large files. The results of his work have actually been implemented in SeqAn, only not in those parts where I needed it.

Just for fun, here's the code I've written (I don't think anyone not using SeqAn will have any use for it). Notice that it still doesn't consider radices greater than 4. I expect that this would have a huge impact on performance but unfortunately I simply don't have the time right now to implement this.

The code performs more than twice as fast as Introsort for short strings. The break-even point is at a length of about 12–13. The type of string (e.g. whether it has 4, 5, or 16 different values) is comparatively unimportant. Sorting > 6,000,000 DNA reads from chromosome 2 of the human genome takes just over 2 seconds on my PC. Just for the record, that's fast! Especially considering that I don't use SIMD or any other hardware acceleration. Furthermore, valgrind shows me that the main bottleneck is operator new in the string assignments. It gets called about 65,000,000 times – ten times for each string! This is a dead giveaway that swap could be optimized for these strings: instead of making copies, it could just swap all characters. I didn't try this but I'm convinced that it would make a hell of a difference. And, just to say it again, in case someone wasn't listening: the radix size has nearly no influence on runtime – which means that I should definitely try to implement the suggestion made by FryGuy, Stephan and EvilTeach.

Ah yes, by the way: cache locality is a noticeable factor: Starting at 1M strings, the runtime no longer increases linearly. However, this could be fixed quite easily: I use insertion sort for small subsets (<= 20 strings) – instead of mergesort as suggested by the random hacker. Apparently, this performs even better than mergesort for such small lists (see the first paper I linked).

// Standard headers needed below; the SeqAn headers providing Value, Size,
// Container, ValueSize and length() are assumed to be included already.
#include <algorithm>
#include <functional>
#include <iterator>

namespace seqan {

template <typename It, typename F, typename T>
inline void prescan(It front, It back, F op, T const& id) {
    using namespace std;
    if (front == back) return;
    typename iterator_traits<It>::value_type accu = *front;
    *front++ = id;
    for (; front != back; ++front) {
        swap(*front, accu);
        accu = op(accu, *front);
    }
}

template <typename TIter, typename TSize, unsigned int RADIX>
inline void radix_permute(TIter front, TIter back, TSize (& bounds)[RADIX], TSize base) {
    for (TIter i = front; i != back; ++i)
        ++bounds[static_cast<unsigned int>((*i)[base])];

    TSize fronts[RADIX];

    std::copy(bounds, bounds + RADIX, fronts);
    prescan(fronts, fronts + RADIX, std::plus<TSize>(), 0);
    std::transform(bounds, bounds + RADIX, fronts, bounds, std::plus<TSize>());

    TSize active_base = 0;

    for (TIter i = front; i != back; ) {
        if (active_base == RADIX - 1)
            return;
        while (fronts[active_base] >= bounds[active_base])
            if (++active_base == RADIX - 1)
                return;
        TSize current_base = static_cast<unsigned int>((*i)[base]);
        if (current_base <= active_base)
            ++i;
        else
            std::iter_swap(i, front + fronts[current_base]);
        ++fronts[current_base];
    }
}

template <typename TIter, typename TSize>
inline void insertion_sort(TIter front, TIter back, TSize base) {
    typedef typename Value<TIter>::Type T;
    struct {
        TSize base, len;
        bool operator ()(T const& a, T const& b) {
            for (TSize i = base; i < len; ++i)
                if (a[i] < b[i]) return true;
                else if (a[i] > b[i]) return false;
            return false;
        }
    } cmp = { base, length(*front) }; // No closures yet. :-(

    for (TIter i = front + 1; i != back; ++i) {
        T value = *i;
        TIter j = i;
        for ( ; j != front && cmp(value, *(j - 1)); --j)
            *j = *(j - 1);
        if (j != i)
            *j = value;
    }
}

template <typename TIter, typename TSize, unsigned int RADIX>
inline void radix(TIter top, TIter front, TIter back, TSize base, TSize (& parent_bounds)[RADIX], TSize next) {
    if (back - front > 20) {
        TSize bounds[RADIX] = { 0 };
        radix_permute(front, back, bounds, base);

        // Sort current bucket recursively by suffix.
        if (base < length(*front) - 1)
            radix(front, front, front + bounds[0], base + 1, bounds, static_cast<TSize>(0));
    }
    else if (back - front > 1)
        insertion_sort(front, back, base);

    // Sort next buckets on same level recursively.
    if (next == RADIX - 1) return;
    radix(top, top + parent_bounds[next], top + parent_bounds[next + 1], base, parent_bounds, next + 1);
}

template <typename TIter>
inline void radix_sort(TIter front, TIter back) {
    typedef typename Container<TIter>::Type TStringSet;
    typedef typename Value<TStringSet>::Type TString;
    typedef typename Value<TString>::Type TChar;
    typedef typename Size<TStringSet>::Type TSize;

    TSize const RADIX = ValueSize<TChar>::VALUE;
    TSize bounds[RADIX];

    radix(front, front, back, static_cast<TSize>(0), bounds, RADIX - 1);
}

} // namespace seqan
Konrad Rudolph
+1  A: 

It looks like you've solved the problem, but for the record, it appears that one version of a workable in-place radix sort is the "American Flag Sort". It's described here: Engineering Radix Sort. The general idea is to do two passes on each character position: first count how many of each you have, so you can subdivide the input array into bins; then go through again, swapping each element into the correct bin. Now recursively sort each bin on the next character position.
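
A rough C++ sketch of one such pass over the four-letter DNA alphabet (flagSortPass is my name for it; see the paper for the real engineering details):

#include <cstddef>
#include <string>
#include <utility>
#include <vector>

void flagSortPass(std::vector<std::string>& a, std::size_t pos)
{
    // Bin for a nucleotide (assumes clean A/C/G/T input).
    auto code = [](char c) -> std::size_t {
        return c == 'A' ? 0 : c == 'C' ? 1 : c == 'G' ? 2 : 3;
    };

    // Pass 1: count how many strings fall into each bin at position pos.
    std::size_t count[4] = {0, 0, 0, 0};
    for (std::size_t i = 0; i < a.size(); ++i)
        ++count[code(a[i][pos])];

    // Prefix sums give each bin's start; next[] is the write cursor.
    std::size_t start[4], next[4];
    start[0] = next[0] = 0;
    for (int b = 1; b < 4; ++b)
        start[b] = next[b] = start[b - 1] + count[b - 1];

    // Pass 2: swap each element into its bin until every bin is full.
    for (int b = 0; b < 4; ++b)
        while (next[b] < start[b] + count[b])
        {
            std::size_t dest = code(a[next[b]][pos]);
            if (dest == static_cast<std::size_t>(b))
                ++next[b];                        // already where it belongs
            else
                std::swap(a[next[b]], a[next[dest]++]);
        }
    // Recursing into each bin at pos + 1 completes the sort.
}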

AShelly
Actually, the solution I use is very closely related to the Flag Sorting algorithm. I don't know if there's any relevant distinction.
Konrad Rudolph
+2  A: 

I would burstsort a packed-bit representation of the strings. Burstsort is claimed to have much better locality than radix sorts, keeping the extra space usage down with burst tries in place of classical tries. The original paper has measurements.

Darius Bacon
+1  A: 

Radix sort is not cache-conscious and is not the fastest sort algorithm for large sets. You can look at faster alternatives such as the inline QSORT or ti7qsort mentioned in the comments below.

You can also use compression and encode each letter of your DNA into 2 bits before storing it into the sort array.

bill
bill: could you explain what advantages this `qsort` function has over the `std::sort` function provided by C++? In particular, the latter implements a highly sophisticated introsort in modern libraries and inlines the comparison operation. I don't buy the claim that it performs in O(n) for most cases, since this would require a degree of introspection not available in the general case (at least not without *a lot* of overhead).
Konrad Rudolph
I'm not using C++, but in my tests the inline QSORT can be 3 times faster than the qsort in stdlib. ti7qsort is the fastest sort for integers (faster than inline QSORT). You can also use it to sort small fixed-size data. You must do the tests with your data.
bill
+1  A: 

Performance-wise you might want to look at more general string-comparison sorting algorithms.

Currently you wind up touching every element of every string, but you can do better!

In particular, a burst sort is a very good fit for this case. As a bonus, since burstsort is based on tries, it works ridiculously well for the small alphabet sizes used in DNA/RNA, since you don't need to build any sort of ternary search node, hash or other trie node compression scheme into the trie implementation. The tries may be useful for your suffix-array-like final goal as well.

A decent general-purpose implementation of burstsort is available on SourceForge at http://sourceforge.net/projects/burstsort/ - but it is not in-place.

For comparison purposes, the C-burstsort implementation covered at http://www.cs.mu.oz.au/~rsinha/papers/SinhaRingZobel-2006.pdf benchmarks 4-5x faster than quicksort and radix sorts for some typical workloads.

Edward Kmett
I'll definitely have to look at burst sort – although at the moment I don't see how the trie could be built in-place. In general suffix arrays have all but replaced suffix trees (and thus, tries) in bioinformatics because of superior performance characteristics in practical applications.
Konrad Rudolph
+1  A: 

You'll want to take a look at Large-scale Genome Sequence Processing by Drs. Kasahara and Morishita.

Strings composed of the four nucleotide letters A, C, G, and T can be specially encoded into integers for much faster processing. Radix sort is among many algorithms discussed in the book; you should be able to adapt the accepted answer to this question and see a big performance improvement.

Rudiger
The radix sort presented in this book isn't in-place, so it's not usable for this purpose. As for the string compaction, I am (of course) already doing this. My (more or less) final solution (posted above) doesn't show this because the library allows me to treat them like normal strings – but the `RADIX` value used can (and is) of course be adapted to larger values.
Konrad Rudolph
+1  A: 

"Radix sorting with no extra space" is a paper addressing your problem.

eig
Looks promising, though the problem has actually already been solved. Still, this goes into my reference library.
Konrad Rudolph