I have a problem: I need space-efficient lookup of file-system data based on file path prefix. Prefix searching of sorted text, in other words. Use a trie, you say, and I thought the same thing. Trouble is, tries are not space-efficient enough, not without other tricks.
I have a fair amount of data:
- about 450M in a plain-text Unix-format listing on disk
- about 8 million lines
- gzip default compresses to 31M
- bzip2 default compresses to 21M
I don't want to be eating anywhere close to 450M in memory. At this point I'd be happy to be using somewhere around 100M, since there's lots of redundancy in the form of prefixes.
I'm using C# for this job, and a straightforward implementation of a trie will still require one leaf node for every line in the file. Given that every leaf node will require some kind of reference to its final chunk of text (32 bits, say an index into an array of string data to minimize string duplication), and CLR object overhead is 8 bytes (verified using windbg / SOS), that's 12 bytes per leaf, so I'll be spending >96,000,000 bytes in structural overhead with no text storage at all.
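For concreteness, the naive layout I mean is roughly this - an illustrative sketch, not my actual StringTrie code:

// Sketch only: names and layout are illustrative.
abstract class Node { }

sealed class LeafNode : Node
{
    // 8-byte object header + 4-byte field = 12 bytes per leaf;
    // 8 million leaves x 12 bytes is the >96,000,000 figure above.
    internal int ChunkIndex; // index into a shared array of chunk strings
}

sealed class InteriorNode : Node
{
    internal int ChunkIndex;  // this node's chunk of text
    internal Node[] Children; // one entry per distinct next chunk
}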
Let's look at some statistical attributes of the data when stuffed into a trie:
- about 1.1 million total unique "chunks" of text
- unique chunks take about 16M on disk in a text file
- average chunk length is 5.5 characters, max 136
- not counting duplicates, about 52 million characters total in chunks
- internal trie nodes average about 6.5 children, with a max of 44
- about 1.8M interior nodes
The excess rate of leaf creation is about 15%, and of interior node creation about 22% - by excess creation, I mean leaves and interior nodes created during trie construction but not present in the final trie, as a proportion of the final number of nodes of each type.
Here's a heap analysis from SOS, indicating where the most memory is getting used:
MT            Count       Size  Class
03563150         11       1584  System.Collections.Hashtable+bucket[]
03561630         24       4636  System.Char[]
03563470          8       6000  System.Byte[]
00193558        425      74788  Free
00984ac8      14457     462624  MiniList`1+<GetEnumerator>d__0[[StringTrie+Node]]
03562b9c          6   11573372  System.Int32[]
*009835a0   1456066   23297056  StringTrie+InteriorNode
035576dc          1   46292000  Dictionary`2+Entry[[String],[Int32]][]
*035341d0   1456085   69730164  System.Object[]
*03560a00   1747257   80435032  System.String
*00983a54   8052746   96632952  StringTrie+LeafNode
The Dictionary<string,int> is being used to map string chunks to indexes into a List<string>, and can be discarded after trie construction, though GC doesn't seem to be removing it (a couple of explicit collections were done before this dump) - !gcroot in SOS doesn't indicate any roots, but I anticipate that a later GC would free it.
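The interning pattern itself is the usual one; a sketch with illustrative names, not the actual code:

using System.Collections.Generic;

sealed class ChunkPool
{
    readonly Dictionary<string, int> _indexByChunk = new Dictionary<string, int>();
    readonly List<string> _chunks = new List<string>();

    // Returns a canonical index for the chunk, adding it on first sight,
    // so each distinct chunk is stored exactly once.
    public int Intern(string chunk)
    {
        int index;
        if (!_indexByChunk.TryGetValue(chunk, out index))
        {
            index = _chunks.Count;
            _chunks.Add(chunk);
            _indexByChunk.Add(chunk, index);
        }
        return index;
    }

    // After construction only the list needs to survive; the dictionary
    // is dead weight and should be collectible.
    public List<string> Chunks { get { return _chunks; } }
}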
MiniList<T> is a replacement for List<T> using a precisely-sized (i.e. linear growth, O(n^2) addition performance) T[] to avoid space wastage; it's a value type, and is used by InteriorNode to track children. This T[] is added to the System.Object[] pile.
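In outline it's something like this (a sketch; the real type also carries the enumerator that shows up in the dump):

using System;

// Value-type list whose backing array is always exactly Count long:
// each Add reallocates and copies, hence O(n^2) total work for n adds,
// but zero slack capacity is ever held.
struct MiniList<T>
{
    T[] _items; // null when empty

    public int Count { get { return _items == null ? 0 : _items.Length; } }

    public T this[int index] { get { return _items[index]; } }

    public void Add(T item)
    {
        int count = Count;
        T[] newItems = new T[count + 1];
        if (count > 0)
            Array.Copy(_items, newItems, count);
        newItems[count] = item;
        _items = newItems;
    }
}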
So, if I tot up the "interesting" items (marked with * above), I get about 270M, which is better than raw text on disk, but still not close enough to my goal. I figured that .NET object overhead was too much, and created a new "slim" trie, using just value-type arrays to store data:
class SlimTrie
{
byte[] _stringData; // UTF8-encoded, 7-bit-encoded-length prefixed string data
// indexed by _interiorChildIndex[n].._interiorChildIndex[n]+_interiorChildCount[n]
// Indexes interior_node_index if negative (bitwise complement),
// leaf_node_group if positive.
int[] _interiorChildren;
// The interior_node_index group - all arrays use same index.
byte[] _interiorChildCount;
int[] _interiorChildIndex; // indexes _interiorChildren
int[] _interiorChunk; // indexes _stringData
// The leaf_node_index group.
int[] _leafNodes; // indexes _stringData
// ...
}
This structure has brought the amount of data down to 139M, and it's still an efficiently traversable trie for read-only operations. And because it's so simple, I can trivially save it to disk and restore it, avoiding the cost of recreating the trie every time.
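For illustration, here's roughly how traversal decodes the sign convention - a simplified sketch of a hypothetical SlimTrie method (the chunk comparison, 7-bit length decoding, and my reading of the leaf-group layout are elided or assumed):

// Sketch: visit the children of one interior node and decode each entry.
void VisitChildren(int interiorNode)
{
    int start = _interiorChildIndex[interiorNode];
    int end = start + _interiorChildCount[interiorNode];
    for (int i = start; i < end; ++i)
    {
        int child = _interiorChildren[i];
        if (child < 0)
        {
            // Negative: bitwise complement gives an interior node index.
            int childNode = ~child;
            int chunkOffset = _interiorChunk[childNode]; // into _stringData
            // ... read the 7-bit-encoded length at chunkOffset, compare, recurse ...
        }
        else
        {
            // Positive: an index into the leaf group.
            int chunkOffset = _leafNodes[child]; // into _stringData
            // ... compare the chunk, report a hit ...
        }
    }
}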
So, any suggestions for structures more space-efficient for prefix search than a trie? Alternative approaches I should consider?