views: 57
answers: 2
I would like a fairly efficient way to condense an entire table to a hash value.

I have some tools that generate entire data tables, which can then be used to generate further tables, and so on. I'm trying to implement a simplistic build system to coordinate build runs and avoid repeating work. I want to be able to record hashes of the input tables so that I can later check whether they have changed. Building a table takes minutes or hours, so spending several seconds building hashes is acceptable.

A hack I have used is to just pipe the output of pg_dump to md5sum, but that requires transferring the entire table dump over the network to hash it on the local box. Ideally I'd like to produce the hash on the database server.

http://stackoverflow.com/questions/3878499/finding-the-hash-value-of-a-row-in-postgresql gives me a way to calculate a hash for a row at a time, which could then be combined somehow.

Any tips would be greatly appreciated.

Edit to post what I ended up with: tinychen's answer didn't work for me directly, because I couldn't use 'plpgsql' apparently. When I implemented the function in SQL instead, it worked, but was very inefficient for large tables. So instead of concatenating all the row hashes and then hashing that, I switched to using a "rolling hash", where the previous hash is concatenated with the text representation of a row and then that is hashed to produce the next hash. This was much better; apparently running md5 on short strings millions of extra times is better than concatenating short strings millions of times.

create function zz_concat(text, text) returns text as 
    'select md5($1 || $2);' language 'sql';

create aggregate zz_hashagg(text) (
    sfunc = zz_concat,
    stype = text,
    initcond = '');
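The rolling-hash scheme above can be sketched outside the database. This Python sketch (the row strings are hypothetical stand-ins for `cast(f.* as text)`) mirrors what `zz_hashagg` does when called as `zz_hashagg(md5(cast(f.* as text)))`:

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def rolling_table_hash(rows):
    # Mirrors zz_concat/zz_hashagg: the state starts as '' (initcond),
    # and each step replaces it with md5(state || row_hash).
    state = ""
    for row in rows:
        state = md5_hex(state + md5_hex(row))
    return state

# Hypothetical text-serialized rows, stand-ins for cast(f.* as text):
print(rolling_table_hash(["(1,alice)", "(2,bob)"]))
```

Because each step folds the previous state into a fixed-length digest, the working string never grows, which is why this beats concatenating millions of row hashes before hashing once.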
+1  A: 

As for the algorithm, you could XOR all the individual MD5 hashes, or concatenate them and hash the concatenation.

If you want to do this completely server-side you probably have to create your own aggregation function, which you could then call.

select my_table_hash(md5(CAST((f.*) AS text))) from (select * from f order by id) f

As an intermediate step, instead of copying the whole table to the client, you could just select the MD5 results for all rows, and run those through md5sum.

Either way you need to establish a fixed sort order, otherwise you might end up with different checksums even for the same data.
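On PostgreSQL 9.0 and later, the ordering can also be attached to the aggregate call itself, which guarantees the order in which rows reach the aggregate regardless of the outer plan (a sketch, assuming the custom `my_table_hash` aggregate above and a key column `id`):

```sql
-- PostgreSQL 9.0+: ORDER BY inside the aggregate call fixes the
-- order in which rows are fed to the aggregate's state function.
select my_table_hash(md5(CAST((f.*) AS text)) order by f.id)
from f;
```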

Thilo
"you need to establish a fixed sort order". That is, if you want to rehash the hashes. For XOR this is not necessary. Makes me think that XOR may not be such a good idea.
Thilo
@Thilo: You're right; XOR-aggregating the hashes means that if you have two identical rows, and they both change the same way, the final hash value will be the same as the original. Identical rows probably shouldn't be there, but I'd bet there are other properties of XOR that increase the chance of a collision too.
Ben
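The cancellation problem described in the comment above is easy to demonstrate. In this Python sketch (hypothetical row strings), two identical rows cancel each other out under XOR, so the aggregate value is blind to both copies changing in the same way:

```python
import hashlib
from functools import reduce

def md5_int(s: str) -> int:
    return int.from_bytes(hashlib.md5(s.encode()).digest(), "big")

def xor_table_hash(rows):
    # Order-independent aggregate: XOR all per-row MD5 digests together.
    return reduce(lambda a, b: a ^ b, (md5_int(r) for r in rows), 0)

# md5(x) XOR md5(x) == 0, so a table holding (x, x, y) hashes the
# same before and after both copies of x change identically:
before = xor_table_hash(["x", "x", "y"])
after = xor_table_hash(["z", "z", "y"])
print(before == after)  # prints True: the change goes undetected
```

The upside of XOR is order independence (no sort needed); the downside, as shown, is that certain data changes are invisible to it.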
Thanks for the pointer; I'll take a look at doing this. Unfortunately I use lots of different DBs (and new ones are created all the time), so I'll have to script the creation of the aggregation function as part of the build system. I'll come back and accept this answer if I don't get anything else.
Ben
+2  A: 

Just do it like this to create the hash aggregation function.

create function pg_concat( text, text ) returns text as '
begin
    if $1 isnull then
        return $2;
    else
        return $1 || $2;
    end if;
end;' language 'plpgsql';

create function pg_concat_fin(text) returns text as '
begin
    return $1;
end;' language 'plpgsql';

create aggregate pg_concat (
    basetype = text,
    sfunc = pg_concat,
    stype = text,
    finalfunc = pg_concat_fin);

Then you can use the pg_concat aggregate to calculate the table's hash value.

select md5(pg_concat(md5(CAST((f.*) AS text))) from (select * from f order by id) f
tinychen