I am maintaining a data warehouse with multiple sources of data about a class of entities that have to be merged. Each source has a natural key, and what is supposed to happen is that one and only one surrogate key is created for each natural key for all time. If one record from one source system with a particular natural key represents the same entity as another record from another source system with a different natural key, the same surrogate key will be assigned to both.

In other words, if source system A has natural key ABC representing the same entity as source system B's natural key DEF, we would assign the same surrogate key to both. The table would look like this:

SURROGATE_KEY   SOURCE_A_NATURAL_KEY    SOURCE_B_NATURAL_KEY
 1               ABC                     DEF

That was the plan. However, this system has been in production for a while, and the surrogate key assignment is a mess. Source system A would give natural key ABC on one day, before source system B knew about it. The DW assigned surrogate key 1 to it. Then source system B started giving natural key DEF, which represents the same thing as source system A's natural key ABC. The DW incorrectly gave this combo surrogate key 2. The table would look like this:

SURROGATE_KEY   SOURCE_A_NATURAL_KEY    SOURCE_B_NATURAL_KEY
 1               ABC                     NULL
 2               ABC                     DEF

So the warehouse is a mess. There are situations far more complex than this one. I have a short timeline for a cleanup that requires figuring out a clean set of surrogate key to natural key mappings.

A little Googling reveals that this can be modeled as a matching problem in a non-bipartite graph:

Wikipedia - Matching

MIT 18.433 Combinatorial Optimization - Lecture Notes on Non-Bipartite Matching
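
Here's roughly how I'm picturing the model (a hypothetical sketch; the candidate pair list and the (source, natural key) vertex labels are made up for illustration): each natural key from each source becomes a vertex, an edge joins two vertices when some match rule says they could be the same entity, and a maximum matching then gives a globally consistent 1:1 pairing.

    # Hypothetical sketch: candidate same-entity pairs become a
    # dictionary-of-neighbors graph suitable for a matching algorithm.
    # In practice the pairs would come from match rules, not a literal list.
    candidate_pairs = [
        (("A", "ABC"), ("B", "DEF")),   # a rule says A's ABC ~ B's DEF
        (("A", "GHI"), ("B", "DEF")),   # a conflicting candidate for DEF
    ]

    G = {}
    for u, v in candidate_pairs:
        G.setdefault(u, set()).add(v)
        G.setdefault(v, set()).add(u)

    # A maximum matching on G picks at most one partner per vertex, so each
    # matched pair of natural keys can safely share one surrogate key.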

I need an easy-to-understand implementation (not an optimally performing one) of Edmonds' paths, trees, and flowers algorithm. I don't have a formal math or CS background; what I do have is self-taught, and I'm not in a math-y headspace tonight. Can anyone help? A well-written explanation that guides me to an implementation would be deeply appreciated.

EDIT:

A math approach is optimal because we want to maximize global fitness. A greedy approach (first take all instances of A, then B, then C...) paints you into a local-maximum corner.

In any case, I got this pushed back to the business analysts to do manually (all 20 million of them). I'm helping them with functions to assess global match quality. This is ideal, since they're the ones signing off anyway, so my backside is covered.

Not using surrogate keys doesn't change the matching problem. There's still a 1:1 natural key mapping that has to be discovered and maintained. The surrogate key is a convenient anchor for that, and nothing more.
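
As a concrete example of the kind of helper I mean, here's a first sanity check grounded in the invariant above (one and only one surrogate key per natural key); the row layout is made up to match my example table:

    # Hypothetical helper: flag natural keys that violate the invariant by
    # mapping to more than one surrogate key. Row layout is invented.
    from collections import defaultdict

    def duplicate_key_report(rows):
        surrogates_per_key = defaultdict(set)
        for surrogate, a_key, b_key in rows:
            for nat in (a_key, b_key):
                if nat is not None:
                    surrogates_per_key[nat].add(surrogate)
        return {k: s for k, s in surrogates_per_key.items() if len(s) > 1}

    rows = [(1, "ABC", None), (2, "ABC", "DEF")]
    print(duplicate_key_report(rows))   # {'ABC': {1, 2}}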

+1  A: 

I think you would be better off establishing a set of rules and attacking your key mapping table with simple queries that enforce each rule, in an iterative fashion. Maybe I am oversimplifying because your example is simple.

The following are examples of rules - only you can decide which ones apply:

  • if there are duplicates, use the lowest (oldest) surrogate key
  • use the natural keys from the row with the highest (latest) surrogate key
  • use the natural keys from the most complete mapping row
  • use the most recent occurrence of every natural key
  • ... ?

Writing queries that rebuild your key mapping is trivial, once you have established the rules. I am not sure how this could be a math problem?
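
For example, the first rule might look like this as a rough Python stand-in for what would be a simple query (the row layout is invented; in the warehouse this would be an UPDATE or a rebuild):

    # Hypothetical sketch of rule 1: among rows sharing a natural key,
    # fold everything onto the lowest (oldest) surrogate key.
    rows = [
        (1, "ABC", None),    # (surrogate_key, source_a_key, source_b_key)
        (2, "ABC", "DEF"),
    ]

    keeper = {}   # natural key -> winning (lowest) surrogate key
    for surrogate, a_key, b_key in sorted(rows):
        for nat in (a_key, b_key):
            if nat is not None and nat not in keeper:
                keeper[nat] = surrogate

    remap = {}    # old surrogate key -> surviving surrogate key
    for surrogate, a_key, b_key in rows:
        winners = [keeper[n] for n in (a_key, b_key) if n is not None]
        remap[surrogate] = min(winners)

    print(remap)  # {1: 1, 2: 1}: surrogate 2 folds into surrogate 1

Run one such pass per rule, in priority order, until the mapping stops changing.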

cdonner
+1  A: 

I get the impression you're going about this the wrong way; as cdonner says, there are other ways to just rebuild the key structure without going through this mess. In particular, you need to guarantee that natural keys are always unique for a given record (violating this condition is what got you into this mess!). Having both ABC and DEF identify the same record is disastrous, but ultimately repairable. I'm not even sure why you need surrogate keys at all; while they do have many advantages, I'd give some consideration to going pure-relational and just gutting them from your schema, a la Celko; it might just get you out of this mess. But that's a decision that would have to be made after looking at your whole schema.

To address your potential solution, I've pulled out my copy of D. B. West's Introduction to Graph Theory, second edition, which describes the blossom algorithm on page 144. You'll need some mathematical background, with both mathematical notation and graph theory, to follow the algorithm, but it's sufficiently concise that I think it can help (if you decide to go this route). If you need explanation, first consult a resource on graph theory (Wikipedia, your local library, Google, wherever), or ask if you're not finding what you need.

3.3.17. Algorithm. (Edmonds' Blossom Algorithm [1965a], sketch).

Input. A graph G, a matching M in G, an M-unsaturated vertex u.

Idea. Explore M-alternating paths from u, recording for each vertex the vertex from which it was reached, and contracting blossoms when found. Maintain sets S and T analogous to those in Algorithm 3.2.1, with S consisting of u and the vertices reached along saturated edges. Reaching an unsaturated vertex yields an augmentation.

Initialization. S = {u} and T = {} (empty set).

Iteration. If S has no unmarked vertex, stop; there is no M-augmenting path from u. Otherwise, select an unmarked v in S. To explore from v, successively consider each y in N(v) such that y is not in T.

If y is unsaturated by M, then trace back from y (expanding blossoms as needed) to report an M-augmenting (u, y)-path.

If y is in S, then a blossom has been found. Suspend the exploration of v and contract the blossom, replacing its vertices in S and T by a single new vertex in S. Continue the search from this vertex in the smaller graph.

Otherwise, y is matched to some w by M. Include y in T (reached from v), and include w in S (reached from y).

After exploring all such neighbors of v, mark v and iterate.

The algorithm as described here runs in time O(n^4), where n is the number of vertices. West gives references to versions that run as fast as O(n^(5/2)) or O(n^(1/2) m) (m being the number of edges). If you want these references, or citations to Edmonds' original paper, just ask and I'll dig them out of the index (which kind of sucks in this book).
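
If seeing it in code helps more than the notation, here is an unoptimized Python sketch of the idea. Fair warning: this follows the common contract-blossoms-via-base-pointers formulation rather than West's exact bookkeeping, and the function name and dictionary-of-neighbors input format are my own choices (every vertex must appear as a key, with symmetric adjacency):

    from collections import deque

    def maximum_matching(G):
        # Edmonds' blossom algorithm, unoptimized O(n^3)-style sketch.
        # G maps each vertex to a list/set of neighbors. Returns match,
        # where match[v] is v's partner, or None if v is unmatched.
        match = {v: None for v in G}
        parent = {}   # how each vertex was reached in the search tree
        base = {}     # base[v]: base vertex of the blossom containing v

        def find_augmenting_path(root):
            for v in G:
                parent[v] = None
                base[v] = v
            in_queue = {v: v == root for v in G}
            q = deque([root])

            def lca(a, b):
                # Walk a up to the root recording blossom bases, then walk
                # b up until it hits a recorded base: the new blossom's base.
                seen = set()
                while True:
                    a = base[a]
                    seen.add(a)
                    if match[a] is None:
                        break
                    a = parent[match[a]]
                while True:
                    b = base[b]
                    if b in seen:
                        return b
                    b = parent[match[b]]

            def mark_path(v, stem, child, blossom):
                # Record blossom members and re-parent so that, after the
                # contraction, every vertex in the blossom is "outer".
                while base[v] != stem:
                    blossom.add(base[v])
                    blossom.add(base[match[v]])
                    parent[v] = child
                    child = match[v]
                    v = parent[match[v]]

            while q:
                v = q.popleft()
                for w in G[v]:
                    if base[v] == base[w] or match[v] == w:
                        continue   # skip intra-blossom and matched edges
                    if w == root or (match[w] is not None
                                     and parent[match[w]] is not None):
                        # Two outer vertices meet: contract the odd cycle.
                        stem = lca(v, w)
                        blossom = set()
                        mark_path(v, stem, w, blossom)
                        mark_path(w, stem, v, blossom)
                        for u in G:
                            if base[u] in blossom:
                                base[u] = stem
                                if not in_queue[u]:
                                    in_queue[u] = True
                                    q.append(u)
                    elif parent[w] is None:
                        parent[w] = v
                        if match[w] is None:
                            return w   # unsaturated: augmenting path found
                        in_queue[match[w]] = True
                        q.append(match[w])
            return None   # no augmenting path starts at this root

        for v in G:
            if match[v] is None:
                end = find_augmenting_path(v)
                while end is not None:   # flip edges back along the path
                    prev = parent[end]
                    nxt = match[prev]
                    match[end] = prev
                    match[prev] = end
                    end = nxt
        return match

    # On a 4-vertex path this pairs the end vertices with their neighbors:
    G = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
    print(maximum_matching(G))   # {1: 2, 2: 1, 3: 4, 4: 3}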

kquinn
A: 

If you are looking for an implementation, Eppstein's PADS library has a matching algorithm that should be fast enough for your purposes; the general matching algorithm is in CardinalityMatching.py. The comments in the implementation explain what is going on. The library is easy to use: to supply a graph in Python, you represent it as a dictionary G such that G[v] gives a list (or set) of the neighbors of vertex v.

Example:

G = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}

gives a path graph on 4 vertices (1-2-3-4).
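
If I remember the module correctly, it exposes a matching function that takes this dictionary and returns the pairing; usage would look roughly like this (double-check the name and signature against the module's own docstrings):

    # Rough usage sketch: assumes CardinalityMatching.py from PADS is on
    # the Python path and exposes matching(G) returning a dict that maps
    # each matched vertex to its partner. Verify against the source.
    from CardinalityMatching import matching

    G = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
    print(matching(G))   # expect the pairs (1, 2) and (3, 4) on this path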

Pall Melsted