tags:

views:

62

answers:

1

I'm in the process of trying to switch from R to Python (mainly for issues around general flexibility). With NumPy, matplotlib and IPython, I'm able to cover all my use cases save for merging 'datasets'. I would like to simulate SQL's join by clause (inner, outer, full) purely in Python. R handles this with the 'merge' function.

I've tried numpy.lib.recfunctions' join_by, but it has critical issues with duplicates along the 'key':


join_by(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2',
        defaults=None, usemask=True, asrecarray=False)

Join arrays r1 and r2 on key key.

The key should be either a string or a sequence of string corresponding to the fields used to join the array. An exception is raised if the key field cannot be found in the two input arrays.

Neither r1 nor r2 should have any duplicates along key: the presence of duplicates will make the output quite unreliable. Note that duplicates are not looked for by the algorithm.

source: http://presbrey.mit.edu:1234/numpy.lib.recfunctions.html


Any pointers or help will be most appreciated!

A: 

Suppose you represent the equivalent of a SQL table, in Python, as a list of dicts, all dicts having the same (assume string) keys (other representations, including those enabled by numpy, can be logically boiled down to an equivalent form). Now, an inner join is (again, from a logical point of view) a projection of their cartesian product. In the general case, taking a predicate argument on (which takes two arguments, one "record" [[dict]] from each table, and returns a true value if the two records need to be joined), a simple approach would be the following (using per-table prefixes to disambiguate, against the risk that the two tables might otherwise have homonymous "fields"):

def inner_join(tab1, tab2, prefix1, prefix2, on):
    for r1 in tab1:
        for r2 in tab2:
            if on(r1, r2):
                # prefix each table's field names to avoid collisions
                row = {prefix1 + k1: v1 for k1, v1 in r1.items()}
                row.update((prefix2 + k2, v2) for k2, v2 in r2.items())
                yield row
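For concreteness, here is how the generator might be exercised on two tiny tables (the table contents and prefixes are made up for illustration; the function is restated so the snippet runs standalone):

```python
def inner_join(tab1, tab2, prefix1, prefix2, on):
    for r1 in tab1:
        for r2 in tab2:
            if on(r1, r2):
                # prefix each table's field names to avoid collisions
                row = {prefix1 + k1: v1 for k1, v1 in r1.items()}
                row.update((prefix2 + k2, v2) for k2, v2 in r2.items())
                yield row

# hypothetical sample tables
people = [{'id': 1, 'name': 'ann'}, {'id': 2, 'name': 'bob'}]
orders = [{'pid': 1, 'item': 'pen'}, {'pid': 1, 'item': 'ink'}]

rows = list(inner_join(people, orders, 'p_', 'o_',
                       on=lambda r1, r2: r1['id'] == r2['pid']))
# ann matches both orders; bob matches none -> 2 joined rows
```

Note that duplicates along the key (two orders for 'ann' here) are handled naturally, unlike with join_by.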

Now, of course you don't want to do it this way, because performance is O(M * N) -- but, for the generality you've specified ("simulate SQL's join by clause (inner, outer, full)") there is really no alternative, because the ON clause of a JOIN is pretty unrestricted.

For outer and full joins, you need in addition to keep track of which records [[from one or both tables]] have not been yielded yet, and yield them afterwards. E.g., for a left join, you'd add a bool yielded, reset to False before the for r2 inner loop and set to True if the yield executes; after the inner loop, if not yielded:, produce an artificial joined record (presumably using None to stand for NULL in place of the missing v2 values, since there's no r2 to actually use for the purpose).
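A minimal sketch of that left-join bookkeeping, under the assumption that the caller supplies tab2's field names (they're needed to build the None-padded row when no r2 matched; names and data here are illustrative):

```python
def left_join(tab1, tab2, prefix1, prefix2, on, fields2):
    # fields2: the column names of tab2, needed to pad unmatched rows
    for r1 in tab1:
        yielded = False
        for r2 in tab2:
            if on(r1, r2):
                yielded = True
                row = {prefix1 + k: v for k, v in r1.items()}
                row.update((prefix2 + k, v) for k, v in r2.items())
                yield row
        if not yielded:
            # no match: emit r1 with None standing in for SQL NULL
            row = {prefix1 + k: v for k, v in r1.items()}
            row.update((prefix2 + k, None) for k in fields2)
            yield row

people = [{'id': 1, 'name': 'ann'}, {'id': 2, 'name': 'bob'}]
orders = [{'pid': 1, 'item': 'pen'}]
rows = list(left_join(people, orders, 'p_', 'o_',
                      lambda r1, r2: r1['id'] == r2['pid'],
                      fields2=['pid', 'item']))
# bob has no orders, so his row is padded with Nones
```

A full outer join would additionally track, per r2, whether it ever matched, and emit the never-matched r2 records (padded on the r1 side) after the loops finish.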

To get any substantial efficiency improvements, you need to clarify what constraints you're willing to abide on regarding the on predicate and the tables -- we already know from your question that you can't live with a unique constraint on either table's keys, but there are many other constraints that could potentially help, and to have us guessing at what such constraints actually apply in your case would be a pretty unproductive endeavor.

Alex Martelli
Thank you, the above is most interesting. And helpful. The primary use case and hence core constraint is an 'inner' join on two datasets. Multiple datasets could be successive uses. And a non-constraint is the post-join order of records; thus sorting the data and using a b-tree for performance is perfectly fine. I am 'relatively' sure I could code this myself, but before I embark on that path I want to ensure I am not missing something premade. PS: I've also tried `result = [r1+r2 for r1 in t1 for r2 in t2 if r1[0]==r2[1]]` but the lack of column names makes it messy. And perf is poor.
danmat
@danmat, somewhere between the lines you appear to believe, or take for granted, that the only possible form for the `on` condition is equality between a column of the first table and a column of the second table -- am I mind-reading you correctly? Because the reality of SQL is so many light-years away from this (`on` can be **any** predicate whatsoever in SQL!) that I suspect my crystal ball is probably clouded, since you DO mention "replicating SQL joins in Python" as your Q's title. If my crystal ball happens to be right, there are several ways to optimize this super-special case!
Alex Martelli
Thanks I will dig deeper to optimize my use case.
danmat
@danmat, consider preprocessing each table into a dict mapping each value of the relevant key to the set of rows with that value (or indices thereof, etc.) -- walking both those dicts in sorted-key order "in sync" (a typical case of "merge sorted lists") gives you a well-performing way to do any kind of join. You can do it w/o the auxiliary dicts, just on the sorted tables, but sorting just the dicts' keys might prove faster than sorting the entire tables.
Alex Martelli
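One way to cash out that last suggestion for the pure equi-join case is a hash-index variant of the same idea -- indexing one table by its key value rather than sort-merging both (function and variable names here are illustrative, not from any library):

```python
from collections import defaultdict

def equi_join(tab1, tab2, key1, key2, prefix1, prefix2):
    # index tab2 once: key value -> list of rows with that value
    idx2 = defaultdict(list)
    for r2 in tab2:
        idx2[r2[key2]].append(r2)
    # single pass over tab1; duplicate key values on either side
    # are handled naturally (every matching pair is emitted)
    for r1 in tab1:
        for r2 in idx2.get(r1[key1], ()):
            row = {prefix1 + k: v for k, v in r1.items()}
            row.update((prefix2 + k, v) for k, v in r2.items())
            yield row

t1 = [{'k': 'a', 'x': 1}, {'k': 'a', 'x': 2}, {'k': 'b', 'x': 3}]
t2 = [{'k': 'a', 'y': 10}, {'k': 'c', 'y': 30}]
rows = list(equi_join(t1, t2, 'k', 'k', 'l_', 'r_'))
# both 'a' rows of t1 match the single 'a' row of t2 -> 2 rows
```

This runs in roughly O(M + N + output size) rather than O(M * N), at the cost of restricting `on` to column equality, which matches the use case described in the comments.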