I have a property management application consisting of these tables:

tenants
landlords
units
properties
vendors-contacts

Basically, I want one search field that searches them all, rather than having to select which category I am searching. Would this be an acceptable solution, technology-wise?

Will searching across 5 tables be OK in the long run and not bog down the server? What's the best way of accomplishing this?

Using PostgreSQL

+4  A: 

Why not create a view that is a union of the tables, aggregating the columns you want to search on into one column, and then search on that aggregated column?
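A minimal sketch of such a view, assuming hypothetical column names (`name`, `address`); the constant first column tags each row with its source table so hits can be traced back:

```sql
-- Sketch only: column names are assumptions.
CREATE VIEW all_search AS
SELECT 'tenants'::text AS source, id, name AS search_text FROM tenants
UNION ALL
SELECT 'landlords', id, name FROM landlords
UNION ALL
SELECT 'properties', id, address FROM properties;

-- One query then searches everything:
SELECT source, id FROM all_search WHERE search_text ILIKE '%smith%';
```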

casperOne
Upvoting to compensate for the unexplained and peculiar downvote. This answer, while concise, presents a perfectly reasonable idea. (Of course, that view will also have primary keys for the actual tables, and those keys should be kept distinct so that identifying which tables the various search hits came from is easy.)
Alex Martelli
@Alex Martelli: Thanks for the support =)
casperOne
You can add a constant, distinct value to each sub-query of the union to avoid needing unique PKs.
David Schmitt
+1  A: 

You should be fine, and there's really no other good (easy) way to do this. Just make sure the fields you are searching on are properly indexed.
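As a sketch of "properly indexed": a plain B-tree index with the `text_pattern_ops` operator class lets PostgreSQL serve left-anchored `LIKE` searches; the table and column names here are assumptions.

```sql
-- Sketch only: a btree index PostgreSQL can use for prefix searches.
CREATE INDEX tenants_name_idx ON tenants (name text_pattern_ops);

-- This anchored pattern can use the index:
SELECT * FROM tenants WHERE name LIKE 'Smi%';
```

Note that an unanchored pattern like `'%Smi%'` cannot use this index, as the next answer points out.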

Eric Petroelje
+4  A: 

You want to use the built-in full text search or a separate product like Lucene. This is optimised for unstructured searches over heterogeneous data.

Also, don't forget that normal indexes cannot be used for a query like LIKE '%...%'. A full text search engine will also be able to do efficient substring searches.
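Within PostgreSQL itself, the `pg_trgm` contrib module is one way to index unanchored patterns; the column name below is an assumption.

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;  -- PostgreSQL 9.1+; older versions load the contrib SQL script

-- A trigram GIN index can serve LIKE/ILIKE '%...%' searches:
CREATE INDEX tenants_name_trgm_idx ON tenants USING gin (name gin_trgm_ops);

SELECT * FROM tenants WHERE name ILIKE '%mith%';
```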

David Schmitt
+3  A: 

I would suggest using a specialized full-text indexing tool like Lucene for this. It will probably be easier to get up and running, and the result will be faster and more feature-rich too. Postgres full text indexes will be useful if you also need structured search capability on top of this, or if transactionality of your search index is important.

If you do want to implement this in the database, something like the following scheme might work, assuming you use surrogate keys:

  1. For each searchable table, create a view that has the primary key column of that table, the name of the table as a constant, and a concatenation of all the searchable fields in that table.
  2. Create a functional GIN or GiST index on the underlying table over the to_tsvector() of the exact same concatenation.
  3. Create a UNION ALL over all the views to produce the searchable view.
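A sketch of those three steps for one table; the column names (name, phone, email) are assumptions:

```sql
-- Step 1: one view per searchable table.
CREATE VIEW tenants_search AS
SELECT id,
       'tenants'::text AS table_name,
       to_tsvector('english',
           coalesce(name, '') || ' ' || coalesce(phone, '') || ' ' || coalesce(email, '')) AS body
  FROM tenants;

-- Step 2: functional GIN index on the underlying table, over the same expression.
CREATE INDEX tenants_fts_idx ON tenants
 USING gin (to_tsvector('english',
           coalesce(name, '') || ' ' || coalesce(phone, '') || ' ' || coalesce(email, '')));

-- Step 3: union of all the per-table views.
CREATE VIEW search_view AS
SELECT id, table_name, body FROM tenants_search
UNION ALL
SELECT id, table_name, body FROM properties_search;  -- ...and the other views
```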

After that you can do the searches like this:

SELECT id, table_name, ts_rank_cd(body, query) AS rank
    FROM search_view, to_tsquery('search&words') query
    WHERE query @@ body
    ORDER BY rank DESC
    LIMIT 10;
Ants Aasma
+2  A: 

Serialize your tables and use Xapian with the python-xapian bindings (wrapper).

zdmytriv