I have a project that requires user-defined attributes for a particular object at runtime (let's say a Person object in this example). The project will have many different users (1000+), each defining their own unique attributes for their own sets of Person objects.

(E.g. user #1 will have a set of defined attributes, which will apply to all Person objects 'owned' by this user. Multiply this by 1000 users, and that's the bare minimum number of users the app will work with.) These attributes will be used to query the Person objects and return results.

I think these are the possible approaches I can use. I will be using C# (and any version of .NET 3.5 or 4), and have free rein re: what to use for a datastore. (I have MySQL and MSSQL available, although I have the freedom to use any software, as long as it fits the bill.)

Have I missed anything, or made any incorrect assumptions in my assessment?

Out of these choices - what solution would you go for?

  1. Hybrid EAV object model. (Define the database using a normal relational model, and have a 'property bag' table for the Person table; there's a rough schema sketch at the end of this list.)

    Downsides: many joins per query; poor performance; can hit the limit on the number of joins / tables used in a query.

    I've knocked up a quick sample that has a Subsonic 2.x-esque interface:

    Select().From().Where  ... etc
    

    This generates the correct joins, then filters and pivots the returned data in C#, to return a DataTable configured with the correctly typed data set.

    I have yet to load test this solution. It's based on the EAV advice in this Microsoft whitepaper from the SQL Server 2008 RTM documents: Best Practices for Semantic Data Modeling for Performance and Scalability.

  2. Allow the user to dynamically create / alter the object's table at run-time. This solution is what I believe NHibernate does in the background when using dynamic properties, as discussed here:

    http://bartreyserhove.blogspot.com/2008/02/dynamic-domain-mode-using-nhibernate.html

    Downsides:

    As the system grows, the number of columns defined will get very large, and may hit the max number of columns. If there are 1000 users, each with 10 distinct attributes for their 'Person' objects, then we'd need a table holding 10k columns. Not scalable in this scenario.

    I guess I could allow a person attribute table per user, but if there are 1000 users to start, that's 1000 tables plus the other 10-odd in the app.

    I'm unsure if this would be scalable - but it doesn't seem so. Someone please correct me if I am incorrect!

  3. Use a NoSQL datastore, such as CouchDB / MongoDB

    From what I have read, these aren't yet proven in large-scale apps, are string-based, and are very early in their development phase. If I am incorrect in this assessment, can someone let me know?

    http://www.eflorenzano.com/blog/post/why-couchdb-sucks/

  4. Use an XML column in the People table to store attributes

    Drawbacks - no indexing on querying, so every row's XML would need to be retrieved and parsed to return a resultset, resulting in poor query performance.

  5. Serializing an object graph to the database.

    Drawbacks - no indexing on querying, so every serialized graph would need to be retrieved and deserialized to return a resultset, resulting in poor query performance.

  6. C# bindings for BerkeleyDB

    From what I read here: http://www.dinosaurtech.com/2009/berkeley-db-c-bindings/

    Berkeley DB has definitely proven to be useful, but as Robert pointed out - there is no easy interface. Your entire OO wrapper has to be hand coded, and all of your indices are hand maintained. It is much more difficult than SQL / LINQ-to-SQL, but that's the price you pay for ridiculous speed.

    Seems a large overhead - however, if anyone can provide a link to a tutorial on how to maintain the indices in C#, it could be a goer.

  7. [EDIT - just added this one] SQL / RDF hybrid. Odd I didn't think of this before. Similar to option 1, but instead of a 'property bag' table, just XREF to an RDF store. Querying would then involve two steps: query the RDF store for people matching the correct attributes, to return the person object(s), then use the IDs of those person objects in the SQL query to return the relational data. Extra overhead, but it could be a goer.
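To make option 1 concrete, here is a minimal sketch of the property-bag schema and a two-attribute query I have in mind (table, column and attribute names are placeholders only, not final):

    CREATE TABLE Person (
        PersonID    INT IDENTITY PRIMARY KEY,
        OwnerUserID INT NOT NULL,          -- the user who owns this Person
        Name        NVARCHAR(100) NOT NULL
    );

    CREATE TABLE PersonProperty (
        PersonID      INT NOT NULL REFERENCES Person (PersonID),
        PropertyName  NVARCHAR(50) NOT NULL,   -- user-defined attribute name
        PropertyValue NVARCHAR(400) NOT NULL,  -- stored as string, cast at query time
        PRIMARY KEY (PersonID, PropertyName, PropertyValue)
    );

    -- one self-join of the property bag per attribute being filtered on
    SELECT p.PersonID, p.Name
    FROM Person p
    JOIN PersonProperty pp1 ON pp1.PersonID = p.PersonID
    JOIN PersonProperty pp2 ON pp2.PersonID = p.PersonID
    WHERE pp1.PropertyName = 'eye_colour' AND pp1.PropertyValue = 'blue'
      AND pp2.PropertyName = 'height'     AND CAST(pp2.PropertyValue AS INT) > 180;

Note the primary key allows the same attribute to appear multiple times per person with different values, which I'd need for multi-valued attributes.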

I'd really appreciate any input here!

A: 

My recommendation:

Allow properties to be marked as indexable. Have a smallish hard limit on the number of indexable properties and on columns per object. Have a large hard limit on the total column types across all objects.

Implement indexes as separate tables (one per index) joined with the main table of data (the main table has a large unique key for the object). Index tables can then be created/dropped as required.

Serialize the data, including the index columns, and also put the index properties in first-class relational columns in their dedicated index tables. Use JSON instead of XML to save space in the table. Enforce a short-column-name policy (or a long-display-name / short-stored-name policy) to save space and increase performance.

Use quarks (interned string identifiers) for field identifiers, but only in the main engine, to save RAM and speed up some read operations -- don't rely on quark pointer comparison in all cases.
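A rough sketch of that layout, assuming SQL Server and JSON serialization (all names here are illustrative, not prescriptive):

    -- main table: one row per object, full state serialized as JSON
    CREATE TABLE ObjectData (
        ObjectID UNIQUEIDENTIFIER PRIMARY KEY,  -- large unique key
        OwnerID  INT NOT NULL,
        Body     NVARCHAR(MAX) NOT NULL         -- JSON with short stored names
    );

    -- one narrow table per indexable property; create/drop these as users
    -- mark properties indexable
    CREATE TABLE Idx_Age (
        ObjectID UNIQUEIDENTIFIER NOT NULL REFERENCES ObjectData (ObjectID),
        Value    INT NOT NULL
    );
    CREATE INDEX IX_Idx_Age ON Idx_Age (Value);

    -- query: filter on the index table, join back for the serialized body
    SELECT d.ObjectID, d.Body
    FROM Idx_Age i
    JOIN ObjectData d ON d.ObjectID = i.ObjectID
    WHERE i.Value > 18;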

My thoughts on your options:

1 is possible. Performance will clearly be lower than if the field-ID columns were not stored.

2 is a no in general: DB engines are not all happy about dynamic schema changes. But it's a possible yes if your DB engine handles this well.

3 is possible.

4 is a yes, though I'd use JSON.

5 seems like 4, only less optimized?

6 sounds good; I would go with it if you're happy to try something new, and happy about its reliability and performance, but usually I'd want to go with more mainstream technology. I'd also want to reduce the number of engines involved in coordinating a transaction to fewer than would be the case here.

Edit: But of course, though I've recommended something, there can be no general right answer here -- profile various data models and approaches with your data to see what runs best for your application.


martinr
By "better not to store KEY" I mean: use another solution (not a property bag) where you don't have a SQL KEY field.
martinr
Smashing - makes perfect sense. Thank you for the clarification!
James
:-) Ideally, schema changes don't happen frequently, and all schemas are specified once. But we are talking about a system where the user schema can change if it needs to. It may make sense to put all indexes on the one main table, with a USERID. The mapping between index fields and user fields would then live in the app code. Maybe the one main table gets broken down into several tables based on the USERID value. Great question, James.
martinr
A: 

Assuming you can place a limit, N, on how many custom attributes each user can define, just add N extra columns to the Person table. Then have a separate table where you store per-user metadata describing how to interpret the contents of those columns for each user. Similar to #1 once you've read in the data, but no joins are needed to pull in the custom attributes.
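A sketch of the shape I mean, with N = 10 (names are illustrative):

    CREATE TABLE Person (
        PersonID INT IDENTITY PRIMARY KEY,
        UserID   INT NOT NULL,
        Name     NVARCHAR(100) NOT NULL,
        Custom1  NVARCHAR(255) NULL,   -- meaning differs per user
        Custom2  NVARCHAR(255) NULL,
        -- ...
        Custom10 NVARCHAR(255) NULL
    );

    -- per-user metadata describing how to interpret each custom column
    CREATE TABLE UserAttributeMap (
        UserID        INT NOT NULL,
        ColumnNumber  INT NOT NULL,           -- 1..10, maps to CustomN
        AttributeName NVARCHAR(50) NOT NULL,  -- e.g. 'Age'
        DataType      NVARCHAR(20) NOT NULL,  -- how to parse the string value
        PRIMARY KEY (UserID, ColumnNumber)
    );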

rwhit
Sounds good - however, won't this limit objects to 'single' attributes? If I wanted to store, say, a collection of the top 10 books for a person, then in the hybrid EAV model I could set multiple "favourite_book" attributes and query like: "WHERE pp1.PropertyName = 'favourite_book' AND pp1.PropertyValue = 'catch22' AND pp2.PropertyName = 'favourite_book' AND pp2.PropertyValue = 'bible'". If I set a limit on the number of custom attributes (using table columns), I wouldn't be able to store this data. (I could, but I would run out of columns, and the queries would be hard to generate dynamically.)
James
+2  A: 

In an EAV model you don't have to have many joins: you only need the joins required for the query filtering. For the resultset, return the property entries as a separate rowset. That is what we do in our EAV implementation.

For example, a query might return persons with extended property 'Age' > 18:

Properties table:

PropertyID Name
1          Age
2          NickName

First resultset:

PersonID Name
1        John
2        Mary

Second resultset:

PersonID PropertyID Value
1        1         24
1        2         'Neo'
2        1         32
2        2         'Pocahontas'

For the first resultset, you need an inner join on the 'Age' extended property to query the basic Person entity part:

select p.ID, p.Name
from Persons p
join PersonExtendedProperties pp
  on p.ID = pp.PersonID
where pp.PropertyName = 'Age'
  and cast(pp.PropertyValue as int) > 18 -- values are stored as strings

For the second resultset, we make an outer join of the first resultset with the PersonExtendedProperties table to get the rest of the extended properties. It's a 'narrow' resultset: we do not pivot the properties in SQL, so we don't need multiple joins here.
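Something like this for the second query (a sketch only; it assumes the property table also carries a PropertyID column alongside the name and value):

    -- all extended properties for the persons that matched the first query
    select pp.PersonID, pp.PropertyID, pp.PropertyValue
    from Persons p
    join PersonExtendedProperties f
      on p.ID = f.PersonID
     and f.PropertyName = 'Age'
     and cast(f.PropertyValue as int) > 18
    left join PersonExtendedProperties pp
      on pp.PersonID = p.ID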

Actually, we use separate tables for the different value types, to avoid data type conversion and to have the extended properties indexed and easily queryable.

George Polevoy
Very interesting... What RDBMS are you using as a backend? I'm now (time permitting - i.e. on the weekend) going to modify my original test (option 1) to reflect this method (only one join per queried attribute), and pivot the table with the contents of the outer join (with some filtering etc.). I require this final pivot so I can plug any EAV query results into anything that accepts a DataTable... Hmmnnn... If only I had more time. My gut feeling is this method will outperform the original (due to fewer joins), as long as we are dealing with relatively small attribute collections / data.
James
MSSQL Server. The query itself (without feeding the resultset) actually outperforms some native wide tables in my tests (4 parameters involved in filtering, out of 20 available). Maybe it's due to the layout of the 'wide' table's indexes on disk.
George Polevoy
+1  A: 

The ESENT database engine on Windows is used heavily for this kind of semi-structured data. One example is Microsoft Exchange, which, like your application, has thousands of users where each user can define their own set of properties (MAPI named properties). Exchange uses a slightly modified version of ESENT.

ESENT has a lot of features that enable applications with large meta-data requirements: each ESENT table can have ~32K columns defined; tables, indexes and columns can be added at runtime; sparse columns don't take up any record space when not set; and template tables can reduce the space used by the meta-data itself. It is common for large applications to have thousands of tables/indexes.

In this case you can have one table per user and create the per-user columns in the table, creating indexes on any columns that you want to query. That would be similar to the way that some versions of Exchange store their data. The downside of this approach is that ESENT doesn't have a query engine so you will have to hand-craft your queries as MakeKey/Seek/MoveNext calls.

A managed wrapper for ESENT is here:

http://managedesent.codeplex.com/
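To give a flavour of the hand-crafted querying, here is a sketch using the managed wrapper (it assumes you already have an open session and table, a secondary index named 'idxAge' over an Age column, and the column ID of the person-ID column; those names are mine, not part of the API):

    using System.Collections.Generic;
    using Microsoft.Isam.Esent.Interop;

    static class PersonQueries
    {
        // Scan a secondary index for persons with Age > minAge using
        // hand-crafted MakeKey/Seek/MoveNext calls.
        public static List<int> PersonsOlderThan(
            JET_SESID sesid, JET_TABLEID tableid, JET_COLUMNID idColumn, int minAge)
        {
            var results = new List<int>();

            // Position the cursor on the (assumed) index over the Age column.
            Api.JetSetCurrentIndex(sesid, tableid, "idxAge");

            // Build a key for minAge and seek to the first record greater than it.
            Api.MakeKey(sesid, tableid, minAge, MakeKeyGrbit.NewKey);
            if (Api.TrySeek(sesid, tableid, SeekGrbit.SeekGT))
            {
                do
                {
                    // Every record from here to the end of the index matches.
                    results.Add((int)Api.RetrieveColumnAsInt32(sesid, tableid, idColumn));
                }
                while (Api.TryMoveNext(sesid, tableid));
            }

            return results;
        }
    }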

Laurion Burchall
Wow! Yes, I certainly missed this in my research. It *seems* too good to be true. I wonder if there is anyone using this to run a web app (other than Exchange)... Hmnnn...
James
Sorry for the late acceptance - ESENT wins hands down, even with the slightly verbose querying API!
James
A: 

Check out my site MyEDB.com for an idea for a model to solve this problem. Let me know what you think about the idea - specifically the use of a single master property table, a single entity-property table, and a table for each datatype, indexed on value and therefore pretty fast.
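In outline, the model looks something like this (a simplified sketch with my table names here; see the site for the real thing):

    -- master list of user-defined properties
    CREATE TABLE Property (
        PropertyID INT IDENTITY PRIMARY KEY,
        UserID     INT NOT NULL,
        Name       NVARCHAR(50) NOT NULL,
        DataType   NVARCHAR(20) NOT NULL  -- selects the value table below
    );

    -- links an entity to one of its properties; the value lives in a
    -- per-datatype table
    CREATE TABLE EntityProperty (
        EntityPropertyID INT IDENTITY PRIMARY KEY,
        EntityID         INT NOT NULL,
        PropertyID       INT NOT NULL REFERENCES Property (PropertyID)
    );

    -- one value table per datatype, indexed on value for fast filtering
    CREATE TABLE IntValue (
        EntityPropertyID INT PRIMARY KEY REFERENCES EntityProperty (EntityPropertyID),
        Value            INT NOT NULL
    );
    CREATE INDEX IX_IntValue ON IntValue (Value);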

awgtek