I am just starting to model the data for a new C# project, which must be persistable.
It looks like the most natural OO model will involve lots of nested .NET generics: Lists of objects, where those objects themselves contain Lists of other generic types, and so on, nested at least three levels deep.
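To make that concrete, here's a minimal sketch of the kind of shape I mean (the class and property names are just made-up examples, not my actual model):

```csharp
using System.Collections.Generic;

// Hypothetical example: generic collections nested three levels deep
// (Order -> List<OrderLine> -> List<Discount>).
public class Order
{
    public int Id { get; set; }
    public List<OrderLine> Lines { get; set; } = new List<OrderLine>();
}

public class OrderLine
{
    public string Product { get; set; }
    public List<Discount> Discounts { get; set; } = new List<Discount>();
}

public class Discount
{
    public decimal Amount { get; set; }
}
```

The question is whether an ORM can take a graph like this and persist it without me having to flatten or restructure it.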
Ideally, I'd like to just design the data model in an OO fashion and let an Object Relational Mapping tool handle the persistence - for me, OO design comes much more easily than relational design.
But my initial research into ORMs indicates that they don't necessarily work this way. It's not clear to me yet whether I can create an arbitrarily complex object model and expect the ORM to persist it "automagically". (I asked that question yesterday.)
My impression is that I may have to limit my OO data model to constructs I know the ORM can handle, which seems like a big limitation - especially since I haven't decided on an ORM yet.
I want to keep the project design moving forward, and don't want to get bogged down in researching ORMs right now.
My instinct is to design my object model around what I know I can do in C#, in whatever way seems most natural. This will give me a better feel for what the application really requires. My hope is that I can then refactor the design around ORM limitations, if that proves necessary.
Is this a good approach, or am I setting myself up for a world of hurt? Is it better to deal with known ORM constraints up front and "dumb down" the object model around those constraints?
Edit: Unsolicited, Stefan Steinegger listed most of the constraints that NHibernate imposes on OO code. Can anyone else provide a similar list for other ORMs - SubSonic, in particular?