I've got a rich domain model, where most classes have some behaviour and some properties that are either calculated or expose the properties of member objects (which is to say that the values of these properties are never persisted).

My client speaks to the server only via WCF.

As such, for each domain entity, I have a corresponding DTO -- a simple representation that contains only data -- as well as a mapper class that implements DtoMapper<DTO,Entity> and can convert an entity to its DTO equivalent or vice-versa through a static gateway:

var employee = Map<Employee>.from_dto<EmployeeDto>(employee_dto);
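
For context, here is a minimal sketch of what that mapper contract and static gateway might look like; the names are reconstructed from the call site above, and the real signatures may differ:

    using System;
    using System.Collections.Generic;

    // Hypothetical reconstruction of the DtoMapper contract and Map<T> gateway.
    public interface DtoMapper<TDto, TEntity>
    {
        TEntity from_dto(TDto dto);
        TDto to_dto(TEntity entity);
    }

    public static class Map<TEntity>
    {
        private static readonly Dictionary<Type, object> mappers =
            new Dictionary<Type, object>();

        public static void register<TDto>(DtoMapper<TDto, TEntity> mapper)
        {
            mappers[typeof(TDto)] = mapper;
        }

        public static TEntity from_dto<TDto>(TDto dto)
        {
            // Look up the registered mapper for this DTO/entity pair and delegate.
            return ((DtoMapper<TDto, TEntity>)mappers[typeof(TDto)]).from_dto(dto);
        }
    }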

The server side of this application is mostly about persistence: DTOs come in from the WCF service and are deserialized, and an arbitrary ORM persists them to the database; or a query request comes in over WCF, the ORM executes the query against the DB, and the resulting objects are serialized and sent back by WCF.

Given this scenario, does it make any sense to map my persistence store to the domain entities, or should I just map directly to the DTOs?

If I use domain entities, the flow would be

  1. client requests object
  2. WCF transmits request to server
  3. ORM queries database and returns domain entities
  4. domain entities transformed into DTOs by mapper
  5. WCF serializes DTO and returns to client
  6. client deserializes DTO
  7. DTO transformed into domain entity by mapper
  8. viewmodels created, etc.

The return trip is similar.
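
A minimal sketch of steps 3-5 on the server, assuming NHibernate as the ORM (per the edit below) and a hypothetical EmployeeDtoMapper:

    using System.ServiceModel;
    using NHibernate;

    [ServiceContract]
    public interface IEmployeeService
    {
        [OperationContract]
        EmployeeDto GetEmployee(int id);
    }

    public class EmployeeService : IEmployeeService
    {
        private readonly ISessionFactory sessionFactory;

        public EmployeeService(ISessionFactory sessionFactory)
        {
            this.sessionFactory = sessionFactory;
        }

        public EmployeeDto GetEmployee(int id)
        {
            using (var session = sessionFactory.OpenSession())
            {
                // 3. the ORM queries the database and returns a domain entity
                var employee = session.Get<Employee>(id);

                // 4. the mapper transforms the domain entity into a DTO;
                // 5. WCF then serializes the DTO and returns it to the client
                return new EmployeeDtoMapper().to_dto(employee);
            }
        }
    }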

If I map straight to DTO, I can eliminate one mapping per object, per request. What do I lose by doing this?

The only thing that comes to mind is another opportunity to validate before insert/update: I have no guarantee that the DTO was ever subject to validation, or even existed as a domain entity, before being sent across the wire. There is also a chance to validate on select, in case another process has put invalid values in the database. Are there other reasons? Are these reasons sufficient to warrant the additional mapping steps?

edit:

I did say "arbitrary ORM" above, and I do want things to be as ORM-and-persistence-agnostic as possible, but if you have anything special to add that is specific to NHibernate, by all means do.

+2  A: 

You will need to map the DTOs on the client side anyway, so, for symmetry, it is better to do the inverse mapping on the server side as well. That way you isolate your conversions into well-separated abstraction layers.

Abstraction layers are good not only for validation; they also insulate your code from changes above and below it, and they make your code more testable and less repetitive.

Also, unless you notice a serious performance bottleneck in the extra conversion, remember: premature optimization is the root of all evil. :)

e.tadeu
+1  A: 

When you say that your server-side app is "mostly" about persistence, I think that is the key thing to consider. Is there really a server-side domain model that requires some intelligence around the data it receives, or does your WCF service act purely as the gateway between your domain model and the data store?

Also, consider whether your DTO is designed for the client domain.
Is this the only client domain that needs access to that data store via your service?
Are the server-side DTOs flexible or coarse-grained enough to serve a different application domain?
If not, then it's probably worth the effort to keep the external interface implementations abstracted.

(DB->ORM->EmployeeEntity->Client1DTOAssembler->Client1EmployeeDTO).
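
A hedged sketch of the assembler step in that pipeline, with all type names invented for illustration:

    // Domain entity shared by all clients.
    public class EmployeeEntity
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

    // DTO shaped for one specific client application.
    public class Client1EmployeeDto
    {
        public int Id { get; set; }
        public string DisplayName { get; set; }
    }

    // Per-client assembler: the domain model stays stable while each
    // client gets only the projection it actually needs.
    public class Client1DtoAssembler
    {
        public Client1EmployeeDto Assemble(EmployeeEntity employee)
        {
            return new Client1EmployeeDto
            {
                Id = employee.Id,
                DisplayName = employee.FirstName + " " + employee.LastName
            };
        }
    }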

friedX
+1  A: 

We have a similar application where a WCF service acts primarily as a gateway to the persistent data store.

In our case, our client and server do not reuse the assembly containing "DTOs." This gives us the opportunity to simply add code to the partial classes generated by the service reference, so we often are able to use a DTO as-is on the client side and treat it as a domain object. Other times we may have client-side-only domain objects that serve as facades to a bunch of persistent objects we got from the WCF service.
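
For example, because the classes generated by a service reference are partial, the client can bolt behaviour onto the generated DTO without touching generated code (EmployeeDto and its members here stand in for whatever your reference actually generates):

    // In a separate, hand-written file on the client:
    public partial class EmployeeDto
    {
        // Computed, client-side-only property; never serialized.
        public string FullName
        {
            get { return FirstName + " " + LastName; }
        }

        public bool IsValidForDisplay()
        {
            return !string.IsNullOrEmpty(FirstName)
                && !string.IsNullOrEmpty(LastName);
        }
    }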

When you think about the behaviors and computed properties that your domain objects have, how much overlap is there, really, between your client and server? In our case, we determined that the division of responsibilities between client and server meant that there was very little, if any, code that needed to be present (and exactly the same) on both the client and the server.

To answer your questions directly, if your goal is to remain completely persistence agnostic, I would certainly map your persistence store to your domain objects and then map to DTOs. There are too many persistence implementations that can bleed into your objects and complicate using them as WCF DTOs.

On the client side, an additional mapping may not even be necessary if you can simply decorate or augment your DTOs; that's a pretty simple solution.

Mike Schenk
Good point about the persistence implementations bleeding into objects.
Jay
A: 

Your architecture seems pretty well thought out. My gut sense is: if you've already decided to reduce the objects to DTOs to send them through WCF, and you currently have no need for additional object functionality on the server side, why not keep things simple and map your persistence store directly to the DTOs?

What do you lose? I don't think you really lose anything. Your architecture is clean and simple. If you decide in the future that there is a new need for richer functionality on the server side, you can always refactor at that point to recreate your domain entities there.

I like to keep it simple and refactor as needed later, and try to avoid the premature optimization thing.

alchemical
+5  A: 

I would personally recommend keeping your mapping on the server side. You've probably done a lot of work building up your design to the point it's at now; don't throw that away.

Consider what a web service is. It is not merely an abstraction over your ORM; it is a contract. It is a public API for your clients, both internal and external.

A public API should have little if any reason to change. Almost any change to an API, aside from adding new types and methods, is a breaking change. But your domain model is not going to be so strict. You will need to change it from time to time as you add new features or discover flaws in the original design. You want to be able to ensure that changes to your internal model do not cause cascading changes through the service's contract.

It's actually a common practice (I won't insult readers with the phrase "best practice") to create specific Request and Response classes for each message for a similar reason; it becomes much simpler to extend the capability of existing services and methods without them being breaking changes.
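
For instance, a message-per-operation contract along these lines (all names invented) leaves room to grow; adding a new optional [DataMember] to a request or response later is a non-breaking change under DataContract versioning:

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class GetInvoicesRequest
    {
        [DataMember]
        public int AccountId { get; set; }
        // New optional members can be added here later without
        // breaking existing clients.
    }

    [DataContract]
    public class GetInvoicesResponse
    {
        [DataMember]
        public InvoiceDto[] Invoices { get; set; }
    }

    [ServiceContract]
    public interface IInvoiceService
    {
        [OperationContract]
        GetInvoicesResponse GetInvoices(GetInvoicesRequest request);
    }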

Clients probably don't want the exact same model that you use internally in the service. If you are your only client, then maybe this seems transparent, but if you have external clients and have seen just how far off their interpretation of your system can be, you'll understand the value of not letting your perfect model leak beyond the confines of the service API.


And sometimes, it's not even possible to send your model back through the API. There are many reasons why this can occur:

  • Cycles in the object graph. Perfectly fine in OOP; disastrous in serialization. You end up having to make painful permanent choices about which "direction" the graph must be serialized in. On the other hand, if you use a DTO, you can serialize in whichever direction you want, whatever suits the task at hand.

  • Attempting to use certain types of inheritance mechanisms over SOAP/REST can be a kludge at best. The old-style XML serializer at least supports xs:choice; DataContract doesn't, and I won't quibble over rationale, but suffice it to say that you probably have some polymorphism in your rich domain model and it's damn near impossible to channel that through the web service.

  • Lazy/deferred loading, which you probably make use of if you use an ORM. It's awkward enough making sure it gets serialized properly - for example, with Linq to SQL entities, WCF doesn't even trigger the lazy loader; it will just put null into that field unless you load it manually - but the problem gets even worse for data coming back in. Something as simple as a List<T> auto-property that's initialized in the constructor - common enough in a domain model - simply does not work in WCF, because it doesn't invoke your constructor. Instead you have to add an [OnDeserializing] initializer method (see the sketch after this list), and you really don't want to clutter up your domain model with this garbage.

  • I also just noticed the parenthetical remark that you use NHibernate. Consider that interfaces like IList<T> cannot be serialized at all over a web service! If you use POCO classes with NHibernate, as most of us do, then this simply won't work, period.
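
To make that third point concrete: DataContractSerializer materializes objects without running any constructor, so a collection initialized there arrives null after deserialization. The standard workaround is an [OnDeserializing] hook, which is exactly the kind of clutter being described (InvoiceDto is an invented example):

    using System.Collections.Generic;
    using System.Runtime.Serialization;

    [DataContract]
    public class InvoiceDto
    {
        public InvoiceDto()
        {
            // Runs for normal construction, but NOT during WCF
            // deserialization, which bypasses all constructors.
            Lines = new List<string>();
        }

        [OnDeserializing]
        private void OnDeserializing(StreamingContext context)
        {
            // Runs just before members are populated during
            // deserialization, so Lines is never left null.
            Lines = new List<string>();
        }

        [DataMember]
        public List<string> Lines { get; set; }
    }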


There will also likely be many instances when your internal domain model simply does not match the needs of the client, and it makes no sense to change your domain model to accommodate those needs. As an example of this, let's take something as simple as an invoice. It needs to show:

  • Information about the account (account number, name, etc.)
  • Invoice-specific data (invoice number, date, due date, etc.)
  • A/R-level information (previous balance, late charges, new balance)
  • Product or service information for everything on the invoice;
  • Etc.

This probably fits fine within a domain model. But what if the client wants to run a report that shows 1200 of these invoices? Some sort of reconciliation report?

This sucks for serialization. Now you're sending 1200 invoices with the same data serialized over and over again: same accounts, same products, same A/R. Internally, your application keeps track of all the links; it knows that Invoice #35 and Invoice #45 are for the same customer and thus share a Customer reference. All of that information is lost upon serialization, and you end up sending a ridiculous amount of redundant data.

What you really want is to send a custom report that includes:

  • All accounts included in the report, and their A/R;
  • All products included in the report;
  • All of the invoices, with Product and Account IDs only.

You need to perform additional "normalization" on your outgoing data before you send it to the client if you want to avoid the massive redundancy. This heavily favours the DTO approach; it does not make sense to have this structure in your domain model because your domain model already takes care of redundancies, in its own way.
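
A hedged sketch of what that normalized report contract might look like, with every name invented for illustration:

    using System.Collections.Generic;
    using System.Runtime.Serialization;

    [DataContract]
    public class AccountDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
        [DataMember] public decimal Balance { get; set; } // A/R-level info
    }

    [DataContract]
    public class ProductDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Description { get; set; }
    }

    [DataContract]
    public class ReportInvoiceDto
    {
        [DataMember] public int InvoiceNumber { get; set; }
        [DataMember] public int AccountId { get; set; }        // key into Accounts
        [DataMember] public List<int> ProductIds { get; set; } // keys into Products
        [DataMember] public decimal Total { get; set; }
    }

    // Each shared account and product is serialized exactly once;
    // the 1200 invoices carry only IDs, not repeated object graphs.
    [DataContract]
    public class ReconciliationReportDto
    {
        [DataMember] public List<AccountDto> Accounts { get; set; }
        [DataMember] public List<ProductDto> Products { get; set; }
        [DataMember] public List<ReportInvoiceDto> Invoices { get; set; }
    }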

I hope those are enough examples and enough rationale to convince you to keep your mappings from Domain <--> Service Contract intact. You've done absolutely the right thing so far, you have a great design, and it would be a shame to negate all that effort in favour of something that could lead to major headaches later on.

Aaronaught
+1  A: 

You should definitely keep your domain entities separate from your DTOs; they are different concerns. DTOs are usually hierarchical, self-describing models, whereas your domain entities encapsulate your business logic and have a lot of behaviour attached to them.

Having said that, I'm not sure where the extra mapping is. You retrieve the data using your ORM (as domain entities) and map those objects to your DTOs, so there is only one mapping there. BTW, if you're not already, use something like AutoMapper to do the tedious mapping for you.

These same DTOs are then deserialized on the client, and from there you can map directly to your UI ViewModels. So the big picture looks something like:

  • Client requests entity by Id from WCF service
  • WCF service gets entity from Repository/ORM
  • Uses AutoMapper to map from entity to DTO
  • Client receives DTO
  • Uses AutoMapper to map to the UI ViewModel (see the sketch after this list)
  • UI ViewModel is bound to the GUI
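
A minimal sketch of those two AutoMapper hops, assuming the Employee/EmployeeDto/EmployeeViewModel types from the discussion and an employee instance already loaded (shown with AutoMapper's instance API; older versions exposed a static Mapper class instead):

    using AutoMapper;

    // One-time configuration: identically named members map by convention,
    // so simple pairs need only a single CreateMap call each.
    var config = new MapperConfiguration(cfg =>
    {
        cfg.CreateMap<Employee, EmployeeDto>();          // server: entity -> DTO
        cfg.CreateMap<EmployeeDto, EmployeeViewModel>(); // client: DTO -> view model
    });
    var mapper = config.CreateMapper();

    // Per-request usage:
    EmployeeDto dto = mapper.Map<EmployeeDto>(employee);
    EmployeeViewModel vm = mapper.Map<EmployeeViewModel>(dto);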
mythz
+1 for the AutoMapper hint
bob