I have seen systems that use Reflection and attributes on properties or fields to map DataReaders to objects. (A bit like what LinqToSql does.) They save a bit of typing and may reduce the number of errors when coding for DBNull etc. Once you cache the generated code they can be faster than most hand-written code as well, so do consider the "high road" if you are doing this a lot.
See "A Defense of Reflection in .NET" for one example of this.
You can then write code like:
class CustomerDTO
{
    [Field("id")]
    public int? CustomerId;

    [Field("name")]
    public string CustomerName;
}

...

using (IDataReader reader = ...)
{
    List<CustomerDTO> customers = reader.AutoMap<CustomerDTO>()
                                        .ToList();
}
(AutoMap() is an extension method.)
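For anyone curious what sits behind AutoMap(), here is a minimal sketch of one way to implement it. The FieldAttribute and DataReaderMapper names are just for illustration; the article linked above goes further and caches generated code, whereas this version only caches the reflected field map, so it is simpler but slower per row:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Data;
using System.Reflection;

// Marks a field with the column name it should be loaded from.
[AttributeUsage(AttributeTargets.Field | AttributeTargets.Property)]
public sealed class FieldAttribute : Attribute
{
    public FieldAttribute(string columnName) { ColumnName = columnName; }
    public string ColumnName { get; }
}

public static class DataReaderMapper
{
    // Cache the (field, column) pairs per type so reflection over the
    // attributes runs once per type, not once per row.
    private static readonly ConcurrentDictionary<Type, List<(FieldInfo Field, string Column)>> _maps
        = new ConcurrentDictionary<Type, List<(FieldInfo Field, string Column)>>();

    public static IEnumerable<T> AutoMap<T>(this IDataReader reader) where T : class, new()
    {
        var map = _maps.GetOrAdd(typeof(T), BuildMap);
        while (reader.Read())
        {
            var item = new T();
            foreach (var (field, column) in map)
            {
                object value = reader[column];
                // Translate DBNull to null so nullable members just work.
                field.SetValue(item, value == DBNull.Value ? null : value);
            }
            yield return item;
        }
    }

    private static List<(FieldInfo Field, string Column)> BuildMap(Type type)
    {
        var map = new List<(FieldInfo Field, string Column)>();
        foreach (var field in type.GetFields(BindingFlags.Public | BindingFlags.Instance))
        {
            var attr = field.GetCustomAttribute<FieldAttribute>();
            if (attr != null)
                map.Add((field, attr.ColumnName));
        }
        return map;
    }
}

To get the "faster than hand-written code" behaviour you would replace the per-row FieldInfo.SetValue calls with cached compiled delegates (e.g. expression trees), but the shape of the API stays the same.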
@Stilgar, thanks for a great comment.
If you are able to, you are likely to be better off using NHibernate, EF, Linq to Sql, etc. However, on an old project (or for other, sometimes valid, reasons, e.g. "not invented here", "love of stored procs", etc.) it is not always possible to use an ORM, so a lighter-weight system can be useful to have "up your sleeve".
If you have ever needed to write lots of IDataReader loops, you will see the benefit of reducing the coding (and errors) without having to change the architecture of the system you are working on. That is not to say it's a good architecture to start with...
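For comparison, this is roughly the hand-written loop that AutoMap() replaces for the CustomerDTO above (a sketch; the 'command' variable, column names and null handling are assumed for illustration):

// Typical hand-rolled mapping: every column lookup, null check and cast
// is repeated by hand for each query.
var customers = new List<CustomerDTO>();
using (IDataReader reader = command.ExecuteReader())   // 'command' is an IDbCommand set up elsewhere
{
    int idOrdinal = reader.GetOrdinal("id");
    int nameOrdinal = reader.GetOrdinal("name");
    while (reader.Read())
    {
        var customer = new CustomerDTO
        {
            CustomerId = reader.IsDBNull(idOrdinal) ? (int?)null : reader.GetInt32(idOrdinal),
            CustomerName = reader.IsDBNull(nameOrdinal) ? null : reader.GetString(nameOrdinal)
        };
        customers.Add(customer);
    }
}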
I am assuming that CustomerDTO will not get out of the data access layer, and that composite objects etc. will be built up by the data access layer using the DTO objects.