views:

1071

answers:

7

I once tried to use typed DataSets in a relatively small production application. Initially it looked like a pretty good idea, but it turned out differently. It was fine for basic tasks, but as soon as something more advanced was required, its limitations kicked in and it failed miserably. Luckily the project got cancelled, and since then I've tried to stick to a proper ORM like NHibernate.

But I still wonder - they were created for a reason. Perhaps I just didn't understand how to use them properly? Is anyone out there successfully using them in production systems?

Added:

Could you also quickly explain how you are using them?

I tried to use them as my DAL in a Windows Forms application: it would fetch data from tables into the DataSet, manipulate the data, and then call the TableManager's hierarchical update thing (I don't remember the exact name). The DataSet had one table for each of the DB's physical tables. The problems started when I had to do something like a master/details relationship, where I had to insert a bunch of records at once (one master record and several detail records) into several tables while also keeping the foreign keys correct.
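For context, the master/details insert described above is roughly what the typed DataSet machinery is supposed to handle: the generated Add…Row overload takes the parent row object, and the TableAdapterManager's UpdateAll (the VS 2008 name for the hierarchical update) writes parents before children. A minimal sketch — `OrdersDataSet` and all table/column names here are hypothetical, designer-generated types:

```csharp
// Hypothetical designer-generated types: OrdersDataSet with an Order table,
// an OrderDetail table, and a foreign-key relation defined in the designer.
var ds = new OrdersDataSet();

// Add the master row first. Its primary key is a temporary AutoIncrement
// value until the database assigns the real one during the update.
OrdersDataSet.OrderRow order = ds.Order.AddOrderRow(DateTime.Now, "NEW");

// Child rows reference the parent row object, not a literal key value,
// so the DataRelation keeps the foreign keys consistent automatically.
ds.OrderDetail.AddOrderDetailRow(order, "SKU-1", 2);
ds.OrderDetail.AddOrderDetailRow(order, "SKU-2", 1);

// TableAdapterManager.UpdateAll performs the hierarchical update:
// parents are inserted before children, and refreshed keys propagate
// to the child rows before they are written.
var manager = new OrdersDataSetTableAdapters.TableAdapterManager
{
    OrderTableAdapter = new OrdersDataSetTableAdapters.OrderTableAdapter(),
    OrderDetailTableAdapter = new OrdersDataSetTableAdapters.OrderDetailTableAdapter()
};
manager.UpdateAll(ds);
```

For the keys to propagate, the insert commands have to return the generated values (output parameters or a SELECT after the INSERT), and the relation must be set up as a foreign-key constraint with cascading updates — miss either and you get exactly the FK problems described above.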

Added 2:

Oh, and if you are using them, where do you put your business logic then? (Validations, calculations, etc.)

+3  A: 

They were useful for me to create XML documents to send to a web service. The client defined the expected data as typed DataSets and that made it very simple to create a DataTable in memory and get the XML representation.
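A sketch of that use, assuming a hypothetical designer-generated `InvoiceDataSet` built from the client's .xsd: the typed DataSet serializes itself, so producing the XML payload is just filling rows and calling GetXml() or WriteXml():

```csharp
// InvoiceDataSet is a hypothetical typed DataSet generated from the
// client's .xsd contract; column order matches the generated AddInvoiceRow.
var ds = new InvoiceDataSet();
ds.Invoice.AddInvoiceRow("INV-001", DateTime.Now, 99.95m);

// XML whose shape matches the agreed schema.
string xml = ds.GetXml();

// Or write it out directly, without the inline schema:
ds.WriteXml("invoice.xml", System.Data.XmlWriteMode.IgnoreSchema);
```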

Now, for real database access... I use my own DAL, I agree with you - DataSets are incredibly limited.

Otávio Décio
+2  A: 

I'm using them. Perhaps only because I haven't tried NHibernate; and I can't use LINQ, because I'm limited to Windows 2000, which doesn't run .NET 3. At first they seemed ideal, but I've discovered enough weird quirks about them to start turning me off.

recursive
If you are limited to Windows 2K and .Net 2.0, you should seriously look at SubSonic 2.1 - I used it on a number of applications after the pain of typed data sets and never looked back. It works fine with .Net 2.0
Pervez Choudhury
+4  A: 

We use typed data sets all the time, in fact we use them as a matter of best practice. The reason we do this is to catch errors at compile time rather than using string based lookups for column names that could introduce errors at run time.

I guess, as always, it depends on the scenario and whether you gain any advantage for using them or not.

Normal Row Reference:

oRow["row_pk"]

In a typed data set it now becomes:

oRow.row_pk

Our data sets usually match the database schema; we use DataAdapters to push changes to the database through stored procedures. Output parameters update the data set with keys generated by the database.

Obviously you need to run the adapter updates in the right order for all the keys to be generated correctly. You also have to be careful when deleting parent/child rows, and ensure the deletes take place in the right order to prevent database exceptions.
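When the updates are driven manually with plain DataAdapters (rather than a TableAdapterManager), the ordering described above can be made explicit by selecting rows per row state. A sketch — `orderAdapter`/`detailAdapter` and the table names are hypothetical, and `ds` is a filled typed DataSet:

```csharp
// Deletes run child-first so no foreign-key violations occur.
detailAdapter.Update(ds.OrderDetail.Select("", "", DataViewRowState.Deleted));
orderAdapter.Update(ds.Order.Select("", "", DataViewRowState.Deleted));

// Inserts and updates run parent-first so database-generated keys exist
// (written back via output parameters) before the child rows that
// reference them are sent.
orderAdapter.Update(ds.Order.Select("", "",
    DataViewRowState.Added | DataViewRowState.ModifiedCurrent));
detailAdapter.Update(ds.OrderDetail.Select("", "",
    DataViewRowState.Added | DataViewRowState.ModifiedCurrent));
```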

Back when .NET was still 1.1 I read a book on ADO.NET by David Sceppa; this opened my eyes to what can actually be achieved, and simplified my DAL a lot (using typed data sets). There are a lot of techniques in there that can really help improve your code, I'd highly recommend it.

Ady
Umm... don't they do that under the hood anyway?
Vilx-
They do, but the type casting code is generated from the db schema, and less likely to contain errors.
recursive
Object references are used under the hood, rather than string lookups. This has the result of running (slightly) faster as there are no string comparisons.
Ady
Ady, have you checked out the generated code? I think it had something not much better than string comparison beneath the hood. Don't remember though...
Vilx-
Yep, they are definitely object references.
Ady
Ady- wrong- they do string lookups. It's easy to verify.
boomhauer
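For what it's worth, both observations in this thread are partly right. In the VS-generated code (shape shown below; names illustrative), the typed accessor indexes the row by a cached DataColumn object — no string comparison per access — but the column objects themselves are resolved by name once, when the table is initialized:

```csharp
// Generated typed accessor: indexes the row by DataColumn reference,
// so per-access cost is an object lookup, not a string comparison.
public int row_pk {
    get { return ((int)(this[this.tableMyTable.row_pkColumn])); }
    set { this[this.tableMyTable.row_pkColumn] = value; }
}

// Generated table initialization: the one-time lookup by name.
internal void InitVars() {
    this.columnrow_pk = base.Columns["row_pk"];
}
```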
+2  A: 

They are exactly what's required if you believe in the benefit of software writing software, and of static typing - software catching software errors - at the cost of complexity (for sure), development overhead (probably), and efficiency (possible but arguable) - which is the basic premise of languages like C# and Java.

That, however, isn't a battle that's been completely won yet.

EDIT (requested explication):

Complexity.

Using either C# or Java, you end up with a lot of code for configuration, type declaration, casting, type-checking and verification. Some of it you write yourself; some the IDE writes for you. But it's all code the developer is responsible for. The primary benefits are for the IDE/compiler - pointing out possible software errors, and autocompletion. I can safely say, without taking either side, that there's an industry discussion about whether the sheer volume in lines of code is worth it. It's at least a clear violation of YAGNI. In VS, I commonly come across whole files of code written by the IDE that have bugs in them that I end up needing to figure out. I don't like figuring out code I didn't write.

Development overhead.

Ditto the above. And Java is well-known for all the various XML configuration files you need to write and maintain to make an application fit together. A lot of the appeal of RoR is "convention over configuration", precisely to avoid this.

Efficiency.

The assertion that interpreted or JIT-compiled languages are horribly inefficient compared to compiled languages has long been accepted as self-evident. For more reasons than I have time for here, that assumption is being questioned; for example, by some of the newer, faster JavaScript engines.

The obvious things to search for on this site, and google for, are "late binding", "dynamic typing", "JIT compiler".

le dorfier
Could you expand a bit on this, please?
Vilx-
Thanx for the explication. :)
Vilx-
+2  A: 

I use them, but not in the usual sense. It's akin to ocdecio's answer, although in my case it's as a presentation model (without table adapters) in asp.net

The "client services" layer uses them (as part of the contract) for any method that takes or returns data. The underlying business services use more traditional poco approach and the client services just do a bit of mapping to produce a strongly typed dataset for the given ui.
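The mapping step described above might look something like this sketch (`CustomerSummaryDataSet` and the `Customer` POCO are hypothetical names):

```csharp
// Client-services layer: copy business POCOs into the typed DataSet
// that the given UI binds to.
public CustomerSummaryDataSet ToPresentationModel(IEnumerable<Customer> customers)
{
    var ds = new CustomerSummaryDataSet();
    foreach (var c in customers)
    {
        ds.CustomerSummary.AddCustomerSummaryRow(c.Id, c.Name, c.TotalOrders);
    }
    return ds;
}
```

The UI never sees the business objects, only the strongly typed rows, so the UI builders can bind columns by name with designer support.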

The tooling support makes it much easier for newer developers (who can follow the asp.net/learn videos to get up to speed) to use the services they require. The UI builders can also quickly "design" a new set of data that they'd like to retrieve and have the relevant service's team go from there.

+3  A: 

I use them when: 1) Writing "1-offs" that are ditched once the project we are working on is complete. 2) Using XML files to swap data with other vendors. 3) Prototyping and testing.

I have had trouble with typed datasets when: 1) Complex queries are required to filter and save the data (at least for me). 2) I don't know enough about what the Visual Studio designer writes on my behalf to extend the derived class it creates for the typed dataset.

I've never had a speed issue with typed datasets, and the users seem happy with the applications built with them. To be honest, I have not benchmarked the different methods that could replace typed datasets. Lastly, the design-time type checking that typed datasets provide has saved me more than once by catching errors a novice programmer like myself would otherwise have missed.

Regards,

DeBug

+1  A: 

I still use them more than I wish I did... They are definitely better than untyped datasets, but you have to be careful with nullable value-type fields. If you have an Int field which can contain a null value, you cannot access the field while it holds null or it will throw an exception. You have to call IsMyIntFieldNull() first to check whether it is null. This means ANYWHERE your code references one of these fields, you need to do this check first... which easily adds up to a huge amount of extra code. It would be better if typed datasets supported nullable value types for these columns, but unfortunately they do not.
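To illustrate the pattern (column and type names hypothetical): reading the generated property directly throws StrongTypingException when the column holds DBNull, so every read site needs the guard first, though the repetition can at least be centralized in a small helper:

```csharp
// Direct read throws StrongTypingException if the column is DBNull:
// int q = row.Quantity;    // throws when the field is null

// The generated guard has to come first at every read site:
int? quantity = row.IsQuantityNull() ? (int?)null : row.Quantity;

// One way to stop repeating the check: wrap the generated pair once.
public static class OrderRowExtensions
{
    public static int? QuantityOrNull(this OrdersDataSet.OrderRow row)
    {
        return row.IsQuantityNull() ? (int?)null : row.Quantity;
    }
}
```

Extension methods need C# 3.0; on .NET 2.0 a plain static helper taking the row as an ordinary parameter works the same way.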

boomhauer