I am using an object persistence framework called ECO to update data in SQL Server. I've noticed that if I create a TransactionScope and deliberately throw an exception after my first update has committed but before my second has, the first database is updated and the second is not.

I thought that simply creating the TransactionScope around the various updates was all I had to do, once the Distributed Transaction Coordinator was running on the main DB server?

Can anyone think of any reason why this would permit a scenario where the first DB is updated but not the second?
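For reference, a minimal sketch of the pattern I'm describing (the two update helpers are illustrative stand-ins for the ECO update calls, not actual ECO API):

```csharp
using System;
using System.Transactions;

// UpdateFirstDatabase / UpdateSecondDatabase are hypothetical helpers
// standing in for the ECO updates against each database.
using (var scope = new TransactionScope())
{
    UpdateFirstDatabase();   // writes to DB 1
    UpdateSecondDatabase();  // writes to DB 2

    // If an exception is thrown before this line, neither database
    // should be changed -- yet DB 1 was being committed anyway.
    scope.Complete();
}
```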

A: 

Got it!

ECO supports the following databases...

  1. BlackFish
  2. DB2
  3. FireBird
  4. Mimer
  5. MySql
  6. NexusDB
  7. Oracle
  8. SQLite
  9. SQLServer
  10. Sybase
  11. Borland data provider
  12. Borland database eXpress (DBX)

I remembered this morning that some of these don't support connection pooling, so ECO has implemented its own connection pooling on an abstract PersistenceMapper class. This is what was happening:

  1. App starts
  2. I have opted to store my object mapping info in the DB itself, so ECO gets a connection and reads that mapping info
  3. ECO returns the connection to the pool, but its OWN pool
  4. I later start a distributed transaction
  5. I update my objects to the database
  6. ECO retrieves a connection from its own pool

As a consequence, the connection retrieved from ECO's own pool is not enlisted in the current distributed transaction. Since SqlConnection does its own pooling anyway, it was acceptable to set PersistenceMapperSqlServer.MaxPoolSize to ZERO.
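The failure mode can be sketched as follows: only connections opened while a TransactionScope is active are auto-enlisted in the ambient transaction, so a connection handed out by a private pool (opened earlier) silently commits on its own. The connection string here is illustrative:

```csharp
using System.Transactions;
using System.Data.SqlClient;

var connectionString = "Server=.;Database=Test;Integrated Security=true";

// Connection opened BEFORE the scope, e.g. handed out by a private pool:
var pooled = new SqlConnection(connectionString);
pooled.Open();

using (var scope = new TransactionScope())
{
    // 'pooled' is NOT part of the ambient transaction; any command it
    // executes commits immediately, even if Complete() is never called.

    // A connection opened inside the scope, by contrast, auto-enlists:
    using (var enlisted = new SqlConnection(connectionString))
    {
        enlisted.Open();  // enlists in the ambient (distributed) transaction
        // Commands on 'enlisted' roll back if Complete() is not reached.
    }

    scope.Complete();
}
```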

Now ECO uses the SqlConnection component to handle the creation and disposal of connections; not only does that component pool the connections, it also enlists them in distributed transactions properly!
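As a hedged sketch, the workaround amounts to something like this (the mapper variable is illustrative; the property name is as described above):

```csharp
// Disable ECO's internal pool so every connection is created and pooled
// by SqlConnection itself, which enlists in the ambient transaction.
persistenceMapperSqlServer.MaxPoolSize = 0;
```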

I've written to the developers to let them know that they should mark this property obsolete.

Peter Morris
