tags:

views: 1224

answers: 1

We intermittently get warnings such as the one below on our WebLogic server. We'd like to better understand what these warnings mean and what, if anything, we should do to avoid them.

Abandoning transaction after 86,606 seconds: Xid=BEA1-52CE4A8A9B5CD2587CA9(14534444), Status=Committing,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=86605, seconds left=0,XAServerResourceInfo[JMS_goJDBCStore]=(ServerResourceInfo[JMS_goJDBCStore]= (state=committed,assigned=go_server),xar=JMS_goJDBCStore,re-Registered = true),XAServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]= (ServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(state=new,assigned=none),xar= weblogic.jdbc.wrapper.JTSXAResourceImpl@1a8fb80,re-Registered = true),SCInfo[go+go_server]= (state=committed),properties=({weblogic.jdbc=t3://10.6.202.37:18080}),local properties= ({weblogic.transaction.recoveredTransaction=true}),OwnerTransactionManager= ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=go_server+10.6.202.37:18080+go+t3+, XAResources={JMS_goJDBCStore, weblogic.jdbc.wrapper.JTSXAResourceImpl},NonXAResources= {})],CoordinatorURL=go_server+10.6.202.37:18080+go+t3+)

I do understand the BEA explanation:

Error: Abandoning transaction after secs seconds: tx

Description: When a transaction is abandoned, knowledge of the transaction is removed from the transaction manager that was attempting to drive the transaction to completion. The JTA configuration attribute AbandonTimeoutSeconds determines how long the transaction manager should persist in trying to commit or rollback the transaction.

Cause: A resource or participating server may have been unavailable for the duration of the AbandonTimeoutSeconds period.

Action: Check participating resources for heuristic completions and correct any data inconsistencies.
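
For reference, the AbandonTimeoutSeconds attribute mentioned above can be inspected or raised through WLST; the following is only a minimal sketch, assuming WLS 9.x or later, with the connection details and the domain name "mydomain" as placeholders rather than values from the post:

    # Rough WLST sketch: read and raise the JTA AbandonTimeoutSeconds attribute.
    # Connection details and the domain name are placeholders.
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    edit()
    startEdit()
    cd('/JTA/mydomain')                       # JTA MBean lives at /JTA/<domain-name>
    print(cmo.getAbandonTimeoutSeconds())     # default is 86400 seconds (24 hours)
    cmo.setAbandonTimeoutSeconds(172800)      # e.g. allow 48 hours before abandoning
    save()
    activate()
    disconnect()

Note that raising the timeout only delays abandonment; the Action above (checking resources for heuristic completions) is still the underlying fix.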

We have observed that you can get rid of these warnings by deleting the *.tlog files, but that doesn't seem like the right strategy for dealing with them.

The warnings refer to JMS and our JMS store. We do use JMS; we just don't understand why transactions are left hanging out there and why they would be "abandoned"?

+1  A: 

I know it's not very satisfying, but we do delete the *.tlog files before startup for our app hosted on WLS 7.

Our app is an event-processing back-end, largely driven by JMS. We aren't interested in preserving transactions across WLS restarts: if a transaction didn't complete before the shutdown, it tends not to complete after a restart either. So the *.tlog cleanup just eliminates some warnings and potentially flaky behavior.
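
A plain-Python sketch of that kind of cleanup (the tlog directory below is a placeholder; the real location depends on the server's TransactionLogFilePrefix / default store configuration, and removing these files is only safe when, as described above, in-flight transactions don't need to be recovered):

    # Hypothetical pre-startup cleanup of WebLogic transaction log (*.tlog) files.
    # The directory is a placeholder; it depends on TransactionLogFilePrefix or
    # the server's default store location. Only run this while the server is down
    # and recovery of in-flight transactions is explicitly not wanted.
    import glob
    import os

    TLOG_DIR = '/opt/bea/domains/mydomain/go_server'   # placeholder path

    def remove_tlogs(tlog_dir):
        removed = []
        for path in glob.glob(os.path.join(tlog_dir, '*.tlog')):
            os.remove(path)
            removed.append(path)
        return removed

    if __name__ == '__main__':
        for path in remove_tlogs(TLOG_DIR):
            print('removed ' + path)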

I don't think JMS is fundamental to any of this, by the way. At least not as far as I know.

Also, we moved from a JDBC JMS store to local files. It was said to perform better, and we didn't need the location independence you get from a JDBC store. If that describes your situation too, maybe moving to local files would eliminate the root cause for you?
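
For what it's worth, on WLS 9.x and later that switch from a JDBC store to a file store can be scripted with WLST roughly as below; every name and path here is made up for illustration, and older releases (7.x/8.1) use different MBeans:

    # Rough WLST sketch (WLS 9.x+ MBeans): create a file store and point a JMS
    # server at it instead of its JDBC store. All names and paths are placeholders.
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    edit()
    startEdit()
    cd('/')
    fs = cmo.createFileStore('goFileStore')
    fs.setDirectory('/opt/bea/stores/go')               # local directory for store files
    fs.addTarget(getMBean('/Servers/go_server'))        # target the JMS server's host server
    cd('/JMSServers/goJMSServer')
    cmo.setPersistentStore(getMBean('/FileStores/goFileStore'))
    save()
    activate()
    disconnect()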

John M
Do you know why a standard web app would have transactions that are not completed when the app shuts down? Most of our transactions are pretty quick and infrequent. Is it just the random case where the process is killed at the exact moment when a transaction is under way?
Marcus
I don't know why we'd have leftover transactions. Our app inherently has pretty short-running transactions -- a few seconds max. We found that we saw less odd behavior if we cleaned out our tranlogs before starting. An ungraceful kill would surely leave a hanging tx; not sure about a graceful shutdown.
John M