I had to track down a series of problems this week, all related to transactions and database interaction. Many of my conclusions are educated guesses, but they seem plausible, so I thought I'd mention them here (as, once again, I found little on the 'net to support my efforts).
- Prior to .Net 2.0, the easiest way to manage distributed transactions was to derive your classes from ServicedComponent (in System.EnterpriseServices) and register the assembly in COM+ component services. However, it would appear that if a class in COM+ calls a method on a class that is not a ServicedComponent, the called method is not (or at least may not be) enlisted in the distributed transaction.
- An operation that makes a single connection to a SQL database uses a lightweight transaction. If the operation opens a second connection, the lightweight transaction is promoted to a distributed transaction at that point. Per the first item, if that second set of calls is not in COM+, the second DTC transaction will not necessarily join the first. As a result, locks taken by connections opened from classes in COM+ can block locks taken by connections in the non-COM+ class.
- If you make database calls from a class that someone has marked as not supporting transactions (the Not Supported attribute in COM+), those calls still go through the same lightweight-to-distributed promotion. They will not, however, be enlisted in existing DTC transactions, and, once again, you have the possibility of lock contention.
- The new .Net 2.0 TransactionScope is, shall we say, your friend. Used in the context of a transactional .Net COM+ component, a TransactionScope will enlist in the existing DTC transaction. If you want to move away from COM+, this is a good start.
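For what it's worth, the basic pattern looks something like this. A minimal sketch, assuming SQL Server 2005 and a reference to System.Transactions.dll; the connection string, table names, and SQL are made up for illustration:

```csharp
using System;
using System.Data.SqlClient;
using System.Transactions;

class Example
{
    static void Main()
    {
        // Hypothetical connection string -- substitute your own.
        string connStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI";

        // Inside a transactional COM+ component, the scope picks up the
        // existing DTC transaction; outside COM+, it creates its own.
        using (TransactionScope scope = new TransactionScope())
        {
            using (SqlConnection first = new SqlConnection(connStr))
            {
                first.Open();  // enlists in a lightweight transaction
                new SqlCommand("INSERT INTO Orders (Id) VALUES (1)", first)
                    .ExecuteNonQuery();
            }

            using (SqlConnection second = new SqlConnection(connStr))
            {
                second.Open();  // second connection: promoted to a DTC transaction
                new SqlCommand("INSERT INTO AuditLog (OrderId) VALUES (1)", second)
                    .ExecuteNonQuery();
            }

            // Without this call, everything rolls back when the scope is disposed.
            scope.Complete();
        }
    }
}
```

Note the promotion happening at the second Open(): that's the implicit lightweight-to-distributed escalation described above, made visible in about fifteen lines.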
- If you have a procedure that writes to temp tables, those writes are transactional. With a lightweight transaction and a substantial number of records, this can put unnecessary load on the database. With a distributed transaction, you may have real problems. It would appear (though I'm guessing, having found no documentation) that information about the operations has to be shipped to the transaction manager (the DTM or DTC). This is not fast. A procedure that runs in, say, half a second in Query Analyzer may take a minute or more in the context of a distributed transaction. Remember to be aware of the implicit promotion to distributed transactions. And rewrite your procedures to get rid of all those temp table writes!
Oh, and don't forget: if your connection is timing out and you're considering increasing the connection timeout (easily done in the connection string), remember that there's a separate timeout on the Command object as well, not to mention global- and component-specific timeouts (timesout?) for the distributed transactions.
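To make the distinction concrete, here's a sketch of the knobs involved (values and names are illustrative, not recommendations):

```csharp
using System.Data.SqlClient;

class TimeoutExample
{
    static void Main()
    {
        // Knob 1: Connect Timeout in the connection string -- how long
        // Open() will wait to establish the connection.
        string connStr = "Data Source=.;Initial Catalog=MyDb;" +
                         "Integrated Security=SSPI;Connect Timeout=30";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Hypothetical slow procedure name.
            SqlCommand cmd = new SqlCommand("EXEC dbo.SlowProcedure", conn);

            // Knob 2: CommandTimeout -- how long ExecuteNonQuery() (etc.)
            // will wait for the command to finish. Defaults to 30 seconds,
            // entirely independent of the connection timeout above.
            cmd.CommandTimeout = 120;

            cmd.ExecuteNonQuery();
        }

        // Knob 3: the distributed transaction timeout is separate again --
        // set machine-wide in Component Services, and per-component via the
        // Timeout property on the EnterpriseServices Transaction attribute.
    }
}
```

Raising only one of the three and wondering why the operation still dies at thirty seconds is, I suspect, a common way to lose an afternoon.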
Share and enjoy.