In PDO, a connection can be made persistent using the PDO::ATTR_PERSISTENT attribute. According to the PHP manual, these connections are not closed at the end of the script; instead, they are cached and re-used when another script requests a connection using the same credentials. The persistent connection cache avoids the overhead of establishing a new connection every time a script needs to talk to a database, resulting in a faster web application.
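For reference, a minimal sketch of enabling this attribute (the DSN, user name, and password below are placeholders, not part of any real setup):

    <?php
    // Minimal sketch: opening a persistent PDO connection.
    // The DSN and credentials are placeholders.
    $pdo = new PDO(
        'mysql:host=localhost;dbname=example_db',
        'example_user',
        'example_password',
        [PDO::ATTR_PERSISTENT => true] // reuse a cached connection if one exists
    );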

The manual also recommends against using persistent connections with the PDO ODBC driver, because doing so may interfere with ODBC connection pooling.

So apparently there seem to be no drawbacks to using persistent connections in PDO, except in that last case. However, I would like to know whether there are any other disadvantages of this mechanism, i.e., situations where it results in performance degradation or similar problems.

+2  A: 

Seems to me having a persistent connection would eat up more system resources. Maybe a trivial amount, but still...

Crayon Violent
It's always a trade of one resource for another...
DGM
+8  A: 

The same drawbacks exist using PDO as with any other PHP database interface that does persistent connections: if your script terminates unexpectedly in the middle of database operations, the next request that gets the leftover connection will pick up where the dead script left off. The connection is held open at the Apache level, not at the PHP level, and PHP doesn't tell Apache to let the connection die when the script terminates abnormally.

If the dead script locked tables, those tables will remain locked until the connection dies or the next script that gets the connection unlocks the tables itself.

If the dead script was in the middle of a transaction, that can block a multitude of tables until the deadlock timer kicks in, and even then, the deadlock timer can kill the newer request instead of the older request that's causing the problem.

If the dead script was in the middle of a transaction, the next script that gets that connection also gets the transaction state. It's very possible (depending on your application design) that the next script might not actually ever try to commit the existing transaction, or will commit when it should not have, or roll back when it should not have.

This is only the tip of the iceberg. It can all be mitigated to an extent by always trying to clean up after a dirty connection on every single script request, but that can be a pain depending on the database. Unless you have identified creating database connections as the one thing that is a bottleneck in your script (this means you've done code profiling using xdebug and/or xhprof), you should not consider persistent connections as a solution to anything.
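For what it's worth, a rough sketch of what that per-request cleanup might look like on MySQL (the exact statements depend on your database and on what your application actually does; `ROLLBACK` and `UNLOCK TABLES` are just the obvious MySQL examples, and this assumes `$pdo` is a freshly obtained persistent connection with exceptions enabled as the error mode):

    <?php
    // Defensive cleanup of a possibly-dirty persistent connection
    // (MySQL-flavoured; adapt to your database).
    try {
        // Discard any transaction a dead script may have left open.
        $pdo->exec('ROLLBACK');
        // Release any explicit table locks left behind.
        $pdo->exec('UNLOCK TABLES');
        // Reset session variables or other state as your application requires.
    } catch (PDOException $e) {
        // If even the cleanup fails, the connection is probably unusable;
        // fall back to a fresh, non-persistent connection or abort the request.
    }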

Further, most modern databases (including PostgreSQL) have their own preferred ways of performing connection pooling that don't have the immediate drawbacks that plain vanilla PHP-based persistent connections do.


Edit: To clarify a point, we use persistent connections at my workplace, but not by choice. We were encountering weird connection behavior, where the initial connection from our app server to our database server was taking exactly three seconds, when it should have taken a fraction of a fraction of a second. We think it's a kernel bug. We gave up trying to troubleshoot it because it happened randomly and could not be reproduced on demand, and our outsourced IT didn't have the concrete ability to track it down.

Regardless, when the folks in the warehouse are processing a few hundred incoming parts, and each part is taking three and a half seconds instead of half a second, we had to take action before they kidnapped us all and made us help them. So, we flipped a few bits in our home-grown ERP/CRM/CMS monstrosity and experienced all of the horrors of persistent connections first-hand. It took us weeks to track down all the subtle little problems and bizarre behavior that happened seemingly at random. It turned out that those once-a-week fatal errors that our users diligently squeezed out of our app were leaving locked tables, abandoned transactions and other unfortunate wonky states.

This sob-story has a point: It broke things that we never expected to break, all in the name of performance. The tradeoff wasn't worth it, and we're eagerly awaiting the day we can switch back to normal connections without a riot from our users.

Charles
It's a very good answer.....learned a lot of things....+1
Night Shade
Added an anecdote to drive the point home. Enjoy!
Charles
one of the best answers on SO.
DGM
I hope I had read this answer before running `SELECT orders.* FROM orders LEFT JOIN items USING(item_id)`
Ast Derek