In the process of trying to help out an app dev team with performance issues on a SQL 2000 server (from a bunch of Java applications on separate app servers), I ran a SQL trace and discovered that all calls to the database are full of API Server Cursor statements (sp_cursorprepexec, sp_cursorfetch, sp_cursorclose).

It looks like they're specifying connection string properties that force the use of server-side cursors, retrieving only 128 rows of data at a time. From http://msdn.microsoft.com/en-us/library/Aa172588:

When the API cursor attributes or properties are set to anything other than their defaults, the OLE DB provider for SQL Server and the SQL Server ODBC driver use API server cursors instead of default result sets. Each call to an API function that fetches rows generates a roundtrip to the server to fetch the rows from the API server cursor.

UPDATE: The connection string at issue is a JDBC connection string parameter, selectMethod=cursor (which enables the server-side cursors we discussed above) vs the alternative selectMethod=direct. They have been using selectMethod=cursor as their standard connection string from all apps.
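For reference, here is a minimal sketch of the two connection string variants, assuming the Microsoft SQL Server 2000 JDBC driver; the host, database, and credentials are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ConnectionStringDemo {
        public static void main(String[] args) throws Exception {
            // Drivers of this vintage must be loaded explicitly (pre-JDBC 4).
            Class.forName("com.microsoft.jdbc.sqlserver.SQLServerDriver");

            // selectMethod=cursor: the driver opens an API server cursor and
            // fetches rows in batches (sp_cursorprepexec / sp_cursorfetch /
            // sp_cursorclose show up in a trace).
            String cursorUrl =
                "jdbc:microsoft:sqlserver://dbhost:1433;DatabaseName=AppDb;selectMethod=cursor";

            // selectMethod=direct: the query comes back as a default result
            // set, streamed without per-batch round trips.
            String directUrl =
                "jdbc:microsoft:sqlserver://dbhost:1433;DatabaseName=AppDb;selectMethod=direct";

            try (Connection conn =
                     DriverManager.getConnection(cursorUrl, "appUser", "secret")) {
                System.out.println("Connected with server-side cursors enabled");
            }
        }
    }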

From my DBA perspective, that's just annoying (it clutters the trace up with useless junk), and (I would speculate) is resulting in many extra app-to-SQL server round trips, reducing overall performance.

They apparently did test changing just one of about 60 different app connections to selectMethod=direct, but experienced some issues (of which I have no details) and are now concerned about breaking the application.

So, my questions are:

  • Can using selectMethod=cursor lower application performance, as I have tried to argue? (by increasing the number of round trips necessary on a SQL server that already has a very high queries/sec)
  • Is selectMethod= an application-transparent setting on a JDBC connection? Could this break their app if we change it?
  • More generally, when should you use cursor vs direct?

Also cross-posted to SF.

EDIT: Received actual technical details that warrant a significant edit to title, question, and tags.

EDIT: Added bounty. Also added bounty to the SF question (this question is focused on application behavior, the SF question is focused on SQL performance.) Thanks!!

+1  A: 

Briefly,

  1. selectMethod=cursor
    • theoretically requires more server-side resources than selectMethod=direct
    • only loads at most batch-size records into client memory at once, resulting in a more predictable client memory footprint
  2. selectMethod=direct
    • theoretically requires less server-side resources than selectMethod=cursor
    • will read the entire result set into client memory (unless the driver natively supports asynchronous result set retrieval) before the client application can iterate over it; this can reduce performance in two ways:
      1. reduced performance with large result sets if the client application stops processing after traversing only a fraction of the result set: with direct it has already paid the cost of retrieving data it will essentially throw away, whereas with cursor the waste is limited to at most batch-size - 1 rows (such early-termination logic should probably be recoded in SQL anyway, e.g. as SELECT TOP or window functions; see the sketch after this list)
      2. reduced performance with large result sets because of potential garbage collection and/or out-of-memory issues associated with an increased memory footprint
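As mentioned in point 1, early-termination logic is usually better expressed in the query itself. A sketch of that idea, assuming a hypothetical Customers table (TOP is SQL Server syntax):

    import java.sql.*;

    public class TopNDemo {
        // Bound the result set in SQL instead of fetching an unbounded result
        // set and abandoning it after the first N rows on the client.
        static void printFirstHundred(Connection conn) throws SQLException {
            String sql =
                "SELECT TOP 100 CustomerId, Name FROM Customers ORDER BY Name";
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1) + ": " + rs.getString(2));
                }
            }
        }
    }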

In summary,

  • Can using selectMethod=cursor lower application performance? -- either method can lower performance, for different reasons; past a certain result set size, cursor may still be preferable. See the last bullet for when to use one or the other.
  • Is selectMethod= an application-transparent setting on a JDBC connection? -- it is transparent to the API, but it can still break their app if memory usage grows enough to bog down the client system (and, correspondingly, your server) or crash the client altogether.
  • More generally, when should you use cursor vs direct? -- I personally use cursor when dealing with potentially large or otherwise unbounded result sets; the round-trip overhead is then amortized over a large enough batch size, and my client memory footprint is predictable. I use direct when the expected result set is known to be smaller than whatever batch size I would use with cursor, or bounded in some other way, or when memory is not an issue (see the sketch below).
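To make the batch-size point concrete, here is a sketch of scanning a large result set over a cursor-mode connection. setFetchSize is standard JDBC, though whether this particular driver honors it per batch in cursor mode is an assumption here, and the Orders table is hypothetical:

    import java.sql.*;

    public class LargeScanDemo {
        // With selectMethod=cursor, roughly fetch-size rows are held in client
        // memory at a time; with selectMethod=direct, the driver materializes
        // the entire result set before iteration begins.
        static void scanOrders(Connection conn) throws SQLException {
            try (Statement stmt = conn.createStatement()) {
                stmt.setFetchSize(128); // batch-size hint, used in cursor mode
                try (ResultSet rs =
                         stmt.executeQuery("SELECT OrderId, Total FROM Orders")) {
                    while (rs.next()) {
                        process(rs.getInt(1), rs.getBigDecimal(2));
                    }
                }
            }
        }

        static void process(int id, java.math.BigDecimal total) {
            // application-specific work would go here
        }
    }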

Cheers, V.

vladr
Thanks, vladr. Sounds like it's a setting that would take some testing. I know the app server is pretty busy, so perhaps a memory issue caused some of the problems when they turned this off.
BradC
