I want to clarify how to do connection pooling with SQL Server from C, using ODBC. I know this question is, like... sooooo 1998, but I've never done it in C, so here goes:
First I think I have to set the attribute to enable pooling:
rc = SQLSetEnvAttr( NULL, // enable process-level connection pooling
SQL_ATTR_CONNECTION_POOLING,
(SQLPOINTER)SQL_CP_ONE_PER_DRIVER,
SQL_IS_INTEGER);
Then allocate the HENV and set it for ODBC 3.0:
rc = SQLAllocHandle(SQL_HANDLE_ENV, NULL, &henv1);
rc = SQLSetEnvAttr(henv1, SQL_ATTR_ODBC_VERSION, (void *)SQL_OV_ODBC3, 0);
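Putting those two steps together with return-code checks, this is the init I have in mind. It's an untested sketch: init_odbc_env is just a name I made up, and I'm assuming the usual Windows ODBC headers.

#include <windows.h>
#include <sql.h>
#include <sqlext.h>

// Untested sketch: process-wide init, called once before any worker
// threads start asking for connections.
static SQLHENV init_odbc_env(void)
{
    SQLHENV henv = SQL_NULL_HENV;
    SQLRETURN rc;

    // Enable driver-level pooling for the whole process; this has to
    // happen before the first environment handle is allocated.
    rc = SQLSetEnvAttr(SQL_NULL_HENV,
                       SQL_ATTR_CONNECTION_POOLING,
                       (SQLPOINTER)SQL_CP_ONE_PER_DRIVER,
                       SQL_IS_INTEGER);
    if (!SQL_SUCCEEDED(rc))
        return SQL_NULL_HENV;

    rc = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv);
    if (!SQL_SUCCEEDED(rc))
        return SQL_NULL_HENV;

    // Declare ODBC 3.x behavior on this environment.
    rc = SQLSetEnvAttr(henv, SQL_ATTR_ODBC_VERSION,
                       (SQLPOINTER)SQL_OV_ODBC3, 0);
    if (!SQL_SUCCEEDED(rc)) {
        SQLFreeHandle(SQL_HANDLE_ENV, henv);
        return SQL_NULL_HENV;
    }
    return henv;
}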
Questions
- Is it correct that I can use the HENV allocated above in multiple concurrent threads within a single process, as I call SQLAllocHandle to allocate db connections (HDBCs)?
- When I want to use a connection from the pool, is it correct that the typical sequence is the following (sketched in code after this list)?
- SQLAllocHandle to get connection handle (HDBC)
- SQLDriverConnect/SQLConnect to connect on that handle
- SQLExecute/SQLExecDirect + a series of SQLFetch, to use the connection
- SQLDisconnect
- SQLFreeConnect
- Is there a significant latency benefit if I save the allocated HDBC handle and re-use it across multiple SQLDriverConnect + SQLDisconnect calls? In other words, I'd skip steps 2.1 and 2.5 for each use of the connection (second sketch below). Or are steps 2.1 and 2.5 basically just malloc/free? (In which case, I don't think I care.)
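To make question 2 concrete, here's the per-request lifecycle I'm picturing. Again an untested sketch: handle_request, the connection string, and the query are placeholders I made up, and henv1 is the shared environment handle from above.

// Untested sketch of one request's lifecycle. henv1 is the shared HENV;
// the connection string and query are placeholders.
static void handle_request(SQLHENV henv1)
{
    SQLHDBC hdbc = SQL_NULL_HDBC;
    SQLHSTMT hstmt = SQL_NULL_HSTMT;
    SQLCHAR *connStr = (SQLCHAR *)
        "Driver={SQL Server};Server=.;Database=mydb;Trusted_Connection=yes;";

    // 2.1: allocate a connection handle off the shared HENV
    if (!SQL_SUCCEEDED(SQLAllocHandle(SQL_HANDLE_DBC, henv1, &hdbc)))
        return;

    // 2.2: with pooling enabled, this should hand back a pooled
    // connection when one with a matching connection string is idle
    if (SQL_SUCCEEDED(SQLDriverConnect(hdbc, NULL, connStr, SQL_NTS,
                                       NULL, 0, NULL, SQL_DRIVER_NOPROMPT)))
    {
        // 2.3: do the actual work for this request
        if (SQL_SUCCEEDED(SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt)))
        {
            if (SQL_SUCCEEDED(SQLExecDirect(hstmt,
                    (SQLCHAR *)"SELECT name FROM sys.tables", SQL_NTS)))
            {
                while (SQL_SUCCEEDED(SQLFetch(hstmt)))
                    ; // SQLGetData / bound columns would go here
            }
            SQLFreeHandle(SQL_HANDLE_STMT, hstmt);
        }
        // 2.4: returns the connection to the pool rather than closing it
        SQLDisconnect(hdbc);
    }
    // 2.5: free the handle; SQLFreeConnect is the ODBC 2.x spelling,
    // SQLFreeHandle(SQL_HANDLE_DBC, ...) the 3.x equivalent
    SQLFreeHandle(SQL_HANDLE_DBC, hdbc);
}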
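And for question 3, the variant I'm describing would keep the HDBC around and only pay for connect/disconnect per request. Also a sketch: the one-HDBC-per-thread scheme via _Thread_local (which needs C11) is just my assumption for avoiding handle sharing between concurrent requests.

// Sketch of question 3: step 2.1 happens once per thread, step 2.5
// never happens on the hot path; only connect/disconnect are per use.
static _Thread_local SQLHDBC tls_hdbc = SQL_NULL_HDBC;

static int with_pooled_connection(SQLHENV henv1, SQLCHAR *connStr)
{
    if (tls_hdbc == SQL_NULL_HDBC &&
        !SQL_SUCCEEDED(SQLAllocHandle(SQL_HANDLE_DBC, henv1, &tls_hdbc)))
        return -1;

    if (!SQL_SUCCEEDED(SQLDriverConnect(tls_hdbc, NULL, connStr, SQL_NTS,
                                        NULL, 0, NULL, SQL_DRIVER_NOPROMPT)))
        return -1;

    // ... statement work as in the previous sketch ...

    SQLDisconnect(tls_hdbc); // back to the pool; the handle is kept
    return 0;
}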
In this particular scenario, the C app will likely be the only application accessing SQL Server from this box. But it's going to run within an IIS environment, which means it will potentially be multi-process, with each process multi-threaded.
I'll be getting and using that connection within the scope of an HTTP request, so I'll want it to be as fast, efficient, and scalable as possible.