I'm trying to select a column from a single table (no joins) and I need the count of the number of rows, ideally before I begin retrieving the rows. I have come to two approaches that provide the information I need.

Approach 1:

SELECT COUNT( my_table.my_col ) AS row_count
  FROM my_table
 WHERE my_table.foo = 'bar'

Then

SELECT my_table.my_col
  FROM my_table
 WHERE my_table.foo = 'bar'

Or Approach 2

SELECT my_table.my_col, ( SELECT COUNT ( my_table.my_col )
                            FROM my_table
                           WHERE my_table.foo = 'bar' ) AS row_count
  FROM my_table
 WHERE my_table.foo = 'bar'

I am doing this because my SQL driver (SQL Native Client 9.0) does not allow me to use SQLRowCount on a SELECT statement but I need to know the number of rows in my result in order to allocate an array before assigning information to it. The use of a dynamically allocated container is, unfortunately, not an option in this area of my program.

I am concerned that the following scenario might occur:

  • SELECT for count occurs
  • Another instruction occurs, adding or removing a row
  • SELECT for data occurs and suddenly the array is the wrong size.
    - In the worst case, this will attempt to write data beyond the array's limits and crash my program.

Does Approach 2 prevent this issue?

Also, will one of the two approaches be faster? If so, which?

Finally, is there a better approach that I should consider (perhaps a way to instruct the driver to return the number of rows in a SELECT result using SQLRowCount?)

For those that asked, I am using Native C++ with the aforementioned SQL driver (provided by Microsoft.)
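Regardless of which approach is chosen, the worst case above (writing past the array's limits) can be prevented at fetch time in the C++ layer: store at most as many rows as were allocated and stop. A minimal sketch, with a std::vector standing in for the driver's SQLFetch loop; `fill_fixed` and everything else here is a hypothetical illustration, not part of the ODBC API:

```cpp
#include <cstddef>
#include <vector>

// fill_fixed copies rows (standing in for successive SQLFetch results) into a
// caller-allocated buffer of fixed capacity, refusing to write past its end.
// Returns the number of rows actually stored, so truncation is detectable by
// comparing the return value against the expected count.
std::size_t fill_fixed(const std::vector<int>& rows, int* dest, std::size_t capacity) {
    std::size_t stored = 0;
    for (int value : rows) {
        if (stored == capacity) break;  // more rows arrived than the COUNT promised
        dest[stored++] = value;
    }
    return stored;
}
```

If the row count grew between the count query and the data query, this guard turns a buffer overrun into a truncation the caller can detect and handle.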

A: 

Why don't you put your results into a vector? That way you don't have to know the size beforehand.

jonnii
I should have mentioned that your solution occurred to me, but I do not like the idea of copying my information from the database to a vector, getting the row count, then copying everything in the vector into an array. I am not able to change the use of a simple array in this case.
antik
The result set from a database query could be huge - it may not even fit into memory - so it is inadvisable to force a result set into memory before you know whether it is going to fit.
Burly
If the result set is so huge you should probably be paging it anyway.
jonnii
The point is, you don't know how large the result set is yet. It could be huge or it could be empty. There are many cases where knowing the size will change how you handle the results (e.g. how much memory you allocate client side, whether you do in-memory or paged handling, etc.).
Burly
You only don't know the size of the result set if you have no business knowledge of the system or knowledge of the data to begin with. I would hope that the poster has some idea of what the data is like. Admittedly, it's dangerous to tie yourself into a limited solution like this though.
Tom H.
Don't confuse the /rowset/ with the /result set/ here. You should know the relative size of the /rowset/ (i.e. varchar(30), int, blob) but you can't expect to know the size of the /result set/ (i.e. the number of rows in the result of the query).
Burly
For example, how big of an array do you need to hold a select of all the ids and titles of every question in StackOverflow today? Next week? Next year? Say an id is 4 bytes and a title is 300 bytes. 27K rows will take about 7.8M. 1M rows will take about 300M. The number of results changes over time.
Burly
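Burly's arithmetic can be checked mechanically; note that the 4-byte id and 300-byte title are his assumed sizes, not measured values:

```cpp
#include <cstdint>

// Rough lower bound on memory for the hypothetical id+title result set.
std::uint64_t bytes_needed(std::uint64_t rows) {
    const std::uint64_t id_bytes = 4;      // assumed: 4-byte integer id
    const std::uint64_t title_bytes = 300; // assumed: ~300-byte title
    return rows * (id_bytes + title_bytes);
}
// 27,000 rows -> 8,208,000 bytes (~7.8 MiB); 1,000,000 rows -> 304,000,000 bytes (~300 MB).
```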
Again, it depends on the situation. It's very possible to have a situation where you have a very good idea of about how many rows you will get back. What if the array was to hold countries in the world? That number varies very slightly year from year. We don't know the specifics of this situation.
Tom H.
Even so, the underlying storage mechanism for a vector is (normally) an array which doubles in size when it runs out of space. Converting a Vector<T> to a T[] shouldn't be that big a deal. I doubt the performance difference of using a vector would be that great.
jonnii
For things that would fit into memory, if you can't use a dynamic array (like he stated) and you don't know the /exact/ size, then you can't safely write any code to handle the incoming result set as a single chunk. If you don't even know whether it will fit into memory, Vector<T> vs T[] is moot.
Burly
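On jonnii's last point: since std::vector's storage is guaranteed contiguous, the grown vector can be handed directly to array-only code via data()/size(), so no second copy is required. A sketch, where `sum_array` stands in for a hypothetical legacy API that only accepts a plain array:

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// Stand-in for existing code that only accepts a raw pointer and a length.
int sum_array(const int* arr, std::size_t n) {
    return std::accumulate(arr, arr + n, 0);
}

int demo() {
    std::vector<int> rows;  // grows as rows are fetched, no count needed up front
    for (int v : {10, 20, 12}) {
        rows.push_back(v);
    }
    // Contiguous storage: pass the vector's buffer as a plain array.
    return sum_array(rows.data(), rows.size());
}
```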
+1  A: 

Here are some ideas:

  • Go with Approach #1 and resize the array to hold additional results or use a type that automatically resizes as necessary (you don't mention what language you are using so I can't be more specific).
  • You could execute both statements in Approach #1 within a transaction to guarantee the counts are the same both times if your database supports this.
  • I'm not sure what you are doing with the data but if it is possible to process the results without storing all of them first this might be the best method.
Robert Gamble
A: 

You might want to think about a better pattern for dealing with data of this type.

No self-respecting SQL driver will tell you how many rows your query will return before returning the rows, because the answer might change (unless you use a transaction, which creates problems of its own.)

The number of rows won't change - google for ACID and SQL.

le dorfier
Good info on the ACID, not quite on the "self-respecting" comment. Many SQL drivers execute the query server side but don't return the entire result set in the same routine call (i.e. first call SQLExecute then SQLFetch to get the results). This is often hidden from the end-user (e.g. .NET Dataset)
Burly
I believe the isolation principle within the ACID concept addresses my concerns about approach #2 sufficiently. If I can count on those results to be unaffected by queries on the database by other users, I am willing to use that approach. Thank you.
antik
+1  A: 

If you are really concerned that your row count will change between the select count and the select statement, why not select your rows into a temp table first? That way, you know you will be in sync.

BoltBait
+3  A: 

Approach 2 will always return a count that matches your result set.

I suggest you link the sub-query to your outer query though, to guarantee that the condition on your count matches the condition on the dataset.

SELECT 
  mt.my_row,
 (SELECT COUNT(mt2.my_row) FROM my_table mt2 WHERE mt2.foo = mt.foo) as cnt
FROM my_table mt
WHERE mt.foo = 'bar';
JosephStyons
That might make it a correlated subquery, which means it'll probably execute the subquery for each row of the result set. A non-correlated subquery may be optimized so it only needs to be run once.
Bill Karwin
Very interesting; I didn't know that. In that case, I'd suggest using a parameter shared by the main query and the subquery.
JosephStyons
+3  A: 

If you're concerned that the number of rows that meet the condition may change in the few milliseconds between execution of the query and retrieval of the results, you could/should execute the queries inside a transaction:

BEGIN TRAN bogus

SELECT COUNT( my_table.my_col ) AS row_count
FROM my_table
WHERE my_table.foo = 'bar'

SELECT my_table.my_col
FROM my_table
WHERE my_table.foo = 'bar'
ROLLBACK TRAN bogus

This would return the correct values, always.

Furthermore, if you're using SQL Server, you can use @@ROWCOUNT to get the number of rows affected by the last statement, and redirect the output of the real query to a temp table or table variable, so you can return everything altogether, with no need for a transaction:

DECLARE @dummy INT

SELECT my_table.my_col
INTO #temp_table
FROM my_table
WHERE my_table.foo = 'bar'

SET @dummy=@@ROWCOUNT
SELECT @dummy, * FROM #temp_table
Joe Pineda
+6  A: 

There are only two ways to be 100% certain that the COUNT(*) and the actual query will give consistent results:

  • Combine the COUNT(*) with the query, as in your Approach 2. I recommend the form you show in your example, not the correlated subquery form shown in the comment from kogus.
  • Use two queries, as in your Approach 1, after starting a transaction in SNAPSHOT or SERIALIZABLE isolation level.

Using one of those isolation levels is important because any other isolation level allows new rows created by other clients to become visible in your current transaction. Read the MSDN documentation on SET TRANSACTION ISOLATION LEVEL for more details.

Bill Karwin
Without my asking, your first bullet addressed another curiosity I had: obviously, I would prefer that the count query not be executed repeatedly if it can be optimized out.
antik
Right; I'm not an expert on the MS SQL Server optimizer, but I'd be surprised if it could optimize out that kind of correlated subquery.
Bill Karwin
+2  A: 

If you're using SQL Server, after your query you can select the @@ROWCOUNT function (or, if your result set might have more than 2 billion rows, use the ROWCOUNT_BIG() function). This will return the number of rows selected by the previous statement or the number of rows affected by an insert/update/delete statement.

SELECT my_table.my_col
  FROM my_table
 WHERE my_table.foo = 'bar'

SELECT @@Rowcount

Or if you want the row count included in the result set, similar to Approach #2, you can use the OVER clause (see http://msdn.microsoft.com/en-us/library/ms189461.aspx).

SELECT my_table.my_col,
    count(*) OVER(PARTITION BY my_table.foo) AS 'Count'
  FROM my_table
 WHERE my_table.foo = 'bar'

Using the OVER clause will have much better performance than using a subquery to get the row count. Using @@ROWCOUNT will have the best performance because there won't be any query cost for the SELECT @@ROWCOUNT statement.

Update in response to comment: The example I gave would give the # of rows in the partition - defined in this case by "PARTITION BY my_table.foo". The value of the column in each row is the # of rows with the same value of my_table.foo. Since your example query had the clause "WHERE my_table.foo = 'bar'", all rows in the result set will have the same value of my_table.foo, and therefore the value in the column will be the same for all rows and equal (in this case) to the # of rows in the query.

Here is a better/simpler example of how to include a column in each row that is the total # of rows in the result set. Simply remove the optional PARTITION BY clause.

SELECT my_table.my_col, count(*) OVER() AS 'Count'
  FROM my_table
 WHERE my_table.foo = 'bar'
Adam Porad
I would prefer to have the result in my result set. However, it does not appear that using OVER as you've described works when I try to run your query on my table in SQL.
antik
A: 
IF (@@ROWCOUNT > 0)
BEGIN
SELECT my_table.my_col
  FROM my_table
 WHERE my_table.foo = 'bar'
END
Deepfreezed