ASP.NET and SQL Server: I have queries that select a subset of rows, and I need the COUNT(*) of each result frequently.
Of course I could run a SELECT COUNT(*) for each of these queries on every round trip, but that will soon become too slow.
How do you make it really fast?
Are you experiencing a problem that can't be solved by adding another index to your table? A COUNT(*) backed by a suitable index is usually O(log n) in the total number of rows, and O(n) in the number of rows matched.
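To see what an index buys you, here is a small sketch. It uses SQLite via Python purely so the behaviour is easy to try out (the table and index names are made up); the same principle, an index that covers the WHERE clause, applies to SQL Server.

```python
import sqlite3

# In-memory database purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emails (id INTEGER PRIMARY KEY, folder INTEGER)")
conn.executemany("INSERT INTO emails (folder) VALUES (?)",
                 [(i % 10,) for i in range(1000)])
conn.execute("CREATE INDEX idx_emails_folder ON emails (folder)")

# The engine can answer this count entirely from the index
# (a "covering index" scan) without touching the base table rows.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM emails WHERE folder = ?", (3,)
).fetchall()
print(plan)

count = conn.execute(
    "SELECT COUNT(*) FROM emails WHERE folder = ?", (3,)
).fetchone()[0]
print(count)  # 100
```

The query plan reports a covering-index search rather than a full table scan, which is what keeps the count cheap as the table grows.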
Edit: What I mean is (in case I misunderstood your question)
Given this structure:
CREATE TABLE emails (
    id INT,
    -- ... other fields
)
CREATE TABLE filters (
    filter_id INT,
    filter_expression NVARCHAR(MAX) -- or whatever...
)
Create the table
CREATE TABLE email_filter_matches (
    filter INT,
    email INT,
    CONSTRAINT pk_email_filter_matches PRIMARY KEY (filter, email)
)
The data in this table would have to be updated every time a filter is updated, or when a new email is received.
Then, a query like
SELECT COUNT(*) FROM email_filter_matches WHERE filter = @filter_id
should be O(log n) with regard to the total number of filter matches, and O(n) with regard to the number of matches for this particular filter. Since your example shows only a small number of matches per filter (which seems realistic for email filters), this could very well be OK.
If you really want to, of course you could create a trigger on the email_filter_matches table to keep a cached value in the filters table in sync, but that can be done the day you hit performance issues. It's not trivial to get these kinds of things right in concurrent systems.
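If you do go the trigger route, the shape of it is roughly the following, sketched once more in SQLite via Python. The match_count column and trigger names are my own invention, and note that T-SQL triggers fire per statement and use the inserted/deleted pseudo-tables rather than NEW/OLD, so the real SQL Server version needs set-based updates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE filters (
        filter_id   INTEGER PRIMARY KEY,
        match_count INTEGER NOT NULL DEFAULT 0  -- cached COUNT(*)
    );
    CREATE TABLE email_filter_matches (
        filter INTEGER,
        email  INTEGER,
        PRIMARY KEY (filter, email)
    );

    -- Keep the cached count in sync on every insert and delete.
    CREATE TRIGGER trg_match_ins AFTER INSERT ON email_filter_matches
    BEGIN
        UPDATE filters SET match_count = match_count + 1
        WHERE filter_id = NEW.filter;
    END;
    CREATE TRIGGER trg_match_del AFTER DELETE ON email_filter_matches
    BEGIN
        UPDATE filters SET match_count = match_count - 1
        WHERE filter_id = OLD.filter;
    END;
""")

conn.execute("INSERT INTO filters (filter_id) VALUES (1)")
conn.executemany("INSERT INTO email_filter_matches VALUES (?, ?)",
                 [(1, 10), (1, 11), (1, 12)])
conn.execute("DELETE FROM email_filter_matches WHERE email = 11")

# Reading the count is now a single-row primary-key lookup.
cached = conn.execute(
    "SELECT match_count FROM filters WHERE filter_id = 1"
).fetchone()[0]
print(cached)  # 2
```

The trade-off is exactly the concurrency concern above: every writer to email_filter_matches now contends on the corresponding filters row.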
One more idea for speeding up COUNT(*) at the data tier: if only the filters change frequently, and not the underlying data itself, you might consider building a cube with Analysis Services and running your count queries against that.