Hi,
I have an SQL query (MySQL DB, using .NET's SqlClient) that returns a dataset. Is there a significant difference in runtime between the result set being 4,000 rows and it being 20?
Also, how much of the query's total time is spent opening a connection and so on, compared to the time spent filling the results table?
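To show what I'm timing, here is roughly the pattern I have in mind (a simplified sketch; the connection string, table and column names are placeholders, using the SqlClient classes mentioned above):

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Diagnostics;

    class QueryTimingSketch
    {
        static void Main()
        {
            // Placeholder connection string and query, not my real ones.
            string connectionString = "...";
            string sql = "SELECT FolderName FROM Folders";

            var table = new DataTable();
            var sw = Stopwatch.StartNew();

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                Console.WriteLine("Open took " + sw.ElapsedMilliseconds + " ms");

                sw.Restart();
                using (var adapter = new SqlDataAdapter(sql, connection))
                {
                    adapter.Fill(table);   // runs the query and loads every returned row
                }
                Console.WriteLine("Fill took " + sw.ElapsedMilliseconds
                    + " ms for " + table.Rows.Count + " rows");
            }
        }
    }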
Thanks.
Clarification (edit):
Each of the 4,000 rows represents a folder's name. I want to filter them so that users won't have to see all of them, only the ones of interest to them.
I'm not sure which is better: filtering before getting the names from the DB (it is on a different computer), which might make my query more complicated, or only filtering the view (the tree the user sees). A sketch of the two options follows below.
Those 4,000 rows might turn into 40,000, but I'm not sure that will be relevant to this issue.
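To make the two options concrete, this is roughly what I mean (the Folders table, FolderName column, and the prefix match are made up for illustration):

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Linq;

    class FolderFilterSketch
    {
        // Option 1: let the database do the filtering,
        // so only matching rows travel over the network.
        static DataTable LoadFiltered(string connectionString, string pattern)
        {
            var table = new DataTable();
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT FolderName FROM Folders WHERE FolderName LIKE @pattern",
                connection))
            {
                command.Parameters.AddWithValue("@pattern", pattern);
                using (var adapter = new SqlDataAdapter(command))
                {
                    adapter.Fill(table);
                }
            }
            return table;
        }

        // Option 2: pull all rows and only filter what the tree view displays.
        static string[] VisibleNames(DataTable allFolders, string userPrefix)
        {
            return allFolders.AsEnumerable()
                .Select(row => row.Field<string>("FolderName"))
                .Where(name => name.StartsWith(userPrefix))
                .ToArray();
        }
    }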
Thanks again.
Editing again:
The DB is on the network, but the connection is quite fast, say 100 Mbit.
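Just for scale, assuming very roughly 100 bytes per row (my guess): 4,000 rows is about 400 KB, i.e. a few tens of milliseconds of raw transfer at 100 Mbit/s, and even 40,000 rows (about 4 MB) would still be well under a second.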