I am using MySql (and PHP).
It's best if I give an example:
I am making a query, say SELECT * FROM a_table WHERE type='visible', and then returning the count. Later I query again, but with WHERE type='hidden', and so on. I may also query again to get the full contents. All of the queries are unique, but they hit the same table. I also query multiple other tables with unique queries (plus the execution time of mysql_real_escape_string() calls sprinkled through the queries).
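Roughly the pattern I mean, as a sketch (table and column names are placeholders, and $link is an already-open connection):

```php
<?php
// One round trip per count I need -- this is the pattern I'm worried about.
$type = mysql_real_escape_string('visible', $link);
$result = mysql_query("SELECT * FROM a_table WHERE type='$type'", $link);
$visibleCount = mysql_num_rows($result);

$type = mysql_real_escape_string('hidden', $link);
$result = mysql_query("SELECT * FROM a_table WHERE type='$type'", $link);
$hiddenCount = mysql_num_rows($result);

// ...and later, yet another query against the same table for the full contents.
$result = mysql_query("SELECT * FROM a_table", $link);
?>
```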
I'm not sure how many queries I perform altogether, but I am worried that if I keep adding queries, I will start running into large execution times.
My question is: are large numbers of queries noticeable in performance? If so, is there a better way to handle that data? Is performing one initial query, storing all the rows in an array (a large array could be thousands of rows), and then manipulating the array faster (and/or feasible)? At how many queries does this gray area become black and white?
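A minimal sketch of the "one big query, then work on a PHP array" idea I'm asking about (assuming the table fits comfortably in memory; names are placeholders):

```php
<?php
// Pull every row once, then do the counting/filtering in PHP.
$result = mysql_query("SELECT * FROM a_table", $link);

$rows = array();
while ($row = mysql_fetch_assoc($result)) {
    $rows[] = $row;
}

// Count rows per type without touching the database again.
$counts = array();
foreach ($rows as $row) {
    $t = $row['type'];
    $counts[$t] = isset($counts[$t]) ? $counts[$t] + 1 : 1;
}
?>
```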
Results from 'optimizing my queries'.
These are execution times in seconds:
- No optimization: .358 s
- Return COUNT(*): .321 s
- Reducing my column selection: .266 s
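For the record, a hedged sketch of what those two changes looked like (the column names are placeholders, not my real schema):

```php
<?php
// Before: pulled whole rows just to count them.
$result = mysql_query("SELECT * FROM a_table WHERE type='visible'", $link);
$count  = mysql_num_rows($result);

// After: let MySQL do the counting and return a single value.
$result = mysql_query("SELECT COUNT(*) FROM a_table WHERE type='visible'", $link);
$count  = mysql_result($result, 0);

// And when rows are actually needed, select only the columns in use
// (id and title stand in for my real columns).
$result = mysql_query("SELECT id, title FROM a_table WHERE type='visible'", $link);
?>
```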
The biggest gain, though, came from not opening a connection for every query. I was idiotically opening and closing a db connection on each poll. By changing it to one continuous connection, I got execution time down to .085 s!
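Roughly what the fix looked like (connection details are placeholders):

```php
<?php
// Old habit: connect and disconnect inside every polling function.
// New approach: open one connection up front and reuse it everywhere.
$link = mysql_connect('localhost', 'db_user', 'db_pass');
mysql_select_db('my_database', $link);

// ...run all of the page's queries against the same $link...
$visible = mysql_result(
    mysql_query("SELECT COUNT(*) FROM a_table WHERE type='visible'", $link), 0);
$hidden  = mysql_result(
    mysql_query("SELECT COUNT(*) FROM a_table WHERE type='hidden'", $link), 0);

// Close once at the end (or let PHP close it when the script finishes).
mysql_close($link);
?>
```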
Lesson learned. Thanks for all the input!