tags:

views: 59

answers: 5

Hi,

I'm creating an application that will fetch data from an SQLite database and display it in a table.

I want the table to be updated in real time as the user makes a selection (via multiple dropdown boxes). Every time the user selects an option from one of the dropdown boxes, the application has to build a new SELECT query with a WHERE clause added, removed or changed. The table then shows the results of that query.
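(For illustration only: a minimal sketch of this kind of dynamic query building, assuming Python and the built-in sqlite3 module; the items table and its column names are placeholders, not a real schema.)

    import sqlite3

    def fetch_filtered(conn, filters):
        # Build one WHERE condition per active dropdown. Column names come
        # from the application's own dropdown definitions, never from free text.
        sql = "SELECT * FROM items"
        clauses, params = [], []
        for column, value in filters.items():
            clauses.append(f"{column} = ?")
            params.append(value)
        if clauses:
            sql += " WHERE " + " AND ".join(clauses)
        return conn.execute(sql, params).fetchall()

    conn = sqlite3.connect("app.db")
    rows = fetch_filtered(conn, {"category": "books", "year": 2009})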

My question is: in order to make the fetching process faster, should I (and can I) index every field in every table? I'm not sure if this is even possible.

I don't need to worry about the performance of INSERT, ALTER, etc., as new data will be added very rarely.

Thanks

+4  A: 

I think you should first see if the performance of the SELECT queries will actually be an issue. Indexes can take up a lot of space (sometimes even more than the actual data) so don't try to optimize prematurely (remember that you can add indexes any time you want without changing anything else).

If you do see a problem, you can try adding indexes on the fields used in the WHERE clause, starting with the fields that are queried the most.
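For example, a single index on a column that shows up in most WHERE clauses might look like this (Python with the sqlite3 module; the orders table and customer_id column are made up):

    import sqlite3

    conn = sqlite3.connect("app.db")
    # Index the field that is filtered on most often.
    conn.execute("CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders(customer_id)")
    conn.commit()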

pablochan
+1  A: 

You're after making every column searchable? Really? And the queries are too slow without indexes? Oh well, if so, and data changes really are as rare as you say, build the indexes (assuming the columns are of a type it's sensible to index, naturally). The space cost will be quite high though, and once you get to the size where the indexes can't fit in memory (along with other important things like the program and the OS), you'll be going out to disk rather a lot and everything will slow down anyway.

But don't optimize until you've measured a problem on real data. Premature optimization is the cube root of all evil.

Donal Fellows
A: 

No, adding an index on every field won't make it faster - SQLite will generally use only one index per table in a query. If a column only holds a few different values (e.g. a customer's year of birth) then using an index will be less efficient than reading through every record in the table and discarding the ones that don't match. OTOH, if the user filters by the primary key of the table, then the index will be very, very efficient.
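You can check what SQLite actually does for a given filter with EXPLAIN QUERY PLAN. A small self-contained sketch (Python with sqlite3; the people table is made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, birth_year INTEGER)")
    conn.execute("CREATE INDEX idx_people_birth_year ON people(birth_year)")

    # Shows whether the birth_year index is used for a low-selectivity filter.
    for row in conn.execute("EXPLAIN QUERY PLAN SELECT * FROM people WHERE birth_year = 1980"):
        print(row)

    # Primary-key lookup: reported as a search on the integer primary key.
    for row in conn.execute("EXPLAIN QUERY PLAN SELECT * FROM people WHERE id = 42"):
        print(row)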

Adding an index on every combination of fields will make it faster - but that's (N+1)! indexes. This is going to require a lot of storage and slow down any DML massively.

The best compromise is to:

  1. require some filtering by default
  2. build indexes matching common selection criteria (including the 'default' filtering)
  3. log selection criteria and query times to identify how it can be improved (see the sketch below)
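A minimal sketch of point 3, assuming Python with sqlite3 and the standard logging module (the file name is a placeholder):

    import logging
    import sqlite3
    import time

    logging.basicConfig(filename="query_times.log", level=logging.INFO)

    def timed_query(conn, sql, params=()):
        # Record the SQL, its parameters and the elapsed time, so the slowest
        # filter combinations can be found later and indexed accordingly.
        start = time.perf_counter()
        rows = conn.execute(sql, params).fetchall()
        logging.info("%.4fs  %s  %r", time.perf_counter() - start, sql, params)
        return rows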

C.

symcbean
There are "only" 2^N-1 possible combinations of fields.
dan04
No - with 2 fields the possible indexes are (a), (b), (ab), (ba). With 3: (a), (b), (c), (ab), (ba), (ac), (ca), (bc), (cb), (abc), (acb), (bac), (bca), (cab), (cba). That count grows factorially - certainly a LOT more than 2^N-1. Or are you suggesting that there's a version of SQLite supporting skip-scan operations?
symcbean
+1  A: 

My question is, in order to make the fetching process faster should I/can I index every field in every table? I'm not sure if this is even possible.

Yes, it's possible. Whether you should depends on how much disk space you have; indexes can be huge.
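If you do go that route, here is a rough sketch that indexes every column of every table by reading the schema itself (Python with sqlite3; the index-naming scheme is simplistic and assumes plain table and column names):

    import sqlite3

    conn = sqlite3.connect("app.db")
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%'")]

    for table in tables:
        # PRAGMA table_info lists every column of the table.
        for _, column, *_ in conn.execute(f"PRAGMA table_info({table})"):
            conn.execute(
                f'CREATE INDEX IF NOT EXISTS idx_{table}_{column} ON "{table}"("{column}")')

    conn.commit()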

dan04
A: 

The best possible way to implement this is to pull all the data down and fill the table when it loads. Then just add filters to the table columns and have your dropdowns drive those filters instead of going to the database each time.
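A rough sketch of that approach, assuming Python with sqlite3 (the items table is a placeholder, and the UI side is omitted since it depends on your toolkit):

    import sqlite3

    conn = sqlite3.connect("app.db")
    conn.row_factory = sqlite3.Row

    # One query at startup; after this, dropdown changes never touch the database.
    all_rows = conn.execute("SELECT * FROM items").fetchall()

    def filter_rows(filters):
        # Re-filter the cached rows each time a dropdown selection changes.
        return [row for row in all_rows
                if all(row[column] == value for column, value in filters.items())]

    visible = filter_rows({"category": "books"})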

Scott