views: 923

answers: 11

As a follow-up to http://stackoverflow.com/questions/105400/what-are-indexes-and-how-can-i-use-them-to-optimize-queries-in-my-database, where I am attempting to learn about indexes: what columns are good index candidates, specifically for an MSSQL database?

After some googling, everything I have read suggests that columns which are generally increasing and unique make good indexes (things like MySQL's auto_increment). I understand this, but I am using MSSQL with GUIDs for primary keys, so it seems that indexes would not benefit GUID columns...

Thanks.

+1  A: 

In general (I don't use mssql so can't comment specifically), primary keys make good indexes. They are unique and must have a value specified. (Also, primary keys make such good indexes that they normally have an index created automatically.)

An index is effectively a copy of the column which has been sorted to allow binary search (which is much faster than linear search). Database systems may use various tricks to speed up search even more, particularly if the data is more complex than a simple number.

My suggestion would be to start with no indexes and profile your queries. If a particular query (such as searching for people by surname, for example) is run very often, try creating an index over the relevant attributes and profile again. If there is a noticeable speed-up on queries and a negligible slow-down on insertions and updates, keep the index.
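
In SQL Server, that experiment might look something like the following sketch (the People table and its columns are made up for illustration):

    -- Measure the query, add the index, then measure again and compare.
    SET STATISTICS TIME ON;

    SELECT PersonId, Surname, Forename
    FROM dbo.People
    WHERE Surname = 'Smith';              -- without an index this is likely a full scan

    CREATE NONCLUSTERED INDEX IX_People_Surname
        ON dbo.People (Surname);

    SELECT PersonId, Surname, Forename
    FROM dbo.People
    WHERE Surname = 'Smith';              -- should now use a seek on IX_People_Surname

    SET STATISTICS TIME OFF;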

(Apologies if I'm repeating stuff mentioned in your other question, I hadn't come across it previously.)

Zooba
A: 

A GUID column is not the best candidate for indexing. Indexes are best suited to columns with a data type that can be given some meaningful order, i.e. sorted (integer, date, etc.).

It does not matter if the data in a column is generally increasing. If you create an index on the column, the index will create its own data structure that simply references the actual items in your table without concern for stored order (a non-clustered index). Then, for example, a binary search can be performed over your index data structure to provide fast retrieval.

It is also possible to create a "clustered index" that will physically reorder your data. However you can only have one of these per table, whereas you can have multiple non-clustered indexes.
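
As a rough T-SQL sketch (the Orders table and index names here are invented, and it assumes the table does not already have a clustered index, e.g. from a clustered primary key):

    -- A table can have at most one clustered index: it defines the physical row order.
    CREATE CLUSTERED INDEX CIX_Orders_OrderDate
        ON dbo.Orders (OrderDate);

    -- It can have many non-clustered indexes: separate structures that point back to the rows.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
        ON dbo.Orders (CustomerId);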

Ash
Well, that's not totally accurate. You can easily create a regular, non-clustered index on a GUID column - why not? The GUID has a big drawback if you use it as the clustering key (e.g. for the CLUSTERED INDEX) - then it's a disaster to use.
marc_s
A: 

It should be even faster if you are using a GUID. Suppose you have the records

  1. 100
  2. 200
  3. 3000
  4. ....

If you have an index, a binary search can find the physical location of the record you are looking for in O(log n) time, instead of searching sequentially in O(n) time. Without the index you don't know where a given record sits in your table.
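
In SQL Server terms that difference shows up in the query plan as an Index Seek versus a Table Scan; a small sketch with a hypothetical table:

    CREATE TABLE dbo.Payments (PaymentId INT PRIMARY KEY, Amount INT);
    CREATE NONCLUSTERED INDEX IX_Payments_Amount ON dbo.Payments (Amount);

    SET SHOWPLAN_TEXT ON;    -- show the estimated plan instead of running the query
    GO
    -- With IX_Payments_Amount this should appear as an Index Seek (roughly O(log n));
    -- drop the index and the same query falls back to a Table Scan (O(n)).
    SELECT PaymentId FROM dbo.Payments WHERE Amount = 200;
    GO
    SET SHOWPLAN_TEXT OFF;
    GO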

Milhous
+2  A: 

It really depends on your queries. For example, if you almost only write to a table then it is best not to have any indexes; they just slow down the writes and never get used. Any column you are using to join with another table is a good candidate for an index.

Also, read about the Missing Indexes feature. It monitors the actual queries being run against your database and can tell you what indexes would have improved the performance.
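
That information is exposed through the missing-index DMVs in SQL Server 2005 and later; a query along these lines (just a sketch, not a tuned script) lists the suggestions:

    -- Columns the optimizer wished it had indexes on, with a rough measure of impact.
    SELECT d.statement           AS table_name,
           d.equality_columns,
           d.inequality_columns,
           d.included_columns,
           s.user_seeks,
           s.avg_user_impact     -- estimated % improvement if the index existed
    FROM sys.dm_db_missing_index_details d
    JOIN sys.dm_db_missing_index_groups g ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
    ORDER BY s.avg_user_impact DESC;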

jwanagel
+3  A: 

Some folks answered a similar question here: http://stackoverflow.com/questions/79241/how-do-you-know-what-a-good-index-is

Basically, it really depends on how you will be querying your data. You want an index that quickly identifies a small subset of your dataset that is relevant to a query. If you never query by datestamp, you don't need an index on it, even if it's mostly unique. If all you do is get events that happened in a certain date range, you definitely want one. In most cases, an index on gender is pointless -- but if all you do is get stats about all males, and separately, about all females, it might be worth your while to create one. Figure out what your query patterns will be and which parameter narrows the search space the most -- that's your best index.
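
Taking the date-range example, the sketch below assumes a hypothetical Events table:

    -- Supports "events in a date range" queries with a range seek on the index.
    CREATE NONCLUSTERED INDEX IX_Events_OccurredAt
        ON dbo.Events (OccurredAt);

    SELECT EventId, OccurredAt
    FROM dbo.Events
    WHERE OccurredAt >= '2009-01-01'
      AND OccurredAt <  '2009-02-01';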

Also consider the kind of index you make -- B-trees are good for most things and allow range queries, but hash indexes get you straight to the point (but don't allow ranges). Other types of indexes have other pros and cons.

Good luck!

SquareCog
A: 

The ol' rule of thumb was to index columns that are used a lot in WHERE, ORDER BY, and GROUP BY clauses, or any that seem to be used frequently in joins. Keep in mind I'm referring to indexes in general, NOT the primary key.

Not to give a 'vanilla-ish' answer, but it truly depends on how you are accessing the data.
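
For instance, a single composite index can serve both the filter and the sort (the Orders table and columns below are assumed for illustration):

    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
        ON dbo.Orders (CustomerId, OrderDate);

    -- The index supports the WHERE clause and returns rows already ordered by OrderDate.
    SELECT OrderId, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerId = 42
    ORDER BY OrderDate;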

curtisk
A: 

The best index depends on the contents of the table and what you are trying to accomplish.

Take, for example, a member database with a primary key of the member's Social Security number. We chose the SSN because the application primarily refers to the individual in this way, but you also want to create a search function that uses the member's first and last name. I would then suggest creating an index over those two fields.
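
Something along these lines, with the column names assumed for the sake of the example:

    -- Composite index to support the name-search function described above.
    CREATE NONCLUSTERED INDEX IX_Members_LastName_FirstName
        ON dbo.Members (LastName, FirstName);

    SELECT MemberId, LastName, FirstName
    FROM dbo.Members
    WHERE LastName = 'Jones'
      AND FirstName = 'Mary';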

You should first find out what data you will be querying and then make the determination of which data you need indexed.

Joseph
A: 

Your primary key should always be an index. (I'd be surprised if it weren't automatically indexed by MS SQL, in fact.) You should also index columns you SELECT or ORDER by frequently; their purpose is both quick lookup of a single value and faster sorting.

The only real danger in indexing too many columns is slowing down changes to rows in large tables, as the indexes all need updating too. If you're really not sure what to index, just time your slowest queries, look at what columns are being used most often, and index them. Then see how much faster they are.
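
On SQL Server 2005 and later, one rough way to find those slow queries (a sketch, not a tuned script) is the query-stats DMV:

    -- Statements that have cost the most elapsed time per execution so far.
    SELECT TOP (10)
           qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
           qs.execution_count,
           st.text AS query_text
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    ORDER BY avg_elapsed_time DESC;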

Eevee
+3  A: 

It all depends on what queries you expect to ask about the tables. If you ask for all rows with a certain value for column X, you will have to do a full table scan if an index can't be used.

Indexes will be useful if:

  • The column or columns have a high degree of uniqueness
  • You frequently need to look for a certain value or range of values for the column.

They will not be useful if:

  • You are selecting a large % (>10-20%) of the rows in the table
  • The additional space usage is an issue
  • You want to maximize insert performance. Every index on a table reduces insert and update performance because they must be updated each time the data changes.

Primary key columns are typically great for indexing because they are unique and are often used to lookup rows.
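
One quick way to gauge that "high degree of uniqueness" (selectivity) for a candidate column is something like this, with a made-up table and column:

    -- Closer to 1.0 means more selective, i.e. a better candidate for an index.
    SELECT COUNT(DISTINCT Status) * 1.0 / COUNT(*) AS selectivity
    FROM dbo.Orders;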

Plasmer
String searches where the value can be anywhere inside the string might prevent those indexes from being used in that case.
Arthur Thomas
+1  A: 

Any column that is going to be regularly used to extract data from the table should be indexed.

This includes:

  • foreign keys - select * from tblOrder where status_id = :v_outstanding
  • descriptive fields - select * from tblCust where Surname like 'O''Brian%'

The columns do not need to be unique. In fact you can get really good performance from an index on a binary (yes/no) column when searching for the exceptions.

select * from tblOrder where paidYN='N'
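
On SQL Server 2008 and later this pattern maps nicely onto a filtered index; a hedged sketch using the names from the example above:

    -- Index only the unpaid rows; the index stays tiny if most orders are paid.
    CREATE NONCLUSTERED INDEX IX_tblOrder_Unpaid
        ON tblOrder (paidYN)
        WHERE paidYN = 'N';

On SQL Server 2005, a plain index on paidYN still works well for finding the rare 'N' rows.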

pappes
A: 

Numeric data types which are ordered in ascending or descending order make good index keys for multiple reasons. First, numbers are generally faster to evaluate than strings (varchar, char, nvarchar, etc.). Second, if your values aren't ordered, rows and/or pages may need to be shuffled about to update your index, and that's additional overhead.

If you're using SQL Server 2005, are set on using uniqueidentifiers (GUIDs), and do NOT need them to be random, check out sequential uniqueidentifiers.
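
Concretely, that means using NEWSEQUENTIALID() as the column default; a sketch with a made-up table:

    -- Values are still GUIDs but are generated in roughly increasing order,
    -- which avoids constant page splits when the column is the clustered key.
    CREATE TABLE dbo.Documents
    (
        DocumentId UNIQUEIDENTIFIER NOT NULL
            CONSTRAINT DF_Documents_DocumentId DEFAULT NEWSEQUENTIALID(),
        CONSTRAINT PK_Documents PRIMARY KEY CLUSTERED (DocumentId)
    );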

Lastly, if you're talking about clustered indexes, you're talking about the sort of the physical data. If you have a string as your clustered index, that could get ugly.

Ian Suttle