Currently we have a database of about 10GB, and it is growing by around 3GB a month. I have often heard that one should rebuild the indexes from time to time to improve query execution times. So how often should I rebuild the indexes in this scenario?
Given the size of your DB you can easily rebuild the indexes once every month. But as the size increases, say to around 500GB, you can do it bi-monthly.
"When you need too" and "When you can"!
For example...
Test for fragmentation first and decide whether to do nothing, reorganize or rebuild. SQL Fool's script does this, for example; it has @minFragmentation and @rebuildThreshold parameters.
Update statistics daily, say, but do the indexes at weekends. What is your maintenance window?
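For instance, a plain weeknight/weekend split might look roughly like this (a minimal sketch; dbo.Orders is just a placeholder table, and sp_updatestats is only one of several ways to refresh statistics):

-- Weeknights: refresh statistics, which is comparatively cheap
EXEC sp_updatestats;

-- Weekend window: reorganize or rebuild the indexes on write-heavy tables
ALTER INDEX ALL ON dbo.Orders REORGANIZE;
-- ALTER INDEX ALL ON dbo.Orders REBUILD WITH (FILLFACTOR = 90);  -- heavier, takes locks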
There's a general consensus that you should reorganize ("defragment") your indices as soon as index fragmentation reaches more than 5% (sometimes 10%), and you should rebuild them completely when it goes beyond 30% (at least those are the numbers I've heard advocated in a lot of places).
Michelle Ufford (a.k.a. "SQL Fool") has an automated index defrag script, which uses those exact limits for deciding when to reorganize or rebuild an index.
Also see Brad McGehee's tips on rebuilding indexes, which offer some good thoughts on how to deal with index rebuilding.
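As a rough illustration of those thresholds, a query along these lines can generate the appropriate maintenance statements (a minimal sketch, not Michelle Ufford's script; the 5%/30% cut-offs come from the figures above, and you'd still want to review the output before running it):

SELECT
    'ALTER INDEX [' + i.name + '] ON [' + SCHEMA_NAME(t.schema_id) + '].[' + t.name + '] ' +
    CASE WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
         ELSE 'REORGANIZE'
    END AS maintenance_command
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
INNER JOIN sys.tables t ON ips.object_id = t.object_id
INNER JOIN sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent >= 5   -- below 5%: leave the index alone
  AND i.name IS NOT NULL                      -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC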
I use this script here (I can't remember where I got it from - whoever it was: many thanks! Really helpful stuff) to display the index fragmentation on all the indices in a given database:
SELECT
t.NAME 'Table name',
i.NAME 'Index name',
ips.index_type_desc,
ips.alloc_unit_type_desc,
ips.index_depth,
ips.index_level,
ips.avg_fragmentation_in_percent,
ips.fragment_count,
ips.avg_fragment_size_in_pages,
ips.page_count,
ips.avg_page_space_used_in_percent,
ips.record_count,
ips.ghost_record_count,
ips.Version_ghost_record_count,
ips.min_record_size_in_bytes,
ips.max_record_size_in_bytes,
ips.avg_record_size_in_bytes,
ips.forwarded_record_count
FROM
-- 'DETAILED' reads every page of every index and can be slow on large databases;
-- 'LIMITED' is cheaper and still returns avg_fragmentation_in_percent
sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') ips
INNER JOIN
sys.tables t ON ips.OBJECT_ID = t.Object_ID
INNER JOIN
sys.indexes i ON ips.index_id = i.index_id AND ips.OBJECT_ID = i.object_id
WHERE
ips.avg_fragmentation_in_percent > 0.0
ORDER BY
ips.avg_fragmentation_in_percent, ips.fragment_count
You should rebuild indexes often enough that production is not detrimentally affected by index degradation. I understand that this seems vague, but all databases are different and are used in different ways. You only need to regularly rebuild/defrag indexes that incur write operations (inserts/updates); your static or mostly read-only tables will not need much reindexing.
You will need to use "dbcc showcontig([Table])" to check the fragmentation level of your indexes, determine how often they become fragmented, and see what the fragmentation level actually is.
Use "dbcc dbreindex([Table])" to totally rebuild the indexes when they become too fragmented (above 20%-30% or so) but if you cannot find a large enough downtime window and the fragmentation level is relatively low (1%-25%), you should use “dbcc indexdefrag([Database], [Table], [Index])" to defrag the index in an "online" fassion. Also keep in mind, that you can stop the index defrag operation and start it again at a later time without losing any work.
Keeping a database and its indexes "in tune" takes a bit of monitoring to really get a feel for when and what to reindex.
Although this link is for SQL Server 2000, the underlying concepts still hold for newer versions and will give you a good understanding. Only the ways to determine fragmentation have changed significantly from SQL Server 2005 onwards; for that, others have already given you some links.
Microsoft SQL Server 2000 Index Defragmentation Best Practices