I have a very large table with a full-text indexed column. If I partition this table sensibly (for me, sensibly means by date), will it speed up queries? Or will the full-text clause still search the whole table, even if the query restricts itself to a single partition?
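For concreteness, the shape of query I mean (the table and column names here are made up):

    -- Hypothetical names, for illustration only.
    SELECT DocId, Title
    FROM dbo.Documents
    WHERE CONTAINS(Body, 'widget')      -- full-text predicate
      AND CreatedDate >= '20090101'     -- would-be partition-eliminating predicate
      AND CreatedDate <  '20090201';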

From what I've seen so far, I think the answer is that partitioning won't help, so answers proposing good alternatives are valued. E.g., create tables for each date range and maintain them easily by doing [???].

EDIT: Very large is currently 4.5 million rows, but it will grow in spurts over time (it could be 20 million tomorrow, so I want to plan for that). In terms of hardware, I'm pretty clueless. I do know the query is slow when the full-text predicate returns a large number of rows, even if the query as a whole does not. I'm not sure whether that means it's CPU-bound or I/O-bound, or even whether that's enough information to tell.
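One way I could check, assuming SQL Server 2005 or later (sys.dm_os_wait_stats exists from 2005 on), would be to look at the server's top wait types, PAGEIOLATCH_* pointing at I/O and high signal waits pointing at CPU:

    -- Aggregate waits since the last service restart.
    -- PAGEIOLATCH_* dominating suggests I/O bound; high
    -- signal_wait_time_ms relative to wait_time_ms suggests CPU pressure.
    SELECT TOP 10
           wait_type,
           wait_time_ms,
           signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT LIKE '%SLEEP%'
    ORDER BY wait_time_ms DESC;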

+1  A: 

I don't think it will.

The full-text index resides in a single full-text catalog.

This is very different from partitioning the data itself by date range onto separate filegroups, using views and CHECK constraints to direct queries to the correct partition.
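A minimal sketch of that pattern, with hypothetical table and column names; the trusted CHECK constraints are what let the optimizer skip tables whose range can't match the query's date predicate:

    -- Hypothetical monthly tables; names and columns are placeholders.
    CREATE TABLE dbo.Docs_2009_01 (
        DocId       INT           NOT NULL PRIMARY KEY,
        CreatedDate DATETIME      NOT NULL
            CHECK (CreatedDate >= '20090101' AND CreatedDate < '20090201'),
        Body        NVARCHAR(MAX) NOT NULL
    );

    CREATE TABLE dbo.Docs_2009_02 (
        DocId       INT           NOT NULL PRIMARY KEY,
        CreatedDate DATETIME      NOT NULL
            CHECK (CreatedDate >= '20090201' AND CreatedDate < '20090301'),
        Body        NVARCHAR(MAX) NOT NULL
    );
    GO

    -- The view presents the monthly tables as one table; the optimizer
    -- can eliminate tables whose CHECK range can't match a date predicate.
    CREATE VIEW dbo.Docs AS
        SELECT DocId, CreatedDate, Body FROM dbo.Docs_2009_01
        UNION ALL
        SELECT DocId, CreatedDate, Body FROM dbo.Docs_2009_02;

Note that CONTAINS can't be issued against a plain view like this: each base table would need its own full-text index, and full-text queries would have to target the right monthly table directly. That's exactly the maintenance burden the question is trying to avoid.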

My suggestion would be to make sure your full-text catalog and index are on their own LUN/disk set.
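A sketch of what that could look like on SQL Server 2005 (IN PATH is ignored from SQL Server 2008 onward, where catalogs are virtual objects); the path, catalog, table, and index names below are placeholders:

    -- 'F:\' standing in for the dedicated LUN/disk set.
    CREATE FULLTEXT CATALOG DocsCatalog
        IN PATH 'F:\FullTextCatalogs'
        AS DEFAULT;
    GO

    -- KEY INDEX must name an existing single-column, unique,
    -- non-nullable index on the table (here, its primary key).
    CREATE FULLTEXT INDEX ON dbo.Documents (Body)
        KEY INDEX PK_Documents
        ON DocsCatalog;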

gbn
A: 

gbn's right - it won't help.

Usually my recommendation is to avoid changing your schema to solve a hardware problem. Follow best practices for FTS setup and you can scale really well. If you can clarify what you mean by a "very large table" and what kind of hardware it's on, we can probably give better answers. For example, is it a 1-million-row table on a 2-CPU, 16 GB RAM box with 6 drives in a slow RAID 5, or a 10-million-row table on a 4-CPU, 64 GB box with a 100-drive SAN in RAID 10?

Brent Ozar