I have a table with 158 columns in SQL Server 2005. Are there any disadvantages to keeping so many columns?
Also, since I have to keep that many columns, how can I improve performance - for example with stored procedures or indexes?
There is nothing inherently wrong with wide tables; the main case for normalization is database size, where lots of null columns take up a lot of space.
A lot will depend on how many rows are stored in that table. 158 columns does seem rather large.
As you're asking about improving performance, it sounds like there is a lot of data in that table, so I would revisit the design to see if you can normalise it.
If you still want to place more indexes on that table, then identify whether there is a standard SELECT statement that's always being called, and if so, see what field(s) are being used in its WHERE clause. (I expect you at least have a primary key on that table?)
Targeting indexes on fields in the WHERE clause is normally a good start.
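As a rough sketch (the table and column names here are invented, not from the question), an index matching a frequently run WHERE clause might look like this:

    -- Hypothetical example: the query always filters on CustomerID and OrderDate, e.g.
    -- SELECT OrderID, Total FROM dbo.Orders WHERE CustomerID = @CustomerID AND OrderDate >= @From
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
        ON dbo.Orders (CustomerID, OrderDate);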
But without seeing your table structure or the SQL used to access the data, it's all guesswork.
The more columns you have, the slower your queries will be.
That's just a fact. That isn't to say you aren't justified in having many columns, and it doesn't give you carte blanche to split one entity's worth of columns across multiple tables with fewer columns each: the administrative overhead of such a solution would most probably outweigh any perceived performance gains.
My number one recommendation to you, based on my experience with abnormally wide tables (denormalized schemas of bulk-imported data), is to keep the columns as thin as possible. I had to work with a lot of crazy data and left most of the columns as VARCHAR(255). I recommend against this: although convenient for development purposes, performance would spiral out of control, especially when working with Perl. Shrinking the columns to their bare minimum (VARCHAR(18), for instance) helped immensely.
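As an illustration (the table and column names are made up), shrinking an over-wide column looks like this; note the statement fails if any existing value is longer than the new size, so check the data first:

    -- Hypothetical example: the column never holds more than 18 characters,
    -- so shrink it from the lazy VARCHAR(255) default
    ALTER TABLE dbo.ImportedData
        ALTER COLUMN AccountCode VARCHAR(18) NOT NULL;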
Stored procedures are just batches of SQL commands; they don't have any direct effect on speed, other than that regular use of certain types of stored procedures will end up using cached query plans (which is a performance boost).
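For instance, a minimal sketch with made-up names: a parameterized procedure whose plan is compiled once and reused on later calls:

    -- Hypothetical example: the plan for this procedure is cached and reused
    CREATE PROCEDURE dbo.GetOrdersByCustomer
        @CustomerID INT
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT OrderID, OrderDate, Total
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID;
    END;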
You can use indexes to speed up certain queries, but there's no hard and fast rule here. Good index design depends entirely on the type of queries you're running. Indexing will, by definition, make your writes slower; indexes exist only to make your reads faster.
The problem with having that many columns in a table is that finding rows using the clustered primary key can be expensive. If it were possible to change the schema, breaking this up into several normalized tables would be the best way to improve efficiency. I would strongly recommend this course.
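A very rough sketch of what such a split could look like (all names are invented; the right decomposition depends on your actual entities):

    -- Hypothetical vertical split: frequently read columns stay in the main table,
    -- rarely used ones move to a 1:1 companion table sharing the same key
    CREATE TABLE dbo.Product (
        ProductID INT NOT NULL PRIMARY KEY,
        Name      VARCHAR(100) NOT NULL,
        Price     MONEY NOT NULL
    );

    CREATE TABLE dbo.ProductDetail (
        ProductID   INT NOT NULL PRIMARY KEY
            REFERENCES dbo.Product (ProductID),
        Description VARCHAR(2000) NULL,
        Notes       VARCHAR(2000) NULL
    );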
If not, then you may be able to use indices to make some SELECT queries faster. If you have queries that only use a small number of the columns, adding indices on those columns could mean that the clustered index will not need to be scanned. Of course, there is always a price to pay with indices, in terms of storage space and INSERT, UPDATE and DELETE time, so this may not be a good idea for you.
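A hedged sketch of that idea (names invented): a covering index that lets a narrow query be answered without touching the wide clustered index at all. INCLUDE is available from SQL Server 2005 onwards:

    -- Hypothetical covering index for a query such as:
    -- SELECT Status, LastUpdated FROM dbo.WideTable WHERE CustomerID = @CustomerID
    CREATE NONCLUSTERED INDEX IX_WideTable_CustomerID
        ON dbo.WideTable (CustomerID)
        INCLUDE (Status, LastUpdated);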
Wide tables can be quite performant when you usually want all the fields for a particular row. Have you traced your users' usage patterns? If they're usually pulling just one or two fields from multiple rows then your performance will suffer. The main issue is when your total row size hits the 8 KB page mark. That means SQL has to hit the disk twice for every row (first page + overflow page), and that's not counting any index hits.
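One way to check how close you are (a sketch assuming a table named dbo.WideTable; the DMV exists from SQL Server 2005 onwards):

    -- Average and maximum record size per index on the table;
    -- values approaching 8060 bytes mean rows are spilling onto overflow pages
    SELECT index_id, avg_record_size_in_bytes, max_record_size_in_bytes
    FROM sys.dm_db_index_physical_stats(
        DB_ID(), OBJECT_ID('dbo.WideTable'), NULL, NULL, 'DETAILED');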
The guys at Universal Data Models will have some good ideas for refactoring your table. And Red Gate's SQL Refactor makes splitting a table heaps easier.