I have a few huge tables on a production SQL Server 2005 DB that need a schema update. This is mostly the addition of columns with default values, plus some column type changes that require a simple transformation. The whole thing can be done with a single "SELECT INTO" where the target is a table with the new schema.
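For reference, the kind of statement we are running looks roughly like this (table and column names here are illustrative, not our real schema):

```sql
-- Copy into a new-schema table in one pass:
-- add a column with a default value and convert another column's type.
SELECT
    t.Id,
    t.Payload,
    CAST(t.Amount AS DECIMAL(18, 4)) AS Amount,   -- column type change
    0 AS IsArchived                               -- new column, defaulted to 0
INTO dbo.BigTable_New
FROM dbo.BigTable AS t;
```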
Our tests so far show that even this simple operation, done entirely inside the server (not fetching or pushing any data), could take hours if not days on a table with many millions of rows.
Is there a better update strategy for such tables?
Edit 1: We are still experimenting, with no definitive conclusion yet. One complication: one of my transformations into a new table involves merging every five rows into one, and some code has to run on every such transformation. The best performance we have achieved so far would still take at least a few days to convert a 30M-row table.
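In set-based form, the five-rows-to-one merge looks roughly like the sketch below (the real transformation logic is more involved, and the schema here is made up). SQL Server 2005 supports `ROW_NUMBER()`, so rows can be numbered and then grouped in blocks of five:

```sql
-- Illustrative sketch: number the rows, then aggregate each
-- block of five consecutive rows into a single output row.
WITH Numbered AS (
    SELECT  Value,
            (ROW_NUMBER() OVER (ORDER BY Id) - 1) / 5 AS GroupNo
    FROM    dbo.BigTable
)
SELECT  GroupNo,
        SUM(Value) AS MergedValue   -- placeholder for the real merge logic
INTO    dbo.BigTable_New
FROM    Numbered
GROUP BY GroupNo;
```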
Would using SQLCLR in this case (doing the transformation with code running inside the server) give me a major speed boost?