tags:

views: 80

answers: 5

Edit: I'm running SQL Server 2008.

I have about 400,000 rows in my table. I would like to duplicate these rows until my table has 160 million rows or so. I have been using a statement like this:

INSERT INTO [DB].[dbo].[Sales]
           ([TotalCost]
           ,[SalesAmount]
           ,[ETLLoadID]
           ,[LoadDate]
           ,[UpdateDate])
SELECT [TotalCost]
      ,[SalesAmount]
      ,[ETLLoadID]
      ,[LoadDate]
      ,[UpdateDate]
  FROM [DB].[dbo].[Sales]

This process is very slow, and I have to re-issue the query a large number of times. Is there a better way to do this?

A: 

You don't state your SQL database, but most have a bulk loading tool to handle this scenario. Check the docs. If you have to do it with INSERTs, remove all indexes from the table first and reapply them after the data is INSERTed; this will generally be much faster than indexing during insertion.
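For SQL Server specifically, a rough sketch of that approach might look like the following (the index name IX_Sales_SalesAmount is only a placeholder; leave the clustered index alone, since disabling it makes the table unreadable):

-- Disable nonclustered indexes before the big insert (placeholder index name).
ALTER INDEX IX_Sales_SalesAmount ON [DB].[dbo].[Sales] DISABLE;

-- ... run the INSERT ... SELECT statements here ...

-- Rebuild everything once the data is in place.
ALTER INDEX ALL ON [DB].[dbo].[Sales] REBUILD;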

Larry Lustig
+1  A: 

Since each time you run that command it will double the size of your table, you would only need to run it about 9 times (400,000 * 2^9 = 204,800,000). Yes, it might take a while because copying that much data takes some time.
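A quick, purely illustrative way to sanity-check that arithmetic:

-- Each run doubles the row count, so nine runs give 400,000 * 2^9 rows.
SELECT 400000 * POWER(2, 9) AS RowsAfterNineRuns;  -- 204,800,000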

Greg Hewgill
+3  A: 

To do this many inserts, you will want to disable all indexes and constraints (including foreign keys) and then run a series of:

INSERT INTO mytable
SELECT fields FROM mytable

If you need to specify the ID, pick some number like 80,000,000 and include ID + 80000000 in the SELECT list. Run it as many times as necessary (no more than 10, since the row count should double each time).
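For example, a minimal sketch of that offset trick (assuming mytable has an integer ID column that is not an IDENTITY column, and that field1/field2 stand in for your real columns; the offset has to exceed the current maximum ID on each run):

INSERT INTO mytable (ID, field1, field2)
SELECT ID + 80000000, field1, field2   -- shift the copied IDs past the existing range
FROM mytable;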

Also, don't run within a transaction. The overhead of doing so over such a huge dataset will be enormous. You'll probably run out of resources (rollback segments or whatever your database uses) anyway.

Then re-enable all the constraints and indexes. This will take a long time but overall it will be quicker than adding to indexes and checking constraints on a per-row basis.
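On SQL Server, a minimal sketch of the disable/re-enable steps might be (re-validating WITH CHECK is the slow part, but it leaves the constraints trusted):

-- Before the inserts: stop checking foreign keys and check constraints.
ALTER TABLE mytable NOCHECK CONSTRAINT ALL;

-- ... run the INSERT ... SELECT doubling here ...

-- Afterwards: re-validate constraints and rebuild the indexes.
ALTER TABLE mytable WITH CHECK CHECK CONSTRAINT ALL;
ALTER INDEX ALL ON mytable REBUILD;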

cletus
+1  A: 

The speed of the insert will depend on a number of things...the physical disk speed, indexes, etc. I would recommend removing all indexes from the table and adding them back when you're done. If the table is heavily indexed then that should help quite a bit.

You should be able to repeatedly run that query in a loop until the desired number of rows is achieved. Every time you run it you'll double the data, so you'll end up with:

400,000
800,000
1,600,000
3,200,000
6,400,000
12,800,000
25,600,000
51,200,000
102,400,000
204,800,000

After nine executions.
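To automate those nine runs, a loop along these lines should work (a rough sketch; the 160 million target is just a variable, and the final doubling will overshoot it slightly, as in the list above):

DECLARE @target BIGINT = 160000000;

WHILE (SELECT COUNT_BIG(*) FROM [DB].[dbo].[Sales]) < @target
BEGIN
    INSERT INTO [DB].[dbo].[Sales]
               ([TotalCost], [SalesAmount], [ETLLoadID], [LoadDate], [UpdateDate])
    SELECT [TotalCost], [SalesAmount], [ETLLoadID], [LoadDate], [UpdateDate]
    FROM [DB].[dbo].[Sales];
END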

Adam Robinson
Right. The problem is just that it's taking like 10 minutes for the first few iterations. I imagine the time will scale proportionally, so it really takes hours to duplicate a table of that size?
abudker
Did you try removing indexes? At some point it will *just take that long* to write that much data.
Adam Robinson
A: 

This may still take a while to run... you might want to turn off logging while you create your data.

INSERT INTO [DB].[dbo].[Sales]
           ([TotalCost], [SalesAmount], [ETLLoadID], [LoadDate], [UpdateDate])
SELECT s.[TotalCost], s.[SalesAmount], s.[ETLLoadID], s.[LoadDate], s.[UpdateDate]
FROM [DB].[dbo].[Sales] s WITH (NOLOCK)
CROSS JOIN (SELECT TOP 400 [TotalCost] FROM [DB].[dbo].[Sales] WITH (NOLOCK)) o
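Note that SQL Server never runs completely unlogged; the closest you can get is minimal logging. A rough sketch (this changes the database's recovery model, which affects your backup chain, and a plain INSERT ... SELECT only gets minimal logging under certain conditions, e.g. loading a heap WITH (TABLOCK)):

-- Keep the transaction log small during the bulk load.
ALTER DATABASE [DB] SET RECOVERY BULK_LOGGED;

-- ... run the big INSERT here ...

-- Switch back to your normal recovery model (FULL shown here) and take a full backup.
ALTER DATABASE [DB] SET RECOVERY FULL;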
Matthew Whited
What does that line of the command do?
abudker
I used it as a Cartesian join (http://en.wikipedia.org/wiki/Cartesian_product); basically it ends up with 400 times the number of records in Sales. I said TOP 400 because 400 * 400,000 is 160 million.
Matthew Whited
The simplest and cleanest method would be to just keep doubling the input, as others have suggested.
Matthew Whited