Another way we have used in the past is to create a temp table with the primary keys we want to move and use a WHILE loop. This lets you work in batches, so you avoid the overhead of one huge transaction having to roll back if you cancel it partway through.
Basically what you end up doing is an insert into tablename (...) select (...) from tablename where primarykey in (select top 10000 primarykey from temptable).
You want that top 10000 captured in a secondary result set so you can delete those keys from the temp table and they don't get processed again.
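Something like this is a minimal sketch of that pattern (sourceTable, destinationTable, primaryKey, col1, and col2 are just placeholder names, not from your schema):

create table #keysToMove (primaryKey int primary key)
insert into #keysToMove (primaryKey)
select primaryKey from sourceTable

create table #batch (primaryKey int primary key)

while exists (select 1 from #keysToMove)
begin
    -- capture the next block of keys as the secondary result set
    delete from #batch
    insert into #batch (primaryKey)
    select top 10000 primaryKey from #keysToMove order by primaryKey

    begin transaction

    -- copy just this block of rows
    insert into destinationTable (primaryKey, col1, col2)
    select s.primaryKey, s.col1, s.col2
    from sourceTable s
    join #batch b on b.primaryKey = s.primaryKey

    -- remove the processed keys so they don't get picked up again
    delete k
    from #keysToMove k
    join #batch b on b.primaryKey = k.primaryKey

    commit
end

drop table #batch
drop table #keysToMove

The order by makes the top 10000 deterministic, so the block you copy is exactly the block you then delete from the temp table.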
Yet another way would be to use cursors to reduce the number of records you process at a time.
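A bare-bones cursor version might look like this (same placeholder names; it moves one row per fetch, so each insert is its own tiny transaction, but it will be slow):

declare @key int

declare keyCursor cursor fast_forward for
    select primaryKey from sourceTable

open keyCursor
fetch next from keyCursor into @key

while @@fetch_status = 0
begin
    -- each single-row insert autocommits as its own small transaction
    insert into destinationTable (primaryKey, col1, col2)
    select primaryKey, col1, col2
    from sourceTable
    where primaryKey = @key

    fetch next from keyCursor into @key
end

close keyCursor
deallocate keyCursor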
Another method would be to do something like this in a WHILE loop.
declare @stop as int
set @stop = (select count(primaryKey) from sourceTable
             where primaryKey not in (select primaryKey from destinationTable))
while (@stop > 0)
begin
    begin transaction

    -- top 10000 keeps each transaction (and the log growth) small
    insert into destinationTable (...)
    select top 10000 (...)
    from sourceTable
    where primaryKey not in (select primaryKey from destinationTable)

    commit

    set @stop = (select count(primaryKey) from sourceTable
                 where primaryKey not in (select primaryKey from destinationTable))
end
Not the most efficient, but it would work and should allow you to keep the transaction log down. Unless you actually need the locking, also use the NOLOCK table hint when reading the source so you don't block other transactions during this large move (unless you use BCP or DTS, as they are much, much faster).
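For example, the read side of one batch could look like this (placeholder names again; NOLOCK means dirty reads, so only use it if that's acceptable):

insert into destinationTable (primaryKey, col1, col2)
select top 10000 primaryKey, col1, col2
from sourceTable with (nolock)
where primaryKey not in (select primaryKey from destinationTable with (nolock))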
Some of what has already been said is probably your best bet, though: use BCP, DTS, or some other bulk tool. If you can drop the indexes first, it will make things go a lot faster.
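As a rough sketch of the index tip, with a made-up nonclustered index name (the drop index ... on form shown here is the newer syntax; older SQL Server versions use drop index tablename.indexname):

-- drop the index before the bulk load
drop index IX_destinationTable_col1 on destinationTable

-- ... run the bulk load (BCP, DTS, or one of the loops above) ...

-- recreate it afterwards
create nonclustered index IX_destinationTable_col1 on destinationTable (col1)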