Personally, I have never heard of that kind of optimization. If the division into chunks of 10k is completely arbitrary, I think running it ten times would be less effective than running it across the whole set once: the temp tables only add overhead, and if you do it all in one go, you give the database a fair chance to get an accurate picture of what you want to do and to pick a proper execution plan based on that.
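Roughly what I mean, as a sketch (hypothetical 'orders' table with a 'processed' flag, T-SQL-style syntax, so adjust for your RDBMS):

```sql
-- Chunked approach: copy ~10k ids into a temp table, process them, repeat.
-- Each round pays for the temp table and gets a plan built on a partial view of the data.
CREATE TABLE #batch (id INT PRIMARY KEY);

INSERT INTO #batch (id)
SELECT TOP 10000 id
FROM   orders
WHERE  processed = 0;

UPDATE o
SET    o.processed = 1
FROM   orders o
JOIN   #batch b ON b.id = o.id;

DROP TABLE #batch;
-- ...and repeat until nothing is left.

-- Set-based approach: one statement, one execution plan over the whole set.
UPDATE orders
SET    processed = 1
WHERE  processed = 0;
```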
If the 10-or-so-k records are not arbitrarily selected, however, but actually fall into a few logical groups (say you have a huge 'images' table that could be divided into 'gallery photos', 'profile photos', 'cms images', 'screenshots', and so on), and your process makes that distinction at some point, then you may help the selection along by always storing those records in distinct tables. The separate tables would help the database find the interesting rows, much the way an index does. But that's rather beside the point, I guess...
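As a rough sketch of what that split could look like (invented column lists, plain SQL DDL):

```sql
-- One catch-all table: every query has to sift through all kinds of images.
CREATE TABLE images (
    id        INT PRIMARY KEY,
    kind      VARCHAR(20),    -- 'gallery', 'profile', 'cms', 'screenshot'
    owner_id  INT,
    file_path VARCHAR(255)
);

-- Split by logical group: a query that only cares about profile photos
-- never touches the other rows, much like an index narrowing the search.
CREATE TABLE gallery_photos (
    id        INT PRIMARY KEY,
    owner_id  INT,
    file_path VARCHAR(255)
);

CREATE TABLE profile_photos (
    id        INT PRIMARY KEY,
    owner_id  INT,
    file_path VARCHAR(255)
);
-- ...and so on for cms_images, screenshots, etc.
```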
If you want performance, though, make sure your statistics get refreshed every 24 hours or so, to give the database an accurate idea of what it's up against.
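The exact command depends on your engine; for example (again assuming a hypothetical 'orders' table), something like one of these from a nightly job:

```sql
-- SQL Server: refresh statistics on one table, or on every table in the database.
UPDATE STATISTICS orders;
EXEC sp_updatestats;

-- PostgreSQL / SQLite: ANALYZE recomputes the planner's statistics.
ANALYZE orders;
```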