It doesn't usually make any noticeable difference.
I suppose in the old days, running both a compiler and an assembler at the same time might have started paging, bogging the machine down or making interactive performance terrible.
These days, I imagine gcc looks like a fairly small application, never mind the assembler, and we certainly have lots of RAM, so running them concurrently is harmless and may be a bit faster.
But by the same token, the CPU is so fast that it can create that temporary assembly file and read it back without you even noticing...
Now, there are some large projects out there. You can check out a single tree that will build all of Firefox, or NetBSD, or something like that: something really big, something that includes all of X, say, as a minor subsystem component. You may or may not notice a difference when the job involves millions of lines of code spread across thousands and thousands of C files.

As I'm sure you know, people normally work on only a small part of something like this at a time. But if you are a release engineer, or working on a build server, or changing something in stdio.h, you may well want to build the whole system to see whether you broke anything. And then every drop of performance probably counts...