The best optimization you can do is to use large buffers for the copy. If that is not enough, restructure your data to be a single file instead of two files in a directory. The next step after that is faster hardware.
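To make the large-buffer point concrete, here is a minimal sketch of a copy loop using plain POSIX `read()`/`write()`. The 1 MiB buffer size and the `copy_file` name are just illustrative; tune the size for your workload and hardware:

```c
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define COPY_BUF_SIZE (1024 * 1024)  /* 1 MiB: big enough to amortize syscall and seek overhead */

int copy_file(const char *src_path, const char *dst_path)
{
    int src = open(src_path, O_RDONLY);
    if (src < 0) return -1;

    int dst = open(dst_path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (dst < 0) { close(src); return -1; }

    char *buf = malloc(COPY_BUF_SIZE);
    if (!buf) { close(src); close(dst); return -1; }

    ssize_t n;
    int rc = 0;
    while ((n = read(src, buf, COPY_BUF_SIZE)) > 0) {
        ssize_t off = 0;
        while (off < n) {              /* write() may be partial; loop until the chunk is flushed */
            ssize_t w = write(dst, buf + off, n - off);
            if (w < 0) { rc = -1; goto done; }
            off += w;
        }
    }
    if (n < 0) rc = -1;                /* read error */

done:
    free(buf);
    close(src);
    close(dst);
    return rc;
}
```

Each iteration moves a large chunk, so the per-call overhead and the back-and-forth between source and destination are paid far less often than with a small buffer.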
There are many file systems in common use on Unix/Linux, and you would need to write a custom copy algorithm for each. There is rarely a guarantee of contiguous blocks even for a single file, let alone two. Odds are also good that a hand-rolled block copy routine would bypass the file system's existing optimizations and end up less efficient than them.
Reading an entire file into memory before writing it out does more to minimize seek times than opening fewer files would, at least for files over a certain size. And not all hardware suffers from seek times in the first place; an SSD, for example, has no mechanical seek penalty.
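As a rough sketch of that approach (again plain POSIX calls; `copy_whole_file` is just a hypothetical helper name), the idea is one long sequential read followed by one long sequential write, which only makes sense when the file comfortably fits in RAM:

```c
#include <fcntl.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

int copy_whole_file(const char *src_path, const char *dst_path)
{
    int src = open(src_path, O_RDONLY);
    if (src < 0) return -1;

    struct stat st;
    if (fstat(src, &st) < 0) { close(src); return -1; }

    char *buf = malloc(st.st_size);
    if (!buf && st.st_size > 0) { close(src); return -1; }

    /* Pull the entire file into memory first... */
    ssize_t total = 0;
    while (total < st.st_size) {
        ssize_t n = read(src, buf + total, st.st_size - total);
        if (n <= 0) { free(buf); close(src); return -1; }
        total += n;
    }
    close(src);

    /* ...then stream it back out in one pass. */
    int dst = open(dst_path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (dst < 0) { free(buf); return -1; }

    ssize_t off = 0;
    while (off < total) {
        ssize_t w = write(dst, buf + off, total - off);
        if (w < 0) { free(buf); close(dst); return -1; }
        off += w;
    }

    free(buf);
    close(dst);
    return 0;
}
```

On spinning disks this keeps the head reading in one direction and then writing in one direction instead of alternating between the two files; on an SSD the gain mostly disappears, which is the point of the caveat above.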