My advice would be to get away from moving files from one environment to another and begin implementing release candidate packaging.
These packages can range from the simple, built with an archiving tool (tar, WinZip), to the more sophisticated, built with something like Wise Installer or InstallShield.
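Just to illustrate the simple end of that range, here is a rough Python sketch of packaging an entire build output into one versioned tarball; the "myapp" name, paths, and version string are made up for the example:

```python
import tarfile
from pathlib import Path

def package_release_candidate(build_dir: str, version: str, out_dir: str = "dist") -> Path:
    """Package ALL files from the build output into a single versioned tarball."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    archive_path = out / f"myapp-{version}.tar.gz"   # hypothetical naming convention
    with tarfile.open(archive_path, "w:gz") as tar:
        # Add the whole build directory so nothing gets deployed piecemeal.
        tar.add(build_dir, arcname=f"myapp-{version}")
    return archive_path

if __name__ == "__main__":
    print(package_release_candidate("build/output", "1.4.0-rc1"))
```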
The cycle would be something like the following:
- build the release candidate from a tagged release-candidate branch that contains the merged changesets ready to go through the testing gauntlet,
- package ALL of the files from the build into a tar/zip/setup.exe
- deploy to the various testing environments via the same package
- if the release candidate passes testing, that same package is used to deploy to production.
If the release candidate fails testing, it is designated as a failed baseline, the fixes are implemented, and a new release candidate is built and packaged.
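A rough sketch of that promotion flow, assuming "deploy" simply means copying the one package to an environment and unpacking it there (the environment paths and function names are my own invention):

```python
import shutil
import tarfile
from pathlib import Path

# Hypothetical environment -> deployment target mapping.
ENVIRONMENTS = {
    "test": Path("/srv/test/myapp"),
    "staging": Path("/srv/staging/myapp"),
    "production": Path("/srv/prod/myapp"),
}

def deploy(package: Path, env: str) -> None:
    """Deploy the SAME package to an environment: copy it there, then unpack it."""
    target = ENVIRONMENTS[env]
    target.mkdir(parents=True, exist_ok=True)
    staged = target / package.name
    shutil.copy2(package, staged)
    with tarfile.open(staged, "r:gz") as tar:
        tar.extractall(target)

def promote(package: Path, tests_pass) -> bool:
    """Push the candidate through each testing environment; only a clean run reaches production."""
    for env in ("test", "staging"):
        deploy(package, env)
        if not tests_pass(env):
            return False  # failed baseline: fix, then build and package a new candidate
    deploy(package, "production")
    return True
```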
While I am generally not in favor of putting built objects into a source code repository, from a convenience and control perspective the package can be placed under version control to ensure that no changes are made to it between the deployments to the various environments.
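A lighter-weight way to get the same "no changes" guarantee, without committing the binary itself, is to record a checksum of the package when it is built and verify it before each deployment. A small sketch, assuming SHA-256:

```python
import hashlib
from pathlib import Path

def sha256_of(package: Path) -> str:
    """Compute the SHA-256 digest of the packaged release candidate."""
    digest = hashlib.sha256()
    with package.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(package: Path, recorded_digest: str) -> None:
    """Refuse to deploy if the package has changed since the digest was recorded."""
    actual = sha256_of(package)
    if actual != recorded_digest:
        raise RuntimeError(f"{package} has been modified: {actual} != {recorded_digest}")
```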
The release candidate version ID should appear in the names of both the package and the associated code branch so that the relationship between them is obvious. If possible, embedding the version ID in the resource files helps to confirm that the files from the correct build are in the correct place.
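As one way to carry the version ID through both the package name and something baked into the build output (the version.json file name and the branch/tag format here are assumptions, not a standard):

```python
import json
from pathlib import Path

VERSION = "1.4.0-rc1"  # hypothetical release candidate version ID

def write_version_resource(build_dir: str) -> None:
    """Embed the version ID in the build output so deployed files identify their build."""
    resource = {"version": VERSION, "branch": f"release/{VERSION}"}  # assumed branch naming
    Path(build_dir, "version.json").write_text(json.dumps(resource, indent=2))

# The same ID drives the other names, e.g.:
#   package:    myapp-1.4.0-rc1.tar.gz
#   branch/tag: release/1.4.0-rc1
```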
My preference is to build everything and deploy everything, even if only one file changed. Building, packaging, and deploying everything each time keeps scripts and processes simple and repeatable.
Basically...build once, deploy often.