I would agree with banzaimonkey, except I would start with the crawler. Working from the webserver logs assumes that the problem images, stylesheets, etc. are actually being requested with some regularity; anything buried deep in the site or sitting on rarely visited pages could easily be missed. Crawling the site should locate them reliably.
I am by no means an expert, but I've been working on a somewhat similar problem. My solution was to use Perl and the WWW::Mechanize module to crawl entire sites and record various aspects of the pages. In my case, I wanted a list of bad links, specific forms, multimedia objects, and about five other things. I built the script to treat specific hosts as "local" (there are about 80 sites spread over a number of domains). You should be able to do the same thing in reverse: instead of marking hosts as "local", flag the development host and report any link or image that points at it. This assumes you're doing the testing AFTER you've deployed the production site, though you could probably work out a variation that checks before deployment.
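Something along these lines would get you started - a rough sketch only, where the hostnames (www.example.com for production, dev.example.com for the development server) are placeholders you'd swap for your own:

```
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;

# Placeholders - substitute your own hostnames.
my $start   = 'http://www.example.com/';
my $bad_re  = qr/dev\.example\.com/i;     # development server to flag
my $site_re = qr/www\.example\.com/i;     # host(s) treated as "local"

my $mech = WWW::Mechanize->new( autocheck => 0 );
my %seen;
my @queue = ($start);

while ( my $url = shift @queue ) {
    next if $seen{$url}++;
    $mech->get($url);
    next unless $mech->success && $mech->is_html;

    # Flag anything (link or image) that points at the dev server.
    for my $obj ( $mech->links, $mech->images ) {
        my $abs = $obj->url_abs or next;
        print "BAD: $url -> $abs\n" if "$abs" =~ $bad_re;
    }

    # Only follow links that stay on the production host.
    for my $link ( $mech->links ) {
        my $abs  = $link->url_abs or next;
        my $host = eval { $abs->host } || '';
        push @queue, "$abs" if $host =~ $site_re;
    }
}
```

It doesn't deal with robots.txt, throttling, duplicate URLs that differ only by fragment, or JavaScript, so treat it as a starting point rather than a finished tool.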
Another alternative would be to look at an existing crawler and see what its results give you. The Internet Archive has produced Heritrix, which crawls, archives, and reports on websites - probably overkill here. Something like LinkChecker could be run with verbose output and the results grepped for the development server's name / IP address. I'm sure there are plenty of other tools along these lines.
I mention these primarily because I think you want something that automates the process rather than having someone manually check each page. These tools can take a while to run since they traverse the entire site, but they give a pretty complete picture. The main things mine does not handle well are JavaScript and forms. Heritrix actually handles some JavaScript links, but it still doesn't handle forms.
That said, WWW::Mechanize and other modules can submit forms programmatically, but they need to be given specific values. In your case, if you've got a large database, you may only need to submit one or two form values to verify that images, etc. aren't being served from the development server. On the plus side, you can also check the returned content to make sure the forms are working correctly. I had an issue today with paged navigation - the page was serving the same 20 results regardless of which page was selected. Checking for that could be automated by testing for specific strings in the result sets (this is getting into the realm of test-driven development).
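For example - again just a sketch; the URL, the form field name `q`, the `page` parameter, and the expected strings are all made up, so substitute whatever your forms actually use:

```
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new();

# Submit a search form with a known value...
$mech->get('http://www.example.com/search');
$mech->submit_form(
    form_number => 1,
    fields      => { q => 'widget' },
);

# ...then inspect what came back.
print "WARNING: result page references the dev server\n"
    if $mech->content =~ /dev\.example\.com/i;
print "WARNING: expected result string missing\n"
    unless $mech->content =~ /results found/i;   # whatever "working" looks like

# The paged-navigation case: page 2 shouldn't return page 1's results.
$mech->get('http://www.example.com/search?q=widget&page=1');
my $page1 = $mech->content;
$mech->get('http://www.example.com/search?q=widget&page=2');
print "WARNING: page 2 returned the same content as page 1\n"
    if $mech->content eq $page1;
```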
One other thing - Heritrix actually creates archives; it's the basis for the Wayback Machine at the Internet Archive. If keeping multiple versions of websites is of interest to you or your organization, you might get that as a side benefit.