I have several web pages on several different sites that I want to mirror completely. This means that I will need the images, CSS, etc., and the links need to be converted. This functionality would be similar to using Firefox to "Save Page As" and selecting "Web Page, complete". I'd like to name the files and corresponding directories sensibly (e.g. myfavpage1.html, myfavpage1.dir).
I do not have access to the servers, and they are not my pages. Here is one sample link: Click Me!
A little more clarification: I have about 100 pages that I want to mirror (many from slow servers). I will be cron'ing the job on Solaris 10 and dumping the results every hour to a Samba mount for people to view (a rough sketch of the cron entry I have in mind follows the wget example below). And yes, I have obviously tried wget with several different flags, but I haven't gotten the results I'm looking for, so pointing me to the GNU wget page is not really helpful. Let me start with where I am, using a simple example.
wget --mirror -w 2 -p --html-extension --tries=3 -k -P stackperl.html "http://stackoverflow.com/tags/perl"
From this, if I had the flags correct, I should see the http://stackoverflow.com/tags/perl page in the stackperl.html file.
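For reference, my reading of the man page suggests that the closest thing to Firefox's "Web Page, complete" is a non-recursive fetch of a single page plus its requisites, something along these lines (the flags are all real wget options, but whether this combination actually behaves the way I want is exactly what I can't seem to get right; myfavpage1.dir is just a placeholder name):

wget -E -H -k -K -p -nd -P myfavpage1.dir "http://stackoverflow.com/tags/perl"

Here -p pulls in the page requisites (images, CSS, etc.), -H lets it span hosts so off-site assets come along, -k converts the links to point at the local copies, -E adds .html extensions to HTML files, -K keeps .orig backups of the files it rewrites, and -nd flattens everything into the myfavpage1.dir directory set by -P. Renaming the top-level file to myfavpage1.html would still be a separate step.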
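And for context, the hourly cron entry on the Solaris 10 box would be something like the following (the wrapper script path and the Samba mount point are placeholders; the script would just loop over the ~100 URLs and write the results under the mount):

# run every hour on the hour; output lands under the Samba mount (e.g. /mnt/pagemirror)
0 * * * * /export/home/me/bin/mirror_pages.sh > /var/tmp/mirror_pages.log 2>&1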