"must not be made available for download in any way"

is at odds with:

"The client should see a downscaled version of the image and be able to select an area of it to be viewed at full scale (100%)."
... at the point you allow all areas of the image to be viewed at full resolution, the entire image could be stitched back together, so you're effectively (if very inconveniently) making the full-size image available.

None of this helps you achieve the goal, though.
The way I'd do it would be to provide a 72dpi watermarked copy for use in selecting the area of the image to download. You could scale this to a percentage of the original if screen real estate were an issue. Have the user choose top-left and bottom-right coordinates, then use something like ImageMagick to copy that area out of the original and serve it to the user (a sketch of this is below).
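A minimal sketch of that crop step, assuming Python on the server, ImageMagick's `convert` on the PATH, and a known preview scale factor; all file names and parameters here are illustrative, not part of the original answer:

```python
import subprocess

def crop_from_original(original_path, output_path,
                       top_left, bottom_right, preview_scale):
    """Crop the user's selection out of the full-resolution original.

    top_left and bottom_right are (x, y) pixels on the downscaled preview;
    preview_scale is preview_width / original_width (e.g. 0.25).
    """
    # Map preview coordinates back onto the full-resolution image.
    x1 = int(top_left[0] / preview_scale)
    y1 = int(top_left[1] / preview_scale)
    x2 = int(bottom_right[0] / preview_scale)
    y2 = int(bottom_right[1] / preview_scale)
    width, height = x2 - x1, y2 - y1

    # ImageMagick crop geometry is WIDTHxHEIGHT+XOFFSET+YOFFSET.
    geometry = f"{width}x{height}+{x1}+{y1}"
    subprocess.run(
        ["convert", original_path, "-crop", geometry, "+repage", output_path],
        check=True,
    )

# e.g. a 25% preview where the user selected (100, 80) to (300, 240):
# crop_from_original("original.jpg", "crop.jpg", (100, 80), (300, 240), 0.25)
```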
If you need to conserve resources, you could have users download from a predefined grid, so the first time grid coordinate 14:11 is chosen, image_1411_crop.jpg gets written to the filesystem, and the next time that coordinate is selected the file already exists (see the second sketch below).
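And a sketch of that predefined-grid cache, again assuming ImageMagick and square tiles; the tile size, cache directory, and naming scheme are assumptions made up for the example:

```python
import os
import subprocess

TILE_SIZE = 512            # full-resolution pixels per grid cell (assumed)
CACHE_DIR = "tile_cache"   # hypothetical cache location

def get_tile(original_path, col, row):
    """Return the path to the cached tile for grid coord col:row,
    generating it from the original on the first request only."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    tile_path = os.path.join(CACHE_DIR, f"image_{col:02d}{row:02d}_crop.jpg")

    if not os.path.exists(tile_path):
        # First hit on this coordinate: cut the tile out of the original.
        geometry = (f"{TILE_SIZE}x{TILE_SIZE}"
                    f"+{col * TILE_SIZE}+{row * TILE_SIZE}")
        subprocess.run(
            ["convert", original_path, "-crop", geometry, "+repage", tile_path],
            check=True,
        )
    return tile_path

# The first request for 14:11 writes image_1411_crop.jpg;
# every later request just returns the existing file.
# get_tile("original.jpg", 14, 11)
```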
Edit: I've read some of your comments on the other answers...
No matter how you go about generating and server-side caching, you're going to use the same amount of bandwidth and traffic. A 300dpi JPEG is a 300dpi JPEG whether it's just been generated or has been sitting on the filesystem.

You have to work out whether you need to conserve CPU or disk space. If you've got a million gigs of images and only forty users, you can afford the CPU hit; if you've got forty gigs of images and a million users, go for the HDD.