I'm curious whether there are approaches or algorithms for downscaling an image based on the amount of detail (or entropy) it contains, such that the new size is the smallest resolution at which most of the original image's detail would be preserved.
For example, an out-of-focus or shaky photograph contains less detail (less high-frequency content) than one taken in focus from a fixed position relative to the scene. Such a lower-entropy image could be reduced significantly in size and, when scaled back up to the original resolution, would still retain most of its detail. A more detailed image, by contrast, could not be reduced as much without losing significant detail.
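To make the idea concrete, here is a minimal sketch of the round-trip test I have in mind, assuming a 2-D grayscale NumPy array: downscale by some factor, upscale back, and accept the largest factor whose reconstruction error stays below a threshold. The function names, the box/nearest-neighbour resampling, and the RMSE threshold are my own illustrative choices, not an established algorithm.

```python
import numpy as np

def box_downscale(img, factor):
    """Downscale a 2-D grayscale image by an integer factor using box averaging."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    # Group pixels into factor x factor blocks and average each block.
    return img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def nearest_upscale(img, factor):
    """Upscale by pixel repetition (nearest neighbour)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def smallest_faithful_factor(img, max_rmse=5.0, factors=(2, 4, 8)):
    """Return the largest downscale factor whose round-trip RMSE stays under max_rmse."""
    best = 1
    for f in factors:
        round_trip = nearest_upscale(box_downscale(img, f), f)
        h, w = round_trip.shape
        rmse = np.sqrt(np.mean((img[:h, :w] - round_trip) ** 2))
        if rmse <= max_rmse:
            best = f
    return best
```

A smooth gradient survives an 8x reduction under this threshold, while a noise image fails even at 2x, which matches the intuition above. The obvious drawback is cost: it resamples the image once per candidate factor.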
I certainly understand that many lossy image formats, JPEG included, do something similar, in the sense that the amount of data needed to store an image at a given resolution is roughly proportional to the entropy of the image data. But I'm curious, mostly out of personal interest, whether there might be a computationally efficient approach for scaling resolution to image content.
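One candidate for an efficient single-pass approach, sketched below under my own assumptions (square grayscale input, a 99% spectral-energy cutoff), is to estimate the image's effective bandwidth from its 2-D FFT and then apply the Nyquist criterion: if essentially all spectral energy lies within frequency radius r, roughly 2r samples per axis should suffice.

```python
import numpy as np

def adaptive_size(img, energy_frac=0.99):
    """Estimate a content-adapted side length for a square grayscale image.

    Finds the smallest spatial-frequency radius capturing `energy_frac` of the
    (DC-removed) spectral energy, then returns the Nyquist-implied sample count.
    """
    n = img.shape[0]
    spec = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(spec) ** 2
    # Radial distance of each frequency bin from DC (centre after fftshift).
    yy, xx = np.indices(power.shape)
    c = n // 2
    r = np.hypot(yy - c, xx - c).astype(int)
    # Total spectral energy at each integer radius, then its cumulative fraction.
    radial = np.bincount(r.ravel(), weights=power.ravel())
    cum = np.cumsum(radial) / radial.sum()
    r_cut = int(np.searchsorted(cum, energy_frac)) + 1
    # Keeping frequencies up to r_cut needs about 2 * r_cut samples per axis.
    return min(n, 2 * r_cut)
```

For a slowly varying sinusoid this returns a size far below the original, while for white noise the energy extends to the edge of the spectrum and the original size is kept. A single FFT is O(n^2 log n), so this avoids the repeated resampling of the trial-and-error approach.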