What is the simplest/cleanest way to rescale the intensities of a PIL Image?

Suppose that I have a 16-bit image from a 12-bit camera, so only the values 0–4095 are in use. I would like to rescale the intensities so that the entire range 0–65535 is used. What is the simplest/cleanest way to do this when the image is represented as PIL's Image type?

The best solution I have come up with so far is:

pixels = img.getdata()
img.putdata(pixels, 16)

That works, but always leaves the four least significant bits blank. Ideally, I would like to shift each value four bits to the left, then copy the four most significant bits to the four least significant bits. I don't know how to do that fast.
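For the record, one fast way to apply that bit trick outside of PIL's own API is NumPy (this is an illustrative sketch; the `rescale_12to16` helper and the idea of feeding it `getdata()` output are assumptions, not from the thread):

```python
import numpy as np

def rescale_12to16(pixels):
    # Shift each value 4 bits left, then copy its 4 most significant
    # bits into the 4 least significant bits, so 0 -> 0 and 4095 -> 65535.
    a = np.asarray(pixels, dtype=np.uint16)
    return (a << 4) | (a >> 8)
```

The resulting array could then be written back with `putdata`.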

+1  A: 

What you need to do is histogram equalization. For how to do it with Python and PIL, see:
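
In outline, histogram equalization remaps each intensity through the image's cumulative distribution. A minimal pure-Python sketch of the idea (illustrative only, not PIL's implementation):

```python
def equalize(pixels, levels=256):
    # Build the histogram of the pixel values.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function (running total of the histogram).
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    # Remap each pixel through the normalized CDF.
    n = len(pixels)
    return [cdf[p] * (levels - 1) // n for p in pixels]
```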

EDIT: Code to shift each value four bits to the left, then copy the four most significant bits to the four least significant bits...

def f(n):
    # Shift n four bits left, then copy its four most significant
    # bits into the four least significant bits.
    return (n << 4) | (n >> 8)

print(f(0))          # 0
print(f(2**12 - 1))  # 65535
TheMachineCharmer
Both links are exactly the same
Nadia Alramli
@Nadia Thanks, it's corrected now.
TheMachineCharmer
No, histogram equalization stretches different parts of the histogram differently. What I want to do is simply to stretch the histogram evenly so as to use the entire intensity range.
Vebjorn Ljosa
+2  A: 

Why would you want to copy the 4 MSB back into the 4 LSB? You only have 12 significant bits of information per pixel; nothing you do will improve that. If you are OK with having only 4096 intensity levels, which is fine for most applications, then your solution is correct and probably optimal. If you need more levels of shading, then, as David posted, recompute using a histogram. But this will be significantly slower.

But copying the 4 MSB into the 4 LSB is NOT the way to go :)

The reason I thought to copy the 4 MSB is that it would use the entire range 0–(2^16-1), so saturated pixels in the original image will appear saturated in the rescaled image as well.
Vebjorn Ljosa
Indeed, this is the standard way of upscaling to a higher number of bits per channel.
bobince
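As a side note, the bit-replication trick generalizes to any increase in bit depth: keep repeating the source bits downward until the destination width is filled. A sketch (the `upscale` helper is hypothetical, not from the thread):

```python
def upscale(value, src_bits, dst_bits):
    # Replicate the source bits downward until dst_bits are filled,
    # so 0 maps to 0 and the source maximum maps to the destination maximum.
    result = 0
    shift = dst_bits - src_bits
    while shift > 0:
        result |= value << shift
        shift -= src_bits
    result |= value >> -shift
    return result
```

For example, `upscale(v, 8, 16)` is the familiar 8-to-16-bit replication `(v << 8) | v`.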
+2  A: 
Ivan
A: 

Maybe you should pass 16. (a float) instead of 16 (an int). I was trying to test it, but for some reason putdata does not multiply at all... So I hope it just works for you.

Paul
+2  A: 

Since you know that the pixel values are 0-4095, I can't find a faster way than this:

new_image = image.point(lambda value: (value << 4) | (value >> 8))

According to the documentation, the lambda function will be called at most 4096 times, whatever the size of your image.

EDIT: Since the function given to point must be of the form argument * scale + offset for an "I" (32-bit integer) mode image, this is the best possible using the point function:

new_image = image.point(lambda argument: argument * 16)

The maximum output pixel value will be 65520.
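A quick check of the two mappings at the 12-bit maximum:

```python
# Plain scaling by 16 leaves the top of the range unused,
# while bit replication reaches the 16-bit maximum exactly.
print(4095 * 16)                  # 65520
print((4095 << 4) | (4095 >> 8))  # 65535
```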

A second take:

A modified version of your own solution, using itertools for improved efficiency:

import itertools as it # for brevity
import operator

def scale_12to16(image):
    # Computes (value << 4) | (value >> 8) lazily over the pixel data.
    new_image = image.copy()
    new_image.putdata(
        it.imap(operator.or_,
            it.imap(operator.lshift, image.getdata(), it.repeat(4)),
            it.imap(operator.rshift, image.getdata(), it.repeat(8))
        )
    )
    return new_image

This avoids the limitation of the point function argument.
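(A side note, not part of the original answer: `itertools.imap` exists only in Python 2; in Python 3 the built-in `map` is already lazy, so the same pipeline over the raw pixel data might look like this. The `scale_12to16_data` name is illustrative.)

```python
from operator import lshift, rshift, or_
from itertools import repeat

def scale_12to16_data(pixels):
    # Same pipeline as above: (p << 4) | (p >> 8) for each pixel,
    # with map() standing in for itertools.imap.
    return list(map(or_,
                    map(lshift, pixels, repeat(4)),
                    map(rshift, pixels, repeat(8))))
```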

ΤΖΩΤΖΙΟΥ
That is a very good idea, but because 16-bit images are of PIL type "I", the argument to `point` must be a function of the form `argument * scale + offset`.
Vebjorn Ljosa
You are correct, of course. Hopefully the "second take" should be efficient enough for you.
ΤΖΩΤΖΙΟΥ