Hi guys,

I have written my own software in C# for performing microscopy imaging. See this screenshot.

The images you can see there are of the same sample but recorded through physically different detectors. It's crucial for my experiments that these images be exactly aligned. I thought the easiest approach would be to somehow blend/subtract the two bitmaps, but this doesn't give me good results, so I am looking for a better way to do this.

It might be useful to point out that the images exist as arrays of intensities in memory and are converted to bitmaps for on-screen painting to my self-written image control.

I would greatly appreciate any help!

A: 

Since the detectors are different, the alignment could be slightly off, in that pixel (256,512) in image 1 could correspond to a feature at pixel (257,513) in image 2. Is that the problem? And what about magnification? If the detectors are different, couldn't the magnification be slightly different as well?

If you mean something like the above, then judging from your screenshot it shouldn't be too difficult to find the centers of the 4 or 5 areas of highest intensity: normalize the data and go through the entire image looking for blocks of 9 neighboring pixels with the highest average intensity. Note the center pixel of four or five of these features in each image, then calculate the distance between each pair of corresponding pixels across the two images.

If the distance is 0 for all pairs, the two images are already in alignment. If the distance is constant, all you have to do is shift one image by that distance. If the distance varies, you will need to resize one image until the distance becomes constant, and then slide it to match up the features. After that you can average the intensity values of the two images, since they will be in alignment.
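In C#, the block search could look something like this rough sketch (the float[,] input layout, and the minimum-distance rule that keeps the chosen features apart, are my own assumptions):

using System;
using System.Collections.Generic;
using System.Linq;

// 'image' is a normalized [height, width] intensity array. Returns the centers
// of the 'count' brightest 3x3 blocks, keeping chosen centers more than
// 'minDist' pixels apart so the same feature is not picked twice.
static List<(int X, int Y)> FindBrightestCenters(float[,] image, int count, int minDist)
{
    int h = image.GetLength(0), w = image.GetLength(1);
    var blocks = new List<(float Avg, int X, int Y)>();

    // Average intensity of every 3x3 block (borders skipped).
    for (int y = 1; y < h - 1; y++)
    {
        for (int x = 1; x < w - 1; x++)
        {
            float sum = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    sum += image[y + dy, x + dx];
            blocks.Add((sum / 9f, x, y));
        }
    }

    // Walk from brightest to dimmest, skipping blocks that crowd a chosen center.
    var centers = new List<(int X, int Y)>();
    foreach (var blk in blocks.OrderByDescending(t => t.Avg))
    {
        if (centers.All(c => Math.Abs(c.X - blk.X) > minDist || Math.Abs(c.Y - blk.Y) > minDist))
            centers.Add((blk.X, blk.Y));
        if (centers.Count == count) break;
    }
    return centers;
}

Run it on both images with count = 4 or 5, pair up the nearest centers, and compare the distances as described.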

That's how I would start, anyway.

R Ubben
See below for a further description!
Kris
A: 

If the images are generated from different sensors then the problem will be difficult in general, particularly for you, since one of your images seems to have a lot of noise.

Assuming there's no warping or rotation in the sensors, I would suggest that you first normalize the intensities of each image, then find the shift that minimizes the error between the images. The error can be Euclidean (i.e., the total sum of squared differences over all pixels). That, to me at least, is the definition of alignment.
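A brute-force sketch of that search in C# (the float[,] inputs, method name, and maxShift window are just assumptions; both images are taken to be normalized and the same size):

using System;

// Slide 'moving' over 'reference' within +/- maxShift pixels and return the
// shift that minimizes the mean squared difference over the overlapping region.
static (int Dx, int Dy) FindBestShift(float[,] reference, float[,] moving, int maxShift)
{
    int h = reference.GetLength(0), w = reference.GetLength(1);
    double bestError = double.MaxValue;
    (int Dx, int Dy) best = (0, 0);

    for (int dy = -maxShift; dy <= maxShift; dy++)
    {
        for (int dx = -maxShift; dx <= maxShift; dx++)
        {
            double error = 0;
            long count = 0;
            for (int y = Math.Max(0, -dy); y < Math.Min(h, h - dy); y++)
            {
                for (int x = Math.Max(0, -dx); x < Math.Min(w, w - dx); x++)
                {
                    double d = reference[y, x] - moving[y + dy, x + dx];
                    error += d * d;
                    count++;
                }
            }
            error /= count; // mean, so shifts with different overlap sizes compare fairly
            if (error < bestError) { bestError = error; best = (dx, dy); }
        }
    }
    return best;
}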

Ray
See below for a further description!
Kris
A: 

The only way you can align the images is if there is some feature in them that is known to be identical (or related by a known transformation). A common approach is to put something in the image -- for instance, have the image capture add an alignment artifact: something easy to detect, from which you can figure out the transformation required to normalize the image.

A common example is to put + markers at the corners. You might also see barcodes used for this purpose sometimes.

Without this artifact, there has to be something in the image whose size and orientation is known (and that exists in both images).

Lou Franco
Hi guys, thanks already for the replies. I think I should clarify a bit: the images are recorded on the same physical setup, and the photon counter that is used as a detector is also fixed. The only variables are the two laser beams I use to illuminate the sample. They should illuminate exactly the same volume of the sample and for this purpose need to be aligned using optics. The way to check this is to record an image with each of the two lasers separately and somehow combine them. One laser should give a much higher resolution than the other... I was thinking of normalizing both images.
Kris
If I normalize each one into a different color channel, for example green and red, and then add the images, I can expect to see yellow in the regions of overlap. If the little yellow discs are in the center of the big ones of the other channel, then both lasers are aligned. See this screenshot: http://picasaweb.google.com/lh/photo/gQBuodNtXFIy8BniPpLtjw?feat=directlink I saw a similar implementation in a commercial package, but I don't know enough GDI to handle the normalization properly. So if someone can help with that, I think my problem is solved!
Kris
Just wanted to add that I really need to normalize the images over either a red or a green channel. Just extracting red or green from either image and adding those doesn't give nice results. I will probably also need some form of thresholding, since my images can be noisy.
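For the thresholding, something as simple as AForge's Threshold filter might do for a first pass; note it binarizes an 8bpp grayscale image, and the cut-off of 100 below is just a guess:

// Binarize an 8bpp grayscale bitmap: pixels below the cut-off go black, the rest white.
// The value 100 is a placeholder; grayImage is a hypothetical 8bpp grayscale bitmap.
AForge.Imaging.Filters.Threshold filterThreshold = new AForge.Imaging.Filters.Threshold(100);
System.Drawing.Bitmap binary = filterThreshold.Apply(grayImage);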
Kris
+3  A: 

If the images are the same orientation and same size, but slightly shifted vertically or horizontally, can you use cross-correlation to find the best alignment?

If you know that features in the yellow channel need to line up, for instance, just feed the yellow channels into the cross-correlation algorithm, and then find the peak in the result. The peak will occur at the offset where the two images line up best.

It will work even with noisy images, and I suspect it will work even for images that are significantly different, like in your screenshot.
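Here is a rough direct (non-FFT) C# sketch of that peak search over a small shift window; it's only an illustration, with the float[,] inputs and maxShift assumed, and a real implementation would compute this with FFTs for speed:

using System;

// Slide 'b' over 'a' within +/- maxShift pixels and return the shift with the
// highest normalized correlation score over the overlapping region.
static (int Dx, int Dy) FindCorrelationPeak(float[,] a, float[,] b, int maxShift)
{
    int h = a.GetLength(0), w = a.GetLength(1);
    double bestScore = double.MinValue;
    (int Dx, int Dy) peak = (0, 0);

    for (int dy = -maxShift; dy <= maxShift; dy++)
    {
        for (int dx = -maxShift; dx <= maxShift; dx++)
        {
            double sumAB = 0, sumAA = 0, sumBB = 0;
            for (int y = Math.Max(0, -dy); y < Math.Min(h, h - dy); y++)
            {
                for (int x = Math.Max(0, -dx); x < Math.Min(w, w - dx); x++)
                {
                    double va = a[y, x], vb = b[y + dy, x + dx];
                    sumAB += va * vb;
                    sumAA += va * va;
                    sumBB += vb * vb;
                }
            }
            // Normalize by the energy in the overlap so larger overlaps don't dominate.
            double score = sumAB / (Math.Sqrt(sumAA * sumBB) + 1e-12);
            if (score > bestScore) { bestScore = score; peak = (dx, dy); }
        }
    }
    return peak;
}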

MATLAB example: Registering an Image Using Normalized Cross-Correlation

Wikipedia calls this "phase correlation" and also describes making it scale- and rotation-invariant:

The method can be extended to determine rotation and scaling differences between two images by first converting the images to log-polar coordinates. Due to properties of the Fourier transform, the rotation and scaling parameters can be determined in a manner invariant to translation.

endolith
A: 

I got around to solving this some time ago... Since I only need to verify that two images from two detectors are perfectly aligned, and since I do not have to align them if they are not, I solved it like this:

1) Use the AForge.NET framework to apply a grayscale filter to both images. This collapses the RGB values of each pixel into a single intensity.
2) On one image, apply a ChannelFiltering filter to retain only the red channel.
3) On the other image, apply a ChannelFiltering filter to retain only the green channel.
4) Add both images.

Here are the filters I used; I leave it to the reader to apply them if needed (it's trivial, and there are examples on the AForge website).

// Keep only the red channel (green and blue are zeroed out).
AForge.Imaging.Filters.IFilter filterR = new AForge.Imaging.Filters.ChannelFiltering(
    new AForge.IntRange(0, 255), new AForge.IntRange(0, 0), new AForge.IntRange(0, 0));

// Keep only the green channel (red and blue are zeroed out).
AForge.Imaging.Filters.IFilter filterG = new AForge.Imaging.Filters.ChannelFiltering(
    new AForge.IntRange(0, 0), new AForge.IntRange(0, 255), new AForge.IntRange(0, 0));

// Weighted R-Y grayscale conversion.
AForge.Imaging.Filters.GrayscaleRMY filterGray = new AForge.Imaging.Filters.GrayscaleRMY();

// Pixel-wise addition of two images.
AForge.Imaging.Filters.Add filterADD = new AForge.Imaging.Filters.Add();
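For completeness, roughly how the filters chain together. This is only a sketch: image1 and image2 are placeholder names for the two detector bitmaps, and the GrayscaleToRGB conversion is assumed here because ChannelFiltering works on color images while the grayscale filter outputs an 8bpp bitmap.

// image1 and image2: the two detector images as System.Drawing.Bitmap (placeholders).
System.Drawing.Bitmap gray1 = filterGray.Apply(image1);
System.Drawing.Bitmap gray2 = filterGray.Apply(image2);

// ChannelFiltering needs a color image, so convert the 8bpp results back to RGB.
AForge.Imaging.Filters.GrayscaleToRGB toRGB = new AForge.Imaging.Filters.GrayscaleToRGB();
System.Drawing.Bitmap red = filterR.Apply(toRGB.Apply(gray1));
System.Drawing.Bitmap green = filterG.Apply(toRGB.Apply(gray2));

// Add the red and green images; features present in both show up yellow.
filterADD.OverlayImage = green;
System.Drawing.Bitmap combined = filterADD.Apply(red);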

When significant features are present in both images I want to check, they show up in yellow, which does exactly what I need.

Thanks for all the input!

Kris