Here's the problem:

We have various microscopes in our lab that we use to image entire slide surfaces. Occasionally we run various tests on our different scanners to compare their performance and do some QC. The problem is that some of these comparisons require very accurate registration of the datasets collected on different scanners, so we can monitor differences in individual cells between scans. Some numbers for scale:

A dataset consists of a 24x96 grid of images, about 1.2 MP each, collected at a magnification of 10X. A typical slide contains 2-4 million cells.

Now the problem is that the original design of the system wasn't particularly stringent about calibrating the imaging or movement systems. So, while close, there are slight magnification differences between microscopes, the x/y axes of the camera might not align perfectly with the x/y movement axes on the two scanners, and the slide might be slightly rotated or shifted when placed in the other scanner. Also, they tried to pre-calculate the overlap between tiles, but it's not that accurate, so the images are cropped and have no overlap between tiles in the same set (there might even be gaps between tiles).

Now I'm having a hard time grasping how to model this problem and perform the registration. (I need cells registered to within 4 pixels or so to reliably match cells between duplicate scans; the matching part I already have working.) My initial instinct was to try something like this:

(Xtile1 + Xpixel1/Xsize1)*a + (Ytile1 + Ypixel1/Ysize1)*b + c = Xtile2 + Xpixel2/Xsize2
(Ytile1 + Ypixel1/Ysize1)*d + (Xtile1 + Xpixel1/Xsize1)*e + f = Ytile2 + Ypixel2/Ysize2

Here Xtile1 + Xpixel1/Xsize1 is an attempt to convert a tile + pixel position into a legitimate x-coordinate. So a cell in tile 2,3 at pixel 234,676 has an x/y coordinate of 2+234/Xsize, 3+676/Ysize. This model assumes the tiles are laid out on a consistent grid. The coefficients a,b,c,d,e,f then perform an affine mapping from points in dataset1 to dataset2. There are 12 unknowns: the affine mapping (which I hope will account for variations in camera/specimen rotation, specimen translation, and magnification differences between microscopes), and the Xsize and Ysize parameters, which give some indication of how much area each image actually covers (if Xsize is greater than the image width in pixels, there is a gap between adjacent tiles).
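To make the coordinate model concrete, here's a small sketch of the tile-to-global conversion and the affine map. All the numbers (the 1280x960 tile pitch, the ~0.9% scale, ~0.8° rotation, and the shift) are placeholders for illustration, not calibrated values:

```python
import numpy as np

def tile_to_global(tile_xy, pixel_xy, size_xy):
    """Convert (tile index, pixel position) to a global grid coordinate.

    size_xy is the effective tile pitch in pixels; if it exceeds the
    actual image dimensions there is a gap between adjacent tiles.
    """
    return np.asarray(tile_xy) + np.asarray(pixel_xy) / np.asarray(size_xy)

def apply_affine(p, A, t):
    """Map a global coordinate from dataset 1 into dataset 2's frame."""
    return A @ p + t

# Example: cell in tile (2, 3) at pixel (234, 676), assuming a 1280x960 pitch
p1 = tile_to_global((2, 3), (234, 676), (1280, 960))

# Placeholder affine: ~0.9% scale change and ~0.8 degree rotation
theta = np.deg2rad(0.8)
A = 1.009 * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
t = np.array([0.05, -0.12])  # hypothetical shift, in tile units
p2 = apply_affine(p1, A, t)  # position of the same cell in dataset 2
```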

Now I'm trying to figure out whether this model is sufficient, and how to convert it into a form I can solve for these unknowns. I plan on getting the 12 initial correspondences by registering the corners of three images. I have a log-polar phase correlation working that seems to match images reliably; it's just slow. I'm hoping to run this registration on three images from dataset1 and locate the corners of those three images in the tiles of the other dataset. I'm really hoping I can figure this out; otherwise I might have to brute-force the log-polar correlation for all the images in a dataset, which could be up to 4X correlations on 2304 images. BTW, the log-polar registration on one of my test datasets finds a magnification change of about 0.9% and a rotation of about 0.8º plus a very large shift; I'm not sure whether I can trust a single image enough to feed it into a more global mapping though...
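One reformulation I've been considering: if I multiply the x-equation out and substitute a2 = a/Xsize1, b2 = b/Ysize1, g = 1/Xsize2, it becomes linear in six unknowns (a, a2, b, b2, c, g), so the x and y equations together have 12 linear unknowns, and the size parameters can be recovered afterwards as Xsize1 = a/a2 and Xsize2 = 1/g. A sketch of this with synthetic, noise-free data (all parameter values here are made up for the demo; the y-equation is solved the same way):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # number of correspondences; need at least 6 for the x-equation alone

# Synthetic ground truth, stand-ins for the real unknown parameters
Xs1, Ys1, Xs2 = 1280.0, 960.0, 1275.0
a, b, c = 1.009, 0.014, 0.05

# Random correspondences in dataset 1 (tile index + pixel position)
Tx1 = rng.integers(0, 24, n); Px1 = rng.uniform(0, Xs1, n)
Ty1 = rng.integers(0, 96, n); Py1 = rng.uniform(0, Ys1, n)

# Simulate dataset 2 x-coordinates from the model, then split back
# into tile index + pixel position (the form the measurements take)
gx = (Tx1 + Px1/Xs1)*a + (Ty1 + Py1/Ys1)*b + c
Tx2 = np.floor(gx); Px2 = (gx - Tx2) * Xs2

# Linearized x-equation, one row per correspondence:
#   a*Tx1 + (a/Xs1)*Px1 + b*Ty1 + (b/Ys1)*Py1 + c - (1/Xs2)*Px2 = Tx2
M = np.column_stack([Tx1, Px1, Ty1, Py1, np.ones(n), -Px2])
u, *_ = np.linalg.lstsq(M, Tx2, rcond=None)
a_hat, a2_hat, b_hat, b2_hat, c_hat, g_hat = u

# Recover the original parameters by undoing the substitutions
Xs1_hat = a_hat / a2_hat
Xs2_hat = 1.0 / g_hat
```

With real (noisy) correspondences the least-squares residuals would also give a quick check on whether the affine-plus-grid model is sufficient, or whether something nonlinear is left over.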

I'd greatly appreciate any help getting this working. Most specifically, comments on whether my model for the mapping is good enough, or maybe even too complicated? Also on how best to reformulate the problem to solve for these unknown parameters (and to take degenerate situations into account)?

Many, many thanks! -Craig