Hello,
I am looking for suggestions on how to approach the following computer vision problem. Below are 4 samples of an eye-tracking dataset that I am working with. I would like to write code that takes one such image and calculates the (x, y) position of the center of the pupil. I am currently using MATLAB, but I am open to using other software too.
Can someone recommend an approach I could use for this task? Here are some things I have already tried that didn't work too well.
- I tried the circular Hough transform, but that requires me to guess the radius of the pupil, which is a bit problematic. Also, due to distortions, the pupil is not always exactly a circle, which may make this approach harder still. (See the first sketch below.)
- I tried thresholding the image based on pixel brightness and then using MATLAB's regionprops function to look for a region of roughly 200 pixels in area with very low eccentricity (i.e. as circular as possible); see the second sketch below. However, this is very sensitive to the threshold value, and some images of the eye are brighter than others depending on the lighting conditions. (Note that the 4 samples below are already mean-normalized, and one of the images is still brighter than the others overall, probably because of a single very dark outlier pixel somewhere.)
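For reference, the Hough attempt looked roughly like this (a simplified sketch rather than my exact code; the filename, the [15 40] radius range, and the sensitivity are placeholder guesses, and having to guess the radius range is exactly the problem):

```matlab
% Circular Hough transform via imfindcircles (Image Processing Toolbox).
% 'eye_sample.png' and the [15 40] radius range are placeholders.
I = im2double(imread('eye_sample.png'));            % grayscale eye image
[centers, ~, metric] = imfindcircles(I, [15 40], ...
    'ObjectPolarity', 'dark', 'Sensitivity', 0.9);  % the pupil is darker than the iris
if ~isempty(centers)
    [~, best] = max(metric);                        % strongest circle wins
    pupilXY = centers(best, :);                     % (x, y) of the pupil center
else
    pupilXY = [NaN NaN];                            % no circle found
end
```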
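And the thresholding attempt, again only a rough sketch with guessed parameter values (the fixed 0.2 threshold is the fragile part):

```matlab
% Threshold + regionprops: pick a dark, pupil-sized, near-circular blob.
% The threshold and the area/eccentricity limits are guesses.
I  = im2double(imread('eye_sample.png'));           % placeholder filename
bw = I < 0.2;                                       % pupil = darkest pixels
bw = imfill(bw, 'holes');                           % fill corneal-reflection holes
stats = regionprops(bw, 'Area', 'Eccentricity', 'Centroid');
keep  = [stats.Area] > 100 & [stats.Area] < 400 & [stats.Eccentricity] < 0.8;
stats = stats(keep);                                % roughly pupil-sized, roughly circular blobs
if ~isempty(stats)
    [~, best] = max([stats.Area]);                  % largest surviving blob
    pupilXY   = stats(best).Centroid;               % (x, y) of the pupil center
else
    pupilXY = [NaN NaN];                            % better to report "no pupil" than a wrong answer
end
```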
Any comments/suggestions would be appreciated!
EDIT: thanks for the comment, Stargazer. The algorithm should ideally be able to determine that the pupil is not in the image, as is the case for the last sample. It's not a big deal if I lose track of it for a while; it's much worse if it gives me a wrong answer, though.