views:

73

answers:

6

I am building a night-vision application, but I can't find any useful algorithm to apply to dark images to make them clearer. Can anyone please suggest a good algorithm?

Thanks in advance

+1  A: 

Real (useful) night vision typically uses an infrared light and an infrared-tuned camera. I think you're out of luck.

Of course using the iPhone 4's camera light could be considered "night vision" ...

Joshua Nozzi
Most digicams can see further into the infrared than the human eye unless they have filters to prevent it. I wonder if anyone has tried infrared illumination with the iPhone. Either way, this question is about dark image processing, not true night vision.
Peter DeWeese
Hmmm ... good point. I do remember wondering at that, since I know camcorders can pick up the flickering of an IR remote control (it usually appears bluish-white in the recorded movie and viewfinder). You would definitely need an infrared light, though, since I doubt it's sensitive enough for passive IR.
Joshua Nozzi
A: 

Your real problem is the camera and not the algorithm.

You can apply algorithms to clarify images, but they won't turn a dark image into a well-lit one as if by magic ^^

But if you want to try some algorithms, you should take a look at OpenCV (http://opencv.willowgarage.com/wiki/); there are ports for the iPhone, for example: http://ildan.blogspot.com/2008/07/creating-universal-static-opencv.html
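As a concrete starting point, one of the basic enhancement techniques OpenCV offers is histogram equalization (`cv2.equalizeHist` in its Python bindings). A minimal pure-Python sketch of the same idea, operating on a flat list of 8-bit grayscale values as a stand-in for a real camera frame:

```python
def equalize_hist(pixels, levels=256):
    """Histogram-equalize a flat list of 8-bit grayscale pixel values."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the histogram.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:  # constant image: nothing to stretch
        return list(pixels)
    # Map each value so the output histogram is roughly flat.
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) for c in cdf]
    return [lut[p] for p in pixels]

# A "dark" image: values crowded into the low end of the range.
dark = [10, 10, 12, 14, 14, 14, 16, 20]
print(equalize_hist(dark))  # values spread across the full 0-255 range
```

This won't recover detail the sensor never captured, but it makes whatever contrast is present in a dark frame much more visible.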

Vinzius
A: 

I suppose there are two ways to refine a dark image: the first is active, which uses infrared illumination, and the other is passive, which manipulates the pixels of the image.

Shahab
A: 

With the size of the iPhone's lens and sensor, you are going to have a lot of noise no matter what you do. I would practice manipulating the image in Photoshop first; you'll probably find that it is useful to select a white point from a sample of the brighter pixels in the image and to apply a curve. You'll probably also need to run a noise-reduction filter and a smoother. Edge detection or condensation may allow you to emphasize some areas of the image. As for specific algorithms to implement each of these filters, there are many computer science books and lists on the subject. Here is one list:

http://www.efg2.com/Lab/Library/ImageProcessing/Algorithms.htm

Many OpenGL implementations can be found once you know the standard name of the algorithm you need.

Peter DeWeese
A: 

The images will be noisy, but you can always try scaling up the pixel values (all of the components in RGB, or just the luminance in HSV; either linearly or with some sort of curve; either globally or locally in just the darker areas) and saturating them, and/or using a contrast edge-enhancement filter algorithm.
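As a sketch of the "some sort of curve" option: a gamma curve applied per channel lifts shadows far more than highlights (the gamma value of 0.5 here is just an illustrative choice):

```python
def gamma_lift(pixels, gamma=0.5):
    """Apply a gamma curve to 8-bit values; gamma < 1 brightens dark areas."""
    # Precompute a lookup table so large images need no per-pixel math.
    lut = [round(255 * (v / 255) ** gamma) for v in range(256)]
    return [lut[p] for p in pixels]

# Dark values gain proportionally far more than bright ones.
print(gamma_lift([0, 16, 64, 255]))
```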

If the camera and subject matter are sufficiently motionless (tripod, etc.) you could try summing each pixel over several image captures. Or you could do what some HDR apps do, and try aligning images before pixel processing across time.
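The frame-summing idea can be sketched like this (pure Python, with simulated noise; averaging N aligned captures of a static scene reduces zero-mean sensor noise by roughly the square root of N):

```python
import random

def average_frames(frames):
    """Average several aligned captures pixel-by-pixel to suppress noise."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# Simulate 16 noisy captures of the same 4-pixel scene.
random.seed(0)
scene = [40, 50, 60, 70]
frames = [[p + random.gauss(0, 10) for p in scene] for _ in range(16)]
averaged = average_frames(frames)
# Each averaged pixel lands much closer to the true scene value
# than any single noisy frame would.
print(averaged)
```

This is why the tripod (or alignment step) matters: the averaging only cancels noise if each pixel position sees the same point of the scene in every frame.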

I haven't seen any documentation on whether the iPhone's camera sensor has a wider wavelength gamut than the human eye.

hotpaw2
A: 

I suggest conducting a simple test before trying to actually implement this:

  1. Save a photo made in a dark room.
  2. Open in GIMP (or a similar application).
  3. Apply "Stretch HSV" algorithm (or equivalent).
  4. Check if the resulting image quality is good enough.

This should give you an idea as to whether your camera is good enough to try it.
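GIMP's "Stretch HSV" stretches each HSV channel independently to its full range. A minimal approximation that stretches only the V (value) channel, using nothing but the Python standard library, so you can reproduce step 3 programmatically once the GIMP test looks promising:

```python
import colorsys

def stretch_value(rgb_pixels):
    """Stretch the V channel of HSV to the full 0..1 range."""
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
           for r, g, b in rgb_pixels]
    vs = [v for _, _, v in hsv]
    lo, hi = min(vs), max(vs)
    if hi == lo:  # constant brightness: nothing to stretch
        return list(rgb_pixels)
    out = []
    for h, s, v in hsv:
        v = (v - lo) / (hi - lo)  # remap V to [0, 1], keep hue/saturation
        r, g, b = colorsys.hsv_to_rgb(h, s, v)
        out.append((round(r * 255), round(g * 255), round(b * 255)))
    return out

dark = [(10, 10, 10), (30, 20, 10), (60, 50, 40)]
print(stretch_value(dark))
```

GIMP's actual operation also stretches H and S, so results will differ somewhat; this only shows the brightness-stretching part that matters for the dark-room test.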

Rafał Dowgird