Hi, I want to upsample an array of OpenCV images captured from a webcam, or the corresponding float arrays (pixel values don't need to be discrete integers). Unfortunately the upsampling ratio is not always an integer, so I cannot figure out how to do it with simple linear interpolation. Is there an easier way, or a library to do this?
I am not 100% familiar with video capture, so I'm not sure what you mean by "pixel values don't need to be discrete integers". Does this mean the color information per pixel may not be an integer?
I am assuming that by "the upsampling ratio is not always an integer", you mean that you will upsample from one resolution to another, but you might not be doubling or tripling. For example, instead of 640x480 -> 1280x960, you may be doing 640x480 -> 800x600.
A simple algorithm might be:
For each pixel in the larger grid:
- Scale the x/y values to lie between 0 and 1 (divide x by width, y by height)
- Scale the x/y values by the width/height of the smaller grid -> xSmaller, ySmaller
- Determine the four pixels that surround your point, via floor/ceiling of xSmaller and ySmaller
- Get the x/y position of the point within that rectangle, between 0 and 1 (subtract the floor values from xSmaller, ySmaller) -> xInterp, yInterp
- Start with black, and add your four corner colors, each weighted by the xInterp/yInterp factors
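The steps above can be sketched in pure Python for a single-channel image. This is a minimal illustration, not a reference implementation; I map output coordinates with (srcSize-1)/(dstSize-1) so the corner pixels stay fixed, which is one of several reasonable coordinate conventions:

```python
import math

def upsample_bilinear(src, new_w, new_h):
    """Upsample a 2D grayscale image (list of rows) by bilinear interpolation."""
    src_h, src_w = len(src), len(src[0])
    dst = [[0.0] * new_w for _ in range(new_h)]
    for y in range(new_h):
        for x in range(new_w):
            # Scale output coords into the source grid
            xs = x * (src_w - 1) / (new_w - 1) if new_w > 1 else 0.0
            ys = y * (src_h - 1) / (new_h - 1) if new_h > 1 else 0.0
            # The four surrounding pixels, via floor/ceiling (clamped at the edge)
            x0, y0 = math.floor(xs), math.floor(ys)
            x1, y1 = min(x0 + 1, src_w - 1), min(y0 + 1, src_h - 1)
            # Fractional position within that rectangle
            xi, yi = xs - x0, ys - y0
            # Weighted sum of the four corner colors
            dst[y][x] = (src[y0][x0] * (1 - xi) * (1 - yi)
                         + src[y0][x1] * xi * (1 - yi)
                         + src[y1][x0] * (1 - xi) * yi
                         + src[y1][x1] * xi * yi)
    return dst
```

For a color image you would run the same weighted sum once per channel.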
You can make this faster for multiple frames by creating a lookup table that maps each output pixel to its source indices and xInterp/yInterp weights.
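Since the mapping depends only on the two resolutions, it can be precomputed once and reused for every frame. A sketch under the same coordinate convention as basic bilinear interpolation (the function names are mine):

```python
import math

def build_lut(src_w, src_h, new_w, new_h):
    """Precompute, per output pixel, the source indices and interpolation weights."""
    lut = []
    for y in range(new_h):
        ys = y * (src_h - 1) / (new_h - 1) if new_h > 1 else 0.0
        y0 = math.floor(ys)
        y1 = min(y0 + 1, src_h - 1)
        for x in range(new_w):
            xs = x * (src_w - 1) / (new_w - 1) if new_w > 1 else 0.0
            x0 = math.floor(xs)
            x1 = min(x0 + 1, src_w - 1)
            lut.append((x0, y0, x1, y1, xs - x0, ys - y0))
    return lut

def apply_lut(src, lut, new_w, new_h):
    """Resample one frame using the precomputed table -- no per-pixel math setup."""
    out = []
    i = 0
    for y in range(new_h):
        row = []
        for x in range(new_w):
            x0, y0, x1, y1, xi, yi = lut[i]
            i += 1
            row.append(src[y0][x0] * (1 - xi) * (1 - yi)
                       + src[y0][x1] * xi * (1 - yi)
                       + src[y1][x0] * (1 - xi) * yi
                       + src[y1][x1] * xi * yi)
        out.append(row)
    return out
```

Build the table once per resolution pair, then call apply_lut for each captured frame.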
I am sure there are better algorithms out there than bilinear interpolation (bicubic, Lanczos, and many more). This seems like the sort of thing you'd want optimized at the processor level.
The ImageMagick MagickWand library will resize images using proper filtering algorithms; see the MagickResizeImage() function (and use the Sinc filter).
Well, I don't know of a library to do frame rate scaling.
But I can tell you that the most appropriate way to do it yourself is to simply drop or double frames.
Blending pictures by simple linear pixel interpolation will not improve quality; playback will still look jerky, and now blurry as well.
To properly interpolate frame rates, much more complicated algorithms are needed. Modern TVs have built-in hardware for that, and video editing software such as After Effects has functions that do it.
These algorithms are able to create in-between pictures by motion analysis. But that is beyond the scope of a small problem solution.
So either keep searching for an existing library you can use, or do it by simply dropping/doubling frames.
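Dropping/doubling amounts to mapping each output frame time to the nearest earlier input frame. A minimal sketch (the function name and nearest-frame policy are my own choices):

```python
def retime(frames, src_fps, dst_fps):
    """Convert frame rate by repeating or dropping frames, no blending."""
    n_out = round(len(frames) * dst_fps / src_fps)
    # Each output frame i corresponds to input time i / dst_fps;
    # pick the input frame that covers that time.
    return [frames[min(int(i * src_fps / dst_fps), len(frames) - 1)]
            for i in range(n_out)]
```

Going from 10 fps to 20 fps doubles every frame; going from 10 fps to 5 fps keeps every second frame.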
Use libswscale
from the FFmpeg project. It is heavily optimized and supports a number of different scaling algorithms.