Hello,

I have a working Core Video setup (frames captured from a USB camera via QTKit), and the current frame is rendered as a texture on an arbitrary plane in 3D space inside a subclassed NSOpenGLView.
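For reference, the plane is currently drawn roughly like this (a trimmed-down sketch: the function name and the vertex positions are just placeholders, and currentFrame is whatever the capture callback last delivered):

    // Minimal sketch of the current (unfiltered) plane rendering, called from
    // -drawRect: of the NSOpenGLView subclass. currentFrame is the latest
    // CVOpenGLTextureRef from the capture callback; vertex positions are placeholders.
    #import <CoreVideo/CoreVideo.h>
    #import <OpenGL/gl.h>

    static void drawVideoPlane(CVOpenGLTextureRef currentFrame)
    {
        GLenum target = CVOpenGLTextureGetTarget(currentFrame);  // usually GL_TEXTURE_RECTANGLE_ARB
        GLuint name   = CVOpenGLTextureGetName(currentFrame);

        GLfloat ll[2], lr[2], ur[2], ul[2];
        CVOpenGLTextureGetCleanTexCoords(currentFrame, ll, lr, ur, ul);

        glEnable(target);
        glBindTexture(target, name);

        glBegin(GL_QUADS);                                       // the arbitrary plane in 3D space
        glTexCoord2fv(ll); glVertex3f(-1.0f, -1.0f, -2.0f);
        glTexCoord2fv(lr); glVertex3f( 1.0f, -1.0f, -2.0f);
        glTexCoord2fv(ur); glVertex3f( 1.0f,  1.0f, -2.5f);
        glTexCoord2fv(ul); glVertex3f(-1.0f,  1.0f, -2.5f);
        glEnd();

        glDisable(target);
    }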
So far so good, but now I would like to apply some Core Image filters to this frame.
I now have the basic Core Image code in place, and it still renders my unprocessed video frame as before, but the final processed output CIImage is rendered as a screen-aligned quad into the view. It feels like an image blitted over my 3D rendering, which is exactly what I do not want.
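The Core Image part currently looks roughly like this (a sketch; the CIContext is created once against the view's CGL context, and CIColorControls only stands in for the filter I actually use):

    // Sketch of the current Core Image path. The CIContext is assumed to be
    // created once (e.g. in -prepareOpenGL) against the view's CGL context:
    //
    //   ciContext = [CIContext contextWithCGLContext:CGLGetCurrentContext()
    //                                    pixelFormat:[[self pixelFormat] CGLPixelFormatObj]
    //                                     colorSpace:nil
    //                                        options:nil];
    //
    // CIColorControls is only a stand-in for the filter actually used.
    #import <CoreVideo/CoreVideo.h>
    #import <QuartzCore/QuartzCore.h>

    static void drawFilteredFrameAsOverlay(CIContext *ciContext,
                                           CVOpenGLTextureRef currentFrame,
                                           CGRect viewBounds)
    {
        CIImage  *input  = [CIImage imageWithCVImageBuffer:currentFrame];
        CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
        [filter setValue:input forKey:kCIInputImageKey];
        [filter setValue:[NSNumber numberWithFloat:1.2f] forKey:kCIInputSaturationKey];
        CIImage  *output = [filter valueForKey:kCIOutputImageKey];

        // This draws the filtered frame straight into the GL context in window
        // coordinates: the screen-aligned 2D blit on top of the 3D scene.
        [ciContext drawImage:output
                      inRect:viewBounds
                    fromRect:[output extent]];
    }

As far as I understand, drawImage:inRect:fromRect: always draws in the context's window space, which would explain why the result sits on top of everything else.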
I am looking for a way to process my video frame (a CVOpenGLTextureRef) with Core Image and just render the resulting image on my plane in 3D.
Do I have to use offscreen rendering (store the viewport, set new viewport, modelview and projection matrices, and render into an FBO), or is there an easier way?
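For clarity, this is roughly the offscreen pass I have in mind (only a sketch, assuming EXT_framebuffer_object, an already-created FBO with an attached GL_TEXTURE_2D of the right size, and the same CIContext as above; the function and parameter names are just illustrative):

    // Sketch of the offscreen pass: render the filtered CIImage into a texture
    // attached to an FBO, restore the view's state, then texture the 3D plane
    // with fboTexture instead of the raw video frame. Assumes the FBO and a
    // width x height GL_TEXTURE_2D have already been created, and that the
    // CIContext shares the view's CGL context (as above).
    #import <OpenGL/gl.h>
    #import <OpenGL/glext.h>
    #import <QuartzCore/QuartzCore.h>

    static void renderFilteredFrameToTexture(CIContext *ciContext, CIImage *filtered,
                                             GLuint fbo, GLuint fboTexture,
                                             GLsizei width, GLsizei height)
    {
        GLint savedViewport[4];
        glGetIntegerv(GL_VIEWPORT, savedViewport);            // store the view's viewport

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, fboTexture, 0);

        glViewport(0, 0, width, height);                       // offscreen viewport
        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glLoadIdentity();
        glOrtho(0, width, 0, height, -1, 1);                   // plain 2D projection
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadIdentity();

        // Core Image draws the filtered frame into the FBO-attached texture.
        [ciContext drawImage:filtered
                      inRect:CGRectMake(0, 0, width, height)
                    fromRect:[filtered extent]];

        // Restore matrices, framebuffer and viewport for the normal 3D pass.
        glMatrixMode(GL_MODELVIEW);
        glPopMatrix();
        glMatrixMode(GL_PROJECTION);
        glPopMatrix();
        glMatrixMode(GL_MODELVIEW);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
        glViewport(savedViewport[0], savedViewport[1], savedViewport[2], savedViewport[3]);
    }

The plane would then be textured with fboTexture instead of the CVOpenGLTexture, but this feels like a lot of per-frame state juggling, hence the question.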
Thanks in advance!