Hi.

We have a device with an analog camera, and a card that samples and digitizes the feed; this is all done in DirectX. At this point, replacing hardware is not an option, but we need to code such that we can view this video feed in real time regardless of any hardware or underlying operating system changes that occur in the future.

Along this line, we've chosen Qt to implement a GUI to view this camera feed. However, if we move to Linux or another embedded platform in the future and change other hardware (including the physical device where the camera/video sampler lives), we will need to change the camera display software as well, and that's going to be a pain because we need to integrate it into our GUI.

What I proposed was migrating to a more abstract model where data is sent over a socket to the GUI, and the video is displayed live after being parsed from the socket stream.
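
To make the idea concrete, here is a minimal sketch of the kind of wire format I picture: a small fixed header in front of each raw frame. The struct, its field names, and the RGB888 assumption are all mine, purely for illustration.

    // Hypothetical wire format: one fixed header per frame, followed by
    // raw pixel data. All names and the RGB888 assumption are illustrative.
    #include <cstdint>

    #pragma pack(push, 1)
    struct FrameHeader {
        uint32_t magic;   // e.g. "FRM1", lets the receiver resynchronize
        uint32_t width;   // frame width in pixels
        uint32_t height;  // frame height in pixels
        uint32_t format;  // 0 = RGB888; extend for other pixel formats
        uint32_t size;    // payload bytes that follow (width * height * 3 here)
    };
    #pragma pack(pop)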

First, is this a good idea or a bad idea?

Secondly, how would you implement such a thing? How do video samplers usually give usable output? How can I push this output over a socket? Once I am on the receiving end parsing the output, how do I know what to do with it (that is, how do I get it to render)? The only thing I can think of would be to write each sample to a file and then display the contents of the file every time a new sample arrives. That seems like an inefficient solution to me, if it would work at all.
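
For what it's worth, here is roughly what I picture on the receiving end, using the hypothetical FrameHeader above: buffer bytes from a QTcpSocket, peel off one header plus payload at a time, and paint each frame into the GUI as a QImage, with no temporary files. This is a sketch under those assumptions, not working code.

    // Receiver sketch (Qt 4 style): accumulate socket bytes, extract whole
    // frames, and display each one as a QImage. FrameHeader is the
    // hypothetical struct sketched earlier.
    #include <QTcpSocket>
    #include <QImage>
    #include <QPixmap>
    #include <QLabel>
    #include <cstring>

    class FrameReceiver : public QObject {
        Q_OBJECT
    public:
        FrameReceiver(QTcpSocket *sock, QLabel *view)
            : socket(sock), display(view) {
            connect(socket, SIGNAL(readyRead()), this, SLOT(onReadyRead()));
        }
    private slots:
        void onReadyRead() {
            buffer.append(socket->readAll());
            while (buffer.size() >= (int)sizeof(FrameHeader)) {
                FrameHeader hdr;
                std::memcpy(&hdr, buffer.constData(), sizeof(hdr));
                if (buffer.size() < (int)(sizeof(hdr) + hdr.size))
                    return; // wait until the whole frame has arrived
                // Wrap the payload in a QImage (RGB888 assumed) and show it.
                // copy() is needed because this QImage constructor does not
                // take ownership of the buffer's bytes.
                QImage frame((const uchar *)buffer.constData() + sizeof(hdr),
                             hdr.width, hdr.height, QImage::Format_RGB888);
                display->setPixmap(QPixmap::fromImage(frame.copy()));
                buffer.remove(0, sizeof(hdr) + hdr.size);
            }
        }
    private:
        QTcpSocket *socket;
        QLabel *display;
        QByteArray buffer;
    };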

How do you recommend I handle this? Are there any cross-platform libraries available for such a thing?

Thank you.

Edit: I am willing to accept suggestions of something different from what is listed above.

+1  A: 

Anything that duplicates the video stream is going to cost you performance, especially in an embedded space. In most situations with video, I think you're better off using local hardware acceleration to blast the video directly to the screen. With some proper encapsulation, you should be able to use Qt for the GUI surrounding the video, and have a platform-specific class that you use to control the actual drawing of video to the screen (where to draw, how big, etc.).
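
As a rough illustration of that encapsulation (all names here are made up, not from any particular library): the GUI talks only to an abstract interface, and each platform supplies its own renderer behind it.

    // Illustrative interface: the Qt GUI sees only this; a DirectX-backed
    // implementation satisfies it today, and a Linux one could later.
    #include <QWidget>

    class VideoSurface {
    public:
        virtual ~VideoSurface() {}
        // Give the renderer the widget (or its native handle) to draw into.
        virtual void attach(QWidget *target) = 0;
        virtual void setViewport(int x, int y, int w, int h) = 0;
        virtual void start() = 0;
        virtual void stop() = 0;
    };

    // Today:  class DirectXSurface : public VideoSurface { ... };
    // Later:  class V4L2Surface   : public VideoSurface { ... };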

Edit:

You may also want to look at the Phonon library. I haven't looked at it much, but it appears to support showing video that may be acquired from a range of different sources.
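
If the capture card can be exposed as a source Phonon's backend knows how to open (that is the part you would have to verify), using it can be as small as this sketch; the source path is hypothetical.

    #include <Phonon/VideoPlayer>
    #include <Phonon/MediaSource>
    #include <QWidget>

    void showFeed(QWidget *parentWidget)
    {
        // VideoPlayer bundles a media object and a video widget in one.
        Phonon::VideoPlayer *player =
            new Phonon::VideoPlayer(Phonon::VideoCategory, parentWidget);
        player->play(Phonon::MediaSource("/dev/video0")); // hypothetical source
    }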

Caleb Huitt - cjhuitt
I understand where you are coming from with this, but that local hardware acceleration might be on a different device entirely, or it might be on the same device. Right now, it comes down over USB to the display. If we toss the idea of sockets out, which is fine, do you know of any LGPL or proprietary cross-platform libraries that will help accomplish such a thing in Qt?
San Jacinto
Also, please add a space or something to your answer so I can +1 it. It says the vote is too old to change.
San Jacinto
+1  A: 

Have you looked at QVision? It is a Qt-based framework for managing video and video processing. You don't need the processing, but I think it will do what you want.

plinth
Looks interesting, thanks.
San Jacinto