I'm developing an OpenGL application for Windows XP. The target machine has two NVIDIA GeForce 9800GT video cards, which are needed because the application has to output two streams of analog video.

The application itself has two OpenGL windows, one for each video card. Each video card is connected to one monitor. As for the code, it's based on a minimal OpenGL example.

How can I know if the application is utilizing both video cards for rendering?

At the moment, I don't care if the application only runs on Windows XP or only with NVIDIA video cards; I just need to know how the two cards are being used.

+1  A: 

I believe you can get that kind of information from gDEBugger for OpenGL-based applications.

If it turns out you're not using both cards, you can check out Equalizer for parallel rendering; it's a great project.

Brian Gianforcaro
+1  A: 

I think you need to read up on the WGL_NV_gpu_affinity extension. You create affinity masks and use wglMakeCurrent() in conjunction with them. Here are some pointers (and a rough sketch of the calls below):

http://developer.download.nvidia.com/opengl/specs/WGL_nv_gpu_affinity.txt

Pdf from NVidia.com
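
A minimal sketch of how those calls fit together, assuming wglext.h from opengl.org for the HGPUNV handle and function-pointer typedefs, and that a temporary dummy context is already current so wglGetProcAddress() returns valid pointers (function names outside the extension itself are placeholders):

    /* Rough sketch, not production code: one OpenGL context pinned to each
     * GPU via WGL_NV_gpu_affinity. */
    #include <windows.h>
    #include <GL/gl.h>
    #include "wglext.h"   /* HGPUNV, PFNWGL...NVPROC typedefs */

    static PFNWGLENUMGPUSNVPROC         pwglEnumGpusNV;
    static PFNWGLCREATEAFFINITYDCNVPROC pwglCreateAffinityDCNV;

    /* Create a rendering context restricted to GPU number gpuIndex. */
    static HGLRC createAffinityContext(UINT gpuIndex)
    {
        HGPUNV gpu;
        if (!pwglEnumGpusNV(gpuIndex, &gpu))
            return NULL;                       /* no GPU with that index */

        HGPUNV gpuList[2] = { gpu, NULL };     /* NULL-terminated GPU mask */
        HDC affinityDC = pwglCreateAffinityDCNV(gpuList);
        if (!affinityDC)
            return NULL;

        PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1 };
        pfd.dwFlags    = PFD_SUPPORT_OPENGL;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 24;
        SetPixelFormat(affinityDC, ChoosePixelFormat(affinityDC, &pfd), &pfd);

        /* The returned context executes only on the masked GPU.  It has no
         * on-screen drawable: render into an FBO or a pbuffer created from
         * affinityDC and copy the result to the visible window. */
        return wglCreateContext(affinityDC);
    }

    void createContextsForBothGpus(void)
    {
        pwglEnumGpusNV = (PFNWGLENUMGPUSNVPROC)
            wglGetProcAddress("wglEnumGpusNV");
        pwglCreateAffinityDCNV = (PFNWGLCREATEAFFINITYDCNVPROC)
            wglGetProcAddress("wglCreateAffinityDCNV");
        if (!pwglEnumGpusNV || !pwglCreateAffinityDCNV)
            return;                            /* extension not exposed */

        HGLRC gpu0 = createAffinityContext(0);
        HGLRC gpu1 = createAffinityContext(1);
        /* wglMakeCurrent() each context in its own rendering thread/loop. */
        (void)gpu0; (void)gpu1;
    }

If wglGetProcAddress() returns NULL for these entry points, the driver does not expose the extension on your cards, which is what the comment below is about.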

Cheers!

Magnus Skog
GPU affinity is not supported on GeForce cards (it's a Quadro-only feature). The options are Linux, or creating one window on each attached screen and taking the performance hit of not 'masking' the GPU; a rough sketch of that approach is below. Edit: or activate SLI, if your application scales with it. The NVIDIA website has documentation on which types of workloads scale with SLI.
eile
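
A rough sketch of the per-screen-window approach eile mentions, using EnumDisplayMonitors() to place one window with its own context on every attached monitor. The window class name "GLWnd" is a placeholder assumed to be registered already, and error handling is omitted:

    #include <windows.h>
    #include <GL/gl.h>

    typedef struct { HWND wnd; HDC dc; HGLRC rc; } GLWindow;
    typedef struct { GLWindow win[4]; int count; } GLWindowList;

    /* Called once per attached monitor by EnumDisplayMonitors(). */
    static BOOL CALLBACK createOnMonitor(HMONITOR mon, HDC hdcMon,
                                         LPRECT rect, LPARAM user)
    {
        GLWindowList *list = (GLWindowList *)user;
        GLWindow *out = &list->win[list->count++];

        /* A borderless window that exactly covers this monitor. */
        out->wnd = CreateWindowEx(0, "GLWnd", "output",
                                  WS_POPUP | WS_VISIBLE,
                                  rect->left, rect->top,
                                  rect->right - rect->left,
                                  rect->bottom - rect->top,
                                  NULL, NULL, GetModuleHandle(NULL), NULL);

        PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1 };
        pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 24;

        out->dc = GetDC(out->wnd);
        SetPixelFormat(out->dc, ChoosePixelFormat(out->dc, &pfd), &pfd);

        /* One context per window; with two separate (non-SLI) cards the
         * driver is expected to render each window on the card that drives
         * its monitor, without any explicit GPU mask. */
        out->rc = wglCreateContext(out->dc);
        return TRUE;  /* keep enumerating */
    }

    void createWindowsOnAllMonitors(GLWindowList *windows)
    {
        windows->count = 0;
        EnumDisplayMonitors(NULL, NULL, createOnMonitor, (LPARAM)windows);
    }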