I'm writing a 3D modeling application in D3D9 that I'd like to make as broadly compatible as possible. This means avoiding hardware-dependent features such as multisampling. However, while the realtime render doesn't need to be flawless, I do need to provide nice-looking screen captures, which look quite aliased and poor without multisampling.

To produce my screen captures, I create a temporary surface in memory, render the scene to it once, then save it to a file. My first thought was to create my off-screen depth-stencil surface as multisampled, but of course DX wouldn't allow that since the device itself had been initialized with D3DMULTISAMPLE_NONE.

To start off, here's a sample of exactly how I create the screen capture. I know that it'd be simpler to just save the backbuffer of an already-rendered frame; however, I need the ability to save images of dimensions different from the actual render window, which is why I do it this way. Error checking, code for restoring state, and releasing resources are omitted here for brevity. m_d3ddev is my LPDIRECT3DDEVICE9.

//Get the current pp
LPDIRECT3DSWAPCHAIN9 sc;
D3DPRESENT_PARAMETERS pp;
m_d3ddev->GetSwapChain(0, &sc);
sc->GetPresentParameters(&pp);

//Create a new surface to which we'll render
LPDIRECT3DSURFACE9 ScreenShotSurface = NULL;
LPDIRECT3DSURFACE9 newDepthStencil   = NULL;
LPDIRECT3DTEXTURE9 pRenderTexture    = NULL;
m_d3ddev->CreateDepthStencilSurface(_Width, _Height, pp.AutoDepthStencilFormat, pp.MultiSampleType, pp.MultiSampleQuality, FALSE, &newDepthStencil, NULL);
m_d3ddev->SetDepthStencilSurface(newDepthStencil);
m_d3ddev->CreateTexture(_Width, _Height, 1, D3DUSAGE_RENDERTARGET, pp.BackBufferFormat, D3DPOOL_DEFAULT, &pRenderTexture, NULL);
pRenderTexture->GetSurfaceLevel(0, &ScreenShotSurface);

//Render the scene to the new surface
m_d3ddev->SetRenderTarget(0, ScreenShotSurface);
RenderFrame();

//Save the surface to a file
D3DXSaveSurfaceToFile(_OutFile, D3DXIFF_JPG, ScreenShotSurface, NULL, NULL);

You can see the call to CreateDepthStencilSurface(), which is where I was hoping I could replace pp.MultiSampleType with, say, D3DMULTISAMPLE_4_SAMPLES, but that didn't work.

My next thought was to create an entirely separate LPDIRECT3DDEVICE9 as a D3DDEVTYPE_REF device, which always supports D3DMULTISAMPLE_4_SAMPLES (regardless of the video card). However, all of my resources (meshes, textures) have been loaded into m_d3ddev, my HAL device, so I couldn't use them to render the scene under the REF device. Note that resources can be shared between devices under Direct3D 9Ex (Vista), but I'm working on XP. Since there are quite a lot of resources, reloading everything to render this one frame and then unloading it all again is too time-inefficient for my application.

I looked at other options for antialiasing the image post-capture (e.g. a 3x3 blur filter), but they all generated pretty poor results, so I'd really like to get an antialiased scene straight out of D3D if possible.

Any wisdom or pointers would be GREATLY appreciated...

Thanks!

+1  A: 

You could always render to a texture that is twice the width and height (i.e. 4x the size) and then supersample it down.

Admittedly you'd still get problems if the card can't create a texture 4x the size of the back buffer ...
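A minimal sketch of that approach, reusing m_d3ddev, pp, _Width, _Height and _OutFile from the question (StretchRect filtering support varies by card/driver, so check StretchRectFilterCaps in D3DCAPS9 or fall back to D3DXLoadSurfaceFromSurface):

//Sketch: render at 2x the capture size, then filter down
LPDIRECT3DTEXTURE9 bigTex   = NULL;
LPDIRECT3DSURFACE9 bigSurf  = NULL;
LPDIRECT3DSURFACE9 bigDepth = NULL;
LPDIRECT3DSURFACE9 outSurf  = NULL;
m_d3ddev->CreateTexture(_Width * 2, _Height * 2, 1, D3DUSAGE_RENDERTARGET, pp.BackBufferFormat, D3DPOOL_DEFAULT, &bigTex, NULL);
bigTex->GetSurfaceLevel(0, &bigSurf);
m_d3ddev->CreateDepthStencilSurface(_Width * 2, _Height * 2, pp.AutoDepthStencilFormat, D3DMULTISAMPLE_NONE, 0, FALSE, &bigDepth, NULL);
m_d3ddev->SetRenderTarget(0, bigSurf);
m_d3ddev->SetDepthStencilSurface(bigDepth);
RenderFrame();

//Downsample with bilinear filtering into a capture-sized surface, then save
m_d3ddev->CreateRenderTarget(_Width, _Height, pp.BackBufferFormat, D3DMULTISAMPLE_NONE, 0, FALSE, &outSurf, NULL);
m_d3ddev->StretchRect(bigSurf, NULL, outSurf, NULL, D3DTEXF_LINEAR);
D3DXSaveSurfaceToFile(_OutFile, D3DXIFF_JPG, outSurf, NULL, NULL);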

Edit: There is another way that comes to mind.

If you repeat the frame n times with tiny jitters to the view matrix, you can generate as many images as you like and then add them together afterwards to form a very highly antialiased image. The bonus is that it will work on any machine that can render the image. It is, obviously, slower though. Still, 256x AA really does look good when you do this!
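For the jitter itself, here's a sketch of the projection offset, assuming the fixed-function transform path (with shaders you'd add the same offset to your projection constant). jx and jy are this pass's subpixel offsets in pixels; render n passes and average the captured images afterwards:

//Sketch: shift the projection by a subpixel amount in clip space.
//One pixel spans 2/width of clip space horizontally (2/height vertically).
D3DXMATRIX proj, jitter;
m_d3ddev->GetTransform(D3DTS_PROJECTION, &proj);
D3DXMatrixTranslation(&jitter, jx * 2.0f / _Width, jy * 2.0f / _Height, 0.0f);
proj = proj * jitter;  //row-vector convention: jitter is applied after proj
m_d3ddev->SetTransform(D3DTS_PROJECTION, &proj);
RenderFrame();  //capture this pass, then average all n captures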

Goz
+1: supersampling is the only thing that comes to my mind that can be done without GPU support. I'm not sure about changing the view matrix; I would think you would get other artifacts due to perspective changes. It would be interesting to see what happens though. I guess it depends on the scene being rendered.
gatorfax
Yeah...I'd thought of this too, and it looks like that might be the way I have to go. Somehow it just seemed weird that there were no built-in facilities more widely supported than MULTISAMPLE, which wasn't available on more than half the laptops I've tried!
Metal450
Why would the low-end kit have money spent on hardware that it doesn't really need? (It's nice, but not necessary.)
Goz
I wasn't suggesting that they should support multisampling. I said it seemed weird that there isn't some other facility more widely supported than multisampling - i.e. something software-driven that doesn't require a powerful video card, which might be slow for realtime rendering but would be fine for something like a screen capture.
Metal450
A: 

This article http://msdn.microsoft.com/en-us/library/bb172266(VS.85).aspx seems to imply that you can use the render state flag D3DRS_MULTISAMPLEANTIALIAS to control this. Can you create your device with antialiasing enabled but turn it off for screen rendering and on for your offscreen rendering using this render state flag?

I've not tried this myself though.
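For reference, the suggested toggle would look something like this:

//Sketch: toggle multisampling per pass via the render state
m_d3ddev->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS, FALSE);  //realtime view
//...
m_d3ddev->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS, TRUE);   //screenshot pass
RenderFrame();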

John Burton
This is the video-card-dependent mechanism I'm talking about...enabling that render state has no effect unless D3DPRESENT_PARAMETERS.MultiSampleType is set to something other than D3DMULTISAMPLE_NONE when initializing the device.
Metal450
+2  A: 

Supersampling by either rendering to a larger buffer and scaling down or combining jittered buffers is probably your best bet. Combining multiple jittered buffers should give you the best quality for a given number of samples (better than the regular grid from simply rendering an equivalent number of samples at a multiple of the resolution and scaling down) but has the extra overhead of multiple rendering passes. It has the advantage of not being limited by the maximum supported size of your render target though and allows you to choose pretty much an arbitrary level of AA (though you'll have to watch out for precision issues if combining many jittered buffers).

The article "Antialiasing with Accumulation Buffer" at opengl.org describes how to modify your projection matrix for jittered sampling (OpenGL but the math is basically the same). The paper "Interleaved Sampling" by Alexander Keller and Wolfgang Heidrich talks about an extension of the technique that gives you a better sampling pattern at the expense of even more rendering passes. Sorry about not providing links - as a new user I can only post one link per answer. Google should find them for you.

If you want to go the route of rendering to a larger buffer and down sampling but don't want to be limited by the maximum allowed render target size then you can generate a tiled image using off center projection matrices as described here.
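A sketch of the tiling idea, assuming a symmetric frustum with half-extents w and h at the near plane (w, h, tilesX, tilesY, nearZ and farZ are placeholders for your own projection parameters):

//Sketch: render the image as tilesX x tilesY tiles, each covering an
//off-center slice of the full frustum, then stitch the tiles together
for (int ty = 0; ty < tilesY; ++ty) {
    for (int tx = 0; tx < tilesX; ++tx) {
        float l = -w + 2.0f * w *  tx      / tilesX;
        float r = -w + 2.0f * w * (tx + 1) / tilesX;
        float t =  h - 2.0f * h *  ty      / tilesY;  //top row of tiles first
        float b =  h - 2.0f * h * (ty + 1) / tilesY;
        D3DXMATRIX proj;
        D3DXMatrixPerspectiveOffCenterLH(&proj, l, r, b, t, nearZ, farZ);
        m_d3ddev->SetTransform(D3DTS_PROJECTION, &proj);
        RenderFrame();
        //...copy this tile into the (tx, ty) region of the final image...
    }
}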

mattnewport
Now THAT was a useful reply!! Thank you! I'll definitely look into this :)
Metal450