views: 35
answers: 2

Hello world!

I am wondering if it is possible to render a scene to multiple render targets in a single pass (or anything faster than drawing it several times from client code). I want to optimize some code that renders a scene to several textures of varying dimensions (512 by 512, 256 by 256, 128 by 128 and 64 by 64, for example).

I realize I could use mipmaps derived from the original target, but I don't want colors to be blended. I suspect mipmapping averages several texels together, but for my purposes I want the untouched rendering, exactly as rasterized by the graphics card. If there is a mipmap setting that allows this, that would also be a feasible solution.
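(For reference: OpenGL, for example, does have a point-sampled mipmap filter, but it only controls how the levels are *sampled*, not how they are *generated*; the smaller levels are still built by averaging. A minimal sketch, with the texture handle assumed:)

    #include <GL/glew.h>  // any loader that exposes core OpenGL will do

    // A sketch, assuming 'tex' already has a full mip chain.
    // GL_NEAREST_MIPMAP_NEAREST picks a single texel from a single level
    // with no blending between texels or between levels -- but it does not
    // change how the mip levels themselves were produced.
    void UsePointSampledMipmaps(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    }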

On a side note, does anyone know if mipmaps are generated on the CPU or GPU?

Thanks for reading.

+1  A: 

There is no possible solution to your question. MRTs can't have different resolutions. Mipmapping can't produce smaller textures that look exactly the same as the original; actually, there is absolutely no way to do that, AFAIK. On recent graphics cards, mipmaps are generated on the GPU. One question: why are you rendering to square targets?
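(For illustration: in OpenGL 3.0+ the driver builds the whole mip chain on the GPU with a single call; a sketch, assuming `tex` holds the full-resolution level 0:)

    glBindTexture(GL_TEXTURE_2D, tex);   // 'tex' is hypothetical
    glGenerateMipmap(GL_TEXTURE_2D);     // driver generates all levels GPU-side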

Quang Anh
The square targets were just examples; I meant only to illustrate that I want different dimensions. Actually, in my case I want each texture to be half the size of its predecessor, where the predecessor can have any width and height.
Statement
A: 

As Quang Anh says - there's no way to do exactly what you are asking for. So what are you actually trying to achieve?

If you are concerned about the blending that takes place when scaling down for mipmaps, why not take your render target, and render it (on a "full-screen" quad) to another render target using point sampling? (And repeat that for each size you want.)
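(A minimal OpenGL sketch of that point-sampled copy; the helper name is invented, and `glBlitFramebuffer` with `GL_NEAREST` is used as an equivalent shortcut for drawing a point-sampled full-screen quad:)

    #include <GL/glew.h>  // any loader that exposes core OpenGL 3.0+

    // Hypothetical helper: copy one (complete) framebuffer into a smaller
    // one with point sampling. GL_NEAREST means each destination pixel
    // takes exactly one source texel -- nothing is averaged.
    void DownsamplePoint(GLuint srcFbo, int srcW, int srcH,
                         GLuint dstFbo, int dstW, int dstH)
    {
        glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, dstFbo);
        glBlitFramebuffer(0, 0, srcW, srcH,   // source rectangle
                          0, 0, dstW, dstH,   // destination rectangle
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
    }

    // Repeat for each size you want: 512 -> 256 -> 128 -> 64, and so on.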


Added following the comments:

If what you are doing involves depth (or data similar to depth), and you need to know the maximum (or minimum) depth per pixel, and you are halving the resolution of your image each time:

You could use the same technique of scaling by rendering one render target to another (as described in my original answer, above), but in the pixel shader sample the four pixels that will become one pixel in the output image, and simply select the maximum (or minimum) valued pixel of the four as the one to output.

(I will leave the maths required to get samples lined up as an exercise.)
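(A sketch of such a shader, as GLSL embedded in a C++ string; all names are invented, and it assumes the output target is exactly half the source size:)

    // Hypothetical fragment shader for the 2x2 max-reduction described above.
    // Each output pixel fetches the four source texels it covers and keeps
    // the maximum (swap max() for min() to keep the minimum depth instead).
    static const char* kMaxDownsampleFS = R"(
    #version 330 core
    uniform sampler2D uSource;  // the higher-resolution render target
    out vec4 fragColor;
    void main() {
        ivec2 base = ivec2(gl_FragCoord.xy) * 2;  // top-left of the 2x2 block
        vec4 a = texelFetch(uSource, base,               0);
        vec4 b = texelFetch(uSource, base + ivec2(1, 0), 0);
        vec4 c = texelFetch(uSource, base + ivec2(0, 1), 0);
        vec4 d = texelFetch(uSource, base + ivec2(1, 1), 0);
        fragColor = max(max(a, b), max(c, d));
    }
    )";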

Andrew Russell
I have found a theoretical optimization concept that uses rendered primitives to generate culling data. The texture color values represent the data, and I want the culling data rasterized at varying levels of detail while preserving the output of the rasterizer. That is, if I scale down a 2 by 2 texture to a 1 by 1 texture, the 1 by 1 texture should be set with data if the 2 by 2 texture contains any. This is where mipmaps become a problem, I guess, since mipmapping blends colors. For this to work, the colors must not blend; they must come out as if the scene had been rendered to a 1 by 1 texture.
Statement
This should apply to any variable-sized texture, not just 2x2 and 1x1 (hence the examples with 512x512 and 256x256). I guess then the solution is to render the frame multiple times, once for each texture size.
Statement
@Statement: Basically, you need to render the scene multiple times to get pixel-accurate results. Point sampling will prevent you from getting in-between values that are not in the original image, but the result will not necessarily be pixel-accurate to what the rasterizer would output... Now, that being said: if your technique is based on depth (or some other value where being too big or too small is still OK), then my technique can be adapted; see the addition I have made to my answer.
Andrew Russell
No, it's not based on depth. At first I was trying to avoid the "in-between" (interpolated) values, but thinking more about the problem, I might actually want them, so I guess I would want the mipmap generated anyhow. The basic idea was to store only a single flag per texel in the output texture, sort of a stencil map. If the smaller texture had any "visible" texel, that texel would be expanded in another rendering pass, with a larger texture zooming in on the flagged area, to determine whether the whole block is visible.
Statement
Since rendering is rasterized, I think I'm guaranteed that any supersampling of the subsampled area contains any further "visible" flags. Note that such a flag doesn't mean an object occupies the entire area; it just indicates that one might, and that there is at least one object in the area covering screen space. (A sketch of this refinement walk follows below.)
Statement
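
(To make the hierarchical test described in these comments concrete, here is a hypothetical CPU-side sketch of the refinement walk; the scheme above would do this with rendering passes on the GPU instead, and all names are invented:)

    #include <cstdint>
    #include <utility>
    #include <vector>

    // One level of the flag pyramid: width x height one-byte "visible" flags.
    struct FlagLevel {
        int width = 0, height = 0;
        std::vector<uint8_t> flags;  // nonzero = something may cover this block
        uint8_t at(int x, int y) const { return flags[y * width + x]; }
    };

    // Descend from a coarse block into its four children on the next-finer
    // level, but only where the flag is set; unflagged blocks are culled
    // whole. levels[0] is the finest image, levels.back() the coarsest.
    void refine(const std::vector<FlagLevel>& levels, int level, int x, int y,
                std::vector<std::pair<int, int>>& visible)
    {
        if (!levels[level].at(x, y))
            return;                       // empty block: cull the entire subtree
        if (level == 0) {
            visible.push_back({x, y});    // finest level: potentially covered pixel
            return;
        }
        for (int dy = 0; dy < 2; ++dy)
            for (int dx = 0; dx < 2; ++dx)
                refine(levels, level - 1, x * 2 + dx, y * 2 + dy, visible);
    }

    // Usage: with L = levels.size() - 1 (the coarsest level), call
    // refine(levels, L, x, y, out) for every (x, y) in that level.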