I need to calculate the minimum and maximum UV values assigned to the pixels produced when a given object is drawn onscreen from a certain perspective. For example, if I have a UV-mapped cube but only the front face is visible, min(UV) and max(UV) should be set to the minimum and maximum UV coordinates assigned to the pixels of the visible face.

I'd like to do this using Direct3D 9 shaders (and the lowest shader model possible) to speed up processing. The vertex shader could be as simple as taking each input vertex's UV coordinates and passing them on, unmodified, to the pixel shader. The pixel shader, on the other hand, would have to take the values produced by the vertex shader and use these to compute the minimum and maximum UV values.

What I'd like to know is:

  1. How do I maintain state (current min and max values) between invocations of the pixel shader?
  2. How do I get the final min and max values produced by the shader into my program?

Is there any way to do this in HLSL, or am I stuck doing it by hand? If HLSL won't work, what would be the most efficient way to do this without the use of shaders?

+1  A: 

1) You don't. 2) You would have to do a read back at some point. This will be a fairly slow process and cause a pipeline stall.

In general I can't think of a good way to do this. What exactly are you trying to achieve with this? There may be some other way to achieve the result you are after.

You "may" be able to get something going using multiple render targets and writing the UVs for each pixel to the render target. Then you'd need to pass the render target back to main memory and then parse it for your min and max values. This is a really slow and very ugly solution.

If you can do it as a couple of separate passes you may be able to render to a very small render target and use 1 pass with a Max and 1 pass with a Min alpha blend op. Again ... not a great solution.

Goz
I need to know the extents of UV space being sampled so I can take a rectangular section from a large texture image (too large for the video card to handle) and copy those pixels onto a smaller texture that's small enough to fit into video memory.
Adrian Lopez
Why can't you carve up the texture and adjust the UVs accordingly in advance?
Goz
I don't understand what you mean. How would I know which subset of the texture to copy if I don't know which subset of UV space is being rendered?
Adrian Lopez
You aren't the only one having an understanding failure ;) But surely you already know what the UVs on your model are?
Goz
I didn't explain myself properly. Since I can't use the high-resolution texture directly due to memory constraints, I need to figure out at runtime what part of the texture is exposed to the camera. The idea is to maintain a reasonable pixel-to-texel ratio when zooming into an object. OTOH, I realized yesterday that min and max UVs wouldn't yield satisfactory results at UV discontinuities, since the resulting rectangle might be much larger than the texture area actually being sampled. I now intend to use simplified meshes and CPU processing to determine which UV region to use.
Adrian Lopez
I would definitely think this is a better plan. Would it be worth pre-computing the max and min texture rectangles for each face of an axis-aligned bounding box? That way you can easily check the axis-aligned bounding box on the CPU and simplify your checks.
Goz
I'm not sure. I think the precomputed values would become less accurate as the camera's angle to the AABB changes. I'll have to think about it to see if there's some way to precompute things in a way that provides reasonably accurate results regardless of the object's orientation with respect to the camera.
Adrian Lopez