views: 151
answers: 1

I use XNA as a nice, easy basis for some graphics processing I'm doing on the CPU, because it already provides a lot of what I need. Currently, my "render target" is an array of a custom Color struct I've written, which consists of three floating-point fields: R, G, B.

When I want to render this to the screen, I manually convert this array to the Color struct that XNA provides (only 8 bits of precision per channel) by simply clamping each channel to the byte range 0-255. I then set this new array as the data of a Texture2D (with a SurfaceFormat of SurfaceFormat.Color) and render that texture with a SpriteBatch.
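
For reference, the conversion step described above looks roughly like this (a minimal sketch; the CustomColor struct, its field types, and the variable names are my own placeholders, not code from the post):

    // Placeholder standing in for the custom three-channel color struct.
    struct CustomColor
    {
        public double R, G, B;
    }

    // CPU-side buffer produced by the graphics processing
    // (channel values are assumed to already be on a 0-255 scale).
    CustomColor[] buffer = new CustomColor[width * height];
    Color[] converted = new Color[buffer.Length];

    for (int i = 0; i < buffer.Length; i++)
    {
        // Clamp each channel to the 0-255 byte range and pack it into
        // XNA's 8-bit-per-channel Color.
        converted[i] = new Color(
            (int)MathHelper.Clamp((float)buffer[i].R, 0f, 255f),
            (int)MathHelper.Clamp((float)buffer[i].G, 0f, 255f),
            (int)MathHelper.Clamp((float)buffer[i].B, 0f, 255f));
    }

    // texture is a Texture2D created with SurfaceFormat.Color;
    // spriteBatch is the game's SpriteBatch.
    texture.SetData(converted);
    spriteBatch.Begin();
    spriteBatch.Draw(texture, Vector2.Zero, Color.White);
    spriteBatch.End();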

What I'm looking for is a way to get rid of this translation step on the CPU and simply send my backbuffer to the GPU directly as some sort of texture, where I want to do some basic post-processing. And I really do need more than 8 bits of precision per channel there (not necessarily 32 bits, but since what I'm doing isn't GPU-intensive, it can't hurt, I guess).

How would I go about doing this?

I figured that if I gave Color an explicit size of 32 bytes through StructLayout (so 8 bytes of padding, since my three channels only fill 24 bytes), set the SurfaceFormat of the texture that is rendered with the SpriteBatch to SurfaceFormat.Vector4 (which I assumed is 32 bytes per texel), and filled the texture with SetData<Color>, it might work. But I get this exception:

The type you are using for T in this method is an invalid size for this resource.
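
For concreteness, the attempt described above presumably looks something like the sketch below (the struct and variable names, the double fields, and the XNA 4.0-style Texture2D constructor are my assumptions, not code from the post):

    using System.Runtime.InteropServices;
    using Microsoft.Xna.Framework.Graphics;

    // Padded to 32 bytes so that (in theory) it lines up with the texture format.
    [StructLayout(LayoutKind.Sequential, Size = 32)]
    struct CustomColor
    {
        public double R, G, B;   // 24 bytes of channel data + 8 bytes of padding
    }

    // ...

    Texture2D texture = new Texture2D(GraphicsDevice, width, height,
                                      false, SurfaceFormat.Vector4);
    CustomColor[] buffer = new CustomColor[width * height];

    // SurfaceFormat.Vector4 texels are actually four 32-bit floats (16 bytes),
    // so a 32-byte T presumably fails SetData's size check and throws the
    // exception quoted above.
    texture.SetData(buffer);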

Is it possible to use an arbitrary, custom struct and have the GPU interpret it as texture data, the way you can with vertices through a VertexDeclaration, by specifying how it's laid out?

+1  A: 

I think I got what I wanted by dropping the custom Color struct I made and using Vector4 for my color information instead. This works as long as the SurfaceFormat of the texture is also set to SurfaceFormat.Vector4.
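
A minimal sketch of that approach, assuming the XNA 4.0 Texture2D and SpriteBatch.Begin overloads (variable names are placeholders):

    // Texture whose texels are four 32-bit floats, matching Vector4's 16 bytes exactly.
    Texture2D texture = new Texture2D(GraphicsDevice, width, height,
                                      false, SurfaceFormat.Vector4);

    Vector4[] buffer = new Vector4[width * height];
    // Fill R, G, B into X, Y, Z; W can carry alpha or just 1.0f.
    // Values outside 0-1 survive, so no clamping is needed on the CPU anymore.
    buffer[0] = new Vector4(1.5f, 0.25f, 0.0f, 1.0f);

    texture.SetData(buffer);   // element size now matches the surface format

    // Floating-point surfaces generally can't be linearly filtered, so point
    // sampling is the safe choice when drawing the texture.
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
                      SamplerState.PointClamp, null, null);
    spriteBatch.Draw(texture, Vector2.Zero, Color.White);
    spriteBatch.End();

From there, the higher-precision data can also be sampled by a post-processing Effect before it finally ends up in the 8-bit back buffer.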

JulianR