Hello, and sorry for the obscure title :} I'll try to explain as best I can.

First of all, I am new to HLSL, but I understand the pipeline and all that fairy-world stuff. What I'm trying to do is use the GPU for general-purpose computation (GPGPU).

What I don't know is: how can I read* the vertices (that have been transformed by the vertex shader) back into my XNA application? I read something about using the GPU's texture memory, but I can't find anything solid...

Thanks in advance for any info/tip! :-)

*Not sure if this is possible because of the rasterizer and the pixel shader (if any); I mean, in the end it's all about pixels, right?

+2  A: 

As far as I know this isn't generally possible.

What exactly are you trying to do? There is probably another solution.

EDIT: Taking the comment into account: if all you want to do is general vector calculations on the GPU, try doing them in the pixel shader rather than the vertex shader.

So, for example, say you want to dot two vectors. First we need to write the data into a texture:

// With a plain Color texture the data must be scaled into the 0-1 range before
// writing; a SurfaceFormat.Vector4 texture stores full floats and avoids that
Vector4 a = new Vector4(1, 0, 1, 1);
Vector4 b = new Vector4(0, 1, 0, 0);

// XNA 3.1 constructor; the format must be Vector4 to match SetData<Vector4>
Texture2D dataTexture = new Texture2D(device, 2, 1, 1, TextureUsage.None, SurfaceFormat.Vector4);
dataTexture.SetData<Vector4>(new Vector4[] { a, b });

So now we've got a 2×1 texture with the data in it. Render that texture using SpriteBatch and an effect. One catch: the shader's output goes to whatever surface is currently bound, not back into dataTexture, so render into a render target that you can read from afterwards.
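Setting that up might look like this (a sketch assuming the XNA 3.1 API; resultTarget is just a name chosen here):

// fp32 render target, one output pixel per answer
// (assumes the card supports Vector4 render targets)
RenderTarget2D resultTarget = new RenderTarget2D(device, 2, 1, 1, SurfaceFormat.Vector4);
device.SetRenderTarget(0, resultTarget);

Now draw the data texture through the effect: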

Effect gpgpu; // loaded through the content pipeline as usual
gpgpu.CurrentTechnique = gpgpu.Techniques["DotProduct"];
// Immediate mode, so the draw call goes through the currently active shader
spriteBatch.Begin(SpriteBlendMode.None, SpriteSortMode.Immediate, SaveStateMode.None);
gpgpu.Begin();
gpgpu.CurrentTechnique.Passes[0].Begin();
spriteBatch.Draw(dataTexture, new Rectangle(0, 0, 2, 1), Color.White);
gpgpu.CurrentTechnique.Passes[0].End();
gpgpu.End();
spriteBatch.End();
// Unbind the render target so its contents can be read back
device.SetRenderTarget(0, null);

All we need now is the gpgpu effect used above. That's just a standard post-processing shader, looking something like this:

// SpriteBatch binds the sprite texture to sampler 0
sampler2D DataSampler : register(s0) = sampler_state
{
    MinFilter = Point;
    MagFilter = Point;
    MipFilter = Point;
    AddressU = Clamp;
    AddressV = Clamp;
};

float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
{
    float4 A = tex2D(DataSampler, texCoord);
    float4 B = tex2D(DataSampler, texCoord + float2(0.5, 0)); // 0.5 is the size of one pixel, i.e. 1 / textureWidth
    float d = dot(A, B);
    return float4(d, 0, 0, 0);
}

technique DotProduct
{
    pass Pass1
    {
        PixelShader = compile ps_3_0 PixelShaderFunction();
    }
}

This will write the dot product of A and B into the first pixel, and the dot product of B and B into the second pixel. Then you can read the answers back from the render target (ignoring the useless ones):

Vector4[] v = new Vector4[2];
resultTarget.GetTexture().GetData(v);
float dotOfAandB = v[0].X;
float dotOfBandB = v[1].X;

Ta-da! There are a whole load of little issues with trying to do this on a larger scale (for one, GetData stalls the pipeline while it waits for the GPU). Comment here and I'll try to help you with any you run into :)

Martin
I am pretty sure that the "another solution" to my problem is CUDA :-) In general, I want to use the GPU (via shaders) for vector calculations (dot, cross, magnitude, unit, etc.).
makism
Thanks a lot :-)! I will post again if I encounter any problems.
makism
It occurred to me on the train that a better way to feed the data in might be to put the first vectors into texture A, the second vectors into texture B, and then render an "answer" texture, with A and B as parameters to the shader. That way you get no useless answers (so long as the number of vectors you wish to dot product fits perfectly into a rectangular texture).
Martin
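A rough sketch of that two-texture idea, assuming the same XNA 3.1 API as the answer above. The firstVectors/secondVectors arrays and the DataA/DataB parameter names are placeholders; the matching HLSL would declare two samplers and dot the values fetched at the same coordinate:

// Pack the operands into two textures of identical size
int count = firstVectors.Length; // must fit into a rectangular texture
Texture2D texA = new Texture2D(device, count, 1, 1, TextureUsage.None, SurfaceFormat.Vector4);
Texture2D texB = new Texture2D(device, count, 1, 1, TextureUsage.None, SurfaceFormat.Vector4);
texA.SetData<Vector4>(firstVectors);
texB.SetData<Vector4>(secondVectors);

// One answer pixel per vector pair
RenderTarget2D answers = new RenderTarget2D(device, count, 1, 1, SurfaceFormat.Vector4);
device.SetRenderTarget(0, answers);

gpgpu.CurrentTechnique = gpgpu.Techniques["DotProduct"];
gpgpu.Parameters["DataA"].SetValue(texA); // hypothetical parameter names
gpgpu.Parameters["DataB"].SetValue(texB);

spriteBatch.Begin(SpriteBlendMode.None, SpriteSortMode.Immediate, SaveStateMode.None);
gpgpu.Begin();
gpgpu.CurrentTechnique.Passes[0].Begin();
spriteBatch.Draw(texA, new Rectangle(0, 0, count, 1), Color.White);
gpgpu.CurrentTechnique.Passes[0].End();
gpgpu.End();
spriteBatch.End();
device.SetRenderTarget(0, null);

// Every pixel is now a meaningful answer; each dot product sits in X
Vector4[] results = new Vector4[count];
answers.GetTexture().GetData(results);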