For the past month or so, I have been busting my behind trying to learn DirectX, so I've been mixing back and forth between DirectX 9 and 10. One of the major changes I've seen between the two is how vertex data gets processed on the graphics card.

One of the drastic changes I've noticed is how you get the GPU to recognize your structs. In DirectX 9, you define a Flexible Vertex Format (FVF).

A typical setup would look like this:

#define CUSTOMFVF (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)
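
For illustration (the struct name here is just an example, not from any particular tutorial), a vertex struct matching that FVF would be four floats for the pre-transformed position followed by a DWORD for the diffuse color:

struct CUSTOMVERTEX
{
    FLOAT x, y, z, rhw;   // D3DFVF_XYZRHW: pre-transformed position
    DWORD color;          // D3DFVF_DIFFUSE: packed vertex color
};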

In DirectX 10, I believe the equivalent is the input element description (used to create an input layout):

D3D10_INPUT_ELEMENT_DESC layout[] = {
    // SemanticName, SemanticIndex, Format, InputSlot, AlignedByteOffset,
    // InputSlotClass, InstanceDataStepRate
    {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0,
        D3D10_INPUT_PER_VERTEX_DATA, 0},
    {"COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12,
        D3D10_INPUT_PER_VERTEX_DATA, 0}
};

I notice that DirectX 10 is more descriptive. Besides this, what are some of the other drastic changes that were made, and is the HLSL syntax the same for both?

A: 

FVFs were (kind of) deprecated in favour of D3DVERTEXELEMENT9 (aka vertex declarations), which is remarkably similar to D3D10_INPUT_ELEMENT_DESC anyway. In fact, most of what's in DirectX 10 is remarkably similar to what was in DirectX 9, minus the fixed-function pipeline.
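
For comparison, a DX9 vertex declaration equivalent to the layout in your question would look something like this (a rough sketch; the device and variable names are just placeholders):

D3DVERTEXELEMENT9 decl[] = {
    // Stream, Offset, Type, Method, Usage, UsageIndex
    { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR,    0 },
    D3DDECL_END()   // required terminator
};

IDirect3DVertexDeclaration9* pDecl = NULL;
pDevice->CreateVertexDeclaration(decl, &pDecl);   // pDevice is your IDirect3DDevice9*
pDevice->SetVertexDeclaration(pDecl);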

The biggest change between DirectX 9 and DirectX 10 was the cleaning up of the API (in terms of separation of concerns, making it much clearer what goes with which stage of the pipeline, and so on).

Dean Harding
I didn't know DirectX 9 had a fixed-function pipeline. I thought "fixed function" meant the inability to create your own shaders, and DirectX 9 allows you to create your own shaders.
numerical25
I've run into zero tutorials that even mention D3DVERTEXELEMENT9.
numerical25
Fixed function is the ability to render without using shaders at all. Shaders are optional in DirectX 9.
Alan
It's a shame that most DirectX tutorials you find on the internet are pretty poor quality... the DirectX documentation is actually not too bad at describing how to use those vertex declarations, though.
Dean Harding
You're right. I did notice this, and I was confused at first about why I had to set up HLSL in 10 and not in 9. Now I know.
numerical25
+1  A: 

The biggest change I've noticed between DX9 and DX10 is the fact that under DX10 you need to set an entire render-state block, whereas in DX9 you could change individual states. This broke my architecture somewhat because I was relying on being able to make a small change and leave all the rest of the states the same (this only really becomes a problem when you set states from a shader).
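
For example (a rough sketch with made-up variable names): where DX9 let you flip a single render state with SetRenderState, DX10 has you describe the whole state up front and create an immutable state object:

// DX9: toggle one state at a time
// pDevice9->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);

// DX10: fill out a complete description, then create and bind a state object
D3D10_BLEND_DESC blendDesc;
ZeroMemory(&blendDesc, sizeof(blendDesc));
blendDesc.BlendEnable[0]           = TRUE;
blendDesc.SrcBlend                 = D3D10_BLEND_SRC_ALPHA;
blendDesc.DestBlend                = D3D10_BLEND_INV_SRC_ALPHA;
blendDesc.BlendOp                  = D3D10_BLEND_OP_ADD;
blendDesc.SrcBlendAlpha            = D3D10_BLEND_ONE;
blendDesc.DestBlendAlpha           = D3D10_BLEND_ZERO;
blendDesc.BlendOpAlpha             = D3D10_BLEND_OP_ADD;
blendDesc.RenderTargetWriteMask[0] = D3D10_COLOR_WRITE_ENABLE_ALL;

ID3D10BlendState* pBlendState = NULL;
pDevice->CreateBlendState(&blendDesc, &pBlendState);
pDevice->OMSetBlendState(pBlendState, NULL, 0xFFFFFFFF);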

The other big change is the fact that under DX10 vertex declarations are tied to a compiled shader (in CreateInputLayout). Under DX9 this wasn't the case: you just set a declaration and set a shader. Under DX10 you need to create a shader first and then create an input layout that is validated against that shader's input signature.
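
Something like this (a sketch only; pVSBlob stands in for whatever compiled-shader blob your shader compilation call returns, and "layout" is the D3D10_INPUT_ELEMENT_DESC array from the question):

ID3D10InputLayout* pInputLayout = NULL;
pDevice->CreateInputLayout(
    layout, 2,                        // element descriptions and count
    pVSBlob->GetBufferPointer(),      // compiled vertex shader bytecode...
    pVSBlob->GetBufferSize(),         // ...whose input signature is matched against the layout
    &pInputLayout);
pDevice->IASetInputLayout(pInputLayout);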

As codeka points out, D3DVERTEXELEMENT9 has been the recommended way to describe vertex layouts since DX9 was introduced. FVF was already deprecated, and through FVF you are unable to do things like set up a tangent basis. Vertex declarations are far, far more powerful and don't lock you into a fixed layout; you can put the vertex elements wherever you like.

If you want to know more about DX9 vertex declarations then I suggest you start with MSDN.

Goz
I noticed this as well.
numerical25
Your second point actually exists in most implementations of DX9; it just sits at the driver level. That means every time you assign a different shader to a different vertex declaration, the driver has to re-parse the binary assembly code to match up the input registers with where they are used in the shader (which is exactly what happens in D3D10). They basically just let you manage it yourself so you don't take an extra performance hit (usually without realizing it).
Grant Peters
+3  A: 

I would say there are no radical changes in the HLSL syntax itself between DX9 and DX10 (and by extension DX11).

As codeka said, the changes are more a matter of cleaning up the API and a road toward generalization (for the sake of GPGPU). But there are indeed noticeable differences:

  • To pass constants to the shaders, you now have to go through constant buffers (see the sketch after this list).

  • A Common-Shader Core: all types of shaders have access to the same set of intrinsic functions (with some exceptions, e.g. for the GS stage). Integer and bitwise operations are now fully IEEE-compliant (and not emulated via floating point). You now have access to binary casts to interpret an int as a float, a float as a uint, etc.

  • Textures and samplers have been dissociated. You now use the syntax g_myTexture.Sample( g_mySampler, texCoord ) instead of tex2D( g_mySampledTexture, texCoord ).

  • Buffers: a new kind of resource for accessing data that needs no filtering in a random-access way, using the new Object.Load function.

  • System-Value Semantics: a generalization and extension of the POSITION, DEPTH, and COLOR semantics, which are now SV_Position, SV_Depth, and SV_Target, plus new per-stage semantics like SV_InstanceID, SV_VertexID, etc.
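
On the constant buffer point, the API side looks roughly like this (a sketch with made-up names; on the HLSL side you would declare a matching cbuffer):

// C++ struct mirroring a cbuffer declared in the HLSL
struct PerFrameConstants
{
    FLOAT worldViewProj[16];   // buffer size must be a multiple of 16 bytes
};

D3D10_BUFFER_DESC cbDesc;
ZeroMemory(&cbDesc, sizeof(cbDesc));
cbDesc.ByteWidth = sizeof(PerFrameConstants);
cbDesc.Usage     = D3D10_USAGE_DEFAULT;
cbDesc.BindFlags = D3D10_BIND_CONSTANT_BUFFER;

ID3D10Buffer* pConstantBuffer = NULL;
pDevice->CreateBuffer(&cbDesc, NULL, &pConstantBuffer);

// Each frame: push new values and bind the buffer to the vertex shader stage
PerFrameConstants constants;   // fill in constants.worldViewProj here
pDevice->UpdateSubresource(pConstantBuffer, 0, NULL, &constants, 0, 0);
pDevice->VSSetConstantBuffers(0, 1, &pConstantBuffer);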

That's all I can see for now. If something new pops into my mind, I will update my answer.

Stringer Bell