What are Vertex and Pixel shaders?

What is the difference between them? Which one is better?

Thanks!

+8  A: 

A Pixel Shader is a GPU (Graphics Processing Unit) component that can be programmed to operate on a per-pixel basis, taking care of things like lighting and bump mapping.
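As a rough illustration, a minimal GLSL fragment (pixel) shader for per-pixel diffuse lighting might look like the following sketch; the names (vNormal, lightDir, baseColour) are illustrative, not from any particular engine:

    // Minimal per-pixel diffuse (Lambert) lighting sketch.
    #version 330 core
    in vec3 vNormal;          // surface normal, interpolated per pixel
    uniform vec3 lightDir;    // direction towards the light, normalised
    uniform vec3 baseColour;  // material colour
    out vec4 fragColour;
    void main() {
        float diffuse = max(dot(normalize(vNormal), lightDir), 0.0);
        fragColour = vec4(baseColour * diffuse, 1.0);
    }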

A Vertex Shader is also a GPU component, programmed using a similar assembly-like language, but it is oriented to the scene geometry and can do things like adding cartoony silhouette edges to objects (see the sketch below).
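For example, the vertex-shader half of one common cartoon-outline technique pushes each vertex outward along its normal, so that a second, flat-coloured draw of the mesh shows up as a silhouette. This is a minimal sketch, and the uniform names (mvp, outlineWidth) are assumptions:

    // Inflate the mesh along vertex normals for a silhouette pass.
    #version 330 core
    layout(location = 0) in vec3 position;
    layout(location = 1) in vec3 normal;
    uniform mat4 mvp;            // combined model-view-projection matrix
    uniform float outlineWidth;  // how far to inflate the mesh
    void main() {
        vec3 inflated = position + normal * outlineWidth;
        gl_Position = mvp * vec4(inflated, 1.0);
    }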

Neither is better than the other; each has its specific uses. Most modern graphics cards supporting DirectX 9 or better include these capabilities.

There are multiple resources on the web for gaining a better understanding of how to use these things. The NVIDIA and ATI sites in particular are good sources of documentation on this topic.

Scott Evernden
+5  A: 

Vertex and Pixel shaders provide different functions within the graphics pipeline. Vertex shaders take and process vertex-related data (positions, normals, texture coordinates).

Pixel (or more accurately, Fragment) shaders take values interpolated from those processed in the Vertex shader and generate pixel fragments. Most of the "cool" stuff is done in pixel shaders. This is where things like texture lookup and lighting take place.
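To make the hand-off concrete, here is a minimal sketch of a GLSL vertex/fragment pair: the vertex shader writes a texture coordinate per vertex, the hardware interpolates it across the triangle, and the fragment shader uses the interpolated value for a texture lookup. The names (vTexCoord, diffuseMap, mvp) are illustrative:

    // --- vertex shader ---
    #version 330 core
    layout(location = 0) in vec3 position;
    layout(location = 1) in vec2 texCoord;
    uniform mat4 mvp;
    out vec2 vTexCoord;        // interpolated across the primitive
    void main() {
        vTexCoord = texCoord;
        gl_Position = mvp * vec4(position, 1.0);
    }

    // --- fragment shader ---
    #version 330 core
    in vec2 vTexCoord;         // arrives here interpolated per pixel
    uniform sampler2D diffuseMap;
    out vec4 fragColour;
    void main() {
        fragColour = texture(diffuseMap, vTexCoord);  // texture lookup
    }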

geofftnz
+1  A: 

In terms of development, a Pixel shader is a small program that operates on each pixel individually; similarly, a Vertex shader operates on each vertex individually.

These can be used to create special effects, shadows, lighting, etc...

Since each Pixel/Vertex is operated on individually these shaders lend themselves to the highly parallel architecture of modern graphics processors.

Stinomus
+6  A: 

DirectX 10 and OpenGL 3.2 introduced the Geometry Shader as a third type.

In rendering pipeline order -

Vertex Shader - Takes a single vertex and can adjust it. Can be used to work out complex vertex lighting calculations as a setup for the next stage and/or warp the points around (wobble, scale, etc).

each resulting primitive (assembled from the processed vertices) gets passed to the

Geometry Shader - Takes a primitive shape (point, line, triangle or a list of each type) and can perform calculations on it. It can add new points, take them away or move them as required. This can be used to add or remove levels of detail dynamically from a single base mesh, to create mathematical meshes based on a point (for complex particle systems; see the sketch after this walkthrough) and other similar tasks.

each resulting primitive gets scanline converted and each pixel the span covers gets passed through the

Pixel Shader (Fragment Shader in OpenGL) - Calculates the colour of a pixel on the screen based on what the earlier stages pass in, bound textures and user-added data. It cannot read the current screen at all; it just works out what colour/transparency that pixel should be for the current primitive.

those pixels then get put on the current draw buffer (screen, backbuffer, render-to-texture, whatever)
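As a sketch of the geometry stage, the following GLSL geometry shader expands each incoming point into a small screen-space quad, the building block of the particle-system technique mentioned above. The halfSize uniform is an illustrative assumption:

    // Expand each point primitive into a small quad (two-triangle strip).
    #version 330 core
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;
    uniform float halfSize;   // half the quad size, in clip space
    void main() {
        vec4 p = gl_in[0].gl_Position;
        gl_Position = p + vec4(-halfSize, -halfSize, 0.0, 0.0); EmitVertex();
        gl_Position = p + vec4( halfSize, -halfSize, 0.0, 0.0); EmitVertex();
        gl_Position = p + vec4(-halfSize,  halfSize, 0.0, 0.0); EmitVertex();
        gl_Position = p + vec4( halfSize,  halfSize, 0.0, 0.0); EmitVertex();
        EndPrimitive();
    }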

All shaders can access global data such as the world view matrix, and the developer can pass in simple variables for them to use for lighting or any other purpose. Shaders were originally written in an assembly-like language, but modern DirectX and OpenGL versions include compilers for high-level C-like shading languages, called HLSL and GLSL respectively. NVIDIA also has a shading language called Cg that works with both APIs.

Dan Brown