tags:
views: 175
answers: 4

I have tried so many different strategies to get a usable noise function and none of them work. So, how do you implement perlin noise on an ATI graphics card in GLSL?

Here are the methods I have tried. I have tried putting the permutation and gradient data into a GL_RGBA 1D texture and calling the texture1D function. However, one call to this noise implementation requires 12 texture lookups, which kills the framerate.

I have tried uploading the permutation and gradient data into a uniform vec4 array, but the compiler won't let me index into the array unless the index is a constant. For example:

int i = 10;
vec4 a = noise_data[i];

will give a compiler error of this:

ERROR: 0:43: Not supported when use temporary array indirect index.

Meaning I can only retrieve the data like this:

vec4 a = noise_data[10];

I also tried writing the array directly into the shader, but I hit the same indexing issue. I hear NVIDIA graphics cards will actually allow this method, but ATI will not.

I tried making a function that returns a specific hard-coded data point for a given input index, but the function, being called 12 times and containing 64 if statements, made the linking time unbearable.

ATI does not support the "built in" noise functions for GLSL, and I can't just precompute the noise and import it as a texture, because I am dealing with fractals. This means I need the effectively unlimited detail of calculating the noise at run time.

So the overarching question is...

How?

A: 

This SimpleX noise stuff might do what you want.

genpfault
I've already looked through that site. Their implementation of classic noise also has 12 texture lookups, and their implementation of simplex noise requires 8. Eight is a little better, but it's still a big framerate killer. I have also taken a look at this site: http://www.gamedev.net/community/forums/topic.asp?topic_id=538517 which led me to try the array strategy.
Ned
its "simplex" btw, like the n-dimensional geometric figure analogous to triangles and tetrahedra
jheriko
A: 

Try adding #version 150 to the top of your shader.

The Fiddler
I get this message:

Fragment shader failed to compile with the following errors:
ERROR: 0:1: '' : Version number not supported by GL2
ERROR: compilation errors. No code generated.

Because I am trying to do this on Linux, I am stuck with ATI's fglrx driver for OpenGL support. This driver only supports up to these versions:

Status: Using OpenGL 2.1.8087 Release
Status: Using GLSL 1.20

Right now, I am moving my development environment to a Windows machine :( It also has an ATI graphics card, but the driver supports the latest versions of OpenGL and GLSL. I am still interested in a pre-1.50 answer.
Ned
To use version 1.50 of GLSL you need an OpenGL 3.2 context. Not that it matters for his problem anyway. Variable indexing has always been allowed for arrays. This is AMD's fault.
Mads Elvheim
+1  A: 

noise() is well known for not being implemented...

Roll your own:

int a, m;   // LCG multiplier and modulus; values discussed below
int c;      // increment, derived from the pixel position
int Xn;     // generator state

void srand(int x, int y, int width){ // coordinates in pixels
    c = x + y*width;
    Xn = c;  // seed the state as well
}

int rand(){
    Xn = (a*Xn + c) % m;
    return Xn;
}

For a and m values, see Wikipedia (linear congruential generator).

It's not perfect, but often good enough.

Calvin1602
I like how this answer is looking. Give me time to implement it.
Ned
If you find too much visual correlation between pixels, try a more sophisticated method for initializing c, e.g. c = x+y*width; c = rand();. The choice of a and m can make a big difference, too.
Calvin1602
+1  A: 

For better-distributed random values I suggest these very good articles:

  1. Pseudo Random Number Generator in GLSL
  2. Lumina noise GLSL tutorial

Have random fun !!!

0x69
Hey, this is pretty cool! This is great for generating random numbers, but how about normals to the noise? Essentially, the derivatives?
Ned