I have a fragment shader with the following:
const lowp float colors[8] = float[8]( // line 12
0,0,0,1,
1,0,0,1
);
but it fails to compile:
ERROR: 0:12: 'array of float' : array type not supported here in glsl < 120
ERROR: 0:12: 'array of float' : constructor not supported for type
ERROR: 0:15: 'array of float' : no matchin...
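For reference, here is a minimal sketch of what the declaration needs to look like once GLSL 1.20 is requested explicitly; the #version directive, the float literals, and the dropped lowp qualifier (desktop GLSL 1.20 has no precision qualifiers) are the assumptions here:
#version 120
// array constructors require GLSL >= 1.20
const float colors[8] = float[8](
    0.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0
);

void main()
{
    // hypothetical use: output the second RGBA colour stored in the array
    gl_FragColor = vec4(colors[4], colors[5], colors[6], colors[7]);
}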
I'm looking at options for GPGPU work. Say I write OpenCL kernel code or GLSL shader code and embed that in my executable. There's nothing to stop somebody grep-ing the binary and stealing my hard work. I could obscure or encrypt the strings and decrypt them just-in-time, but somebody can always go in with a debugger and intercept that j...
I was learning about how to use textures with GLSL (in LWJGL) to create a simple fragment shader that blurs a texture. The first attempt (for testing purposes) was a very simple shader which just takes the original color values:
uniform sampler2D rectTexture;
void main(){
vec4 color = texture2D(rectTexture, gl_FragCoord.st);
gl_Fra...
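For comparison, a sketch of the same passthrough idea using a varying texture coordinate instead of gl_FragCoord, since gl_FragCoord is in window pixels while texture2D expects coordinates in the 0..1 range; the varying name texCoord is an assumption:
uniform sampler2D rectTexture;
varying vec2 texCoord; // written by the vertex shader, already in 0..1

void main() {
    // passthrough: sample the texture and emit the colour unchanged
    vec4 color = texture2D(rectTexture, texCoord);
    gl_FragColor = color;
}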
I am currently learning GLSL. It would seem that once you learn one of the shader languages, learning one of the others would not be too difficult. Is it something analogous to learning a widget toolset like wxWidgets and then switching to Qt? Once you get the idea of what is happening within one widget toolset, another toolset will d...
I have a texture with a 3x9 repeating section. I don't want to store the tessellated 1920x1080 image that I have for the texture; I'd much rather generate it in code so that it can be applied correctly at other resolutions. Any ideas on how I can do this? The original texture is here: http://img684.imageshack.us/img684/6282/matte1.png
I ...
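One possible approach is to tile the small section procedurally in the fragment shader; a rough sketch, where the 3x9 patch is uploaded as its own tiny texture and the uniform names (patchTexture, patchSize) are made up for the example:
uniform sampler2D patchTexture; // the small 3x9 repeating section
uniform vec2 patchSize;         // e.g. vec2(3.0, 9.0)

void main() {
    // fract() wraps the pixel position into one patch-sized cell,
    // so the patch repeats across the whole screen at any resolution
    vec2 uv = fract(gl_FragCoord.xy / patchSize);
    gl_FragColor = texture2D(patchTexture, uv);
}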
Hello!
I'm writing an iPhone app which uses GLSL shaders to perform transformations on textures, but one point I'm having a bit of a hard time with is passing variables to my GLSL shader.
I've read that it's possible to have a shader read part of the OpenGL state (and I would only need read-only access to this variable), but I'm not...
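For what it's worth, the usual way to hand a value to a GLSL shader (rather than reading fixed-function state) is a uniform; a host-side sketch where the names myProgram, u_Offset and offsetValue are assumptions, not anything from the original post:
void SetShaderOffset(GLuint myProgram, float offsetValue)
{
    // GLSL side would declare: uniform float u_Offset;
    GLint offsetLoc = glGetUniformLocation(myProgram, "u_Offset");
    glUseProgram(myProgram);             // program must be bound before glUniform*
    glUniform1f(offsetLoc, offsetValue); // upload the value for this draw call
}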
I'm currently working on a couple of shaders for an iPad game and it seems as if Apple's GLSL compiler isn't doing any optimizations (or is doing very few). I can move a single line in a shader and drop my FPS from 30 to 24, but I really have no idea why this is happening.
Does anyone have any references for the following:
what PowerVR instruc...
Hi everybody,
After one day of trying to figure out how to implement a kd-tree in OpenGL/GLSL, I am pretty frustrated ...
I declare my kd-tree nodes like this in GLSL:
layout(std140) uniform node {
    ivec4 splitPoint; // xyz = split point, w = split direction
    int dataPtr;
} nodes[1024];
splitPoint holds the kd-tree split point; the fourth element of the vector holds the splitDire...
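In case it helps, an array of block instances like nodes[1024] creates 1024 separate uniform-block binding points; a sketch of the alternative layout usually used for this kind of data, i.e. one block containing an array of structs (assumes GLSL 1.40+ or the UBO extension, and that 1024 nodes fit the implementation's block size limit):
struct KdNode {
    ivec4 splitPoint; // xyz = split point, w = split direction
    int   dataPtr;
};

layout(std140) uniform NodeBlock {
    KdNode nodes[1024];
};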
Hi, I'm here to get help with a strange behavior of my GLSL code when I cast a float to an int; I have never seen such a bug since I started using GLSL.
Actually I'm trying to achieve mesh skinning on the GPU with GLSL.
I use an ATI Radeon HD 4850 (Gainward) and I work with OpenGL 2.1 on Windows XP.
So on the CPU side I gather bone indices and weights...
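For context, a sketch of what the GPU side of that setup typically looks like under OpenGL 2.1, with bone indices arriving as floats in a vertex attribute and being cast to int to index the matrix palette; the names and the palette size are assumptions, and the +0.5 before the cast is the usual guard against a value like 2.9999 truncating to 2:
uniform mat4 boneMatrices[32];  // matrix palette uploaded from the CPU
attribute vec4 boneIndices;     // bone indices stored as floats
attribute vec4 boneWeights;

void main() {
    mat4 skin = mat4(0.0);
    for (int i = 0; i < 4; ++i) {
        int idx = int(boneIndices[i] + 0.5); // the float -> int cast in question
        skin += boneMatrices[idx] * boneWeights[i];
    }
    gl_Position = gl_ModelViewProjectionMatrix * (skin * gl_Vertex);
}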
The problem is that my GLSL loader does not work and I don't see what I'm doing wrong.
void cShader::Load(const char *v_filename, const char *f_filename)
{
    char *vs, *fs;
    // create empty shader objects for the two stages
    vShaderList = glCreateShader(GL_VERTEX_SHADER);
    fShaderList = glCreateShader(GL_FRAGMENT_SHADER);
    // open the vertex shader source file in binary mode
    std::ifstream v_fs;
    v_fs.open(v_filename, std::ios::binary);
...
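One thing worth checking in a loader like this is that the source handed to glShaderSource is null-terminated (reading with std::ios::binary into a raw char* buffer makes it easy to miss the terminator). A sketch of a simple alternative using std::string; only ReadFile is new, the glShaderSource/glCompileShader calls are the standard API:
#include <fstream>
#include <iterator>
#include <string>

// read an entire shader source file into a null-terminated std::string
static std::string ReadFile(const char *filename)
{
    std::ifstream file(filename, std::ios::binary);
    return std::string((std::istreambuf_iterator<char>(file)),
                       std::istreambuf_iterator<char>());
}

// usage sketch inside cShader::Load:
// std::string vsSource = ReadFile(v_filename);
// const char *src = vsSource.c_str();
// glShaderSource(vShaderList, 1, &src, NULL); // NULL length => string is null-terminated
// glCompileShader(vShaderList);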
In C, I can debug code like:
fprintf(stderr, "blah: %f", some_var);
in GLSL ... is there any way for me to just dump out a value in a vertex or fragment shader? I don't care if it's slow; I just want to dump out the value. Ideally, I want a setup like the following:
normal state = run GLSL shader normally
I press key 'd'
then next...
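A common low-tech substitute for printf in a shader is to write the value you want to inspect into the output colour and eyeball it (or read it back with glReadPixels); a fragment-shader sketch where someVar stands in for whatever value you care about:
varying float someVar; // stand-in for the value you want to dump

void main() {
    // colours are clamped to 0..1, so map the value into that range first
    float v = clamp(someVar, 0.0, 1.0);
    gl_FragColor = vec4(v, 0.0, 0.0, 1.0); // value shows up as the amount of red
}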
I need a shader that starts with a given texture, then each frame of animation operates on the previous output of the shader plus other input.
How do I organize the framebuffers so that each frame has access to the output of the previous frame without having to move the buffer back and forth from the CPU?
...
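The usual arrangement is two textures attached to framebuffer objects used in ping-pong fashion: render into one while sampling the other, then swap the roles, so the data never leaves the GPU. A rough host-side sketch (FBO/texture creation and the actual draw call are omitted; all names are assumptions):
GLuint fbo[2], tex[2]; // two FBOs, each with one of the textures attached
int src = 0, dst = 1;

void RenderFrame()
{
    // draw into the destination buffer while sampling the source texture
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);
    glBindTexture(GL_TEXTURE_2D, tex[src]);
    // ... bind the feedback shader, set the other inputs, draw a fullscreen quad ...

    // swap roles: this frame's output is next frame's input
    src = 1 - src;
    dst = 1 - dst;

    // to display the result, bind the default framebuffer and
    // draw tex[src] (the buffer just written) as a textured quad
}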
I know that iPhone uses OpenGL ES 2.0, but I don't know the version of the underlying language GLSL. Is it 1.3, 1.4, 2.0, other?
Thanks.
...
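You can also just ask the driver at runtime; a small sketch using the standard query (on an OpenGL ES 2.0 device the reported string has the form "OpenGL ES GLSL ES <version>"):
#include <cstdio>
// assumes the ES 2.0 headers (e.g. <OpenGLES/ES2/gl.h> on iOS) and a current context

void PrintGlslVersion()
{
    const GLubyte *glslVersion = glGetString(GL_SHADING_LANGUAGE_VERSION);
    printf("GLSL version: %s\n", glslVersion);
}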
Hi,
I am working on an OpenGL ES 2.0 shader and I have tightly packed data, e.g. three 5-bit unsigned integers within a block of two bytes. To unpack this data I obviously need bit-shifting, but this is not supported in the OpenGL ES Shading Language (see page 29, http://www.opengl.org/registry/doc/GLSLangSpec.Full.1.20.8.pdf)
Consequently I ...
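The usual workaround is to emulate the shifts and masks with floor(), mod() and divisions by powers of two. A sketch, assuming the two packed bytes reach the shader as one float holding a value in 0..65535 (how that float is produced depends on the vertex format, and it needs highp to be exact):
// unpack three 5-bit unsigned integers from a 16-bit value
// x >> n  becomes  floor(x / pow(2, n));   x & 31  becomes  mod(x, 32.0)
vec3 unpack3x5(highp float bits) {
    float a = mod(bits, 32.0);                 // bits 0..4
    float b = mod(floor(bits / 32.0), 32.0);   // bits 5..9
    float c = mod(floor(bits / 1024.0), 32.0); // bits 10..14
    return vec3(a, b, c);
}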
I'm trying to implement textured points (e.g. point sprites) in OpenGL ES 2.0 for a particle system. The problem I'm having is that the points all render as solid black squares, rather than having the texture properly mapped.
I have verified that gl_PointCoord is in fact returning x/y values from 0.0 to 1.0, which would map across the entir...
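For reference, the entire fragment shader for a textured point sprite only needs gl_PointCoord as the texture coordinate; the sampler name is an assumption. If this still comes out black, the texture itself is usually incomplete (e.g. a mipmapping filter with no mipmaps uploaded) or not bound to the unit the sampler points at:
precision mediump float;
uniform sampler2D u_texture;

void main() {
    // gl_PointCoord runs from 0.0 to 1.0 across the point's quad
    gl_FragColor = texture2D(u_texture, gl_PointCoord);
}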
Hi there,
I'm trying to use GLM vector classes in STL containers. No big deal as long as I don't try to use <algorithm>. Most algorithms rely on the == operator, which is not implemented for GLM classes.
Does anyone know an easy way to work around this? Without (re-)implementing the STL algorithms :(
Kind Regards,
Florian
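One low-effort workaround is to hand the algorithm an explicit predicate instead of relying on operator==, built on glm::equal/glm::all (which GLM does provide); a pre-C++11 sketch with a functor, where the names are made up:
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

// component-wise equality predicate usable with <algorithm>
struct Vec3Equals {
    glm::vec3 value;
    explicit Vec3Equals(const glm::vec3 &v) : value(v) {}
    bool operator()(const glm::vec3 &other) const {
        return glm::all(glm::equal(value, other)); // true only if all components match
    }
};

// usage sketch:
// std::vector<glm::vec3> points;
// std::vector<glm::vec3>::iterator it =
//     std::find_if(points.begin(), points.end(), Vec3Equals(glm::vec3(1.0f)));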
GLM is a great math libra...
I'm running some experiments in WebGL, one of them being an XOR effect fragment shader. For some reason all the bitwise operators are reserved in GLSL and cause a compiler error when used. Why are these operators illegal? What can I use instead of | in this case?
...
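For the XOR effect, the usual workaround in GLSL ES 1.00 (which WebGL uses) is to emulate the operator bit by bit with floor() and mod(); a sketch assuming 8-bit operands:
precision mediump float;

// emulate (a ^ b) for non-negative 8-bit integer values stored in floats
float xor8(float a, float b) {
    float result = 0.0;
    float bit = 1.0;
    for (int i = 0; i < 8; ++i) {
        float abit = mod(floor(a / bit), 2.0);
        float bbit = mod(floor(b / bit), 2.0);
        result += bit * mod(abit + bbit, 2.0); // XOR of two single bits
        bit *= 2.0;
    }
    return result;
}

void main() {
    // classic XOR pattern over the screen coordinates
    float v = xor8(mod(gl_FragCoord.x, 256.0), mod(gl_FragCoord.y, 256.0)) / 255.0;
    gl_FragColor = vec4(vec3(v), 1.0);
}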
The OpenGL Superbible 5th Edition was recently released, and it documents OpenGL 3.3. Unfortunately, OS X only supports OpenGL 2.1 and GLSL version 1.20. The very first non-trivial vertex shader they give you fails to compile with the error message:
ERROR: 0:5: '' : Version number not supported by GL2
ERROR: 0:8: 'in' : syntax error...
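The book's shaders target GLSL 3.30, so on OS X they have to be back-ported to #version 120, which mostly means replacing in/out with attribute/varying and dropping layout qualifiers. A sketch of roughly what a converted pass-through vertex shader looks like (the variable names are placeholders, not necessarily the book's):
#version 120

attribute vec4 vVertex;     // was: in vec4 vVertex;
attribute vec4 vColor;      // was: in vec4 vColor;
varying vec4 vVaryingColor; // was: out vec4 vVaryingColor;

void main() {
    vVaryingColor = vColor;
    gl_Position = vVertex;
}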
Hi,
There have been similar threads before, but I could not find a solution in them. My problem is getting more than one texture accessible in a GLSL shader.
Here's what I'm doing:
Shader:
uniform sampler2D sampler0;
uniform sampler2D sampler1;
uniform float blend;

void main( void )
{
    vec2 coords = gl_TexCoord[0].st; // gl_TexCoord[0] is a vec4; take its .st components
    vec4 col = textu...
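The most common cause of only one texture showing up is that both samplers end up pointing at texture unit 0; each sampler uniform has to be set to a distinct unit, and each texture bound while that unit is active. A host-side sketch using the shader's own uniform names (the program/texture handles are assumptions):
// bind two textures to separate units and point the shader's samplers at them
void BindTextures(GLuint program, GLuint texture0, GLuint texture1, float blendValue)
{
    glUseProgram(program);

    glActiveTexture(GL_TEXTURE0);            // unit 0 -> sampler0
    glBindTexture(GL_TEXTURE_2D, texture0);
    glUniform1i(glGetUniformLocation(program, "sampler0"), 0);

    glActiveTexture(GL_TEXTURE1);            // unit 1 -> sampler1
    glBindTexture(GL_TEXTURE_2D, texture1);
    glUniform1i(glGetUniformLocation(program, "sampler1"), 1);

    glUniform1f(glGetUniformLocation(program, "blend"), blendValue);
}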
I have tried so many different strategies to get a usable noise function and none of them work. So, how do you implement perlin noise on an ATI graphics card in GLSL?
Here are the methods I have tried:
I have tried putting the permutation and gradient data into a GL_RGBA 1D texture and calling the texture1D function. However, one call t...
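One texture-free fallback that behaves the same across vendors is a purely arithmetic hash plus value noise; a sketch (this is the widely circulated sin-based hash, so it is value noise rather than true Perlin noise):
// cheap 2D value noise built from a sin-based hash; no textures required
float hash(vec2 p) {
    return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}

float valueNoise(vec2 p) {
    vec2 i = floor(p);
    vec2 f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f); // smoothstep-style interpolation weights
    // bilinear blend of the four corner hashes
    return mix(mix(hash(i),                  hash(i + vec2(1.0, 0.0)), u.x),
               mix(hash(i + vec2(0.0, 1.0)), hash(i + vec2(1.0, 1.0)), u.x),
               u.y);
}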