This should be really simple, but it has consumed many hours of my time and I have no clue what's going on.

I'm rendering a flat-colored full-screen quad to a texture and then reading the result back with glGetTexImage. It's GPGPU-related, so I want the alpha channel to behave exactly like the other three. I'm using an FBO with texture format GL_RGBA32F_ARB, on an NVIDIA card in a MacBook Pro running OS X 10.5, in case it matters.

I only get back the correct color if the alpha I specify is one; with any other value the result appears to be blended with whatever is already in the framebuffer, even though I've explicitly disabled GL_BLEND. I also tried enabling blending and using glBlendFunc(GL_ONE, GL_ZERO), but the end result is the same. Clearing the framebuffer to zero before rendering fixes it, but I want to understand why that's necessary. As a second test, rendering two overlapping quads also gives a blended result, when I just want the original four-channel color back. Surely a solid-colored quad should completely overwrite the pixels in the framebuffer? I'm guessing I've misunderstood something fundamental. Thanks. (A small sanity-check sketch follows the code below.)

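 // Create a res x res floating-point RGBA texture to render into.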
 const size_t res = 16;
 GLuint tex;
 glGenTextures(1, &tex);
 glBindTexture(GL_TEXTURE_2D, tex);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_FALSE);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB,
  res, res, 0, GL_RGBA, GL_FLOAT, 0);
 glBindTexture(GL_TEXTURE_2D, 0);

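 // Attach the texture to an FBO and select it as the draw buffer.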
 GLuint fbo;
 glGenFramebuffersEXT(1, &fbo);
 glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
 glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
  GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, tex, 0);
 glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);

 glViewport(0, 0, res, res);
 glMatrixMode(GL_PROJECTION);
 glLoadIdentity();
 glOrtho(0, res, 0, res, -1, 1);

 glClearColor(0,0,0,0);
 glClear(GL_COLOR_BUFFER_BIT);

 //glEnable(GL_BLEND);
 //glBlendFunc(GL_ONE, GL_ZERO);
 glDisable(GL_BLEND);
 glDisable(GL_DEPTH_TEST);

 glColor4f(0.2, 0.3, 0.4, 0.5);

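 // Draw the same solid-colored full-screen quad twice (the overlapping-quads test).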
 for (int i=0; i<2; ++i) {
  glBegin(GL_QUADS);
  glVertex2i(0,0);
  glVertex2i(res, 0);
  glVertex2i(res, res);
  glVertex2i(0, res);
  glEnd();
 }

 glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

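 // Read the rendered texture back to the CPU.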
 std::vector<float> tmp(res*res*4);
 glBindTexture(GL_TEXTURE_2D, tex);
 glGetTexImage(GL_TEXTURE_2D, 0,
  GL_RGBA, GL_FLOAT, &tmp.front());
 const float * const x = &tmp.front();
 cerr << x[0] << " " << x[1] << " " << x[2] << " " << x[3] << endl;
 // prints 0.3 0.45 0.6 0.75

 glDeleteTextures(1, &tex);
 glDeleteFramebuffersEXT(1, &fbo);
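
For completeness, here is a small sanity check (untested) that could go right after the draw calls, before the FBO is unbound, to rule out an incomplete FBO or a problem in the glGetTexImage readback path:

 // Confirm the FBO is actually complete.
 GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
 if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
  cerr << "FBO incomplete, status " << status << endl;

 // Read the attachment directly, bypassing glGetTexImage.
 std::vector<float> check(res*res*4);
 glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
 glReadPixels(0, 0, res, res, GL_RGBA, GL_FLOAT, &check.front());
 cerr << check[0] << " " << check[1] << " "
  << check[2] << " " << check[3] << endl;

 // Report any errors raised along the way.
 if (GLenum err = glGetError())
  cerr << "GL error " << err << endl;
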
A:

Not really a complete answer, but a few things to note:

  • What you're observing doesn't really look like blending. For one thing, the buffer you render into is initially rgba = 0, so alpha-blending against it would give 0, not the non-zero values you're observing.
  • My first inclination was that you had somehow bound the same texture both as a texture source and as the framebuffer attachment; that is undefined in the spec (section 4.4.3). In the snippet you provide you do call glBindTexture(GL_TEXTURE_2D, 0), which should rule that out, but I'll leave it here in case you've missed it (a quick way to check both points is sketched below).
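
Something like the following, untested and reusing the tex variable from your snippet, would catch both situations while the FBO is still bound:

 // Is the render-target texture also bound as a texture source?
 GLint boundTex = 0;
 glGetIntegerv(GL_TEXTURE_BINDING_2D, &boundTex);
 if ((GLuint)boundTex == tex)
  glBindTexture(GL_TEXTURE_2D, 0); // avoid the undefined feedback case

 // Is blending (or anything else that modifies fragments) really off?
 cerr << "GL_BLEND: " << (glIsEnabled(GL_BLEND) ? "on" : "off") << endl;
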
Bahbar