Do I have to set up my GL context in a certain way to bind textures? I'm following a tutorial. I start by doing:
#define checkImageWidth 64
#define checkImageHeight 64
static GLubyte checkImage[checkImageHeight][checkImageWidth][4];
static GLuint texName;
void makeCheckImage(void)
{
int i, j, c;
for (i = 0; i < checkImageHeig...
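Generally no special context setup is needed beyond having a current GL context at the time the calls are made. For reference, a minimal sketch of the rest of the classic checker-texture setup (the tutorial's exact code is cut off above, so the parameters here are assumptions):

#include <GL/gl.h>

// Sketch: fill the checker image, create a texture object, and upload it.
// Assumes a current GL context and the checkImage/texName globals above.
void initCheckTexture(void)
{
    makeCheckImage();                              // fills checkImage[][][4]

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);         // rows are tightly packed

    glGenTextures(1, &texName);
    glBindTexture(GL_TEXTURE_2D, texName);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, checkImageWidth, checkImageHeight,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, checkImage);

    glEnable(GL_TEXTURE_2D);                       // fixed-function texturing on
}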
I'm trying to get MRT working in OpenGL to try out deferred rendering. Here's the situation as I understand it.
Create 3 render buffers (for example). Two RGBA8 and one Depth32.
Create an FBO.
Attach render buffers to FBO. ColorAttachment0/1 for color buffers, DepthAttachment for depth buffer.
Bind the FBO.
Draw geometry.
Send data to ...
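That list of steps matches how MRT is normally set up. A minimal sketch, under the assumption that the ARB framebuffer-object entry points are available (loaded here via GLEW); sizes and formats follow the description above:

#include <GL/glew.h>

// Sketch: an FBO with two RGBA8 color renderbuffers and one 32-bit depth
// renderbuffer, with both color attachments selected as draw buffers.
void setupMRT(int w, int h)
{
    GLuint fbo, colorRb[2], depthRb;

    glGenRenderbuffers(2, colorRb);
    for (int i = 0; i < 2; ++i) {
        glBindRenderbuffer(GL_RENDERBUFFER, colorRb[i]);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, w, h);
    }

    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, w, h);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb[0]);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_RENDERBUFFER, colorRb[1]);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_RENDERBUFFER, depthRb);

    // Both color attachments receive output; the fragment shader writes
    // gl_FragData[0] and gl_FragData[1].
    const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, bufs);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        ; // handle an incomplete FBO here
}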
I'm trying to generate textures like so:
#define checkImageWidth 64
#define checkImageHeight 64
static GLubyte checkImage[checkImageHeight][checkImageWidth][4];
static GLubyte otherImage[checkImageHeight][checkImageWidth][4];
static GLuint texName[2];
void makeCheckImages(void)
{
int i, j, c;
for (i = 0; i < checkImageHeight;...
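A minimal sketch of the two-texture version, assuming both images share the same size and format and keeping the per-texture parameters to the basics:

#include <GL/gl.h>

// Sketch: generate both texture names in one call, then bind and upload each
// image under its own texture object. Uses the globals declared above.
void initTextures(void)
{
    makeCheckImages();                             // fills checkImage and otherImage

    const GLvoid* images[2] = { checkImage, otherImage };

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glGenTextures(2, texName);
    for (int i = 0; i < 2; ++i) {
        glBindTexture(GL_TEXTURE_2D, texName[i]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, checkImageWidth, checkImageHeight,
                     0, GL_RGBA, GL_UNSIGNED_BYTE, images[i]);
    }
}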
I resize my window like this:
RECT clientRect;
GetClientRect(mainWindow,&clientRect);
glShadeModel(GL_SMOOTH);
MoveWindow(framehWnd,
toolWidth,
tabHeight,
((clientRect.right - clientRect.left) - toolWidth) - rightRemainder ,
(clientRect.bottom - clientRect.top) - tabHeight - paramHeight,
...
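The snippet is cut off, so this is only a guess at what might be missing: after moving or resizing the window that owns the GL context, the viewport and projection usually have to be updated to the new client size as well. A sketch, assuming framehWnd is the GL window and a gluPerspective projection:

// Sketch: keep the GL viewport/projection in sync with the resized child window.
RECT glRect;
GetClientRect(framehWnd, &glRect);
int w = glRect.right - glRect.left;
int h = glRect.bottom - glRect.top;
if (h == 0) h = 1;                                          // avoid a divide by zero

glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, (double)w / (double)h, 0.1, 1000.0);   // assumed projection
glMatrixMode(GL_MODELVIEW);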
I've been having trouble with this for a while now, and I haven't gotten any solutions that work yet. Here is the problem, and the specifics:
I am loading a 256x256 uncompressed TGA into a simple OpenGL program that draws a quad on the screen, but when it shows up, it is shifted about two pixels to the left, with the cropped part appea...
Hello,
I want to implement a paint-like application, which will enable kids to create and work with 3 dimensional objects.
How can I start?
What is the right approach? WPF, OpenGL, or Direct3D?
(I prefer C# solutions, but C++ is OK also).
Thank you all in advance!
--NewB
...
Hi,
I'm writing a pretty simple piece of code which should draw a plane. The plane must have two different textures on its sides, as if it were a book page.
I'm trying to achieve this by doing this:
glFrontFace(GL_CCW);
glBindTexture(GL_TEXTURE_2D, textures[kActiveSideLeft]);
glVertexPointer(3, GL_FLOAT, 0, vertexCoordinat...
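One way to get a different texture on each side is to draw the same plane twice with face culling, binding a different texture per pass. A sketch reusing the identifiers above; kActiveSideRight and the GL_TRIANGLE_STRIP draw call are assumptions:

glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW);

// Pass 1: only the front face is visible, using the "left" texture.
glCullFace(GL_BACK);
glBindTexture(GL_TEXTURE_2D, textures[kActiveSideLeft]);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// Pass 2: only the back face is visible, using the other texture.
glCullFace(GL_FRONT);
glBindTexture(GL_TEXTURE_2D, textures[kActiveSideRight]);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

glDisable(GL_CULL_FACE);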
I want to start on my WebGL project, and the minimum requirement is that my graphics card support OpenGL 2.0.
The problem is that I have an Intel laptop with an integrated Intel 965 graphics media accelerator; the driver is up to date and it supports OpenGL 1.5.
Is there any way to update my graphics card to support 2.0? Is this possible?
thx,
M...
Let's say I have 4 vertices and their texture coordinates. How could I then figure out the texture coords of a 5th vertex?
Thanks
say I have:
v1 = (0,0) tex coord(1,0)
v2....
v3...
v4...
v5 = (15,15) tex coord = ??
Yeah, linear interpolation I suppose.
To figure out the coords I do:
vec.x / polywidth;
vec.y / polyheight;
...
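That is the idea: for an axis-aligned quad the mapping is affine, so the texture coordinate of any extra point is just its normalized position inside the quad. A small sketch with hypothetical names; it assumes the quad runs from quadMin to quadMax with tex coords (0,0) to (1,1), and a different corner assignment (like the v1 example above) only flips or swaps the components:

#include <cstdio>

struct Vec2 { float x, y; };

// Sketch: interpolate the tex coord of a point inside an axis-aligned quad.
Vec2 texCoordFor(Vec2 p, Vec2 quadMin, Vec2 quadMax)
{
    Vec2 t;
    t.x = (p.x - quadMin.x) / (quadMax.x - quadMin.x);   // p.x / polywidth when the quad starts at 0
    t.y = (p.y - quadMin.y) / (quadMax.y - quadMin.y);   // p.y / polyheight when the quad starts at 0
    return t;
}

int main()
{
    Vec2 quadMin = { 0, 0 }, quadMax = { 20, 20 };       // assumed quad size
    Vec2 p = { 15, 15 };
    Vec2 t = texCoordFor(p, quadMin, quadMax);
    std::printf("tex coord = (%g, %g)\n", t.x, t.y);     // prints (0.75, 0.75)
}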
I've loaded a Wavefront .obj file and drawn it in immediate mode, and it works fine.
I'm now trying to draw the same model with a vertex buffer, but I have a question.
My model data is organized in the following structures:
struct Vert
{
double x;
double y;
double z;
};
struct Norm
{
double x;
double y;
double z;
};
struct ...
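For a vertex buffer it is usually easiest to flatten those structs into one contiguous float array, with the position and normal for each vertex stored together. A sketch, under the assumption that the vert and norm arrays have already been de-indexed so that element i of each belongs to the same final vertex:

#include <vector>
#include <GL/glew.h>

struct Vert { double x, y, z; };
struct Norm { double x, y, z; };

// Sketch: interleave position + normal as floats and upload them to one VBO.
GLuint buildVbo(const std::vector<Vert>& verts, const std::vector<Norm>& norms)
{
    std::vector<GLfloat> data;
    data.reserve(verts.size() * 6);
    for (size_t i = 0; i < verts.size(); ++i) {
        data.push_back((GLfloat)verts[i].x);
        data.push_back((GLfloat)verts[i].y);
        data.push_back((GLfloat)verts[i].z);
        data.push_back((GLfloat)norms[i].x);
        data.push_back((GLfloat)norms[i].y);
        data.push_back((GLfloat)norms[i].z);
    }

    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, data.size() * sizeof(GLfloat),
                 data.data(), GL_STATIC_DRAW);

    // Fixed-function pointers: 6 floats per vertex, normal stored after the position.
    GLsizei stride = 6 * sizeof(GLfloat);
    glVertexPointer(3, GL_FLOAT, stride, (const GLvoid*)0);
    glNormalPointer(GL_FLOAT, stride, (const GLvoid*)(3 * sizeof(GLfloat)));
    return vbo;
}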
I've got a screen-aligned quad, and I'd like to zoom into an arbitrary rectangle within that quad, but I'm not getting my math right.
I think I've got the translate worked out, just not the scaling. Basically, my code is the following:
//
// render once zoomed in
glPushMatrix();
glTranslatef(offX, offY, 0);
glScalef(?wtf?, ?wtf?, 1.0f...
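The scale is just the ratio between the quad size and the rectangle you want to fill it with, and because glTranslatef here is applied to the already-scaled coordinates, the offset is the rectangle corner scaled up and negated. A sketch with hypothetical names (quadW/quadH is the quad size, rectX/rectY/rectW/rectH the region to zoom into):

float scaleX = quadW / rectW;
float scaleY = quadH / rectH;

glPushMatrix();
glTranslatef(-rectX * scaleX, -rectY * scaleY, 0.0f);   // move the scaled rect corner to the origin
glScalef(scaleX, scaleY, 1.0f);
drawQuad();                                             // whatever draws the screen-aligned quad
glPopMatrix();

With that, the rectangle's lower-left corner lands on the quad's lower-left corner and its opposite corner lands on (quadW, quadH).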
I have some vertices and then I apply a glRotate(). I'd like to know what my vertices became after this transformation. How could I do this?
Thanks
...
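The fixed-function pipeline never writes the transformed vertices back, so you either repeat the math on the CPU or read the current modelview matrix and multiply your vertices by it yourself. A sketch of the read-back approach:

#include <GL/gl.h>

// Sketch: fetch the current modelview matrix (column-major) and apply it to a
// point to see what glRotate/glTranslate did to it. Assumes w = 1.
void transformByModelview(const GLfloat in[3], GLfloat out[3])
{
    GLfloat m[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, m);   // element (row, col) is m[col * 4 + row]

    for (int row = 0; row < 3; ++row) {
        out[row] = m[0 + row] * in[0]
                 + m[4 + row] * in[1]
                 + m[8 + row] * in[2]
                 + m[12 + row];            // translation column
    }
}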
I'm sure there's not just one answer to this, but do game engines actually change the vectors in memory, or use GL transformations? Because pushing and popping the matrix all the time seems inefficient, but if you keep modifying the vertices you can't make use of display lists. So I'm wondering how it's done in general. Thanks
...
I'm using the GLU tessellator for polygons. Right now the vertex callback does glVertex2f and glTexCoord2f. Would it be better simply to collect the vertices from the vertex callback in a std::vector and then use glDrawArrays()? Or would this actually be less efficient, since it has to put the verts and texture coordinates in a vector?
Thanks
...
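A sketch of the collect-then-draw variant, assuming a GLU_TESS_EDGE_FLAG callback is also registered (which forces the tessellator to emit plain GL_TRIANGLES, so one array and one glDrawArrays call suffice) and that each per-vertex data pointer holds x, y, z, s, t as GLdoubles:

#include <vector>
#include <GL/glu.h>

#ifndef CALLBACK
#define CALLBACK
#endif

struct TessVertex { GLfloat x, y, s, t; };
static std::vector<TessVertex> g_verts;

// Sketch: the GLU_TESS_VERTEX callback just appends to a CPU-side array.
void CALLBACK tessVertex(void* data)
{
    const GLdouble* v = static_cast<const GLdouble*>(data);  // x, y, z, s, t
    TessVertex out = { (GLfloat)v[0], (GLfloat)v[1], (GLfloat)v[3], (GLfloat)v[4] };
    g_verts.push_back(out);
}

// Sketch: draw everything the tessellator produced in one call.
void drawCollected(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(TessVertex), &g_verts[0].x);
    glTexCoordPointer(2, GL_FLOAT, sizeof(TessVertex), &g_verts[0].s);
    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)g_verts.size());
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}

Whether this beats immediate mode depends on the driver and how often the polygon changes; the copy into the vector is cheap next to per-vertex GL calls, and it also lets you cache the result when the polygon is static.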
I'm creating a drawing application with OpenGL. I've created an algorithm that generates gradient textures. I then map these to my polygons, and this works quite well. What I realized is how much memory this requires. Creating 1000 gradients takes about 800MB, and that's way too much. Is there an alternative to textures, or a way to compres...
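If the gradients are simple ramps between a few colors, one commonly suggested alternative (not something the post above mentions) is to store each gradient as a small 1D texture and let texture filtering interpolate between the stops; 256 RGBA texels is about 1 KB per gradient instead of a full-size 2D image. A sketch for a two-color linear gradient:

#include <GL/glew.h>

// Sketch: build a 256-texel RGBA 1D texture holding a linear two-color gradient.
GLuint makeGradientTexture(const GLubyte from[4], const GLubyte to[4])
{
    GLubyte texels[256][4];
    for (int i = 0; i < 256; ++i) {
        float t = i / 255.0f;
        for (int c = 0; c < 4; ++c)
            texels[i][c] = (GLubyte)(from[c] + t * (to[c] - from[c]));
    }

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_1D, tex);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA8, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, texels);
    return tex;
}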
I'm writing an OpenGL program where I compute my own matrices and pass them to shaders. I want to use Boost's uBLAS library for the matrices, but I have little idea how to get a uBLAS matrix into OpenGL's shader uniform functions.
matrix<GLfloat, column_major> projection(4, 4);
// Fill matrix
...
GLuint projectionU = glGetUniformLocat...
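With a dense column_major uBLAS matrix the storage order already matches what OpenGL expects, so it is enough to pass a pointer to the first stored element and leave the transpose flag off. A sketch (the uniform name and program handle are assumptions):

#include <boost/numeric/ublas/matrix.hpp>
#include <GL/glew.h>

using boost::numeric::ublas::matrix;
using boost::numeric::ublas::column_major;

// Sketch: a column_major uBLAS matrix stores element (i, j) at j*4 + i, which
// is exactly OpenGL's column-major layout, so the raw storage can be passed on.
void setProjection(GLuint program, const matrix<GLfloat, column_major>& projection)
{
    GLint projectionU = glGetUniformLocation(program, "projection");  // assumed uniform name
    glUniformMatrix4fv(projectionU, 1, GL_FALSE, &projection.data()[0]);
}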
I have a considerable number (120-240) of 640x480 images that will be displayed as textured flat surfaces (4-vertex polygons) in a 3D environment. About 30-50% of them will be visible in a given frame. It is possible for them to cross over. Nothing else will be present in the environment.
The question is - will the modern and/or few-year...
I'm using the following code to draw my circles:
double theta = 2 * 3.1415926 / num_segments;
double c = Math.Cos(theta);//precalculate the sine and cosine
double s = Math.Sin(theta);
double t;
double x = r;//we start at angle = 0
double y = 0;
GL.glBegin(GL.GL_LINE_LOOP);
for(int ii = 0; ii < num_segments; ii++)
{
float first = (f...
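The snippet above is cut off, so for reference here is the complete incremental-rotation loop as a C++ sketch (same idea: rotate the point (r, 0) by theta each step instead of calling sin/cos per vertex):

#include <cmath>
#include <GL/gl.h>

// Sketch: draw a line-loop circle of radius r around (cx, cy).
void drawCircle(float cx, float cy, float r, int num_segments)
{
    double theta = 2 * 3.1415926 / num_segments;
    double c = std::cos(theta);            // precalculate the sine and cosine
    double s = std::sin(theta);
    double t;
    double x = r;                          // we start at angle = 0
    double y = 0;

    glBegin(GL_LINE_LOOP);
    for (int ii = 0; ii < num_segments; ii++) {
        glVertex2d(cx + x, cy + y);
        t = x;                             // apply the 2D rotation matrix
        x = c * x - s * y;
        y = s * t + c * y;
    }
    glEnd();
}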
I don't want to update the whole texture every time I change a small part of it; what is the command for this?
And when I have mipmapping on, set with GL_GENERATE_MIPMAP, how optimized is that internally? Will it recalculate the whole image, or just the part I updated?
...
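The call for updating just part of an existing texture is glTexSubImage2D. As for GL_GENERATE_MIPMAP, the spec only says the derived levels are recomputed when the base level changes; whether the driver redoes the whole chain or only the affected region is implementation-dependent, so there is no general answer. A sketch of the partial update (the region parameters are placeholders):

#include <GL/gl.h>

// Sketch: replace only a sub-rectangle of an existing texture with new data.
void updateTextureRegion(GLuint tex, int xoff, int yoff, int w, int h,
                         const GLubyte* pixels /* w*h RGBA texels */)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0,      // mip level 0, the base image
                    xoff, yoff, w, h,      // region being replaced
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}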
Hi all. I'm completely new to OpenGL, so I have a question.
I need to apply high-quality texturing to a surface rendered as triangles. But when zooming in, I still see the triangles under the skin; it's not smooth. I use OpenGL's built-in mipmapping. So I wonder (looking at other products) whether I need to implement my own mipmapping algorithm...
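It is hard to say from the description alone whether the faceting comes from the geometry or from texture filtering, but before writing a custom scheme it is worth checking the filter settings: built-in mipmapping only affects minification, and magnification quality comes from GL_LINEAR filtering (plus anisotropic filtering where the extension exists). A sketch of the usual high-quality combination; skinTexture is a hypothetical name:

glBindTexture(GL_TEXTURE_2D, skinTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);  // trilinear
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Optional: anisotropic filtering, if EXT_texture_filter_anisotropic is supported.
GLfloat maxAniso = 1.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);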