Exposition:
1) My viewport is 800x600
2) I have tex1 = frame buffer object; rendered to a texture, 800x600
3) I have tex2 = frame buffer object; rendered to a texture, 800x600
Now, I want to create the following image on the screen:
tex1 _on top of_ tex2.
Where tex1 is black, display tex2's pixel; otherwise, display tex1's pixel.
Is th...
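One way to express that selection is a small fragment shader over a full-screen quad, with tex1 and tex2 bound to two texture units. A minimal GLSL sketch; the sampler names and the "black" threshold are my assumptions, not from the post:

uniform sampler2D tex1;
uniform sampler2D tex2;

void main()
{
    vec4 c1 = texture2D(tex1, gl_TexCoord[0].st);
    vec4 c2 = texture2D(tex2, gl_TexCoord[0].st);
    /* treat the pixel as "black" if all color channels are (near) zero */
    float isBlack = step(dot(c1.rgb, c1.rgb), 0.0001);
    gl_FragColor = mix(c1, c2, isBlack);
}

mix() returns c1 where isBlack is 0.0 and c2 where it is 1.0, so tex2 shows through only where tex1 is black.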
I want to use a custom algorithm to generate mipmaps for some renderable textures (R16F & RGBA16F).
Mipmap chains for all the needed textures are pre-allocated by calling glGenerateMipmapEXT().
The biggest problem so far is rendering into mipmap levels 1 and above.
More precisely,
this works like a charm:
...
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
g...
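For comparison, here is how the non-zero levels are usually attached; a hedged sketch, where tex, fbo, baseWidth/baseHeight and drawFullscreenQuad() are placeholders. The two classic pitfalls are forgetting to shrink the viewport to the target level's size, and sampling the same level you are rendering to (a feedback loop), which clamping GL_TEXTURE_BASE_LEVEL/GL_TEXTURE_MAX_LEVEL to the previous level avoids:

/* render into mip level `level` (>= 1) of `tex`, reading from level - 1 */
int w = baseWidth  >> level;   /* size of the target mip level */
int h = baseHeight >> level;

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, level);   /* attach mip `level` */
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
glViewport(0, 0, w, h);

/* restrict sampling to the previous level so the attached level
   is never read and written in the same pass */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, level - 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL,  level - 1);

drawFullscreenQuad();   /* runs the custom downsampling shader */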
We have some nightly build machines that have the CUDA libraries installed, but which do not have a CUDA-capable GPU installed. These machines are capable of building CUDA-enabled programs, but they are not capable of running these programs.
In our automated nightly build process, our CMake scripts use the command
find_package(C...
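One pattern that fits this split (build everywhere, run only where a device exists) is a runtime guard in the test binaries themselves, so they skip cleanly on the GPU-less machines. A minimal sketch, assuming the nightly harness treats exit code 0 as a skip/pass:

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);

    /* with the toolkit installed but no CUDA-capable GPU,
       cudaGetDeviceCount() fails or reports zero devices */
    if (err != cudaSuccess || count == 0) {
        fprintf(stderr, "no CUDA device found, skipping GPU tests\n");
        return 0;
    }

    printf("found %d CUDA device(s)\n", count);
    /* ... run the real GPU tests here ... */
    return 0;
}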
Hi,
I am writing a MATLAB program (with CUDA) to generate keys.
How can I optimize the CUDA code to get better performance?
...
Hi,
It looks like GL has become mainstream for all gaming platforms (even handheld!)
This has pushed the deployment of modern GPU chipsets to large numbers of consumers.
This is amazing.
With the modern GPU systems out there now, is it possible to do generic old-school graphics
programming (i.e., blit from rect X to rect Y using VRAM...
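On current hardware the closest thing to that rect-to-rect VRAM blit is a framebuffer blit. A minimal sketch, assuming EXT_framebuffer_blit is available; fboSrc/fboDst, the source rectangle (sx, sy, sw, sh) and the destination rectangle (dx, dy, dw, dh) are placeholders:

/* copy a rectangle from fboSrc into fboDst, scaling if the sizes differ */
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, fboSrc);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, fboDst);
glBlitFramebufferEXT(sx, sy, sx + sw, sy + sh,
                     dx, dy, dx + dw, dy + dh,
                     GL_COLOR_BUFFER_BIT, GL_NEAREST);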
hi,
I'm using OpenGL as the bottom end for a 2D tiling engine.
When everything is 2D, it is simple to optimize certain issues.
For example, scrolling. If I know a certain section of the screen
needs to scroll off the bottom, then I can just blit over that portion.
I'm even moving more than 1 pixel at a time. Without explicit hardware...
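For the scroll itself, legacy GL can copy a screen rectangle in place with glCopyPixels; a sketch for an 800x600 back buffer, with the scroll distance dy as a placeholder:

/* shift the existing image down by dy pixels (GL's origin is bottom-left) */
glReadBuffer(GL_BACK);
glDrawBuffer(GL_BACK);
glWindowPos2i(0, 0);                           /* destination: bottom-left */
glCopyPixels(0, dy, 800, 600 - dy, GL_COLOR);  /* source starts dy rows up */
/* the freed-up strip at the top is then redrawn with fresh tiles */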
What are the pros and cons of choosing the PS3 as a platform for scientific computing instead of GPUs? Is it the better choice?
...
Should I learn OpenCL if I only want to program NVIDIA GPUs?
...
I have read that there is a 100X speedup on certain problems when you use an NVIDIA GPU instead of a CPU.
What are the best speedups achieved using CUDA on different problems?
Please state the problem and the acceleration factor, along with links to papers if possible.
...
Current GPU threads are somewhat limited (memory limits, limits on data structures, no recursion...).
Do you think it would be feasible to implement a graph-theory problem on a GPU? For example vertex cover? Dominating set? Independent set? Max clique?...
Is it also feasible to run branch-and-bound algorithms on GPUs? Recursive bac...
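The missing recursion is usually worked around with an explicit per-thread stack, which is also how branch-and-bound backtracking would be expressed. A toy CUDA sketch of that transformation; it just counts the leaves of an implicit binary tree iteratively, where a CPU version would recurse, so it illustrates the stack idiom rather than a real solver:

#include <cuda_runtime.h>
#include <cstdio>

#define MAX_STACK 64

__global__ void count_leaves(int depth, unsigned long long *total)
{
    int stack[MAX_STACK];             /* explicit per-thread stack */
    int top = 0;
    unsigned long long leaves = 0;

    stack[top++] = 0;                 /* push the root */
    while (top > 0) {
        int d = stack[--top];         /* pop: replaces the recursive call */
        if (d == depth) { ++leaves; continue; }
        stack[top++] = d + 1;         /* push left child  */
        stack[top++] = d + 1;         /* push right child */
    }
    atomicAdd(total, leaves);         /* combine per-thread results */
}

int main()
{
    unsigned long long *total, host = 0;
    cudaMalloc(&total, sizeof *total);
    cudaMemset(total, 0, sizeof *total);
    count_leaves<<<1, 32>>>(10, total);   /* 32 threads, depth-10 tree */
    cudaMemcpy(&host, total, sizeof host, cudaMemcpyDeviceToHost);
    printf("%llu leaves (expect 32 * 1024)\n", host);
    cudaFree(total);
    return 0;
}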
Hi,
I am writing a report, and I would like to know, in your opinion, which open-source physical simulation methods (like Molecular Dynamics, Brownian Dynamics, etc.) that have not been ported yet would be worth porting to a GPU or to other special hardware that could potentially speed up the calculation.
Links to the projects would be really appreciat...
Is there any C library for Linux to get GPU information, for example the BIOS version, DigitalID...
...
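On NVIDIA hardware one option is NVML, the monitoring library that ships with the driver (it is vendor-specific, and I'm not sure it exposes a "DigitalID" as such, though it does report the VBIOS version). A minimal sketch, linked with -lnvidia-ml:

#include <nvml.h>
#include <stdio.h>

int main(void)
{
    nvmlDevice_t dev;
    char name[96], vbios[96];

    if (nvmlInit() != NVML_SUCCESS) return 1;
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS) {
        nvmlDeviceGetName(dev, name, sizeof name);
        nvmlDeviceGetVbiosVersion(dev, vbios, sizeof vbios);
        printf("GPU 0: %s, VBIOS %s\n", name, vbios);
    }
    nvmlShutdown();
    return 0;
}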
I'm trying to write Life in F# using Accelerator v2, but for some odd reason my output isn't square, despite all my arrays being square - it appears that everything but a rectangular area in the top left of the matrix is being set to false. I've got no idea how this could be happening, as all my operations should treat the entire array eq...
Hello,
I have to convert several full-PAL videos (720x576@25) from YUV 4:2:2 to RGB in real time, and probably do a custom resize for each.
I have thought of using the GPU, as I have seen an example that does just this (except that it's 4:4:4, so the bpp is the same in source and destination)-- http://www.fourcc.org/source/YUV420P-OpenGL-GLS...
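The color-space conversion itself is only a few shader instructions, and the custom resize comes almost for free from GL's linear filtering when the quad is drawn at the target size; the 4:2:2 case just adds an unpacking step that depends on the packing (UYVY vs. YUY2). A GLSL sketch of the BT.601 math, assuming (my assumption) that Y, U, V have already been unpacked into the texture's .rgb channels:

uniform sampler2D yuvTex;

void main()
{
    vec3 yuv = texture2D(yuvTex, gl_TexCoord[0].st).rgb;
    float y = yuv.r;
    float u = yuv.g - 0.5;   /* chroma is stored biased by 0.5 */
    float v = yuv.b - 0.5;

    /* BT.601 YUV -> RGB */
    gl_FragColor = vec4(y + 1.402 * v,
                        y - 0.344 * u - 0.714 * v,
                        y + 1.772 * u,
                        1.0);
}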
Hi, I'm programming a simple OpenGL program on a multi-core computer that has a GPU. The GPU is a simple GeForce with PhysX, CUDA and OpenGL 2.1 support. When I run this program, is it the host CPU that executes the OpenGL-specific commands, or are they transferred directly to the GPU?
...
Does anyone know what part of the OpenGL ES thread on an Android device runs on the GPU (if it has one)? Just the method calls you make with the GL10 adapter class, or the complete onDraw method in your custom Renderer class?
...
Hello.
Can you tell me how the CUDA runtime chooses a GPU device if two or more host threads use the runtime?
Does the runtime choose a separate GPU device for each thread?
Does the GPU device need to be set explicitly?
Thanks
...
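As far as I know, each host thread gets its own CUDA context, every thread defaults to device 0, and the runtime never spreads threads across devices for you, so with two GPUs each thread must call cudaSetDevice() itself. A minimal pthreads sketch with the device ids hard-coded for illustration:

#include <cuda_runtime.h>
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    int dev = *(int *)arg;
    cudaSetDevice(dev);   /* must come before the thread's first CUDA call */
    /* ... cudaMalloc, kernel launches, etc. now target this device ... */
    printf("thread bound to device %d\n", dev);
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int ids[2] = { 0, 1 };
    for (int i = 0; i < 2; ++i) pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; ++i) pthread_join(t[i], NULL);
    return 0;
}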
As far as I know, certain mathematical functions like FFTs and perlin noise, etc. can be much faster when done on the GPU as a pixel shader. My question is, if I wanted to exploit this to calculate results and stream to bitmaps, could I do it without needing to actually display it in Silverlight or something?
More specifically, I was th...
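I can't speak to Silverlight's API, but in OpenGL terms the answer is yes: render the "computation" into an off-screen FBO and read the pixels back, with nothing ever reaching the screen. A fragment-style sketch where fbo, width, height and drawFullscreenQuad() are placeholders:

/* run the compute shader into an off-screen target */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glViewport(0, 0, width, height);
drawFullscreenQuad();

/* read the result back; `pixels` can be copied straight into a bitmap */
unsigned char *pixels = malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);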
I was shocked when I read this (from the OpenGL wiki):

    glTranslate, glRotate, glScale
    Are these hardware accelerated?
    No, there are no known GPUs that execute this. The driver computes the matrix on the CPU and uploads it to the GPU.
    All the other matrix operations are done on the CPU as well: glPushMatrix, ...
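Concretely, the quote means that glTranslatef(tx, ty, tz) amounts to nothing more than this CPU-side code:

GLfloat m[16] = {      /* column-major: the translation sits in the last column */
    1,  0,  0,  0,
    0,  1,  0,  0,
    0,  0,  1,  0,
    tx, ty, tz, 1
};
glMultMatrixf(m);      /* the driver performs the multiply on the CPU, too */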
How is the NVIDIA PhysX engine implemented on NVIDIA GPUs? Is it a co-processor, or are the physics algorithms implemented as fragment programs executed in the GPU pipeline?
...