gpu

Sum image intensities on the GPU

I have an application where I need to take the average intensity of an image for around 1 million images. It "feels" like a job for a GPU fragment shader, but fragment shaders do per-pixel local computations, while image averaging is a global operation. An image sum will suffice, since it only differs from the average by a constant...

Query Hardware-Specific Information on Windows With C++

Specifically, I want to query a system's GPU for the following: the name of the GPU, the series (e.g. ATI Radeon 5800, NVIDIA GeForce 4 MX, etc.), the BIOS version, the driver version, the GPU clock speed, the GPU memory speed, the memory type, the memory size, the bus width, the bandwidth, the type of bus being used, and the vendor. Any i...

Parallelizable JPEG-like compression using only DCT and run-length encoding stages; what sort of compression/performance is possible?

We have to compress a ton o' (monochrome) image data and move it quickly. If one were to use just the parallelizable stages of JPEG compression (DCT and run-length encoding of the quantized results) and run it on a GPU so each block is compressed in parallel, I am hoping that would be very fast and still yield a very significant compress...

GTX 295 vs other Nvidia cards for CUDA development

What is the best Nvidia video card for CUDA development? A single GTX 295 has 2 GPUs; is it possible to have two GTX 295s and use the 4 GPUs in my CUDA code? Is it better to get two 480 cards rather than two 295s? Would a Fermi be better than both cards? ...

Free VRAM on OS X

Hi, does anyone know how to get the free(!) VRAM on OS X? I know that you can query for a registry entry: typeCode = IORegistryEntrySearchCFProperty(dspPort, kIOServicePlane, CFSTR(kIOFBMemorySizeKey), kCFAllocatorDefault, kIORegistryIterateRecursively...

OpenGL Shaders?

I'm writing a game in Java with LWJGL (OpenGL). I'm using a library that handles a lot of messy details for me, but I need to find a much faster way to do this. Basically, I want to set every pixel on the screen to, say, a random color as fast as possible. The "random colors" are just a 2D array that gets updated every 2-3 seconds. I've trie...

F# with OpenTK example?

Hi! Is anybody aware of a way to use C# libraries like OpenTK (http://www.opentk.com/) from F# too? I'm especially interested in a math toolkit library to give some scripts extra speed by taking advantage of the GPU from within F#. What's a painless way to do that? :) ...

Fortran interface to call a C function that returns a pointer

I have a C function double* foofunc() {...} and I don't know how to declare the interface in Fortran to call this C function. The second question is: if this pointer is supposed to point to GPU device memory, how could I define that in the Fortran interface, i.e. do I need to use the DEVICE attribute? Thanks, T. Edit: Use any featu...

NURBS on DirectX 11?

Can you render NURBS on the GPU with DirectX 11? I've been reading up on current trends in rendering surfaces like these, but I don't see anything on NURBS. I found some related references, but nothing solid... like "Approximating Catmull-Clark Subdivision Surfaces with Bicubic Patches" by Charles Loop and Scott Schaefer. - ...

streaming multiprocessor number

How do I know how many streaming multiprocessors (SMs) I have on my GTS 250? ...

How to process CIFilter using CPU instead of GPU?

Does anyone know how to tell Core Image to process a CIImage through a CIFilter using the CPU instead of the GPU? I need to process some very large images and I get strange results using the GPU. I don't care how long it takes; the CPU will be fine. ...

.NET lib/wrapper that would abstract the differences between ATI and Nvidia APIs for computing on the GPU?

I want to use the GPU for computation. I need it to fall back to the CPU if no GPU is found, and to provide me with a unified API. (Interested in any .NET option, for example .NET 4.) ...

Is it possible to use JOGL (JSR 231) in a web application?

I want to build an augmented reality app that runs on mobile devices, but I think the best way to do it is with a web application (and I have the advantage that the app will also run on PCs), so I don't have to care about specific device implementations. I'm a Java developer, so it will be much easier for me if I can use JOGL. ...

Is it possible to perform floating point operations on GPU when using OpenGL?

Hello! At university we have an introduction to OpenGL, and it's the first time I'm working with it. So far I have implemented simple things like the Sierpinski carpet, and I noticed that most (both fixed- and floating-point) calculations are performed on the CPU. Does OpenGL provide some API which can "forward" these calculations to the GPU? I know tha...

Creating a linked list using CUDA

Is it possible to create a linked list on a GPU using CUDA? I am trying to do this and I am finding some difficulties. If I can't allocate dynamic memory in a CUDA kernel, then how can I create a new node and add it to the linked list? ...

Silverlight Hardware Accelerated Graphics

Is there any way to create cross-platform hardware-accelerated games in Silverlight? (OpenGL binding or something similar.) Are libraries like Balder usable for writing large 3D games (e.g. MMORPGs) in Silverlight? ...

Windows Phone 7 - GPU Acceleration not working

Hi, I'm wondering if someone can help with this or has had a similar problem. I am trying to make a basic game on WP7 using Silverlight and I can't get GPU acceleration to kick in. The frame rate counters are visible, which would indicate the GPU is being used; the DirectX version is 10 and the DirectX driver version is WDDM 1.1. I've tr...

CUDA Matrix multiplication breaks for large matrices

I have the following matrix multiplication code, implemented using CUDA 3.2 and VS 2008. I am running on Windows Server 2008 R2 Enterprise with an Nvidia GTX 480. The following code works fine with values of "Width" (matrix width) up to about 2500 or so. int size = Width*Width*sizeof(float); float* Md, *Nd, *Pd; cudaError_t err ...

I've got an Nvidia GPU; how can I code on it?

I've never really been into GPUs, not being a gamer, but I'm aware of their parallel ability and wondered how I could get started programming on one. I recall (somewhere) there is CUDA, a C-style programming language. What IDE do I use, and is it relatively simple to execute code? ...