Hi All,

I've managed to create a simple Mandelbrot explorer using OpenGL and the CgFX SDK provided by NVIDIA. It works well, but it is currently float-based and therefore doesn't have much "depth": as the viewing window on the complex plane shrinks, precision is lost and the resulting image becomes "pixelated".

Unfortunately, CgFX doesn't seem to support double precision, and even double precision would be too limited for my purposes. Since CgFX, by design, has no bignum class, I thought it would be best to create my own.

I managed to create a prototype in C++ (which uses only unsigned integers), but when I tried to move it to CgFX, FX Composer 2.5 couldn't compile it. Because I am only using unsigned integers, the multiplication and addition code contains a lot of bit-shift operations, which according to FX Composer 2.5 are not available in my profile.

I know that this question contains a lot of queries, but unfortunately I'm not really familiar with numerical analysis, shader programming, or OpenGL, and at this point I feel overwhelmed, and quite certain that I'm trying to fix a leak with a sledgehammer.

So if anyone has an answer to any one of these questions, I'd be grateful:

  1. Does CgFX, or any other shader language, support bit-shift operators on unsigned integers and floats (required to convert floats to big floats)?

  2. Does CgFX, or any other shader language, support double-precision, or greater, floating point?

  3. Is there a more refined mathematical way of dealing with my issue, rather than creating a big-float class?

If anyone needs a bit more clarification, or code snippets, feel free to ask.

+1  A: 

If you're strictly doing the Mandelbrot set, you would do well to use a fixed-point representation. You may want to start with 5 bits to the left of the radix point plus one sign bit. So for 32-bit you'd have 26 bits to the right of the radix point; for 64-bit you'd get 58 bits to the right, which is better than double precision. Also, with fixed-point math, all bit shifts are by fixed amounts, rather than the variable shifts needed for floating point.
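
As a rough illustration, here is what a 32-bit Q5.26 format might look like in C++ (the layout and all names here are my own sketch, not from any SDK):

    #include <cstdint>

    // Hypothetical Q5.26 fixed point: 1 sign bit, 5 integer bits,
    // 26 fractional bits, packed into a signed 32-bit word.
    constexpr int FRAC_BITS = 26;

    // Host-side conversions for setting up coordinates.
    int32_t to_fixed(double x)   { return static_cast<int32_t>(x * (1 << FRAC_BITS)); }
    double  to_double(int32_t f) { return static_cast<double>(f) / (1 << FRAC_BITS); }

    // Addition and subtraction are plain integer operations.
    int32_t fx_add(int32_t a, int32_t b) { return a + b; }
    int32_t fx_sub(int32_t a, int32_t b) { return a - b; }

    // Multiplication widens to 64 bits, then shifts right by a *fixed*
    // amount to drop the extra fractional bits; no variable shifts.
    int32_t fx_mul(int32_t a, int32_t b) {
        return static_cast<int32_t>((static_cast<int64_t>(a) * b) >> FRAC_BITS);
    }

The fixed shift in fx_mul is the point: unlike a software float, no normalization or variable shifting is needed.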

In short, fixed point gives better precision and a simpler implementation at the expense of a large range, which you don't need in this case. I can't speak to the shader issues, though.

phkahler
A: 

To answer your question regarding bit operations

OpenGL 3.1 / GLSL 1.40 (core) and the OpenGL extension GL_ARB_gpu_shader4 add support for unsigned integer arithmetic and bit operations such as shifts, bitwise OR, and bitwise AND. With this functionality in GLSL, the constructors int(uint) and uint(int) do what you would expect with a C or C++ background: they preserve the bit pattern, treating the most significant bit literally as the sign, so you can use the constructors instead of casting. Later versions of GLSL also let you set the precision of floating point, but not many graphics cards support double precision in hardware yet.
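
As a point of comparison, this is roughly what those GLSL constructors do, written out in C++ (my own sketch, not SDK code):

    #include <cstdint>
    #include <cstring>

    // Bit-preserving reinterpretation, like GLSL's int(uint) / uint(int):
    // the bit pattern is kept as-is and the top bit becomes the sign bit.
    int32_t as_signed(uint32_t u) {
        int32_t s;
        std::memcpy(&s, &u, sizeof s);
        return s;
    }

    uint32_t as_unsigned(int32_t s) {
        uint32_t u;
        std::memcpy(&u, &s, sizeof u);
        return u;
    }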

Further suggestions

I'm with phkahler on this one. What you want is fixed point, not a floating-point type. Given that the Mandelbrot set lives within radius 2 in the complex plane, you only need 3 bits for the integer part. Instead of taking a square root, check whether the squared magnitude of z is greater than 4. You will need to take signed overflow/underflow into account; how to calculate the common arithmetic flags is described below. I haven't tested these in GLSL, but the logic is sound.
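
To make the squared-magnitude test concrete, here is a minimal C++ sketch of the inner loop in fixed point (the Q5.26 layout and the names are my own; a real shader version would differ):

    #include <cstdint>

    // Hypothetical Q5.26 fixed point: 26 fractional bits in a signed 32-bit word.
    constexpr int FRAC = 26;

    int32_t fx_mul(int32_t a, int32_t b) {
        return static_cast<int32_t>((static_cast<int64_t>(a) * b) >> FRAC);
    }

    // Iterate z = z^2 + c, bailing out when |z|^2 > 4; no square root needed.
    int mandel_iters(int32_t cr, int32_t ci, int max_iter) {
        const int32_t four = 4 << FRAC;   // 4.0 in Q5.26
        int32_t zr = 0, zi = 0;
        for (int i = 0; i < max_iter; ++i) {
            int32_t zr2 = fx_mul(zr, zr);
            int32_t zi2 = fx_mul(zi, zi);
            if (zr2 + zi2 > four)         // squared-magnitude escape test
                return i;
            int32_t t = zr2 - zi2 + cr;   // Re(z^2 + c)
            zi = 2 * fx_mul(zr, zi) + ci; // Im(z^2 + c)
            zr = t;
        }
        return max_iter;                  // treated as inside the set
    }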

Calculating the arithmetic flags

Legend:

C = carry flag (unsigned overflow)
V = overflow flag (signed overflow)
Z = zero flag (equality)
N = negative flag

Algorithms

Given `dst = src1 + src2`, then:

    C = (unsigned)dst < (unsigned)src1 || (unsigned)dst < (unsigned)src2
    V = ((signed)dst < (signed)src1) != ((signed)src2 < 0)
    Z = !dst
    N = (unsigned)dst & (1u << 31)

Calculating these flags requires a lot of branching, so don't be surprised if your GPU hates you for it. Once you can extract those flags, implementing long multiplication and addition with carry should be no problem.
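
For instance, a multi-word addition using the carry test above (C++, 32-bit limbs in little-endian limb order; the function name is mine):

    #include <cstdint>
    #include <cstddef>

    // dst = a + b over `limbs` 32-bit words; returns the final carry-out.
    // The per-limb carry is detected exactly as described above:
    // C = (unsigned)dst < (unsigned)src1.
    uint32_t big_add(uint32_t *dst, const uint32_t *a,
                     const uint32_t *b, size_t limbs) {
        uint32_t carry = 0;
        for (size_t i = 0; i < limbs; ++i) {
            uint32_t sum = a[i] + b[i];
            uint32_t c1  = sum < a[i];    // carry out of a[i] + b[i]
            dst[i] = sum + carry;
            uint32_t c2  = dst[i] < sum;  // carry out of adding the old carry
            carry = c1 | c2;              // at most one of these can be set
        }
        return carry;
    }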

Mads Elvheim