Hi All,
I've managed to create a simple Mandelbrot explorer using OpenGL and the CgFX SDK provided by NVIDIA. It works well, but it is currently float based and therefore doesn't have much "depth" -- as the viewing window shrinks (that is, as the distance from the smallest complex coordinate to the largest becomes smaller), single precision can no longer distinguish adjacent pixel coordinates, and the resulting image is "pixelated".
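To illustrate what I mean, here is a minimal C++ snippet (the centre and step values are made up for illustration) showing the point where single precision runs out:

    #include <cstdio>

    int main() {
        // Values made up for illustration: a view centre on the real axis
        // and the pixel spacing at a deep zoom level.
        float centre = -0.743643f;
        float step   = 1.0e-10f;   // far below float's ~7 significant digits

        float neighbour = centre + step;  // coordinate of the adjacent pixel
        if (neighbour == centre)
            std::printf("precision exhausted: adjacent pixels coincide\n");
        return 0;
    }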
Unfortunately, CgFX doesn't seem to support double precision, and even double precision would be limited for my intentions. Because CgFX, by its intended design, does not have a bignum class, I thought it would be best to create my own.
I managed to create a prototype in C++ -- which uses only unsigned integers -- but when I tried to move it to CgFX, FX Composer 2.5 couldn't seem to compile it. Because I am using only unsigned integers, the multiplication and addition code contains a lot of bit-shift operations, which according to FX Composer 2.5 are not available in my profile.
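For context, the prototype works roughly along these lines (this is an illustrative sketch, not the actual code; the names BigUint and WORDS are made up):

    #include <cstdint>

    // A fixed-width big number stored as an array of 32-bit words,
    // most significant word first.
    const int WORDS = 4;

    struct BigUint {
        uint32_t word[WORDS];
    };

    // Add b into a, propagating the carry between words. The 64-bit
    // intermediate and the shift by 32 are exactly the kind of integer
    // operations the shader profile rejects.
    void add(BigUint& a, const BigUint& b) {
        uint64_t carry = 0;
        for (int i = WORDS - 1; i >= 0; --i) {
            uint64_t sum = (uint64_t)a.word[i] + b.word[i] + carry;
            a.word[i] = (uint32_t)(sum & 0xFFFFFFFFu);
            carry = sum >> 32;  // bit shift: not available in my profile
        }
    }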
I know that this post contains a lot of queries, but unfortunately I'm not really familiar with numerical analysis, shader programming, or OpenGL, and at this point I feel overwhelmed -- and quite certain that I am trying to fix a leak with a sledgehammer.
So if anyone has an answer to any one of these questions, I'd be grateful:
1. Does CgFX, or any other shader language, support bit-shift operators on unsigned integers and floats (required to convert floats to big floats; I've put a sketch of what I mean below this list)?
2. Does CgFX, or any other shader language, support double precision, or greater, floating point?
3. Is there a more refined mathematical way of dealing with my issue than creating a big float class?
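To clarify question 1, this is the kind of bit manipulation I'd need on the GPU: pulling an IEEE-754 single apart into its fields with shifts and masks, ready to widen into big-number words (a rough C++ sketch, not the exact prototype code):

    #include <cstdint>
    #include <cstring>

    // Split a float into sign, unbiased exponent, and mantissa.
    // Field widths are for IEEE-754 single precision; the implicit
    // leading 1 assumes a normalised input.
    void split(float f, uint32_t& sign, int32_t& exponent, uint32_t& mantissa) {
        uint32_t bits;
        std::memcpy(&bits, &f, sizeof bits);              // reinterpret the float's bits
        sign     = bits >> 31;                            // 1 sign bit
        exponent = (int32_t)((bits >> 23) & 0xFF) - 127;  // 8 exponent bits, bias removed
        mantissa = (bits & 0x7FFFFF) | 0x800000;          // 23 bits + implicit leading 1
    }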
If anyone needs a bit more clarification or code snippets, feel free to ask.