tags:

views: 266

answers: 4

Hi,

I am an experienced C#/.NET developer (though that is probably irrelevant, since FPGA work is on another level of complexity). I would not call myself an expert in C# — I still look things up occasionally, and I struggle with some advanced syntax and concepts — but my boss does FPGA work and recommends I get involved and ease myself in. I am surprised I am not being discouraged, given that I am a junior developer and this is a complex technology.

So my question is: what is the best way to learn FPGA? I am gathering books and so on.

I am looking at scalable 3D modelling and rendering (ideally in a Windows app where the user expects an instant response). CUDA is popular, but according to my boss it is not as fast.

Is FPGA the way to go for this sort of project?

Thanks

+6  A: 

Honestly, I think your boss is wrong. NVIDIA and AMD are selling real silicon hardware purpose-designed for accelerated 3D rendering. Unless your specific problem is one that doesn't map to existing shader/CUDA paradigms, there's no way a configurable hardware device is going to compete. This is the same reason that even the best FPGA-based CPUs (Xilinx's MicroBlaze, Altera's Nios) are toys compared to even low-end embedded ARM cores. (Often useful toys, mind you, but not competitive except in designs with otherwise unused FPGA gate space.)

But I can definitely recommend learning FPGAs and HDL programming. This is one area where "gathering books" really isn't going to help you. What you have to do is get a cheap development board (there are many on the market in the $100-200US range), download the matching toolchain and start writing and testing code.

Andy Ross
Um... military + RADAR + FPGA = AEGIS. And you can't even come close to the level of parallel processing possible on a 5-million-gate FPGA. Using CPUs on an FPGA is cool, but it defeats the purpose of having a configurable circuit in the first place. I'll agree with you that a low-end ARM CPU would beat an FPGA implementation of the same CPU, but that's why they are in silicon. The power of FPGAs, and the reason to learn them, is not for building CPUs on.
Spence
Stop flaming. Yes, you can, if your goal is to do accelerated rendering (or any specific task). A $300 GPU at your local retailer has **billions** of gates dedicated to the problem, and they run at GHz speeds instead of the 50-100 MHz you can get from modern FPGA synthesis. FPGAs have the advantage of being configurable. They certainly are not faster than a raw transistor.
Andy Ross
A: 

Why not learn how to use the hardware acceleration that comes with modern PCs today? I would bet that using OpenGL or DirectX (whatever it is called these days) with hardware acceleration will perform better.

I guess if your application is going to run on some kind of custom embedded device, you may want to create your own hardware; but for PC apps it is probably too expensive and has almost no benefit over a software solution that has already had enormous work put into tuning it for performance.

My opinion: take advantage of all the work that has been put into 3D gaming technology.

Kekoa
A: 

As Andy Ross says, I doubt an FPGA is the way you want to go for that type of problem; you will also need to interface it with the PC somehow.

I would start by getting a dev kit and playing around with it. Make an LED blink; I've always found that to be the hardest part when starting with a new embedded device o.O. Then get some form of comms going (RS-232 / TCP), which is probably already on the dev board, and implement some math functions on the FPGA that take parameters and pass results back over the comms link.
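For a sense of scale, the classic LED-blink step looks roughly like this in Verilog. This is only a sketch: the clock frequency, counter width, and pin names all depend on your particular board and its constraints file.

```verilog
// Minimal LED blinker sketch -- assumes a 50 MHz board clock.
// Adjust the counter width for your blink rate; pin assignments
// go in the vendor's constraints file, not here.
module blinker (
    input  wire clk,    // board oscillator, e.g. 50 MHz
    input  wire rst_n,  // active-low reset
    output reg  led
);
    reg [24:0] count;

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            count <= 0;
            led   <= 1'b0;
        end else begin
            count <= count + 1'b1;
            if (count == 0)
                led <= ~led;  // toggles every 2^25 cycles (~0.67 s at 50 MHz)
        end
    end
endmodule
```

Even this toy highlights what's different from software: you describe a circuit (a counter and a toggle flip-flop), not a sequence of instructions.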

Courtney de Lautour
A: 

Well, scalable 3D rendering on an FPGA: how would you approach it? FPGAs are great for scaling the classic SIMD architecture to whatever data size you like (or are limited to); with that much parallelism you could process things at an acceptable rate even at 100 MHz. Your real limitations, in my opinion, are memory bandwidth and speed. Don't forget you also need a graphics controller to make use of the data you spit out. You would in essence be building all the hardware for a very complicated task: are you sure you are capable of making a SIMD processor capable of 3D rendering? What would your hardware design be?
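To give a flavour of the SIMD-style replication described above, here is a toy Verilog sketch (lane count and widths are purely illustrative) that instantiates identical multiply units operating in lockstep, one per lane:

```verilog
// Toy SIMD sketch: LANES identical multipliers computing in parallel
// each clock. Widths and lane count are illustrative only; a real
// renderer would also need pipelining and memory arbitration.
module simd_mul #(
    parameter LANES = 8,
    parameter W     = 16
) (
    input  wire                 clk,
    input  wire [LANES*W-1:0]   a,    // packed operand vector
    input  wire [LANES*W-1:0]   b,
    output reg  [LANES*2*W-1:0] prod  // packed products
);
    genvar i;
    generate
        for (i = 0; i < LANES; i = i + 1) begin : lane
            always @(posedge clk)
                prod[2*W*i +: 2*W] <= a[W*i +: W] * b[W*i +: W];
        end
    endgenerate
endmodule
```

The `generate` loop is the FPGA advantage in miniature: widening the datapath is one parameter change, paid for in gates rather than clock cycles, until you run out of fabric or memory bandwidth.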

As many others have pointed out ITT, CUDA from NVIDIA is a great alternative, and the new Fermi architecture seems promising. But if you're looking for low cost, low size, and low power consumption, I can't recommend CUDA. Sure, it's great for solving the task, but if your task has wheels and a battery, things get complicated.

I would think a task better suited to FPGAs than graphics is biological computation, a problem space in need of even greater parallelism than graphics.

Tore