views: 292
answers: 5

I am currently testing some RTL with ncverilog, and it is very, very slow. I have heard that things would be faster if we used some kind of FPGA board. Is that true?

+2  A: 

What kind of RTL are you testing? If you use an FPGA board, you can compile your code for it, provided you have the right tool for the right FPGA. Since FPGAs are reprogrammable, you can of course test your code on the board and have the target (the FPGA) execute your code (the RTL).

But it is no longer a simulation; it is a test, on given hardware, at a given clock speed. And you don't get nice results on a screen: you need physical probes and a scope. Plus, you don't get to see how the internals of your code are working.

Verilog or VHDL simulation is sort of like running code under a debugger, while FPGA testing is more like debugging with printf. The big difference is that when simulating, your CPU has to model the behaviour of all the logic gates that result from your code. On the FPGA there is no simulation: you just run the code, so it is much faster, but you get less information.

You should use simulation for very small components, and then test your whole design on an FPGA.

shodanex
+2  A: 

You're talking about two different things.

NCVerilog is a simulation tool, while an FPGA board is real hardware, so there will be differences. Real hardware will generally be faster, but with a simulator you can have all sorts of debugging fun. Probing a specific signal is just a matter of adding a line to the testbench. Also, you can easily make changes to the simulated model instead of having to rebuild and reprogram the FPGA.
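
For example, watching a signal in simulation is as simple as adding a $monitor (or $dumpvars) call to the testbench. A minimal, self-contained sketch, with a tiny counter standing in for the real DUT (all names here are invented for illustration):

    // Sketch only: a tiny counter stands in for your real DUT.
    module counter (input clk, input rst, output reg [3:0] count);
      always @(posedge clk)
        if (rst) count <= 4'd0;
        else     count <= count + 4'd1;
    endmodule

    module tb;
      reg        clk = 1'b0;
      reg        rst = 1'b1;
      wire [3:0] count;

      counter dut (.clk(clk), .rst(rst), .count(count));

      always #5 clk = ~clk;              // free-running clock, simulation only

      initial begin
        // The "added line": print every change of the signal we care about
        $monitor("%0t: count = %0d", $time, count);
        // ...or dump everything for a waveform viewer:
        // $dumpfile("tb.vcd"); $dumpvars(0, tb);
        #20  rst = 1'b0;
        #200 $finish;
      end
    endmodule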

If you run simulation on a sufficiently powerful machine, you can sometimes approximate real-world performance (assuming that the FPGA is a slow one).

All in all, you should do both. Use a simulator for your basic development and evaluation, and move on to FPGA hardware once your design is sufficiently well defined.

sybreon
Good answer (easy to understand for a newbie like me). Thank you!
Alphaneo
+3  A: 

We've had the same issues with simulation speed. However, we stick with simulation for the majority of our verification. Each sim checks a specific function and is much quicker than a system-level sim. We've also made them self-checking, which makes them useful as regression tests (unit tests).
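
A minimal sketch of what such a self-checking unit-test sim can look like, with a trivial adder standing in for the real block (module and signal names are invented for illustration): it drives random stimulus, compares against expected values, and prints a single PASS/FAIL line that a regression script can grep.

    // Sketch only: a trivial adder stands in for the block under test.
    module adder (input [7:0] a, input [7:0] b, output [8:0] sum);
      assign sum = a + b;
    endmodule

    module tb_adder;
      reg  [7:0] a, b;
      wire [8:0] sum;
      integer    i, errors;

      adder dut (.a(a), .b(b), .sum(sum));

      initial begin
        errors = 0;
        for (i = 0; i < 100; i = i + 1) begin
          a = $random;                 // random stimulus
          b = $random;
          #1;                          // let the combinational logic settle
          if (sum !== a + b) begin     // compare against the expected value
            errors = errors + 1;
            $display("FAIL: %0d + %0d gave %0d", a, b, sum);
          end
        end
        // One grep-able verdict so a regression script can tell pass from fail
        if (errors == 0) $display("PASS");
        else             $display("FAIL: %0d mismatches", errors);
        $finish;
      end
    endmodule

Because the result is a one-line verdict in the log, the same sim can be rerun unattended after every code change.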

For long system tests on real-world signals that would take too much time to simulate, we move to the FPGA when we can. We have to manually re-check all of those testcases after each code change, so it can be slow in its own way.

Sometimes, though, putting a design on an FPGA just isn't feasible: the full design may be too large to fit, or the clock rate may be too high. But remember that you don't necessarily have to put your entire design on the FPGA; it may be enough to extract the important block you're interested in and check that out fully.

Marty
+2  A: 

You can trace activity on signals in a running FPGA design using "embedded logic analyzer" software tools like Altera SignalTap or Xilinx ChipScope. Before synthesizing/mapping your RTL to the device, you would use these tools to attach soft probes to the signals you want to watch. You can set triggers so that a signal's values only get logged under certain conditions. Then you generate the bitfile and program the device with JTAG. The logic analyzer communicates with your PC over JTAG and logs activity on your probes, which you can then analyze.
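
The RTL-side hookup is roughly like the fragment below. Note that the core name and ports here are made up purely for illustration; the real SignalTap/ChipScope analyzer cores are generated by the vendor tools (or inserted post-synthesis from the GUI) and have their own names and port lists.

    // Illustrative fragment only -- goes inside your design's top level and
    // will not compile as-is, because "embedded_la" is a stand-in for the
    // vendor-generated logic-analyzer core.
    embedded_la u_probe (
        .clk     (sys_clk),        // sampling clock, usually the block's own clock
        .trigger (fifo_overflow),  // condition that starts a capture
        .probe0  (state)           // signal logged and read back over JTAG
    );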

It's a bit complicated to set up, as these tools are not especially easy to use, but you will get results much faster than with RTL simulation.

rkb
+1  A: 

You're probably asking about hardware simulation accelerators. Here is one of them: GateRocket.

OutputLogic