views: 311
answers: 5
I am writing a very high performance application that handles and processes hundreds of events every millisecond.

Is unmanaged C++ faster than managed C++, and why?

Managed C++ deals with the CLR instead of the OS, and the CLR takes care of memory management, which simplifies the code and is probably also more efficient than code written by "a programmer" in unmanaged C++. Or is there some other reason? When using managed code, how can one avoid dynamic memory allocation, which causes a performance hit, if it is all transparent to the programmer and handled by the CLR?

So coming back to my question: is managed C++ more efficient in terms of speed than unmanaged C++, and why?

A: 

Isn't C++/CLI a half-interpreted language, like Java?

Also, didn't someone post a study just yesterday showing that GC systems are always slower than non-GC ones?

Noah Roberts
+3  A: 

There is no one answer to this. As a really general rule, native code will usually be faster, but 1) that's not always the case, 2) sometimes the difference is too small to care about, and 3) how well the code is written will usually make more difference than managed vs. unmanaged.

Edit: Managed code runs in a virtual machine. Basically, you start with a compiler that produces byte codes as output, then feed that to the virtual machine. The virtual machine then either interprets it or re-compiles it to machine code and executes that. Either way, you've added some overhead.

The VM also uses a garbage collector. Garbage collectors have rather different characteristics from manually managed memory. With most manual managers, allocating memory is fairly expensive. Releasing memory is fairly cheap, but roughly linear in the number of items you release.

With a garbage collector, allocating memory is typically very cheap. With a typical (copying) collector, the cost of releasing memory depends primarily upon the number of objects that have been allocated and are still (at least potentially) in use.

The allocations themselves also differ though. In native C++, you typically create most objects on the stack, where both allocating and releasing memory is extremely cheap. In managed code, you typically allocate a much larger percentage of memory dynamically, where it's garbage collected.

Jerry Coffin
Why is it faster, and when will this not be the case?
bsobaid
@bsobaid: there's no CLR VM translating.
Paul Nathan
The translation to machine code takes place only once. If I am executing the same line of code numerous times, then I think this translation does not matter. Yes, the garbage collector point you made is an important one, and also the dynamic memory allocation. Is there a sample available where you preallocate a large chunk of memory to use in order to avoid dynamic malloc? Is it possible to do in C#?
bsobaid
@bsobaid: I haven't checked the details of the .NET VM, but the JVM interprets the code on the first few iterations, then compiles it when/if it determines that it's executing often enough to justify the work. At a guess (but only a guess) .NET probably does something similar. In any case, the user waits while it compiles, so the emphasis is more on fast compilation than maximum optimization.
Jerry Coffin
+3  A: 

You can write slow code in any language; conversely, you can use decent algorithms that may well be fast in almost any language.

The common answer here would be to pick a language that you already know, use appropriate algorithms, then profile the heck out of it to determine the actual hot spots.

I am somewhat concerned about the "hundreds of events every millisecond" statement. That's an awful lot. Are you realistically going to be able to do the processing you expect in any language?

As a C++ developer on high-performance systems, I tend to trust my ability to profile and optimize the emitted code. That said, there are very high performance .NET applications, where the writer has gone to great lengths to avoid dynamic memory allocation inside the critical loops, mostly by using pools of objects allocated beforehand.

So to repeat my previous comment: pick what you already know, then tune. Even if you hit a dead end, you will likely know much more about your problem space.

sdg
"dynamic memory allocation inside the critical loops - mostly by using allocated pools of objects created beforehand."a little off-topic, but is it possible to do this using C#?Are there any samples available to do this in C++?"hundreds of events every millisecond "You do get these many when you are parsing market data feed from different exchanges
bsobaid
C++: have a look at boost::pool. C#: I am not as conversant, but I understand it can be (and has been) done.
sdg
Thanks, that was a useful lead; pool has it. Boost is not famous for its speed, though. Do you use it for your high-performance applications? I am mainly a C# developer, but now I am stepping into the C++ world. It's a must for HFT developers.
bsobaid
+1  A: 

It all depends on the situation.

Things that make unmanaged code faster / managed code slower:

  • the code needs to be converted to machine code before it can be executed
  • garbage collection might cause an overhead
  • calls from managed to unmanaged code have a serious overhead
  • unmanaged compilers can optimize more aggressively, since they directly generate machine code (I've seen this myself)

Things that make managed code faster / unmanaged code slower:

  • since the code is converted to machine code right before it's used, managed code can be optimized for the actual processor (with unmanaged code you have to target the 'minimum-supported' processor).

And probably there are many more reasons.

Patrick
"the code needs to be converted to machine code before it can be executed"but it is a one time thing, it does'nt effect overall performance, does it?
bsobaid
Depends on how often you execute the same code (only once, or millions of times). In practice it probably won't matter.
Patrick
A: 

There are many good answers here, but one aspect of managed code that may give it an advantage in the long term is runtime analysis. Since the code generated by the managed compiler is an intermediate format, the machine code that actually executes can be optimized based on actual usage. If a particular subset of functionality is heavily used, the JIT'er can localize the machine code all on the same memory page, increasing locality. If a particular sub-call is made repeatedly from a particular method, a JIT'er can dynamically inline it.

This is an improvement over unmanaged code, where inlining must be "guessed" ahead of time, and excessive inlining is harmful because it bloats code size and causes locality issues that cause (very time-expensive) L2/L1 cache misses. That information is simply not available to static analysis, so it is only possible in a JIT'ing environment. There's a goody basket of possible wins from runtime analysis such as optimized loop unwinding, etc.

I'm not claiming the .NET JIT'er is as smart as it could be, but I know I've heard about global analysis features and I know a lot of research into runtime analysis has been done at Hewlett-Packard and other companies.

David Gladfelter
A basic question: by run-time analysis, do you mean profiling? How do you do run-time analysis of your code?
bsobaid
One implementation would be for the .NET framework to begin execution of a managed assembly by interpreting the CLR byte codes and note frequency of execution of opcodes, high correlation between the execution of a routine and execution of a subroutine from that routine, etc, and then generate machine code taking advantage of that knowledge to minimize overhead (call stack construction/destruction, loop variable incrementing and jumps, fragmented "hot" memory regions, etc.) in frequently-executed operations.
David Gladfelter
That would be a very good way of tuning the code, but a very hard one for me to do - noting the execution frequency of opcodes, etc.
bsobaid
To be fair, if you're using native code you could use PGO in VC++ (presumably other toolsets have something like it) to do profile-guided optimization of the app. You're speculating this might exist for managed code; I know for a fact it exists for at least one native toolset.
Kate Gregory