Or is it now the other way around?

From what I've heard there are some areas in which C# proves to be faster than C++, but I've never had the guts to test it by myself.

I thought some of you could explain these differences in detail or point me to the right place for information on this.

A: 

Well, in most applications, the bottleneck is in the database anyway. But I know that the garbage collection done in C# makes it almost as fast, if not faster, than unmanaged code.

Charles Graham
Be careful! This is true for business or internet software only! There are still plenty of software domains where raw processor power is the real bottleneck!
PierreBdR
Databases don't even need to be discussed here - the question revolves around the C# and C++ languages and their respective runtimes.
OJ
The GC is not what makes C# "fast". In fact you'll find that in some scenarios C++ outperforms C#, and in others C# will outperform C++. The answer will vary depending on a few things, including the C++ compiler you use and the optimisations you have turned on.
OJ
Downmodded because GC is strictly non-faster than deterministic memory management. Proof: GC has to create a list of memory to free, then free it. delete/delete[] can simply add memory directly to the list, then use the same algo to free it.
MSalters
It was actually shown by Chris Sells that non-deterministic finalization (GC) is actually faster than using reference counters.
Charles Graham
Big deal. (about ref counters) - the big news will be when GC is as fast as unmanaged and without taking up room as well...
Tim
@charles: What if you use RAII instead of reference counts?
Arafangion
+5  A: 

You can start here:

The Computer Language Benchmarks Game http://shootout.alioth.debian.org/

Alex Jenter
That's a rather useless test, since it really depends on how well the individual programs have been optimized; I've managed to speed up some of them by 4-6 times or more, making it clear that the comparison between unoptimized programs is rather silly.
Dark Shikari
The problem is that it focuses on a relatively small subset of tests: no graphical interface, no integration with libraries, no resource management (memory, CPU, network).
call me Steve
@Dark Shikari "I've managed to speed up some of them by 4-6 times or more" - Why haven't you contributed those programs to the benchmarks game?
igouy
+25  A: 

It's five oranges faster. Or rather: there can be no (correct) blanket answer. C++ is a statically compiled language (but then, there's profile guided optimization, too), C# runs aided by a JIT compiler. There are so many differences that questions like “how much faster” cannot be answered, not even by giving orders of magnitude.

Konrad Rudolph
Have you got any evidence to support your outrageous five oranges claim? My experiments all point to 2 oranges at most, with a 3 mango improvement when doing template metaprogramming.
Alex
At yeast he's not clamming it's hors d'oeuvres of magnitude faster.
Chris
+82  A: 

There is no strict reason why a bytecode-based language like C# or Java that has a JIT cannot be as fast as C++ code. However, C++ code was significantly faster for a long time, and in many cases still is today. This is mainly due to the more advanced JIT optimizations being complicated to implement, and the really cool ones are only arriving just now.

So C++ is faster, in many cases. But this is only part of the answer. The cases where C++ is actually faster are highly optimized programs, where expert programmers have thoroughly optimized the hell out of the code. This is not only very time-consuming (and thus expensive), but also commonly leads to errors due to over-optimization.

On the other hand, code in interpreted languages gets faster in later versions of the runtime (.NET CLR or Java VM), without you doing anything. And there are a lot of useful optimizations JIT compilers can do that are simply impossible in languages with pointers. Also, some argue that garbage collection should generally be as fast as, or faster than, manual memory management, and in many cases it is. You can generally implement and achieve all of this in C++ or C, but it's going to be much more complicated and error-prone.

As Donald Knuth said, "premature optimization is the root of all evil". If you really know for sure that your application will mostly consist of very performance-critical arithmetic, and that it will be the bottleneck, and it's certainly going to be faster in C++, and you're sure that C++ won't conflict with your other requirements, go for C++. In any other case, concentrate on first implementing your application correctly in whatever language suits you best, then find performance bottlenecks if it runs too slowly, and then think about how to optimize the code. In the worst case, you might need to call out to C code through a foreign function interface, so you'll still have the ability to write critical parts in a lower-level language.
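
The native side of such a boundary can be as small as a single exported function. A minimal sketch (the function name and the Windows-style export attribute are made up for illustration; on other toolchains you would drop __declspec(dllexport), and a managed caller could reach the function via .NET's DllImport/P/Invoke):

// Hypothetical native hot spot with C linkage so a managed runtime can call it.
#include <cstddef>

extern "C" __declspec(dllexport)
double sum_squares(const double* data, std::size_t n)
{
    double acc = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        acc += data[i] * data[i];   // tight arithmetic loop kept in native code
    return acc;
}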

Keep in mind that it's relatively easy to optimize a correct program, but much harder to correct an optimized program.

Giving actual percentages of speed advantages is impossible, it largely depends on your code. In many cases, the programming language implementation isn't even the bottleneck. Take the benchmarks at http://shootout.alioth.debian.org/ with a great deal of scepticism, as these largely test arithmetic code, which is most likely not similar to your code at all.

Martin Probst
"code in interpreted languages gets faster in later versions of the runtime" - code compiled by a better version of the compiler will also get faster.
Martin York
Note that if somebody has to code performance-critical arithmetic, it's best to code it in Fortran, or otherwise provide hints to the C++ compiler that the data won't be aliased by other pointers.
ΤΖΩΤΖΙΟΥ
In fact there is at least one reason: JIT needs to be fast, and cannot afford to spend time on various advanced optimizations available to a C++ compiler.
Nemanja Trifunovic
@Nemanja Trifunovic: depends on your scenario. In server applications, JIT doesn't really need to be fast - you can amortize the cost over time very well, and perform incremental enhancements on the code.
Martin Probst
I like C++, but that's probably because I program games where almost everything is insanely math-heavy (physics, collision, weighted mesh deformation - stuff like that).
Cristián Romo
"but also commonly leads to errors due to over-optimizations." [citation desperately needed]. I work at a national lab, and we optimize the hell out of our code. This does not commonly result in buggy code.
tgamblin
@martinprobst "with a great deal of scepticism" - no, not scepticism. The appropriate attitude is curiosity - see the benchmarks game FAQ "Flawed Benchmarks".
igouy
"This is mainly due to the more advanced JIT optimizations being complicated to implement, and the really cool ones are only arriving just now." What is the best source to see what is possible nowadays?
Janko R
"It's relatively easy to optimize a correct program, but much harder to correct an optimized program."
gradbot
+3  A: 

As usual, it depends on the application. There are cases where C# is probably negligibly slower, and other cases where C++ is 5 or 10 times faster, especially in cases where operations can be easily SIMD'd.

Dark Shikari
+46  A: 

C# may not be faster, but it makes YOU/ME faster. That's the most important measure for what I do. :)

mattlant
Oh man, that's so true!
Trap
Depends on what you do, really.
Nemanja Trifunovic
Haha, there's a good quote by Larry Wall on the topic. He's speaking about perl, but it can be thought of for all discussions involving languages and performance: " ..earlier computer languages, such as Fortran and C, were designed to make efficient use of expensive computer hardware. In contrast, Perl is designed to make efficient use of expensive computer programmers"
Falaina
+11  A: 

One particular scenario where C++ still has the upper hand (and will, for years to come) occurs when polymorphic decisions can be predetermined at compile time.

Generally, encapsulation and deferred decision-making is a good thing because it makes the code more dynamic, easier to adapt to changing requirements and easier to use as a framework. This is why object oriented programming in C# is very productive and it can be generalized under the term “generalization”. Unfortunately, this particular kind of generalization comes at a cost at run-time.

Usually, this cost is non-substantial but there are applications where the overhead of virtual method calls and object creation can make a difference (especially since virtual methods prevent other optimizations such as method call inlining). This is where C++ has a huge advantage because you can use templates to achieve a different kind of generalization which has no impact on runtime but isn't necessarily any less polymorphic than OOP. In fact, all of the mechanisms that constitute OOP can be modelled using only template techniques and compile-time resolution.

In such cases (and admittedly, they're often restricted to special problem domains), C++ wins against C# and comparable languages.
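
A minimal sketch of the difference (illustrative only, not part of the original answer): the same shape computation dispatched at run time through a virtual call versus resolved at compile time through a template parameter, where the compiler is free to inline everything:

#include <iostream>

struct Circle {
    double r;
    double area() const { return 3.14159265 * r * r; }    // non-virtual, trivially inlinable
};

struct Shape {                                             // classic OOP route
    virtual double area() const = 0;
    virtual ~Shape() {}
};

struct DynamicCircle : Shape {
    double r;
    explicit DynamicCircle(double radius) : r(radius) {}
    double area() const { return 3.14159265 * r * r; }     // reached through the vtable
};

// Template route: the concrete type is known at compile time,
// so the call can be inlined - no vtable lookup is needed.
template <typename T>
double total_area(const T* shapes, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += shapes[i].area();
    return sum;
}

int main()
{
    Circle circles[3] = { {1.0}, {2.0}, {3.0} };
    std::cout << total_area(circles, 3) << '\n';           // statically dispatched

    DynamicCircle dc(1.0);
    const Shape& s = dc;
    std::cout << s.area() << '\n';                         // dynamically dispatched
}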

Konrad Rudolph
Actually, Java VMs (and probably .NET) go to great lengths to avoid dynamic dispatch. Basically, if there is a way to avoid polymorphims, you can be pretty sure your VM will do it.
Martin Probst
I'm aware of the VMs' abilities. However, this goes much farther. The point is that template C++ codes *do* use “dynamic” dispatching, or rather, something analogous.
Konrad Rudolph
+1 I always have trouble explaining this to my C# colleagues who know little C++ in a way that would enable them to appreciate the significance. You've explained it rather nicely.
romkyns
But when combing code from multiple owners I believe it is still true that template instantiations are very hard to share across module boundaries. I'm talking about sharing common code like List<T> or vector<T> across many modules in an application. So for composable systems (many modules, many owners) runtimes like the CLR start to make up for their fixed overhead by reducing thrashing of the CPU cache with many copies of the same template instantiations. I think over time the C++ performance lead will shrink until only niche libraries and untyped C libraries remain.
crtracy
@crtracy: you are making your bet without high-performance computing applications. Consider weather forecasting, bioinformatics and numeric simulations. The performance lead of C++ in these areas will *not* shrink, because no other code can achieve comparable performance at the same level of high abstraction.
Konrad Rudolph
+4  A: 

C++ (or C for that matter) gives you fine-grained control over your data structures. If you want to bit-twiddle you have that option. Large managed Java or .Net apps (OWB, VS2005) that use the internal data structures of the Java/.Net libraries carry the baggage with them. I've seen OWB designer sessions using over 400MB of RAM and BIDS for cube or ETL design getting into the 100's of MB as well.

On a predictable workload (such as most benchmarks that repeat a process many times) a JIT can get you code that is optimised well enough that there is no practical difference.

IMO on large apps the difference is not so much the JIT as the data structures that the code itself is using. Where an application is memory-heavy you will get less efficient cache usage. Cache misses on modern CPUs are quite expensive. Where C or C++ really win is where you can optimise your usage of data structures to play nicely with the CPU cache.
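
A rough illustration of the point (made-up types, not from the answer): summing a field over a packed, contiguous layout versus a pointer-heavy, per-object layout. The packed version touches far less memory per pass and so plays much more nicely with the CPU cache:

#include <cstddef>
#include <string>
#include <vector>

struct FatParticle {               // "managed-style" object: each one a separate
    double x, y, z;                // heap allocation, dragging cold data along
    std::string name;
    std::vector<double> history;
};

struct HotParticle { float x, y, z; };   // just the bytes the inner loop needs

float sum_x_fat(const std::vector<FatParticle*>& ps) {
    float s = 0;
    for (std::size_t i = 0; i < ps.size(); ++i)
        s += static_cast<float>(ps[i]->x);   // one pointer chase (likely a cache miss) per element
    return s;
}

float sum_x_hot(const std::vector<HotParticle>& ps) {
    float s = 0;
    for (std::size_t i = 0; i < ps.size(); ++i)
        s += ps[i].x;                        // sequential access, prefetch-friendly
    return s;
}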

ConcernedOfTunbridgeWells
+7  A: 

It's an extremely vague question without a real definitive answer.

For example, I'd rather play 3D games created in C++ than in C#, because the performance is certainly a lot better. (And I know about XNA etc., but it comes nowhere near the real thing.)

On the other hand, as previously mentioned; you should develop in a language that lets you do what you want quickly, and then if necessary optimize.

David The Man
Could you name a few examples? Games written in C# that you've found slow?
Even the example applications that came with the installation felt slow.
David The Man
The garbage collector is a huge liability in making games with C#, as it can kick in any time, causing major pauses. Explicit memory management ends up being easier for game development.
postfuturist
Most modern games are GPU-limited. For such games it does not matter if the logic (executed on the CPU) is 10% slower; they are still limited by the GPU, not the CPU. The garbage collector is a real problem, causing random short freezes if the memory allocations are not tuned well.
Michael
+6  A: 

In my experience (and I have worked a lot with both languages), the main problem with C# compared to C++ is high memory consumption, and I have not found a good way to control it. It was the memory consumption that would eventually slow down .NET software.

Another factor is that the JIT compiler cannot afford to spend too much time on advanced optimizations, because it runs at run time and the end user would notice if it took too long. On the other hand, a C++ compiler has all the time it needs to do optimizations at compile time. This factor is much less significant than memory consumption, IMHO.

Nemanja Trifunovic
+1  A: 

I know it isn't what you were asking, but C# is often quicker to write than C++, which is a big bonus in a commercial setting.

Kramii
I'd say it's quicker most of the time :)
Trap
+2  A: 

I suppose there are applications written in C# that run fast, just as there are plenty of apps written in C++ that run fast (well, C++ is just older... and take UNIX too...) - the question indeed is: what is it that users and developers are complaining about?

Well, IMHO, in the case of C# we have a very comfortable UI, a very nice hierarchy of libraries and the whole interface system of the CLI. In the case of C++ we have templates, ATL, COM, MFC and a whole shebang of already written and running code like OpenGL, DirectX and so on... Developers complain about unpredictable GC calls in the case of C# (meaning the program runs fast, and one second later - bang! it's stuck).

Writing code in C# is very simple and fast (not forgetting that this also increases the chance of errors). In the case of C++, developers complain about memory leaks (meaning crashes), calls between DLLs, as well as "DLL hell" - problems with supporting and replacing libraries with newer ones...

I think the more skill you have in a programming language, the more quality (and speed) will characterize your software.

bgee
+2  A: 

> From what I've heard ...

Your difficulty seems to be in deciding whether what you have heard is credible, and that difficulty will just be repeated when you try to assess the replies in this forum.

How are you going to decide if the things people say here are more or less credible than what you originally heard?

One way would be to ask for evidence.

When someone claims "there are some areas in which c# proves to be faster than c++" ask them why they say that, ask them to show you measurements, ask them to show you programs. Sometimes they will simply have made a mistake. Sometimes you'll find out that they are just expressing an opinion rather than sharing something that they can show to be true.

Often information and opinion will be mixed up in what people claim, and you'll have to try and sort out which is which. For example, from the replies in this forum:

  • "Take the benchmarks at http://shootout.alioth.debian.org/ with a great deal of scepticism, as these largely test arithmetic code, which is most likely not similar to your code at all."

    Ask yourself if you really understand what "these largely test arithmetic code" means, and then ask yourself if the author has actually shown you that his claim is true.

  • "That's a rather useless test, since it really depends on how well the individual programs have been optimized; I've managed to speed up some of them by 4-6 times or more, making it clear that the comparison between unoptimized programs is rather silly."

    Ask yourself whether the author has actually shown you that he's managed to "speed up some of them by 4-6 times or more" - it's an easy claim to make!

I couldn't agree with you more and that's the reason why I asked in this forum... After all, the answers have to be somewhere, haven't they? :)
Trap
Yes. The answer is "It depends.".
+4  A: 

For graphics the standard C# Graphics class is way slower than GDI accessed via C/C++. I know this has nothing to do with the language per se, more with the total .NET platform, but Graphics is what is offered to the developer as a GDI replacement, and its performance is so bad I wouldn't even dare to do graphics with it.

We have a simple benchmark we use to see how fast a graphics library is, and that is simply drawing random lines in a window. C++/GDI is still snappy with 10000 lines while C#/Graphics has difficulty doing 1000 in real-time.
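
For reference, the C++/GDI half of such a test is only a few lines. A sketch (hwnd is assumed to be a valid window handle; error handling and timing are omitted):

#include <windows.h>
#include <cstdlib>

void DrawRandomLines(HWND hwnd, int count, int width, int height)
{
    HDC hdc = GetDC(hwnd);                 // device context for the window's client area
    for (int i = 0; i < count; ++i)
    {
        MoveToEx(hdc, std::rand() % width, std::rand() % height, NULL);
        LineTo(hdc, std::rand() % width, std::rand() % height);
    }
    ReleaseDC(hwnd, hdc);
}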

QBziZ
+5  A: 

Exactly 63.5 %

Eclipse
Slightly wrong. When I checked it was 63.46% ;)
Varun Mahajan
@Varun: Where's the proof? I'm inclined to trust Eclipse on this one.
Arafangion
+2  A: 

> After all, the answers have to be somewhere, haven't they? :)

Umm, no.

As several replies noted, the question is under-specified in ways that invite questions in response rather than answers. To take just one way:

Which programs? Which machine? Which OS? Which data set? ...

I fully agree. I wonder why people expect a precise answer (63.5%) when they ask a general question. I don't think there is a general answer to this kind of question.
call me Steve
+2  A: 

In theory, for long running server-type application, a JIT-compiled language can become much faster than a natively compiled counterpart. Since the JIT compiled language is generally first compiled to a fairly low-level intermediate language, you can do a lot of the high-level optimizations right at compile time anyway. The big advantage comes in that the JIT can continue to recompile sections of code on the fly as it gets more and more data on how the application is being used. It can arrange the most common code-paths to allow branch prediction to succeed as often as possible. It can re-arrange separate code blocks that are often called together to keep them both in the cache. It can spend more effort optimizing inner loops.

I doubt that this is done by .NET or any of the JREs, but it was being researched back when I was in university, so it's not unreasonable to think that these sort of things may find their way into the real world at some point soon.

Eclipse
+3  A: 

Garbage collection is the main reason C#/Java CANNOT be used for real-time systems.

  1. When will the GC happen?

  2. How long will it take?

This is non-deterministic.

I'm not a huge Java fan but there's nothing that says Java can't use a real-time friendly GC.
Zan Lynx
There are plenty of real-time GC implementations if you care to look. (GC is an area that is *overflowing* with research papers)
Arafangion
+3  A: 

Applications that require intensive memory access, e.g. image manipulation, are usually better off written in an unmanaged environment (C++) than a managed one (C#). Optimized inner loops with pointer arithmetic are much easier to control in C++. In C# you might need to resort to unsafe code to even get near the same performance.
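
A minimal sketch of the kind of inner loop meant here (illustrative C++, not from the answer): brightening an 8-bit grayscale buffer via raw pointer arithmetic. In C# you would need an unsafe block and fixed pointers to write the direct equivalent.

#include <algorithm>
#include <cstddef>
#include <cstdint>

void Brighten(std::uint8_t* pixels, std::size_t count, int amount)
{
    // Walk the buffer with raw pointers; no bounds checks, no indexing overhead.
    for (std::uint8_t* p = pixels, *end = pixels + count; p != end; ++p)
        *p = static_cast<std::uint8_t>(std::min(255, int(*p) + amount));
}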

Kalle
+3  A: 

We have had to determine if C# was comparable to C++ in performance and I wrote some test programs for that (using Visual Studio 2005 for both languages). It turned out that without garbage collection and only considering the language (not the framework) C# has basically the same performance as C++. Memory allocation is way faster in C# than in C++ and C# has a slight edge in determinism when data sizes are increased beyond cache line boundaries. However, all of this had eventually to be paid for and there is a huge cost in the form of non-deterministic performance hits for C# due to garbage collection.
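
As a rough idea of the sort of micro-test involved (a naive sketch, not the actual test programs mentioned above): time a burst of small allocations and frees. The managed equivalent would simply new the objects and let the GC reclaim them, which is where the non-deterministic pauses come from.

#include <chrono>
#include <cstdio>

int main()
{
    const int N = 1000000;
    std::chrono::steady_clock::time_point t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i)
        delete new int(i);                 // allocate and immediately free a small object
    std::chrono::steady_clock::time_point t1 = std::chrono::steady_clock::now();
    long long ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
    std::printf("%d allocations took %lld ms\n", N, ms);
}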

ILoveFortran
+1  A: 

For 'embarrassingly parallel' problems, OpenMP on C++ is about 10 times faster than C# with Parallel Extensions.
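
For context, the OpenMP side of such a comparison is typically just a pragma on a loop of independent iterations (a sketch only; compile with -fopenmp or /openmp):

#include <cmath>
#include <vector>

void ScaleAll(std::vector<double>& v)
{
    // Each iteration is independent, so the loop is 'embarrassingly parallel'.
    #pragma omp parallel for
    for (int i = 0; i < static_cast<int>(v.size()); ++i)
        v[i] = std::sqrt(v[i]) * 2.0;
}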

Dmitri Nesteruk
+1  A: 

Hi,

Inspired by this, I did a quick test with 60 percent of the common instructions needed in most programs.

Here’s C# code -

for(int i=0;i<1000;i++)
{
    StreamReader str=new StreamReader("file.csv");
    StreamWriter stw=new StreamWriter("examp.csv");
    string strL="";
    while((strL=str.ReadLine())!=null)
    {
        ArrayList al=new ArrayList();
        string[] strline=strL.Split(',');
        al.AddRange(strline);
        foreach(string str1 in strline)
        {
            stw.Write(str1+",");
        }
        stw.Write("\n");
    }
    str.Close();
    stw.Close();
}

The string array and ArrayList are used purposely to include those instructions.

Here's the C++ code -

// (requires <fstream>, <sstream>, <iostream>, <string>, <vector>)
for(int i=0;i<1000;i++)
{
    std::fstream file("file.csv", std::ios::in);
    if(!file.is_open())
    {
        std::cout << "File not found!\n";
        return 1;
    }

    std::ofstream myfile;
    myfile.open("example.txt");
    std::string csvLine;

    while( std::getline(file, csvLine))
    {
        std::istringstream csvStream(csvLine);
        std::vector<std::string> csvColumn;
        std::string csvElement;

        while( std::getline(csvStream, csvElement, ',') )
        {
            csvColumn.push_back(csvElement);
        }

        for(std::vector<std::string>::iterator j = csvColumn.begin(); j != csvColumn.end(); ++j)
        {
            myfile << *j << ", ";
        }

        csvColumn.clear();
        csvElement.clear();
        csvLine.clear();
        myfile << "\n";
    }
    myfile.close();
    file.close();
}

The input file I used was 40 KB.

And here's the result -

  • C++ code ran in 9 seconds.
  • C# code: 4 seconds!!!

Oh, but this was on Linux, with C# running on Mono and C++ compiled with g++. OK, this is what I got on Windows - VS 2003:

  • C# code ran in 9 seconds.
  • C++ code - a horrible 370 seconds!!!
rks
You're using different data structures and library code there, although "370 seconds" does indicate something horrible - you aren't running it in the debugger by any chance, are you? I suspect that the performance of the CSV library you are using is more interesting than the performance of the language you are using. I would question the use of a vector in that context, and what optimisations you used. Additionally, it is widely known that iostreams (in particular, the "myfile << *j << ", ";") is much slower than other methods of writing to the file, for at least some common implementations.
Arafangion
Finally, you're doing more work in the C++ version. (Why are you clearing the csvColumn, csvElement and csvLines?)
Arafangion
+2  A: 

.Net languages can be as fast as C++ code, or even faster, but C++ code will have a more constant throughput as the .Net runtime has to pause for GC, even if it's very clever about its pauses. So if you have some code that has to consistently run fast without any pause, .Net will introduce latency at some point, even if you are very careful with the runtime GC.

Florian Doyon
A: 

Well, it depends. If the bytecode is translated into machine code (and not just JIT-compiled) - I mean, when you execute the program - and if your program uses many allocations/deallocations, it could be faster, because the GC algorithm (theoretically) needs just one pass through the whole memory, whereas normal malloc/realloc/free C/C++ calls cause an overhead on every call (call overhead, data-structure overhead, cache misses ;) ).

So it is theoretically possible (also for other GC languages).

I don't really see the inability to use metaprogramming with C# as an extreme disadvantage for most applications, because most programmers don't use it anyway.

Another big advantage is that the SQL-like LINQ "extension" provides opportunities for the compiler to optimize calls to databases (in other words, the compiler could compile the whole LINQ query into one "blob" binary where the called functions are inlined or optimized for your use, but I'm speculating here).

Quonux