views:

1036

answers:

20

A friend and I have written an encryption module and we want to port it to multiple languages so that the encryption isn't platform-specific. Originally written in C#, I've ported it to C++ and Java. C# and Java will both encrypt at about 40 MB/s, but C++ will only encrypt at about 20 MB/s. Why is C++ running this much slower? Is it because I'm using Visual C++?

What can I do to speed up my code? Is there a different compiler that will optimize C++ better?

I've already tried optimizing the code itself, such as using x >> 3 instead of x / 8 (integer division), or y & 63 instead of y % 64, and other techniques. How can I build the project differently so that it is more performant in C++?

EDIT:

I must admit that I have not looked into how the compiler optimizes code. I have classes that I will be taking in college that are dedicated to learning about compilers and interpreters.

As for my code in C++, it's not very complicated. There are NO includes, there is "basic" math along with something we call "state jumping" to produce pseudo-random results. The most complicated things we do are bitwise operations that actually do the encryption, and unchecked multiplication during an initial hashing phase. There are dynamically allocated 2D arrays which stay alive through the lifetime of the Encryption object (and are properly released in a destructor). There are only 180 lines in this. OK, so my micro-optimizations may not be necessary, but I believe they aren't the problem either. To really drive the point home, here is the most complicated line of code in the program:

input[L + offset] ^= state[state[SIndex ^ 255] & 63];

I'm not moving arrays, or working with objects.

Syntactically, the entire set of code runs perfectly, and it works seamlessly if I encrypt something with C# and decrypt it with C++ or Java; all 3 languages interact as you'd expect they would.

I don't necessarily expect C++ to run faster than C# or Java (which are within 1 MB/s of each other), but I'm sure there's a way to make C++ run just as fast, or at least faster than it is now. I admit I'm not a C++ expert, and I'm certainly not as seasoned in it as many of you seem to be, but if I can cut and paste 99% of the code from C# to C++ and get it to work in 5 minutes, then I'm a little put out that it takes twice as long to execute.

RE-EDIT: I found an optimization in Visual Studio I forgot to set before. Now C++ is running 50% faster than C#. Thanks for all the tips, I've learned a lot about compilers in my research.

+14  A: 

The question is extremely broad. Something that's efficient in C# may not be efficient in C++ and vice versa.

You're making micro-optimisations, but you need to examine the overall design of your solution to make sure that it makes sense in C++. It may be a good idea to re-design large parts of your solution so that it works better in C++.

As with all things performance related, profile the code first, then modify, then profile again. Repeat until you've got to an acceptable level of performance.

Glen
+22  A: 

Without source code it's difficult to say anything about the performance of your encryption algorithm/program. I reckon though that you made a "mistake" while porting it to C++, meaning that you used it in an inefficient way (e.g. lots of copying of objects happens). Maybe you also used VC 6, whereas VC 9 would/could produce much better code.

As for the "x >> 3" optimization... modern compilers convert integer division by a power of two into bitshifts by themselves. Needless to say, this optimization may not be the bottleneck of your program at all. You should profile it first to find out where you're spending most of your time :)

Christian
+1 for suggesting profiling.
R. Bemrose
Actually, modern compilers often *don't* do this, because we've thrown enough transistors at the ALU that it's actually faster to do the math directly.
Paul Betts
@Paul: You're right that they don't do it very often. I just wanted to point out that there's no need to do that yourself because the compiler will do it (when it makes sense to the compiler). If you have more detailed information about this issue, I'd be glad to learn something new :)
Christian
+13  A: 

Things that are 'relatively' fast in C# may be extremely slow in C++.

You can write 'faster' code in C++, but you can also write much slower code. Debug builds in particular may be extremely slow in C++, so look at the optimization settings used by your compiler.

Mostly when porting applications, C# programmers tend to use the 'create a million newed objects' approach, which really makes C++ programs slow. You should rewrite these algorithms to use pre-allocated arrays and run tight loops over them.

With pre-allocated memory you leverage the strength of C++ in using pointers to memory, casting them to the right POD (plain old data) structures.

But it really depends on what you have written in your code.

So measure your code and see where the implementation burns the most CPU, and then structure your code to use the right algorithms.

Christopher
A: 

Try the Intel compiler. It's much better than VC or GCC. As for the original question, I would be skeptical. Try to avoid using any containers and minimize the memory allocations in the offending function.

Steve
Use containers, but use them wisely. Don't put large objects into frequently-used containers.
David Thornley
I don't see how avoiding the use of containers (I assume you mean e.g. std::vector) would help the performance a lot. Especially considering the performance difference that the OP experiences.
Christian
It's been my experience from developing high-performance scientific calculations that accessing std::vector from within a tight loop is much slower than working with the raw array. It's not std::vector's fault; there is some effort to get things out of the container, probably some error handling. It would really help to see the OP's code.
Steve
I should also note that I'm guessing it's mostly math-based, since he's talking about bit-shifting optimizations.
Steve
Hmmm, I'm sure you had a reason to come to this conclusion :) Furthermore, I don't know the circumstances (such as compiler, environment, quality of the containers/STL etc.). It does surprise me a bit though, since index-based access via the []-operator should be equally fast for std::vector and a plain array. If you look at the source of GCC 4.3.4, operator[] is simply: reference operator[](size_type __n) { return *(this->_M_impl._M_start + __n); } Now, the at() method does bounds checking and thus is somewhat slower :)
Christian
Hi Christian, that's very interesting. I may have been using at(), I can't recall now. I do recall that I got about a 20% boost by moving to raw arrays. However, doesn't even this implementation involve an addition and a dereference? The loop I was working in had millions of operations, all of them using values in an array/container. Interesting point you make, however; I may rewrite that code using the vector with [] to be sure. Thanks!
Steve
Hi Steve, 20% boost is a lot :O I hesitate to blame a good std::vector implementation. You're right though, the []-op does dereference and perform an addition, but if you use a raw array, you also have to do an addition in order to jump to the right address... which leaves us with the dereferencing (which shouldn't be too expensive). In case you use VC++, this might be interesting as well: http://msdn.microsoft.com/en-us/library/aa985896.aspx
Christian
+7  A: 

Your timing results are definitely not what I'd expect with well-written C++ and well-written C#. You're almost certainly writing inefficient C++. (Either that, or you're not compiling with the same sort of options. Make sure you're testing the release build, and check the optimization options.)

However, micro-optimizations, like you mention, are going to do effectively nothing to improve the performance. You're wasting your time doing things that the compiler will do for you.

Usually you start by looking at the algorithm, but in this case we know the algorithm isn't causing the performance issue. I'd advise using a profiler to see if you can find a big time sink, but it may not find anything different from in C# or Java.

I'd suggest looking at how C++ differs from Java and C#. One big thing is objects. In Java and C#, object variables are represented the same way as C++ pointers to objects, although it isn't obvious from the syntax.

If you're moving objects about in Java and C++, you're moving pointers in Java, which is quick, and objects in C++, which can be slow. Look for where you use medium or large objects. Are you putting them in container classes? Those classes move objects around. Change those to pointers (preferably smart pointers, like std::tr1::shared_ptr<>).

If you're not experienced in C++ (and an experienced and competent C++ programmer would be highly unlikely to be micro-optimizing), try to find somebody who is. C++ is not a really simple language, having a lot more legacy baggage than Java or C#, and you could be missing quite a few things.

David Thornley
+1  A: 

There are areas where a language running on a VM outperforms C/C++, for example heap allocation of new objects. You can find more details here.

Vijay Mathew
+1  A: 

There is a somewhat old article in Dr. Dobb's Journal named Microbenchmarking C++, C#, and Java where you can see some actual benchmarks, and you will find that C# sometimes is faster than C++. One of the more extreme examples is the single hash map benchmark: .NET 1.1 is a clear winner at 126, and VC++ is far behind at 537.

Some people will not believe you if you claim that a language like C# can be faster than C++, but it actually can. However, using a profiler and the very high level of fine-grained control that C++ offers should enable you to rewrite your application to be very performant.

Martin Liversage
I find that result a bit odd, and the article doesn't appear to provide the source code used. Without that, I can't know if it was a matter of good C# performance or bad C++ writing.
David Thornley
+1  A: 

If you're serious about performance, you need to be serious about profiling.

Separately, the "string" object implementation used in C#, Java, and C++ is noticeably slower in C++.

call me Steve
+7  A: 

"Porting" performance-critical code from one language to another is usually a bad idea. You tend not to use the target language (C++ in this case) to its full potential.

Some of the worst C++ code I've seen was ported from Java. There was "new" for almost everything - normal for Java, but a sure performance killer for C++.

You're usually better off not porting, but reimplementing the critical parts.

DevSolar
Yup. Somehow, when people compare the performance of two languages, it *always* turns out that the language they originally wrote the code in is faster. This shouldn't surprise anyone, and it just means your test is flawed, but that's never stopped people from making these comparisons anyway.
jalf
+4  A: 

Show your code. We can't tell you how to optimize your code if we don't know what it looks like.

You're absolutely wasting your time converting divisions by constants into shift operations. Those kinds of braindead transformations can be made even by the dumbest compiler.

Where you can gain performance is in optimizations that require information the compiler doesn't have. The compiler knows that division by a power of two is equivalent to a right-shift.

Apart from this, there is little reason to expect C++ to be faster. C++ is much more dependent on you writing good code. C# and Java will produce pretty efficient code almost no matter what you do. But in C++, just one or two missteps will cripple performance.

And honestly, if you expected C++ to be faster because it's "native" or "closer to the metal", you're about a decade too late. JIT'ed languages can be very efficient, and with one or two exceptions, there's no reason why they must be slower than a native language.

You might find these posts enlightening. They show, in short, that yes, ultimately, C++ has the potential to be faster, but for the most part, unless you go to extremes to optimize your code, C# will be just as fast, or faster.

If you want your C++ code to compete with the C# version, then a few suggestions:

  • Enable optimizations (you've hopefully already done this)
  • Think carefully about how you do disk I/O (IOStreams isn't exactly an ideal library to use)
  • Profile your code to see what needs optimizing.
  • Understand your code. Study the assembler output, and see what can be done more efficiently.
  • Many common operations in C++ are surprisingly slow. Dynamic memory allocation is a prime example. It is almost free in C# or Java, but very costly in C++. Stack-allocation is your friend.
  • Understand your code's cache behavior. Is your data scattered all over the place? It shouldn't be a surprise then that your code is inefficient.
jalf
There's certainly no reason to expect the C++ version to be a lot slower, either.
David Thornley
Sure is. Depends on the code, but it is *extremely* easy to write very slow C++ code. Overreliance on dynamic memory allocations, for example, can easily make your C++ code an order of magnitude slower than equivalent C#. There's no reason why C++ *has* to be that much slower, of course, but it can easily happen.
jalf
+1  A: 

There are some cases where VM-based languages such as C# or Java can be faster than a C++ version, at least if you don't put much work into optimization and have a good knowledge of what is going on in the background. One reason is that the VM can optimize bytecode at runtime, figure out which parts of the program are used often, and change its optimization strategy accordingly. On the other hand, an old-fashioned compiler has to decide how to optimize the program at compile time and may not find the best solution.

Mobbit
+5  A: 

The main reason C#/Java programs do not translate well (assuming everything else is correct) is that C#/Java developers have not grokked the concept of objects and references correctly. Note that in C#/Java all objects are passed by (the equivalent of) a pointer.

class Message
{
    char buffer[10000];
};

Message Encrypt(Message message)  // Here you are making a copy of message
{
    for(int loop =0;loop < 10000;++loop)
    {
        plop(message.buffer[loop]);
    }

    return message;  // Here you are making another copy of message
}

To re-write this in a (more) C++ style you should probably be using references:

Message& Encrypt(Message& message)  // pass a reference to the message
{
   ...

    return message;  // return the same reference.
}

The second thing that C#/Java programmers have a hard time with is the lack of garbage collection. If you are not releasing memory correctly, you could start running low on memory, and the C++ version will start thrashing. In C++ we generally allocate objects on the stack (i.e. no new). If the lifetime of the object extends beyond the current scope of the method/function then we use new, but we always wrap the returned pointer in a smart pointer (so that it will be correctly deleted).

void myFunc()
{
    Message    m;
    // read message into m

    Encrypt(m);
}

void alternative()
{
    boost::shared_ptr<Message>  m(new Message);

    EncryptUsingPointer(m);
}
Martin York
+2  A: 

Totally off topic, but...

I found some info on the encryption module on the homepage you link to from your profile http://www.coreyogburn.com/bigproject.html

(quote)

Put together by my buddy Karl Wessels and I, we believe we have quite a powerful new algorithm.

What separates our encryption from the many existing encryptions is that ours is both fast AND secure. Currently, it takes 5 seconds to encrypt 100 MB. It is estimated that it would take 4.25 * 10^143 years to decrypt it!

[...]

We're also looking into getting a copyright and eventual commercial release.

I don't want to discourage you, but getting encryption right is hard. Very hard.

I'm not saying it's impossible for a twenty-year-old web developer to develop an encryption algorithm that outshines all existing algorithms, but it's extremely unlikely, and I'm very sceptical; I think most people would be.

Nobody who cares about encryption would use an algorithm that's unpublished. I'm not saying you have to open up your sourcecode, but the workings of the algorithm must be public, and scrutinized, if you want to be taken seriously...

Pieter
Plans are to eventually publish the source code and allow for scrutinizing. We are shy to do this too early, before we're properly tested for encryption strength. Soon we will take these steps. PS. I hate to toot my own horn, but I've won national competitions and this spring I go to internationals for programming in Java and C++. I'm trying to be self-confident without being arrogant though; I know that encryption is a very complicated thing to get right.
Corey Ogburn
+1  A: 

The C# JIT probably noticed at run-time that the CPU is capable of running some advanced instructions, and is compiling to something better than what the C++ was compiled to.

You can probably (surely, with enough effort) outperform this by compiling with the most sophisticated instructions available for the target CPU, and by using your knowledge of the algorithm to tell the compiler to use SIMD instructions at specific stages.

But before any fancy changes to your code, make sure you are compiling your C++ for your actual CPU, not for something much more primitive (Pentium?).

Edit:

If your C++ program does a lot of unwise allocations and deallocations this will also explain it.

Liran Orevi
A: 

[Joke]There is an error in line 13[/Joke]

Now, seriously, no one can answer the question without the source code.

But as a rule of thumb, the fact that the C++ is that much slower than the managed versions most likely points to differences in memory management and object ownership.

For instance, if your algorithm does any dynamic memory allocations inside the processing loop, this will affect performance. If you pass heavy structures by value, this will affect performance. If you make unnecessary copies of objects, this will affect performance. Exception abuse will cause performance to go south. And the list goes on.

I know of cases where a forgotten "&" after a parameter name resulted in weeks of profiling/debugging:

void DoSomething(const HeavyStructure param);  // Heavy structure will be copied
void DoSomething(const HeavyStructure& param); // No copy here

So, check your code to find possible bottlenecks.

blinnov.com
A: 

C++ is not a language where you must use classes. In my opinion it's not logical to use OOP methodologies where they don't really help. For an encrypter/decrypter it's best not to use classes; use arrays and pointers, and use as few functions/classes/files as possible. The best encryption systems consist of a single file containing a few functions. After your function works nicely you can wrap it in classes if you wish. Also check the release build; there is a huge speed difference.

Cem Kalyoncu
+1  A: 

In another thread, I pointed out that doing a direct translation from one language to another will almost always end up with the version in the new language running more poorly.

Different languages take different techniques.

kyoryu
A: 

Nothing is faster than good machine/assembly code, so my goal when writing C/C++ is to write my code in such a way that the compiler understands my intentions and generates good machine code. Inlining is my favorite way to do this.

First, here's an aside. Good machine code:

  • uses registers more often than memory
  • rarely branches (if/else, for, and while)
  • uses memory more often than function calls
  • rarely dynamically allocates any more memory (from the heap) than it already has

If you have a small class with very little code, then implement its methods in the body of the class definition and declare it locally (on the stack) when you use it. If the class is simple enough, the compiler will often generate only a few instructions to effect its behavior, without any function calls or memory allocation to slow things down, just as if you had written the code all verbose and non-object-oriented. I usually have assembly output turned on (/FAs or /FA with Visual C++) so I can check the output.

It's nice to have a language that allows you to write high-level, encapsulated object-oriented code and still translate into simple, pure, lightning fast machine code.

Steel
A: 

Here's my 2 cents.

I wrote a BlowFish cipher in C (and C#). The C# was almost 'identical' to the C.

How it compared (I can't remember the exact numbers now, so these are just the recalled ratios):

C native:       50
C managed:      15
C#:             10

As you can see, the native compilation outperforms any managed version. Why?

I am not 100% sure, but my C version compiled to very optimised assembly code; the assembler output looked almost the same as a hand-written assembly version I found.

leppie