Hi, I am planning to participate in the development of a code base written in C for Monte Carlo analysis of complex problems. This code allocates huge data arrays in memory to speed up its performance, and for that reason the author chose C over C++, claiming that C makes for faster and more reliable code (with respect to memory leaks).

Do you agree with that? What would be your choice if you needed to store 4-16 GB of data arrays in memory during a calculation?

+3  A: 

There is no real difference between C and C++ in terms of memory allocation. C++ has more 'hidden' data, such as pointers to virtual tables, if you choose to have virtual methods on your objects. But allocating an array of chars is just as expensive in C as in C++; in fact, they're probably both using malloc to do it. In terms of performance, C++ calls a constructor for each object in the array, but only if one exists: the implicit default constructor for a plain type does nothing and is optimized away.

As long as you're preallocating pools of data to avoid memory fragmentation, you should be good to go. If you have simple POD structs without virtual methods and without constructors, there's no difference.
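As a minimal sketch of that equivalence (the size here is illustrative): both allocations below request raw, uninitialized storage, and neither does any per-element work.

```cpp
#include <cstdlib>
#include <new>

int main() {
    const std::size_t n = std::size_t(1) << 30;  // 1 GiB; size is illustrative

    // C-style allocation: raw, uninitialized bytes.
    char* a = static_cast<char*>(std::malloc(n));

    // C++-style allocation: also raw, uninitialized bytes -- for a type
    // with a trivial default constructor, no per-element work is done.
    char* b = new (std::nothrow) char[n];

    std::free(a);
    delete[] b;
}
```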

roe
"in fact, they're both using malloc to do it" Just being a pedant, but that's not necessarily true; `new`/`delete` don't have to use `malloc` and `free` by default.
GMan
@GMan: of course not, there should be a 'probably' in there. G++ does it by default, I think. C doesn't have to use it either. :)
roe
For g++/gcc, both `new` and `malloc` ultimately call `brk`
Dan Andreatta
@Dan Andreatta: only on Linux systems.
Stephen Canon
@Stephen Canon: Of course, right, and actually only when they ultimately need a new pool of memory from the OS.
Dan Andreatta
+19  A: 

Definitely C++. By default, there's no significant difference between the two, but C++ provides a couple of things C doesn't:

  1. constructors/destructors. These let you automate most memory management, improving reliability.
  2. per-class allocators. These let you optimize allocation based on how particular objects are designed and/or used. This can be particularly useful if you need a large number of small objects (to give one obvious example; see the sketch below).

The bottom line is that in this respect, C provides absolutely no possibility of an advantage over C++. In the very worst case, you can do exactly the same things in the same ways.
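To make both points concrete, here is a hedged sketch (the Pool class, the Particle type, and all sizes are invented for illustration) combining a per-class allocator that draws from a preallocated pool with automatic cleanup:

```cpp
#include <cstddef>
#include <vector>

// Illustrative bump allocator; a real pool would also handle growth,
// alignment for arbitrary types, and out-of-memory errors.
class Pool {
    std::vector<unsigned char> storage_;  // freed automatically: point 1 above
    std::size_t offset_ = 0;
public:
    explicit Pool(std::size_t bytes) : storage_(bytes) {}
    void* allocate(std::size_t n) {
        void* p = storage_.data() + offset_;
        offset_ += n;                     // no per-allocation system call
        return p;
    }
    void deallocate(void*) {}             // bump allocator: all freed at once
};

Pool g_particlePool(1 << 20);             // hypothetical pool for small objects

struct Particle {
    double x, y, z;
    // Per-class allocator: point 2 above.
    static void* operator new(std::size_t n) { return g_particlePool.allocate(n); }
    static void operator delete(void* p)     { g_particlePool.deallocate(p); }
};

int main() {
    Particle* p = new Particle{1.0, 2.0, 3.0};  // drawn from the pool
    delete p;                                   // returned to the pool
}
```

Allocating many small Particle objects this way costs a pointer bump instead of a trip through the general-purpose allocator, and the pool's storage is released automatically when it is destroyed.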

Jerry Coffin
Constructors/destructors are sometimes a bad thing though, performance wise, and it's perhaps easy to miss.
roe
@roe: Getting maximum performance from either requires care -- but a ctor and dtor are ultimately just a way of packaging the same operations you do in C. The only difference is that they make management easy enough that you're often tempted to use them when you wouldn't even consider it in C.
Jerry Coffin
Er, if you're using constructors and destructors, C++ will *definitely* be slower, simply because it has to allocate *and* initialise.
paxdiablo
@paxdiablo: Sorry, but that's complete nonsense. A ctor can do as much *or* as little initialization as desired. Specifically, you'll (normally) use it to do exactly the same initialization as you would in C, in which case it's no slower (but due to the other factors noted above, it may still be faster).
Jerry Coffin
@paxdiablo: Surely, if you have to initialize, you have to initialize whether it's in a constructor or via a block of code somewhere else. If you need initialized memory you have to allocate _and_ initialize. If you don't need to initialize, you have no need for a user-declared constructor and the implementation can optimize its generated one to nothing.
Charles Bailey
@paxdiablo: You can choose to have no constructor/destructor in your class. The performance does not depend on how the C++ features work, but on how you use (or don't use) them depending on your needs.
Phong
Misunderstanding, Jerry. I thought your "Definitely C++" comment was to do with the performance whereas, on a re-read, I can see you meant it's the one you'd choose. Apologies.
paxdiablo
@paxdiablo: it's what I'd choose based on the cited criteria of performance and reliability. C++ doesn't *definitely* have an advantage in performance, but in the worst case it can give exactly the same performance, and about 90% of the time it will have at least a slight advantage.
Jerry Coffin
... and reliability is much better in C++. Even if you only use the C subset of C++, you can drop in a couple of RAII objects to help handle memory leaks in complex functions (multiple returns, potential error shortcuts...) and code will be more reliable.
David Rodríguez - dribeas
A: 

You can use the C family of memory allocation functions in C++ too: the standard malloc and free, realloc to enlarge/shrink arrays, and the non-standard alloca to allocate memory on the stack.

If you go with new, a debug build will typically allocate more memory than requested and do extra consistency checks, and new also calls the constructor for class types. In a release (-O3) build the difference will be negligible for most applications.

Now, what new brings that malloc doesn't is placement new. You can preallocate a buffer and then use placement new to construct your structure inside that buffer, making the "allocation" itself instantaneous.
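A minimal sketch of that technique (the Sample type and buffer size are illustrative):

```cpp
#include <cstdlib>
#include <new>

struct Sample {
    double value;
    int    weight;
};

int main() {
    // Preallocate one big buffer up front.
    void* buffer = std::malloc(sizeof(Sample) * 1000);

    // Placement new: constructs the object in existing storage,
    // so no allocation happens on this line at all.
    Sample* s = new (buffer) Sample{3.14, 1};

    s->~Sample();       // destroy explicitly; placement new has no matching delete
    std::free(buffer);  // release the whole buffer once, at the end
}
```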

All in all, I wouldn't stay away from C++ because of performance concerns. If anything, your code may be slightly more efficient because member functions receive the this pointer in a register rather than as an explicit parameter, as in the C equivalent. A real reason to stay away from C++ is the size of the C++ runtime. If you develop programs for embedded systems or boot-loaded programs, you can't embed the ~4 MB runtime. For normal applications, however, this won't make a difference.

Blindy
+2  A: 

For allocating raw data, there shouldn't be a difference between C and C++ on most systems, as they normally both use the same runtime library mechanisms. I wonder if this was the classic benchmarking pitfall where they measured the runtime of the constructor calls in C++ but conveniently forgot to include the runtime of the equivalent initialisation code in C.

Also, the "more reliable (concerning memory leaks)" argument doesn't hold any water if you're using RAII in C++ (as you should). Unless someone's referring to making it leak more reliably, using RAII, smart pointers and container classes will reduce the potential for leaks, not increase it.
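As a hedged illustration of what RAII buys here (the function and types are invented): every early return or thrown exception below still releases the memory.

```cpp
#include <memory>
#include <stdexcept>
#include <vector>

std::vector<double> runSimulation(std::size_t samples) {
    auto scratch = std::make_unique<double[]>(samples);  // freed automatically
    std::vector<double> results(samples);                // likewise

    if (samples == 0)
        throw std::invalid_argument("no samples");       // nothing leaks here

    // ... fill scratch, compute results ...

    return results;  // scratch is released when it goes out of scope
}
```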

My main concerns with allocating that much memory would be twofold:

  • If you're getting close to the physical memory limit on the machines running the Monte Carlo simulation, performance will suffer badly: the disk may well start to thrash once the virtual memory system has to page heavily. Virtual memory isn't "free", even though a lot of people think it is.
  • Data layout needs to be carefully considered to maximise processor cache usage, otherwise you'll partially lose the benefit of keeping the data in main memory in the first place (see the sketch after this list).
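An illustrative sketch of the data-layout point (the structs here are invented): switching from array-of-structs to struct-of-arrays often improves cache utilisation when a hot loop touches only one field.

```cpp
#include <vector>

// Array-of-structs: each sample's fields sit together, so a loop that
// reads only `x` drags y and z through the cache as well.
struct SampleAoS { double x, y, z; };

// Struct-of-arrays: values of the same field are contiguous, so a pass
// over `x` streams through memory one full cache line at a time.
struct SamplesSoA {
    std::vector<double> x, y, z;
};

double sumX(const SamplesSoA& s) {
    double total = 0.0;
    for (double v : s.x)   // sequential, cache-friendly access
        total += v;
    return total;
}
```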
Timo Geusch
+1  A: 

If memory allocation is a bottleneck in such code, I would suggest redesigning rather than changing language for faster allocation. If you allocate memory once and then perform lots of calculations, I would expect those calculations to be the bottleneck. If the cost of allocation is significant, something is wrong here.

Tadeusz Kopec
A: 

If you need to store 4-16 GB of data arrays in memory during a calculation and your machine has only 2 GB of physical memory, then what?

What if your machine has 16 GB of physical memory? Does the operating system take up no physical memory itself?

Does the operating system even allow you an address space of 4 GB, 16 GB, etc.?

I suggest that, if performance is a primary implementation constraint, then understanding how the intended target platforms function and perform is far more significant than any measurable performance difference between C and C++ given identical environments and algorithms.

Sam
+4  A: 

There is one feature of C99 that's absent from C++ and that potentially gives significant speed gains in heavy number-crunching code: the restrict keyword. If you can use a C++ compiler that supports it (typically as an extension), then you have an extra tool in the kit when it comes to optimization. It's only a potential gain, though: sufficient inlining can allow the same optimizations as restrict, and more. It also has nothing to do with memory allocation.
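For illustration, a sketch using the __restrict spelling that several C++ compilers (GCC, Clang, MSVC) accept as an extension:

```cpp
// With restrict-qualified pointers, the compiler may assume `out` never
// aliases `a` or `b`, so it can keep values in registers and vectorise
// the loop more aggressively.
void add(double* __restrict out,
         const double* __restrict a,
         const double* __restrict b,
         int n) {
    for (int i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}
```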

If the author of the code can demonstrate a performance difference between C and C++ code allocating a 4-16 GB array, then (a) I'm surprised, but OK, there's a difference, and (b) how many times is the program going to allocate such large arrays? Is your program actually going to spend a significant amount of its time allocating memory, or does it spend most of its time accessing memory and doing computations? It takes a long time to actually do anything with a 4 GB array, compared with the time it takes to allocate it, and that means you should be worried about the performance of the "anything", not the performance of allocation. Sprinters care a lot how quickly they get off the blocks. Marathon runners, not so much.

You also have to be careful how you benchmark. You should be comparing, for example, malloc(size) against new char[size]. If you test malloc(size) against new char[size]() then it's an unfair comparison, since the latter sets the memory to 0 and the former doesn't. Compare against calloc instead, but also note that malloc and calloc are both available from C++ in the (unlikely) event that they do prove measurably faster.
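A hedged sketch of such a comparison on the C++ side (the harness and size are illustrative):

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>

int main() {
    const std::size_t size = std::size_t(1) << 30;  // 1 GiB, illustrative

    auto t0 = std::chrono::steady_clock::now();
    char* a = new char[size];     // uninitialized: the fair peer of malloc(size)
    auto t1 = std::chrono::steady_clock::now();
    char* b = new char[size]();   // zero-initialized: the fair peer of calloc
    auto t2 = std::chrono::steady_clock::now();

    // Caveat: the OS may hand out lazily-zeroed pages, so naive numbers
    // like these can mislead; a real benchmark should also touch the memory.
    using us = std::chrono::microseconds;
    std::printf("new char[size]   : %lld us\n",
        (long long)std::chrono::duration_cast<us>(t1 - t0).count());
    std::printf("new char[size]() : %lld us\n",
        (long long)std::chrono::duration_cast<us>(t2 - t1).count());

    delete[] a;
    delete[] b;
}
```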

Ultimately, though, if the author "owns" or started the project, and prefers to write in C rather than C++, then he shouldn't justify that decision with probably-spurious performance claims, he should justify it by saying "I prefer C, and that's what I'm using". Usually when someone makes a claim like this about language performance, and it turns out on testing not to be true, you discover that performance is not the real reason for the language preference. Proving the claim false will not actually cause the author of this project to suddenly start liking C++.

Steve Jessop
+2  A: 

The only thing that counts against C++ is its additional complexity: combine that with a programmer who uses it incorrectly, and you can easily slow things down notably. Using a C++ compiler without C++ features will give you the same performance. Using C++ correctly, you have some possibilities to be faster.

The language isn't your problem; allocating and traversing large arrays is.

The main deadly mistake you could make in allocation (in either language) is allocating 16 GB of memory and initializing it to zero, only to fill it with actual values later.
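A minimal sketch of that mistake and its fix (the types and the compute helper are invented):

```cpp
#include <cstddef>
#include <vector>

double compute(std::size_t i) { return 0.5 * double(i); }  // stand-in computation

// Anti-pattern: every element is zeroed on construction, then overwritten.
std::vector<double> fillWasteful(std::size_t n) {
    std::vector<double> v(n);      // zero-initialises all n doubles
    for (std::size_t i = 0; i < n; ++i)
        v[i] = compute(i);         // second pass over the same memory
    return v;
}

// Better: reserve capacity only, so each element is written exactly once.
std::vector<double> fillOnce(std::size_t n) {
    std::vector<double> v;
    v.reserve(n);                  // no initialisation of elements
    for (std::size_t i = 0; i < n; ++i)
        v.push_back(compute(i));
    return v;
}
```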

I'd expect the biggest performance gains from algorithmic optimizations that improve locality of reference.

Depending on the underlying OS, you may also be able to influence its caching behaviour, e.g. by indicating that a range of memory will be processed only sequentially.
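On POSIX systems, for example, such a hint can be passed with posix_madvise (a sketch; availability and effect vary by OS, and MAP_ANONYMOUS is a common extension rather than strict POSIX):

```cpp
#include <sys/mman.h>  // POSIX-specific; not available on all platforms
#include <cstddef>

int main() {
    const std::size_t bytes = std::size_t(1) << 30;  // 1 GiB, illustrative

    // Ask the OS for a large anonymous mapping.
    void* data = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (data == MAP_FAILED)
        return 1;

    // Hint that the range will be swept sequentially, so the kernel can
    // read ahead aggressively and drop already-visited pages sooner.
    posix_madvise(data, bytes, POSIX_MADV_SEQUENTIAL);

    // ... one sequential pass over `data` ...

    munmap(data, bytes);
}
```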

peterchen