Hi Folks,

I'm working on a multiplatform project (MacOS, Linux and Windows) and I've been having performance issues when trying to compile a big source file in VC++ 2010.

Here's a little background. There's one .cpp file inside the project that is 800 KB. The file is that big because it contains a huge unsigned char array holding image information, and the array can't be split.

Now, I've been working on MacOS for the last couple of months, so I didn't notice this problem until a few days ago. On both MacOS and Linux, gcc compiles the file in a second or so, but with VC++ it takes about an hour.

At first I thought it was caused by the computer itself, since it's not a fast one. But then I tried Cygwin and GCC 4 on the same machine and the compilation time was almost as fast as on MacOS. So I have to assume the problem is caused by something within VC++ 2010.

I haven't tweaked VC++ in any way. The project files are generated by CMake, so I believe there should be some room for optimization here. Any help will be appreciated.

Thanks.

Hernan

A: 

Any chance you can place that large array into a separate resource file and read it in at runtime? That's how I would go about fixing this, if that array is indeed the problem. Failing that, I'd place the array in its own file so that it doesn't get recompiled often.
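
Something along these lines is what I mean (just a sketch, not tested; "image.bin" is a made-up name for a raw binary dump of the original array):

#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

// Read the whole binary file into a vector at runtime instead of
// compiling the bytes into the executable.
std::vector<unsigned char> loadImageData(const char *path)
{
    std::ifstream in(path, std::ios::binary);
    return std::vector<unsigned char>(
        std::istreambuf_iterator<char>(in),
        std::istreambuf_iterator<char>());
}

int main()
{
    std::vector<unsigned char> data = loadImageData("image.bin");
    std::printf("loaded %u bytes\n", (unsigned)data.size());
    return 0;
}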

Michael Dorgan
A: 

Looks like there is some O(n^k) part of VC++ with k>1 when parsing array initializers...

That would qualify as a compiler bug you cannot do much about, but something that may work is

unsigned char bdata[][100] = {
    { 0x01, 0x02, ... , 0x63} ,
    { 0x64, 0x65, ... , 0xC7} ,
    { 0xC8, 0xC9, ... , 0x2B} ,
    ...
};
unsigned char *data = &(bdata[0][0]);

that is, breaking the data into 100-byte rows... MAYBE this will be parsed/compiled a lot faster by VC (just a suspicion I have, given the symptoms) and it shouldn't change your build process much.

I don't use VC++2010 so I cannot check.

Just pay attention that sizeof(data) in this case will be just the size of a pointer, while sizeof(bdata) will instead be the size of the image rounded up to a multiple of the row size.
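
A tiny example of that caveat (a toy 6-byte "image" split into 4-byte rows):

#include <cstdio>

unsigned char bdata[][4] = {
    { 0x01, 0x02, 0x03, 0x04 },
    { 0x05, 0x06 }               // missing elements are zero-filled
};
unsigned char *data = &(bdata[0][0]);

int main()
{
    // prints 8 (2 rows x 4 bytes, not the 6 bytes written)
    // followed by the size of a pointer (typically 4 or 8)
    std::printf("%u %u\n", (unsigned)sizeof(bdata), (unsigned)sizeof(data));
    return 0;
}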

If this version compiles at the same speed then unfortunately the parsing really is O(n^k) in the number of bytes, and you're basically doomed if you want that data compiled as an array.

Another option could be using a huge string literal... the compiler may handle that better (maybe they coded a special code path for string literals, because "big" literals are not so uncommon), but your code generator will have to handle the escaping of special chars.
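
For instance, the generator could emit something like this (just a sketch of the idea, not tested on VC):

#include <cstdio>

// The image bytes packed into concatenated string literals. Two traps:
// the compiler appends one trailing '\0' that is not part of the data,
// and a hex escape swallows every hex digit that follows it inside the
// same literal ("\x3" "A" is two bytes, but "\x3A" is one), so the
// generator must break literals or escape accordingly.
static const unsigned char data[] =
    "\x01\x02\x03\x10"
    "\xC8\xC9\xFF";

static const unsigned int dataSize = sizeof(data) - 1; // drop the trailing NUL

int main()
{
    std::printf("%u bytes\n", dataSize);
    return 0;
}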

6502