Last week I wrote a few lines of code in C# to load a large text file (300,000 lines) into a Dictionary. It took ten minutes to write and it executed in less than a second.

Now I'm converting that piece of code into C++ (because I need it in an old C++ COM object). I've spent two days on it so far. :-( Although the productivity difference is shocking on its own, it's the performance that I need some advice on.

It takes seven seconds to load, and even worse: it takes just as long to free all the CStringWs afterwards. This is not acceptable, and I must find a way to increase the performance.

Is there any chance that I can allocate this many strings without seeing this horrible performance degradation?

My guess right now is that I'll have to stuff all the text into a large array and then let my hash table point to the beginning of each string within this array and drop the CStringW stuff.

But before that, any advice from you C++ experts out there?

EDIT: My answer to myself is given below. I realized that it is the fastest route for me, and also a step in what I consider the right direction - towards more managed code.

+3  A: 

What sort of container are you storing your strings in? If it's a std::vector of CStringW and if you haven't reserve-d enough memory beforehand, you're bound to take a hit. A vector typically resizes once it reaches its limit and then copies the entire contents to the new memory location, which can give you a big hit. As your vector grows exponentially (i.e. if the initial size is 1, it allocates 2 next time, then 4, and so on), the hits become less and less frequent.
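
For example (assuming a vector of CStringW and the line count from the question):

#include <vector>
#include <atlstr.h>   // CStringW (ATL/MFC)

std::vector<CStringW> entries;
entries.reserve(300000);   // one allocation up front; no grow-and-copy cycles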

It also helps to know how long the individual strings are. (At times :)

dirkgently
Fixed array... are you allocating all of the memory up front or resizing the array with each new string?
Judge Maygarden
No no no. I never reallocate it. It's a hash table, and in the *rare* occasions where I get a duplicate hash key some extra things are done, but during my benchmarking I actually threw away the duplicates just to make sure that wasn't a performance issue.
danbystrom
Looks like you have everything set up perfectly. Did you try with std::wstring? Also, it's about time to enlist the help of a profiler.
dirkgently
+10  A: 

Yikes. Get rid of the CStrings...

Try a profiler as well. Are you sure you weren't just running debug code?

Use std::string instead.

EDIT:

I just did a simple test of ctor and dtor comparisons.

CStringW seems to take between 2 and 3 times the time to do a new/delete.

I iterated 1,000,000 times doing new/delete for each type - nothing else - with a GetTickCount() call before and after each loop. CStringW consistently takes twice as long.
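
For reference, a rough reconstruction of such a timing loop (the payload string is made up, and std::wstring is assumed for a like-for-like wide comparison; build in Release, not Debug):

#include <windows.h>   // GetTickCount
#include <atlstr.h>    // CStringW
#include <string>
#include <cstdio>

int main()
{
    const int N = 1000000;

    DWORD t0 = GetTickCount();
    for (int i = 0; i < N; ++i)
        delete new CStringW(L"some dictionary entry");
    DWORD cstringMs = GetTickCount() - t0;

    t0 = GetTickCount();
    for (int i = 0; i < N; ++i)
        delete new std::wstring(L"some dictionary entry");
    DWORD wstringMs = GetTickCount() - t0;

    printf("CStringW: %lu ms, std::wstring: %lu ms\n", cstringMs, wstringMs);
    return 0;
}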

That doesn't address your entire issue though I suspect.

EDIT: I also don't think that using string or CStringW is the real problem - there is something else going on that is causing your issue.

(but for god's sake, use stl anyway!)

You need to profile it. That is a disaster.

Tim
I'll try std::string, stay tuned! :-)
danbystrom
CString isn't that bad...
peterchen
CString is bad. First, as I just confirmed, it is slower than std::string. It is not portable. It is an MFC hack - along with all the other MFC junk. Don't get me wrong, I've used them for years, but the less MFC we use, the better...
Tim
+1  A: 

When working with string classes, you should always look out for unnecessary operations: don't use constructors, concatenation and similar operations too often, and especially avoid them in loops. I suppose there's some character-encoding reason you use CStringW, so you probably can't switch to something different; this would be another way to optimize your code.
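
For example, a sketch of the loop pitfall (Process, rawLines and lineCount are hypothetical names):

// Wasteful: constructs and destroys a heap-backed CStringW every iteration.
for (size_t i = 0; i < lineCount; ++i)
{
    CStringW copy(rawLines[i]);   // allocation + copy
    Process(copy);
}                                 // destruction + deallocation each pass

// Better: construct nothing per pass; hand the raw data straight through.
for (size_t i = 0; i < lineCount; ++i)
    Process(rawLines[i]);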

schnaader
+4  A: 

If it is a read-only dictionary then the following should work for you.

Use fseek/ftell to find the size of the text file.

Allocate a chunk of memory of that size + 1 to hold it.

fread the entire text file, into your memory chunk.

Iterate through the chunk:

    push_back into a vector<const char *> the starting address of each line.

    search for the line terminator using strchr.

    when you find it, deposit a NUL, which turns it into a string.
    the next character is the start of the next line,

until you do not find a line terminator.

Insert a final NUL character.

You can now use the vector to get the pointer that will let you access the corresponding value.

When you are finished with your dictionary, deallocate the memory and let the vector die when it goes out of scope.

[EDIT] This can be a little more complicated on the DOS platform, as the line terminator is CRLF.

In that case, use strstr to find it, and increment by 2 to find the start of the next line.
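
A minimal sketch of the whole recipe (assumes single-byte text with LF terminators; the file name and error handling are placeholders):

#include <cstdio>
#include <cstring>
#include <vector>

int main()
{
    FILE * f = fopen("dictionary.txt", "rb");
    if (!f) return 1;

    fseek(f, 0, SEEK_END);               // find the size of the file...
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);

    char * buffer = new char[size + 1];  // ...and hold it in one chunk
    fread(buffer, 1, size, f);
    fclose(f);
    buffer[size] = '\0';                 // final NUL character

    std::vector<const char *> lines;
    char * p = buffer;
    while (*p)
    {
        lines.push_back(p);              // starting address of this line
        char * nl = strchr(p, '\n');     // search for the line terminator
        if (!nl)
            break;                       // no terminator left: last line
        *nl = '\0';                      // deposit a NUL: it's a string now
        p = nl + 1;                      // next character starts the next line
    }
    // For CRLF files, also overwrite the '\r' before each '\n' with a NUL.

    // ... point the hash table at the entries of `lines`, use it, then:
    delete[] buffer;                     // one deallocation frees every string
    return 0;
}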

EvilTeach
This will work great and is fast. I don't think it is necessary though - he is doing something bad in his code that can probably be fixed while keeping the code readable and simple.
Tim
What is killing it is the construct/destruct cost of all the memory allocations/deallocations.
EvilTeach
Or (if Win32 supports the equivalent) mmap the file MAP_PRIVATE (copy on write), which saves copying it.
Pete Kirkham
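
If the Win32 equivalent is wanted, a hedged sketch using a copy-on-write file mapping (file name is a placeholder; error handling omitted):

#include <windows.h>

// Map the file copy-on-write, so depositing NULs only touches
// private pages instead of the file on disk.
HANDLE file = CreateFileW(L"dictionary.txt", GENERIC_READ, FILE_SHARE_READ,
                          NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
HANDLE mapping = CreateFileMappingW(file, NULL, PAGE_WRITECOPY, 0, 0, NULL);
char * base = static_cast<char *>(
    MapViewOfFile(mapping, FILE_MAP_COPY, 0, 0, 0));

// ... parse in place exactly as above ...

UnmapViewOfFile(base);
CloseHandle(mapping);
CloseHandle(file);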
Certainly an option. I don't know what the additional overhead costs are for that.
EvilTeach
I'll probably do something like this! But I don't know the size in advance - I want to read ASCII, UTF-8 and Unicode files. So I'm thinking that I will do exactly as you say, BUT I will do it in 32Kb chunks. That way I will end up with 100 memory allocations instead of 330000.
danbystrom
I think this will be doing like half the work in C and not C++.
CDR
@EvilTeach - I don't know if you are correct about the ctor and dtor. Without a profiler, it is not cut and dried. He may be doing other bad things in his code. I could create and destroy far more objects than he did in less than a second, so I am not convinced.
Tim
@CDR use C++ constructs if it will make you feel better.
EvilTeach
@Tim so does an STL string allocate/deallocate memory? If so... does it allocate in the constructor and deallocate in the destructor?
EvilTeach
+13  A: 

This sounds very much like the Raymond Chen vs Rico Mariani C++ vs C# Chinese/English dictionary performance bake-off. It took Raymond several iterations to beat C#.

Perhaps there are ideas there that would help.

http://blogs.msdn.com/ricom/archive/2005/05/10/performance-quiz-6-chinese-english-dictionary-reader.aspx

Tony Lee
+1 for the interesting link
EvilTeach
@danbystrom, ASCII can be regarded as a subset of UTF-8. UTF-8 is a Unicode encoding format. If you have your files in UTF-8 you can use any character defined in Unicode, and existing ASCII text does not need re-encoding.
iconiK
+10  A: 

You are stepping into the shoes of Raymond Chen. He did the exact same thing, writing a Chinese dictionary in unmanaged C++. Rico Mariani did too, writing it in C#. Mr. Mariani made one version. Mr. Chen wrote 6 versions, trying to match the perf of Mariani's version. He pretty much rewrote significant chunks of the C/C++ runtime library to get there.

Managed code got a lot more respect after that. The GC allocator is impossible to beat. Check this blog post for the links. This blog post might interest you too; it's instructive to see how the STL value semantics are part of the problem.

Hans Passant
Well, to quote Mariani: "So, yup, you can definately beat the CLR." But I agree the CLR performance is very impressive.
anon
+2  A: 

Load the file into a single buffer, parse the text to replace line breaks with string terminators ('\0'), and use pointers into that buffer to add to the set.

Alternatively - e.g. if you have to do an ANSI/UNICODE conversion during load - use a chunk allocator that sacrifices the ability to delete individual elements:

#include <vector>

typedef unsigned char BYTE;   // as in <windows.h>

class ChunkAlloc
{
    std::vector<BYTE> m_data;   // the one big chunk
    size_t m_fill;              // bytes handed out so far
public:
    explicit ChunkAlloc(size_t chunkSize) : m_data(chunkSize), m_fill(0) {}
    void * Alloc(size_t size)
    {
        if (m_data.size() - m_fill < size)
        {
            // normally, you'd reserve a new chunk here
            return 0;
        }
        void * result = &(m_data[m_fill]);
        m_fill += size;
        return result;
    }
};
// all allocations from the chunk are freed when the chunk is destroyed.
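
Hypothetical usage during load (len and converted stand in for the converted line and its length; the chunk size is a guess):

#include <cstring>   // memcpy

ChunkAlloc chunk(4 * 1024 * 1024);   // one 4 MB chunk

// For each line: carve out space, copy the converted text, keep the pointer.
wchar_t * s = static_cast<wchar_t *>(chunk.Alloc((len + 1) * sizeof(wchar_t)));
memcpy(s, converted, (len + 1) * sizeof(wchar_t));
// `s` goes into the hash table; no per-string delete is ever needed.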

Wouldn't hack that together in ten minutes, but 30 minutes and some testing sounds fine :)

peterchen
Yes yes yes. That's what I've figured out during the night, too! :-)
danbystrom
:) See Raymond vs Rico (http://blogs.msdn.com/ricom/archive/2005/05/10/416151.aspx) - it's very similar to your project, with clear results: with C++ you can solve your problem faster than the C# app even loads, but it costs you. A lot.
peterchen
+3  A: 

Thanks to all of you for your insightful comments. Upvotes for you! :-)

I must admit I wasn't prepared for this at all - that C# would beat the living crap out of good old C++ in this way. Please don't read that as an offence to C++, but rather as a testament to the amazingly good memory manager that sits inside the .NET Framework.

I decided to take a step back and fight this battle in the InterOp arena instead! That is, I'll keep my C# code and let my old C++ code talk to the C# code over a COM interface.

A lot of questions were asked about my code and I'll try to answer some of them:

  • The compiler was Visual Studio 2008 and no, I wasn't running a debug build.

  • The file was read with a UTF-8 file reader which I downloaded from a Microsoft employee who published it on their site. It returned CStringWs, and about 30% of the time was actually spent just reading the file.

  • The container I stored the strings in was just a fixed-size vector of pointers to CStringWs, and it was never resized.

EDIT: I'm convinced that the suggestions I was given would indeed work, and that I probably could beat the C# code if I invested enough time in it. On the other hand, doing so would provide no customer value at all and the only reason to pull through with it would be just to prove that it could be done...

danbystrom
thanks for the synopsis.
Tim
+2  A: 

The problem is not CString as such, but rather that you are allocating a lot of small objects - the default memory allocator isn't optimized for this.

Write your own allocator - allocate a big chunk of memory and then just advance a pointer in it when allocating. This is actually what the .NET allocator does. When you are done, delete the whole buffer.

I think there was a sample of writing custom new/delete operators in (More) Effective C++.
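
A hedged sketch of that technique (not the book's actual sample; DictEntry is a made-up name, and the arena could be the ChunkAlloc class from peterchen's answer above):

#include <cstddef>

// g_arena: one big preallocated chunk, e.g. a ChunkAlloc instance.
extern ChunkAlloc g_arena;

struct DictEntry
{
    const wchar_t * text;

    // Bump-allocate from the arena instead of the general-purpose heap.
    static void * operator new(std::size_t size) { return g_arena.Alloc(size); }

    // No-op: every entry is released at once when the arena is destroyed.
    static void operator delete(void *) {}
};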

devdimi