views:

188

answers:

5

Hi all, I have a C++ program that does text processing on 40k records. We developed this part in C++ because we thought it would be faster. I then executed the C++ program from my C# program using Process.Start, but the problem is that we feel we have lost control of the execution flow: we can't log what happens inside the C++ part. I want to integrate the C++ code much more tightly into my C# program. I googled and found that I could build a DLL from the C++ code and then use it inside my C# program. My questions are: 1) Will this slow down the execution of the C++ part? 2) Is there a better alternative for integrating the C++ part into my C# program? Thanks.

+1  A: 

Wrapping the C++ code inside a DLL will not slow it down in any way.

Yes, there is a (slight) performance penalty for calling functions in a DLL as opposed to in the executable - for instance, the compiler cannot inline the calls. But this is usually completely negligible overhead (3-5 CPU instructions).

This is probably the simplest way.

EFraim
>>Wrapping C++ code inside DLL will not slow it down in any way. Thanks, that's useful.
javasoul
While in most cases the P/Invoke overhead does not matter, it is certainly MUCH higher than 3-5 CPU instructions. In fact it is usually about 1000 to several thousand CPU instructions (depending on the amount of marshalling needed). http://www.creativedocs.net/blog/index.php?/archives/5-Trying-to-speed-up-PInvoke-interop.html
Foxfire
I am talking about the DLL overhead. Of course .NET can (and will) add additional overhead. (Note that if we are talking about a managed C++ module, there will be no P/Invoke.)
EFraim
A: 

You can't tell if this will be fast enough to meet your goals without measuring. Do it the simplest way possible (wrap the existing C++ code inside a DLL) and see if it meets your performance goals. I'm guessing it probably will.

Calling native code from managed code does have some overhead per method call - if your program is heavily compute-bound and calls the native methods many times per record, you may see a slow-down due to the interop. If your code calls the native code once to process all 40k records in bulk, the cost of the interop will be greatly dwarfed by the actual time spent processing the records. If the records are coming from slower storage media, such as over the network, your processing time will probably be negligible compared to the I/O time.
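To illustrate the batching point, here is a minimal sketch of a native entry point that handles every record in a single managed-to-native transition (the function name, the fixed-length record layout, and the upper-casing transformation are all hypothetical placeholders, not the asker's actual processing):

```cpp
#include <cctype>

// Hypothetical bulk entry point: the C# side calls this ONCE, passing all
// records as a flat buffer of fixed-length slots, so the interop cost is
// paid a single time instead of once per record.
extern "C" /* add __declspec(dllexport) when building a Windows DLL */
int ProcessRecords(char* buffer, int recordCount, int recordLength)
{
    int processed = 0;
    for (int i = 0; i < recordCount; ++i) {
        char* rec = buffer + i * recordLength;
        // Stand-in for the real text processing: upper-case in place.
        for (int j = 0; j < recordLength && rec[j] != '\0'; ++j)
            rec[j] = static_cast<char>(
                std::toupper(static_cast<unsigned char>(rec[j])));
        ++processed;
    }
    return processed; // number of records handled
}
```

On the C# side this maps to a single [DllImport] declaration taking a byte[] plus the two counts, so the marshalling cost is one pinned array rather than 40k individual calls.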

Michael
Michael, it calls only once, and it works on some files. Thanks for your response.
javasoul
+4  A: 

You have a few options here:

  1. Write the processing in .NET and measure the performance. If it is unacceptable, try to optimize it. If it is still too slow, revert to unmanaged code. But assuming that unmanaged code will be faster, and writing unmanaged code without measuring for that reason, is IMHO the wrong approach.

  2. As you have already written the unmanaged code, you can expose it as a dynamic link library by exporting a function that does the processing:

    extern "C" __declspec(dllexport) int DoProcessing(int);
    

    Next you import the function in managed code:

    using System.Runtime.InteropServices;

    class Program 
    {
        [DllImport("mylibrary.dll")]
        static extern int DoProcessing(int input);

        static void Main()
        {
            int result = DoProcessing(123);
        }
    }
    

    This works if the input and output of your processing are not very complex and can be easily marshaled. It will have very little overhead.

  3. Compile the unmanaged code with C++/CLI as a managed assembly and reference it directly.
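For completeness, the native side of option 2 could look like this - a minimal sketch where only the exported `DoProcessing` signature is taken from the declaration above; the body is a hypothetical placeholder for the real text processing:

```cpp
// mylibrary.cpp - hypothetical implementation behind the exported
// DoProcessing declaration. Build it as a DLL, e.g.:
//   cl /LD mylibrary.cpp                                (MSVC)
//   g++ -shared -fPIC mylibrary.cpp -o mylibrary.so     (gcc, for testing)

#ifdef _WIN32
#define DLL_EXPORT extern "C" __declspec(dllexport)
#else
#define DLL_EXPORT extern "C"
#endif

// Plain int in, plain int out: blittable types like int cross the
// P/Invoke boundary without any marshalling work.
DLL_EXPORT int DoProcessing(int input)
{
    int result = 0;
    for (int i = 0; i < input; ++i)  // placeholder for the real work
        result += i;
    return result;
}
```

Keeping the exported surface to a single C-linkage function with simple parameter types is what makes the [DllImport] declaration on the C# side a one-liner.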

Darin Dimitrov
Thanks for your response, Darin, but we don't have time to rewrite the same C++ function in C#. I think I have to use your option 2. I am not using VC++, so I can't use option 3, right? I am googling to read about option 2; if you know some good links, please share them with me. Thanks again.
javasoul
There's a tutorial on MSDN about option 2 (P/Invoke): http://msdn.microsoft.com/en-us/library/aa288468(VS.71).aspx
Darin Dimitrov
A: 

Try to implement it in C#.

40k records seems like a VERY low number. Depending on how much processing you need to do on each record, processing the 40k records in C# may actually be faster than even spawning the process as you currently do.

Other than that, compile your C++ code to a DLL and load it in-process. That will still have some overhead, but it will be WAY smaller than spawning an additional process.

Foxfire
No Foxfire, we don't have enough time to rewrite the C++ code in C#. Thanks for your response.
javasoul
Then following Darin Dimitrov's suggestion will be the best solution (as my last paragraph says).
Foxfire
Thanks Foxfire, I used the security permission setting given in the link you posted in a comment. Thanks.
javasoul
A: 

I agree with AdamRalph - I do not think you gained anything but integration pains by writing this code in C++.

BTW, is the C++ code managed? If it is, why don't you just link it into your C# code and avoid all the interop overhead?

mfeingold
I am not using VC++, so I don't think it's managed. Thanks for your response.
javasoul