views: 154
answers: 2

Hi,

Anyone got any details on the overhead of a native -> managed context switch in Mono? Namely the overhead caused by calling .NET methods/creating .NET objects using the C API.

Thanks.

+2  A: 

Profile, if you want or need specific details, since that's the only way you'll know whether your specific situation is fast enough...

That being said,

The Mono native API is very fast. When you create an object using Mono's C API, it's doing basically the same thing as the managed runtime when it creates the object. Calling a method is similar.

The real overhead comes from passing and converting data back and forth. Creating an object and calling a method are blazingly fast; converting data through multiple types is slightly slower. Even so, the C API is fast enough that it's unlikely to be a performance issue. (Unless, of course, you're doing this in a very tight loop or something similar. In that case, refactor the loop into a method on the managed side and call it once, so you pay for a single context switch.)

Reed Copsey
It _very_ strongly depends on the types involved. Some types are blittable (i.e. they have the exact same representation in both C and the CLR), so no marshalling is involved - the runtime can just push arguments on the stack according to the calling convention and CALL directly. Others, like strings, have to be copied and massaged into the shape expected on the other side first.
Pavel Minaev
Yes - the marshalling of types is the expensive part. Constructing an object and calling a method are fast, though (provided no types need to be marshalled for arguments)
Reed Copsey
Awesome. Thanks for that info.
jameszhao00
+5  A: 

The current API for invoking a managed method from C code has these kinds of overhead:

  • It does some locking and hash-lookup operations to see if the method you're calling and a synthesized helper method are compiled
  • If the methods are not already compiled to native code, they are compiled
  • The actual method invocation is fast: contrary to the speculation in some of the answers, no marshaling overhead happens, so blittable types and other such considerations don't apply
  • If the return type is a value type, the value is boxed: this causes some GC overhead. Note that for methods returning void or a reference type there is no overhead

We're going to introduce a new API that does away with the overhead in the first and last points above. In the meantime, unless you're doing millions of calls per second, these overheads are pretty small and almost always dwarfed by the real work done in the managed method you call.

lupus
Thank you for the info. For #1, is the locking something akin to a Global Interpreter Lock (in Python)? Also, for #3, how do complex types (classes, ...) get passed?
jameszhao00
It is a lock, but it has no relation to the GIL in Python: all Python code requires the GIL to run, so no other code can run in the meantime. The locks I was talking about are basically held only for the duration of the hash lookup; all the rest of the code can run concurrently with other code. A complex class is just a reference, and since no marshaling happens it's just a pointer copy. You and other people are conflating the embedding invoke API we're talking about in this thread with the P/Invoke mechanism.
lupus
Ah I see. Thanks.
jameszhao00