views: 1294
answers: 17

Now, I know it's because there's not the overhead of calling a function, but is the overhead of calling a function really that heavy (and worth the bloat of having it inlined) ?

From what I can remember, when a function is called, say f(x,y), x and y are pushed onto the stack, and the stack pointer jumps to an empty block, and begins execution. I know this is a bit of an oversimplification, but am I missing something? A few pushes and a jump to call a function, is there really that much overhead?

Let me know if I'm forgetting something, thanks!

+1  A: 

Because there's no call. The function code is just copied

Raphael
@kodai, the call stack doesn't have instructions. At least, not in any normal code.
kanaka
Instructions don't go on the stack, @Kodai, and since there's no function call, there's certainly nothing extra on the call stack.
Rob Kennedy
Your question doesn't make sense. The "call stack" does not get bloated by inline instructions. The system doesn't need to keep track of how many instructions there are in the current function. Inlining just takes the code that was in the called function and splices it into the calling function, so that instead of adding a frame to the call stack, the code just executes.
Yuliy
@kanaka you're right, embarrassing comment.
kodai
Inlining avoids bloating the call stack! But it could make the code too big to fit in the cache, so yes, inlining can also reduce performance. Along with the increase in code size, that's what stops compilers from inlining everything.
delnan
-1 he didn't ask what inline is, he asked if it really was significantly faster.
Lo'oris
Which is also answered here: the code is faster because there's no function call
Raphael
A: 

Because no jump is performed.

Johann Gerell
This is not strictly true on modern Intel CPUs. The prefetch unit will follow unconditional, direct jumps, so there is no direct overhead. The OS may introduce an overhead if the target address causes a page fault. EDIT: what I meant was, the presence or absence of a jmp instruction makes no difference.
Skizz
There is still a jump, even if the CPU is quite good at handling it. Its performance impact is just much more subtle than most people realize, but it's not exactly "free" either.
jalf
A: 

Inlining makes a big difference when a function is called many times.

MicSim
Could you explain please? Thanks!
kodai
See the other answers, which also elaborate on this point. Additionally, check the following link for some points about inlining and performance: http://www.parashift.com/c++-faq-lite/inline-functions.html#faq-9.3
MicSim
+2  A: 

One other potential side effect of the jump is that you might trigger a page fault, either to load the code into memory the first time, or if it's used infrequently enough to get paged out of memory later.

Jimmy
+10  A: 

There is no call and no stack activity, which certainly saves a few CPU cycles. In modern CPUs, code locality also matters: a call can flush the instruction pipeline and force the CPU to wait for memory to be fetched. This matters a lot in tight loops, since main memory is quite a lot slower than modern CPUs.

However, don't worry about inlining if your code is only being called a few times in your application. Worry, a lot, if it's being called millions of times while the user waits for answers!

Pontus Gagge
+5  A: 

let

int sum(const int &a,const int &b)
{
     return a + b;
}
int a = sum(b,c);

is equal to

int a = b + c;

No jump - no overhead

kilotaras
Better yet: "int a=sum(4,5);" can become "int a=9;". Also, reading and writing variables through references is generally slower than reading and writing them directly; in many cases an in-lined function can be resolved to use faster direct-variable access (note that in your scenario, if not in-line, it would be better to pass variables by value rather than by reference, but if the function did something like "a+=b;" the reference would be necessary).
supercat
The statement `reading and writing variables through references is generally slower than reading and writing them directly` is way too general to be true. I also find it highly unlikely in most normal situations (see how easy it is to over-generalize).
Martin York
And how often do we write functions like `sum()`? I think accessors are a much more relevant example for what inlining does.
sbi
+7  A: 

Inlining is considered faster than a function call because it eliminates the call entirely. Any time you call a function, you have the added overhead of the stack management for that call.

The overhead might not be huge (for a single function) but if you're going to be calling a method many thousands of times within a short span of time, inlining is definitely worth it. But don't do it for the sake of doing it. That's called premature optimization and is to be avoided. Optimize only when you really have to.

Interesting read from a journal article about inlining:

[...] there are more reasons why your compiler will not inline a function than those given here. These are clear-cut cases, but there are several other reasons why I avoid using inline functions. The main culprit is code bloat: inline functions usually make your code bigger because you duplicate the code for each invocation of the function. Apart from needing a bigger hard disc, code bloat has several implications:

Increased number of CPU cache misses: a CPU may keep a frequently executed function entirely in its cache; if you increase the function size with lots of inline functions, it may become too big for the cache, and because your functions are bigger, fewer may be held in cache at once.

Disc cache: if every function is inlined the OS may not load the entire program into memory and your disc will start to thrash.

So, don't be surprised if making methods inline actually makes your program slower!

Paul Sasik
Actually the overhead is huge. Creating a new stack frame and jumping is a big performance hit, especially for shorter functions. Plus any sane compiler already does inlining automatically (and some even ignore the `inline` keyword).
Let_Me_Be
But inlining the function thousands of times will probably make the code much bigger, thus resulting in more cache misses when reading the program, thus more processor stalls, and therefore slower execution. You need to balance the cost of function calls with the cost of a larger code base.
Martin York
@Let_Me_Be: They all ignore the inline keyword nowadays (all mainstream compilers that anybody here will ever use anyway).
Martin York
@Martin, I'll bet there are a lot of embedded compilers that are behind the times. I wouldn't make sweeping conclusions.
Mark Ransom
It can be faster than calling a function, but if it were always faster, the compiler would just "inline" every single function call it could.
Chris
@Chris: You're thinking in absolutes. Compilers make strategic decisions about whether a function is inlined or not. If you think about it, if you inlined every single function you would get a program that collapsed to a single function and which (if non-trivial) would probably blow the stack as soon as you tried to load it. And why the -1 ?!?
Paul Sasik
@Paul: The answer as is says that an inlined function is always faster than a non-inlined function. This is blatantly not true and misleading. As you just said, compilers must make strategic decisions about when to inline, because it is difficult to determine if an inlined function will be faster.
Chris
@Chris: I disagree with you because: time_to_run(code) < time_to_run(code + stack_mgmt_overhead), and therefore embedded code will always execute faster than code within a function. But there are great reasons this is not ALWAYS done. Please refer to the article link that I added to my answer. It is a really good read on this topic.
Paul Sasik
Martin York is correct: time_to_run(100 copies of code, once each) can be much larger than time_to_run(1 copy of code + stack overhead, 100 times) because the second fits in L1 code cache and the first doesn't. You can easily find reports online that `gcc -Os` (optimize for size) produces faster code for very large programs than `gcc -O2` (optimize for speed), and this is why.
Zack
@Zack: But that's only true if that code is in functions that are _not_ called from the loop. If they are called, cache thrashing might likely be worse _without_ inlining. Bottom line: You never know, so if this is important, __you need to measure__.
sbi
sbi: Entirely agreed.
Zack
+49  A: 

Aside from the fact that there's no call (and therefore no associated expenses, like parameter preparation before the call and cleanup after it), there's another significant advantage of inlining. When the function body is inlined, it can be re-interpreted in the specific context of the caller. This might immediately allow the compiler to further reduce and optimize the code.

For one simple example, this function

void foo(bool b) {
  if (b) {
    // something
  }
  else {
    // something else
  }
}

will require actual branching if called as a non-inlined function

foo(true);
...
foo(false);

However, if the above calls are inlined, the compiler will immediately be able to eliminate the branching. Essentially, in the above case inlining allows the compiler to interpret the function argument as a compile-time constant (if the parameter is a compile-time constant) - something that is generally not possible with non-inlined functions.

However, it is not even remotely limited to that. In general, the optimization opportunities enabled by inlining are significantly more far-reaching. For another example, when the function body is inlined into a specific caller's context, the compiler will in the general case be able to propagate the known aliasing-related relationships present in the calling code into the inlined function's code, thus making it possible to optimize the function's code better.

Again, the possible examples are numerous, all of them stemming from the basic fact that inlined calls are immersed into the specific caller's context, thus enabling various inter-context optimizations which would not be possible with non-inlined calls. With inlining you basically get many individual versions of your original function, each tailored and optimized individually for a specific caller context. The price of that is, obviously, the potential danger of code bloat, but if used correctly, it can provide noticeable performance benefits.

AndreyT
Another sweet optimization that inline affords you is instruction-cache efficiency. It's far more likely that inlined code is already in the cache, whereas called code could easily cause a cache miss.
Detmar
@Detmar: Maybe. And maybe not. From what I know about instruction caches (very little, admittedly), you usually need to measure in order to know, and more often than not the result seems funny and strange.
sbi
@Detmar, @sbi: Agreed that this can be mysterious. Using inlines can push hot code out of the sweet L1 instruction cache while using function calls means each function is in cache independently, using less cache space. This is why code compiled on GCC with -Os (reduce size) can be counter-intuitively faster than O2 or O3.
Zan Lynx
Yes!!! Function body is inlined and ... suddenly the compiler can eliminate most of the code. This is number one reason why inlining (especially with link-time code generation) is a great thing.
sharptooth
@AndreyT: It's good to mention that one doesn't need to worry too much, unless the function is potentially called X million+ times from within some loop or other. In that case, every cycle saved can add up to seconds of speed gain. If it's just some rarely-called function, don't inline.
Toad
Inlining can push hot code out of the L1 cache, but only when the inlined function itself isn't hot. The reason is simple: an inlined version of a function is smaller because there is no call instruction, no argument passing, and no return value passing.
MSalters
@MSalters: except if the inlined function is called multiple times. (or, I guess, if the called function is inlined multiple times? If the multiply called function is repeatedly inlined? ;)) Then you might get multiple copies of the same code polluting L1 cache.
jalf
@sbi: It certainly can appear mysterious, especially since it's architecture-specific behaviour. At least on x86 systems, however, Detmar is right. Cache (line) sizes and mysteriously inlining 'cold' code notwithstanding, of course. ;)
Michael Foukarakis
+4  A: 

There are multiple reasons for inlining to be faster, only one of which is obvious:

  • No jump instructions.
  • better localization, resulting in better cache utilization.
  • more chances for the compiler's optimizer to make optimizations, leaving values in registers for example.

The cache utilization can also work against you - if inlining makes the code larger, there's more possibility of cache misses. That's a much less likely case though.

Mark Ransom
A: 

Inlining a function is a suggestion to the compiler to replace the function call with its definition. If it's replaced, then there will be no function-call stack operations [push, pop]. But it's not always guaranteed. :)

--Cheers

Koteswara sarma
+17  A: 

"A few pushes and a jump to call a function, is there really that much overhead?"

It depends on the function.

If the body of the function is just one machine code instruction, the call and return overhead can be many hundreds of percent. Say, 6 times the cost: 500% overhead. Then, if your program consists of nothing but a gazillion calls to that function, with no inlining you've increased the running time by 500%.

However, in the other direction inlining can have a detrimental effect, e.g. because code that without inlining would fit in one page of memory doesn't.

So the answer, as always when it comes to optimization: first of all, MEASURE.

Cheers & hth.,

Alf P. Steinbach
Moreover, a really short function might be smaller than the setup and teardown instructions for a function call, and inlining might actually make the code smaller. Measure and profile.
David Thornley
+4  A: 

Consider a simple function like:

int SimpleFunc (const int X, const int Y)
{
    return (X + 3 * Y); 
}    

int main(int argc, char* argv[])
{
    int Test = SimpleFunc(11, 12);
    return 0;
}

This is converted to the following code (MSVC++ v6, debug):

10:   int SimpleFunc (const int X, const int Y)
11:   {
00401020   push        ebp
00401021   mov         ebp,esp
00401023   sub         esp,40h
00401026   push        ebx
00401027   push        esi
00401028   push        edi
00401029   lea         edi,[ebp-40h]
0040102C   mov         ecx,10h
00401031   mov         eax,0CCCCCCCCh
00401036   rep stos    dword ptr [edi]

12:       return (X + 3 * Y);
00401038   mov         eax,dword ptr [ebp+0Ch]
0040103B   imul        eax,eax,3
0040103E   mov         ecx,dword ptr [ebp+8]
00401041   add         eax,ecx

13:   }
00401043   pop         edi
00401044   pop         esi
00401045   pop         ebx
00401046   mov         esp,ebp
00401048   pop         ebp
00401049   ret

You can see that there are just 4 instructions for the function body but 15 instructions for just the function overhead not including another 3 for calling the function itself. If all instructions took the same time (they don't) then 80% of this code is function overhead.

For a trivial function like this there is a good chance that the function overhead code will take just as long to run as the main function body itself. When you have trivial functions that are called in a deep loop body millions/billions of times then the function call overhead begins to become large.

As always, the key is profiling/measuring to determine whether or not inlining a specific function yields any net performance gains. For more "complex" functions that are not called "often" the gain from inlining may be immeasurably small.

uesp
This is a debug build, there is memory guarding going on and an oversized stack frame to allow for edit-and-continue. You mustn't use debug code to analyse optimisations!
Skizz
+1  A: 

Optimizing compilers apply a set of heuristics to determine whether or not inlining will be beneficial.

Sometimes gain from the lack of function call will outweigh the potential cost of the extra code, sometimes not.

Joe Gauterin
+3  A: 

A typical example of where it makes a big difference is std::sort, which performs O(N log N) calls to its comparison function.

Try creating a vector of a large size and call std::sort first with an inline function and second with a non-inlined function and measure the performance.

This, by the way, is why sort in C++ is faster than qsort in C, which requires a function pointer.

CashCow
+11  A: 

The classic candidate for inlining is an accessor, like std::vector<T>::size().

With inlining enabled, this is just the fetching of a variable from memory, likely a single instruction on any architecture. The "few pushes and a jump" (plus the return) is easily multiple times as much.

Add to that the fact that the more code is visible at once to an optimizer, the better it can do its work. With lots of inlining, it sees lots of code at once. That means it might be able to keep the value in a CPU register and completely spare the costly trip to memory. Now we might be talking about a difference of several orders of magnitude.

And then there's template meta-programming. Sometimes this results in calling many small functions recursively, just to fetch a single value at the end of the recursion. (Think of fetching the value of the first entry of a specific type in a tuple with dozens of objects.) With inlining enabled, the optimizer can directly access that value (which, remember, might be in a register), collapsing dozens of function calls into accessing a single value in a CPU register. This can turn a terrible performance hog into a nice and speedy program.


Hiding state as private data in objects (encapsulation) has its costs. Inlining was part of C++ from the very beginning in order to minimize these costs of abstraction. Back then, compilers were significantly worse at detecting good candidates for inlining (and rejecting bad ones) than they are today, so manually inlining resulted in considerable speed gains.
Nowadays compilers are reputed to be much more clever than we are about inlining. Compilers are able to inline functions automatically, or to decline to inline functions users marked as inline, even though they could. Some say that inlining should be left to the compiler completely and we shouldn't even bother marking functions as inline. However, I have yet to see a comprehensive study showing whether manually doing so is still worth it or not. So for the time being, I'll keep doing it myself, and let the compiler override that if it thinks it can do better.

sbi
I really like this example. I hadn't thought much about accessors and recursive functions for templates. Thanks so much!
kodai
You misspelled _costs_ in the first sentence below the divider.
Core Xii
@Core Xii: Thanks, I fixed it.
sbi
+2  A: 

(and worth the bloat of having it inlined)

It is not always the case that in-lining results in larger code. For example a simple data access function such as:

int getData()
{
   return data ;
}

will result in significantly more instruction cycles as a function call than as an in-line, and such functions are best suited to in-lining.

If the function body contains a significant amount of code the function call overhead will indeed be insignificant, and if it is called from a number of locations, it may indeed result in code bloat - although your compiler is as likely to simply ignore the inline directive in such cases.

You should also consider the frequency of calling; even for a large-ish code body, if the function is called frequently from one location, the saving may in some cases be worthwhile. It comes down to the ratio of call-overhead to code body size, and the frequency of use.

Of course you could just leave it up to your compiler to decide. I only ever explicitly in-line functions that consist of a single statement not involving a further function call, and that is more for speed of development of class methods than for performance.

Clifford
+2  A: 

Andrey's answer already gives you a very comprehensive explanation. But just to add one point that he missed, inlining can also be extremely valuable on very short functions.

If a function body consists of just a few instructions, then the prologue/epilogue code (the push/pop/call instructions, basically) might actually be more expensive than the function body itself. If you call such a function often (say, from a tight loop), then unless the function is inlined, you can end up spending the majority of your CPU time on the function call, rather than the actual contents of the function.

What matters isn't really the cost of a function call in absolute terms (where it might take just 5 clock cycles or something like that), but how long it takes relative to how often the function is called. If the function is so short that it can be called every 10 clock cycles, then spending 5 cycles for every call on "unnecessary" push/pop instructions is pretty bad.

jalf
Yes, and also when a function contains only a few instructions, those might be significantly reduced when the function body is optimized in the caller's context. So instead of prologue/epilogue + say ten instructions, you may end up with no prologue, no epilogue, and maybe four instructions, which gives a huge performance gain.
sharptooth