views: 2204
answers: 5
I'm a beginner in assembly language and have noticed that the x86 code emitted by compilers usually keeps the frame pointer around even in release/optimized mode, when it could use the EBP register for something else. I understand why the frame pointer might make code easier to debug, and that it might be necessary if alloca() is called within a function. However, x86 has very few registers, and using two of them to hold the location of the stack frame when one would suffice just doesn't make sense to me. Why is omitting the frame pointer considered a bad idea even in optimized/release builds?
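
To illustrate the alloca() point, here is a minimal sketch (assuming GCC or Clang, where alloca() is declared in <alloca.h>; on some platforms it lives in <malloc.h>) of a runtime-sized stack allocation. After the alloca() call, ESP has moved by an amount known only at run time, so ESP-relative offsets to the locals are no longer compile-time constants; a frame pointer gives the compiler a fixed reference instead.

    #include <alloca.h>
    #include <stdio.h>

    /* Hypothetical example: sum 0..n-1 through a runtime-sized stack buffer. */
    int sum_n(int n) {
        /* ESP moves by n * sizeof(int) here, an amount unknown at compile
         * time, so the compiler can no longer address `total` or `buf` with
         * a fixed ESP offset. EBP-relative offsets keep working. */
        int *buf = alloca(n * sizeof(int));
        int total = 0;
        for (int i = 0; i < n; i++) {
            buf[i] = i;
            total += buf[i];
        }
        return total;
    }

    int main(void) {
        printf("%d\n", sum_n(10));  /* prints 45 */
        return 0;
    }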

+3  A: 

It depends on the compiler, certainly. I've seen optimized code emitted by x86 compilers that freely uses the EBP register as a general purpose register. (I don't recall which compiler I noticed that with, though.)

Compilers may also choose to maintain the EBP register to assist with stack unwinding during exception handling, but again this depends on the precise compiler implementation.

Greg Hewgill
A: 

Using stack frames has gotten incredibly cheap on any hardware that is even remotely modern. If stack frames are cheap, then freeing up one extra register isn't as important. I'm sure fast stack frames vs. more registers was an engineering trade-off, and fast stack frames won.

How much are you saving going pure register? Is it worth it?

dwc
+18  A: 

The frame pointer is a reference pointer that lets a debugger know where a local variable or an argument sits using a single constant offset. Although ESP's value changes over the course of execution, EBP stays the same, so the same variable can always be reached at the same offset (for example, the first parameter is always at EBP+8, while ESP-relative offsets can change significantly since you'll be pushing and popping things).
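
As a concrete sketch of that layout, here is a small C function with the typical unoptimized 32-bit cdecl prologue and frame layout described in comments (the exact offsets, local ordering, and prologue vary by compiler, calling convention, and options):

    /* A typical unoptimized 32-bit cdecl prologue for this function is:
     *
     *     push ebp          ; save the caller's frame pointer
     *     mov  ebp, esp     ; EBP now marks this frame
     *     sub  esp, 8       ; reserve space for the locals
     *
     * which gives fixed EBP-relative addresses for the whole body:
     *
     *     [ebp+12]  b       (second argument)
     *     [ebp+8]   a       (first argument)
     *     [ebp+4]   return address
     *     [ebp]     caller's saved EBP
     *     [ebp-4]   x       (first local)
     *     [ebp-8]   y       (second local)
     *
     * ESP-relative offsets to a, b, x and y would shift every time something
     * is pushed or popped inside the body; the EBP-relative ones do not. */
    int add_scaled(int a, int b) {
        int x = a * 2;
        int y = b * 3;
        return x + y;
    }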

Why don't compilers throw the frame pointer away? Because keeping it makes the compiler's job easier: it doesn't have to track how ESP changes while generating the code that accesses local variables and arguments.

With a frame pointer, the debugger can figure out where local variables and arguments are from the symbol table, since they are guaranteed to sit at a constant offset from EBP. Without it, there is no easy way to work out where a local variable lives at an arbitrary point in the code.

As Greg mentioned, it also helps a debugger unwind the stack, since the saved EBP values form a reverse linked list of stack frames, which lets the debugger work out the size of each function's stack frame (local variables plus arguments).
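
To make that linked list concrete, here is a hedged sketch of a naive frame-pointer walk. It assumes GCC or Clang (for the __builtin_frame_address builtin), the usual layout where the frame pointer slot holds the caller's saved frame pointer and the slot just above it holds the return address, and that every frame on the stack was compiled with frame pointers kept; a real unwinder needs far more sanity checking than this.

    #include <stdio.h>
    #include <stdint.h>

    /* Naive stack walk: each frame begins with the caller's saved frame
     * pointer, and the slot just above it holds the return address. */
    static void backtrace_fp(void) {
        uintptr_t *fp = __builtin_frame_address(0);  /* current frame */
        for (int depth = 0; fp != NULL && depth < 16; depth++) {
            printf("#%d  return address %p\n", depth, (void *)fp[1]);
            fp = (uintptr_t *)fp[0];                 /* follow saved EBP */
        }
    }

    static void leaf(void)   { backtrace_fp(); }
    static void middle(void) { leaf(); }

    int main(void) {
        middle();  /* expect frames for leaf, middle, main, ... */
        return 0;
    }

Compile it without optimization so the helpers are not inlined and frame pointers are kept; build the same program with frame pointers omitted and the walk falls apart immediately, which is exactly the trade-off being discussed.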

Most compilers provide an option to omit frame pointers (GCC's -fomit-frame-pointer, MSVC's /Oy), although it makes debugging much harder. That option should never be turned on globally, even in release builds: you never know when you'll need to debug a crash that happened on a user's machine.

ssg
Also, it helps in generating a stack trace if your program crashes.
flodin
Yes, let me add that to the answer too.
ssg
The compiler probably knows what it does to ESP. The other points are valid, though, +1
erikkallen
+2  A: 

"However, x86 has very few registers"

This is true only in the sense that opcodes can only address 8 registers. The processor itself will actually have many more registers than that and use register renaming, pipelining, speculative execution, and other processor buzzwords to get around that limit. Wikipedia has a good introductory paragraph as to what an x86 processor can do to overcome the register limit: http://en.wikipedia.org/wiki/X86#Current_implementations.

MSN
The original question is about generated code, which is strictly limited to the registers referenceable by opcodes.
Darron
Yes, but this is why omitting the frame pointer in optimized builds isn't as important nowadays.
Michael
Register renaming isn't quite the same thing as actually having a larger number of registers available though. There are still plenty of situations where register renaming won't help, but more "regular" registers would.
jalf
+2  A: 

Just adding my two cents to the already good answers.

It's part of a good language architecture to have a chain of stack frames. The BP points to the current frame, where subroutine-local variables are stored. (Locals are at negative offsets, and arguments are at positive offsets.)

The idea that it prevents a perfectly good register from being used for optimization raises the question: when and where is optimization actually worthwhile?

Optimization is only worthwhile in tight loops that 1) do not call functions, 2) are where the program counter spends a significant fraction of its time, and 3) are in code the compiler will actually ever see (i.e. non-library functions). This is usually a very small fraction of the overall code, especially in large systems.

Other code can be twisted and squeezed to get rid of cycles, and it simply won't matter, because the program counter is practically never there.

I know you didn't ask this, but in my experience, 99% of performance problems have nothing at all to do with compiler optimization. They have everything to do with over-design.

Mike Dunlavey
Thanks @Mike, I found your answer very helpful.
sixtyfootersdude