views: 324
answers: 4
This is related to the question 'Why do stacks typically grow downwards?', but more from a security point of view. I'm generally referring to x86.

It strikes me as odd that the stack would grow downwards, when buffers are usually written to upwards in memory. For example a typical C++ string has its end at a higher memory address than the beginning.

This means that if there's a buffer overflow you're overwriting further up the call stack, which I understand is a security risk, since it opens the possibility of changing return addresses and local variable contents.

If the stack grew upwards in memory, wouldn't buffer overflows simply run into dead memory? Would this improve security? If so, why hasn't it been done? What about x64: do those stacks grow upwards, and if not, why not?

+2  A: 

Probably because the architecture for most CPUs was designed in a time when men were men, and you could trust your programmers not to want to steal people's credit card numbers... it's mostly too late to change now (though as you say, it probably could have been done for new architectures like Itanium, which actually has two stacks!)

Dean Harding
+7  A: 

Technically this is OS/CPU dependent, but typically it's because the stack and heap grow in opposite directions, from opposite ends of the address space.

This arrangement gives you the most flexibility to split/allocate memory between the heap and the stack without causing them to collide. If they both grew in the same direction, then you would need a starting address for the stack that would put a hard limit on the maximum size of the heap (and a hard limit on the size of the stack).

ETA:

Found an interesting piece on Wikipedia about why making a stack grow upwards does not necessarily prevent stack overflows - it just makes them work a bit differently.

Eric Petroelje
Interesting... so why not swap the stack and heap growth directions? I guess stack overflows are more serious than heap overflows, right?
AshleysBrain
@AshleysBrain - Maybe new CPU architectures will do this, but changing it on an existing CPU would break any programs compiled for that CPU. Probably just a bad design choice made way back when before people thought too much about such things - and now we are stuck with it.
Eric Petroelje
I visit stackoverflow every day, but heapoverflow not all that much. So I guess stack overflows are more serious :-)
sri
That's why I mentioned x64, I thought they might have taken the opportunity to change this when designing it, since the issue should have been well-known by then. Perhaps it would break x86 compatibility somehow?
AshleysBrain
@AshleysBrain - It would certainly break compatibility, since old programs would be written to manipulate the stack as if it were growing downward. Likely they would just have to be recompiled (with a new compiler), but I don't think they would be binary compatible. You may also have some programs that rely on this behaviour in weird ways that would break entirely as well.
Eric Petroelje
Ah, I see. Good answer, thanks.
AshleysBrain
+1  A: 

Well, I don't know if the stack growth direction would have much effect on security, but if you look at machine architecture, growing the stack in the negative address direction really simplifies calling conventions, stack frame pointers, local variable allocation, and so on.

Mike Dunlavey
A: 

The architecture for the 8088 (start of the x86 family) used a stack that grew downward, and for compatibility it has been that way ever since. Back then, (early 80s) buffer overflow vulnerabilities on home computers were well off the radar.

I couldn't tell you why they chose to have it grow down though, when it seems more intuitive to have it grow up. As has been mentioned, though, memory was often split between stack and heap; perhaps the CPU designer thought it was important for the heap to grow up, so the stack grew down as a consequence.

Qwertie
It's older than that - the 8080's stack grew downwards as well, and presumably the 8008's. I think this is as old as Intel processors.
David Thornley
@David: Right, including the 4004 (http://download.intel.com/museum/archives/pdf/4004_datasheet.pdf). I actually built a little breadboard system using an 8008 that I used to demonstrate in class. I programmed it to play a little duet on a pair of speakers. By hooking a capacitor to the memory timer chip, it would run *real slow*, to convey the concept that computers only appear magical because they're fast.
Mike Dunlavey