So I'm designing an OS (no coding yet), and right now I'm working on the kernel design. This is going to be an x86 operating system targeting more modern computers, so I can assume at least 256 MB of RAM.
My question is this: what is a good stack size for each thread running on the system? Or better yet, should I design the system so that the stack is extended automatically when its current limit is reached?
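For what it's worth, one common approach to the auto-extend idea is lazy stack growth: reserve a range of virtual addresses per thread, map only the top page, and have the page-fault handler commit new pages when a fault lands just below the committed region. Here's a rough sketch of that idea; the names (`thread_current()`, `alloc_phys_page()`, `map_page()`, the `thread` fields) are placeholders I made up, not a real API:

```c
/* Hypothetical page-fault handler fragment for a downward-growing stack.
 * stack_top is the highest stack address, stack_bottom the lowest
 * page currently committed. All helper names are assumptions. */
#define PAGE_SIZE   4096
#define STACK_LIMIT (64 * 1024)   /* hard per-thread cap, e.g. 64 KiB */

void page_fault(uintptr_t fault_addr)
{
    struct thread *t = thread_current();

    /* Fault just below the committed stack, and still within the cap? */
    if (fault_addr < t->stack_bottom &&
        fault_addr >= t->stack_top - STACK_LIMIT) {
        uintptr_t page = fault_addr & ~(uintptr_t)(PAGE_SIZE - 1);
        map_page(t->page_dir, page, alloc_phys_page(), PAGE_WRITABLE);
        t->stack_bottom = page;
        return;               /* retry the faulting instruction */
    }
    kill_thread(t);           /* genuine bad access, not stack growth */
}
```

The nice property is that a thread that never recurses deeply only ever costs one physical page, while one that does gets more pages on demand up to the cap.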
If I remember correctly, a page of RAM is 4 KiB (4096 bytes), and that just doesn't seem like a lot to me. I can definitely see times, especially with heavy recursion, where I'd want more than about 1000 ints on the stack at once. The real solution would be for such a program to malloc and manage its own memory a bit, but I'd still like to hear people's opinions on this.
Is 4 KiB big enough for a stack with modern PC programs? Should the stack be bigger than that? Should it auto-expand to accommodate any (reasonable) size? I'm interested in this both from a practical developer's standpoint and from a security standpoint.
Or, on the flip side, is 4 KiB too big for a stack? Considering normal program execution (especially from the point of view of classes in C++), I notice that good code tends to malloc/new the data it needs when objects are created, which minimizes the amount of data being passed around in each function call.
What might also be important here, and I haven't even gotten into this yet, is the size of the processor's cache. Ideally the stack would stay resident in cache to speed things up, but frankly I'm not sure whether I need to do anything special to achieve that, or whether the processor handles it for me transparently. For testing purposes I was just planning on using regular, boring old RAM.
I can't decide, so I ask you.
Thanks