As I have learned data structures, I know there are plenty of other data structures besides Stack and Heap. Why do processes nowadays contain only these two paradigms as the "standard equipment" in their address space? Could there be any brand-new paradigm for memory usage?

Thanks for your replies. Yes, I realized that something is wrong with my statement: the heap data structure is not the same as the heap in a process's address space. But what I am wondering is, besides the Stack area and Heap area in the process address space, is there any new paradigm for using memory? It seems that other ways of using memory are built upon these two basic paradigms, as if they were meta-paradigms.

+2  A: 

FIFO comes to mind. Shared memory between processors. Would message passing be a memory paradigm?

kenny
Thanks, kenny. IMHO, the message passing approach needs some kind of "mail-slot" buffer to be allocated in the heap area of a process's address space. But I am wondering why there should be a heap area at all. Why not some other arrangement?
smwikipedia
As an ex-hardware guy, my thoughts are more towards physical memory. Message passing architectures don't have to be heap-based. In most implementations today, you're right though.
kenny
+1  A: 

memory mapped files?

dmckee
Memory-mapped files are a kind of interaction among the file subsystem, I/O subsystem and memory-management subsystem. The mapped file is still in the heap area of a process's address space. My question is why there is a heap area. Why not other paradigms?
smwikipedia
If you're claiming that everything that is not managed as the stack is part of the heap, you've created a tautology. In at least some implementations a different code path in the OS is invoked to map files than to allocate heap. What else do you want?
dmckee
Yes, dmckee. Maybe my notion of the Heap is too broad.
smwikipedia
+3  A: 

Note that "the heap" (a region of memory where you can allocate and release blocks in arbitrary order) has nothing to do with the data structure called "heap" (used to implement priority queues).

By the way, yes, there is a third memory usage paradigm besides Stack and Heap: static storage ;-)

FredOverflow
Thanks, Fred. My mistake.
smwikipedia
+2  A: 

Javolution (http://javolution.org/) has a few interesting allocation paradigms implemented via code and interpreter 'hinting' using contexts. Pooled memory, object recycling support, and so on. Although this is Java and not C++, it could still be of use to study the concepts.

Chris Dennett
Thanks, Chris. I will check it out. My question is more about the reason for the layout of the process address space. Hope this link is relevant. :D
smwikipedia
Slab allocation, too: http://en.wikipedia.org/wiki/Slab_allocation. And perhaps hazard pointers? http://en.wikipedia.org/wiki/Hazard_pointer
Chris Dennett
A: 

What about DMA? http://en.wikipedia.org/wiki/Direct_memory_access

Chris Dennett
The DMA buffer is still allocated in the Heap area. Why should there be a heap area at all?
smwikipedia
+1  A: 

The "Heap" is not a paradigm at all; it's the most basic thing you can get: the memory is all yours, use it however you want. ("You" here referring to the OS/kernel.)

Even the stack is not all that special if you think about it; you're just starting from one end of the heap and growing/shrinking as needed.

hasen j
Thanks, hasenj. Your answer is just what I wanted to express by describing the Heap and Stack as "meta-paradigms". I was not sure of the choice of words just now, but it seems you have similar thoughts to mine. So glad. :D
smwikipedia
+3  A: 

Let's think for a moment. We have two fundamental storage disciplines. Contiguous and Fragmented.

Contiguous.

  • Stack is constrained by order. Last in First Out. The nesting contexts of function calls demand this.

  • We can easily invert this pattern to define a Queue. First in First Out.

  • We can add a bound to the queue to make a Circular Queue. Input-output processing demands this.

  • We can combine both constraints into a Deque (double-ended queue).

  • We can add a key and ordering to a queue to create a Priority Queue. The OS Scheduler demands this.

    So. That's a bunch of variations on contiguous structures constrained by entry order. And there are multiple implementations of these.

  • You can have contiguous storage unconstrained by entry order: Array and Hash. An array is indexed by "position", a hash is indexed by a hash function of a Key.

Fragmented:

  • The bare "heap" is fragmented storage with no relationships. This is the usual approach.

  • You can have heap storage using handles to allow relocation. The old Mac OS used to do this.

You can have fragmented storage with relationships -- lists and trees and the like.

  • Linked Lists. Single-linked and doubly-linked lists are implementation choices.

  • Binary Trees have 0, 1 or 2 children.

  • Higher-order trees. Tries and the like.

What are we up to? A dozen?

You can also look at this as "collections" which exist irrespective of the storage. In this case you mix in a storage discipline (heap-ish or array-ish).

Bags: unordered collections with duplicates allowed. You can have a bag built on a number of storage disciplines: LinkedBag, TreeBag, ArrayBag, HashBag. The link and tree use fragmented storage, the array and hash use contiguous storage.

Sets: unordered collections with no duplicates. No indexing. Again: LinkedSet, TreeSet, ArraySet, HashSet.

Lists: ordered collections. Indexed by position. Again: LinkedList, TreeList, ArrayList, HashList.

Mapping: key-value association collections. Indexed by key. LinkedMap, TreeMap, ArrayMap, HashMap.

S.Lott
Thanks very much, S.Lott. Your answer is a treasure to me. Just one more clarification: when you say "heap storage using handles to allow relocation", do you mean letting the system maintain a table with entries like (handle value, memory address), so that the system can relocate objects in memory without affecting the client's view, at the cost of yet another level of indirection? The "handle" seems to be a kind of "opaque pointer"?
smwikipedia
@smwikipedia: That's the way the Mac OS (Pre Mac OS X) worked. Handles are opaque pointers to real structures so the OS can relocate heap elements. In effect, it gave you a high-performance virtual memory.
S.Lott
+1  A: 

I'm thinking that it has to do with the physical nature of memory. Heaps and stacks are just intuitive ways of representing it.

For example, a queue or list does not lend itself conceptually to random access. A tree does not represent the physical nature of memory (one cell after another, like an array). Any sort of tuple with an x,y address is unnecessarily complicated compared to a simple integer address.

Jason
Thanks, Jason. Your answer gave me some refreshment.
smwikipedia