Hello!

Let me ask a very specific question: what is the difference in memory usage between a large array and a large list of the same size (implemented with pointers)? E.g.

var a:array[1..1000000] of integer;

and

type
  po=^p1;
  p1=record
     v:integer; 
     next:po;
  end;
var p:po;

and you create a list with 1000000 integers.

  1. Will the pointer implementation use much more memory than the array?
  2. Will the difference be even larger on 64-bit computers, since pointers are 64 bits?
A: 

x86 pointers are 4 bytes (32 bits). x64 pointers are 8 bytes (64 bits).

  1. Yes, by the size of a pointer per record (i.e. 1 million times the size of a pointer), plus the size of one initial pointer to the list.
  2. Yes, by at least 4 bytes per record and 4 bytes for the initial pointer.

Regarding 2., that is the minimum size increase. The actual increase might be bigger, depending on how Embarcadero is going to do record packing and field alignment in the x64 world; the sketch below can be used to check the actual node size.
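
A minimal sketch to check these numbers (assuming a plain Delphi console program and the node type from the question; the printed node size includes any alignment padding the compiler adds):

program NodeSizes;
{$APPTYPE CONSOLE}
type
  po = ^p1;
  p1 = record
    v: integer;
    next: po;
  end;
begin
  writeln('SizeOf(integer)       = ', SizeOf(integer));
  writeln('SizeOf(pointer)       = ', SizeOf(po));
  writeln('SizeOf(one list node) = ', SizeOf(p1));
  writeln('array of 1000000 integers: ', 1000000 * SizeOf(integer), ' bytes');
  writeln('list of 1000000 nodes:     ', 1000000 * SizeOf(p1),
    ' bytes (node payload only; heap-manager overhead per New call is not counted)');
end.

Compile and run it as 32-bit now and again as 64-bit once that compiler is available, and compare the two outputs.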

--jeroen

Jeroen Pluimers
Thanks. What about the speed? Will the same operation on the same structure (e.g. creating the list of 1.000.000 nodes implemented with pointers) be slower on a 64-bit than on a 32-bit machine, since the amount of memory that needs to be reserved for each pointer is twice as large?
Petra
The easiest way to test that is to just do it. Start with 10.000 nodes, then increase.
Jeroen Pluimers
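
A minimal sketch of such a test (assuming the node type from the question and the standard SysUtils/DateUtils units; absolute timings will depend on compiler, memory manager, and machine):

program BuildListTiming;
{$APPTYPE CONSOLE}
uses
  SysUtils, DateUtils;
type
  po = ^p1;
  p1 = record
    v: integer;
    next: po;
  end;
var
  head, node: po;
  i, n: integer;
  start: TDateTime;
begin
  n := 10000;           // start small, then increase
  head := nil;
  start := Now;
  for i := 1 to n do
  begin
    New(node);          // one heap allocation per node
    node^.v := i;
    node^.next := head; // prepend, so each insertion is O(1)
    head := node;
  end;
  Writeln('Building ', n, ' nodes took ', MilliSecondsBetween(Now, start), ' ms');
  while head <> nil do  // free the list again
  begin
    node := head;
    head := head^.next;
    Dispose(node);
  end;
end.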
A: 

The record is 8 bytes in size (on 32-bit Delphi); the array takes 4 bytes per element (times the length).

Assuming the size of a pointer is 8 bytes in the upcoming 64-bit Delphi, the record would be 12 bytes (if Integer stays at 4 bytes, which I presume).
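
For illustration, ignoring alignment padding and heap-manager overhead: the array needs 1000000 * 4 bytes, about 3.8 MB, on both platforms, while the list needs 1000000 * 8 bytes, about 7.6 MB, on 32 bit and at least 1000000 * 12 bytes, about 11.4 MB, on 64 bit (about 15.3 MB if each node gets padded to 16 bytes to keep the pointer 8-byte aligned).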

PS: I think it would be better to declare a large array as dynamic since a dynamic array's memory is allocated on the heap instead of the stack.
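
A minimal sketch of that (assuming a Delphi console program; SetLength performs the heap allocation, and dynamic arrays are 0-based and freed automatically through reference counting):

program DynArrayDemo;
{$APPTYPE CONSOLE}
var
  a: array of integer;       // dynamic array, length decided at run time
  i: integer;
begin
  SetLength(a, 1000000);     // allocates 1000000 * SizeOf(integer) bytes on the heap
  for i := 0 to High(a) do   // indices run 0..999999
    a[i] := i;
  Writeln('Allocated ', Length(a), ' integers on the heap');
end.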

Remko