Somebody already mentioned numpy, and the OP commented that "it's back to the fancy pointer math with C arrays" -- but that's a totally trivial implementation detail! Since the underlying memory in a (normal;-) computer can be seen as one big array of bytes (or words), any data structure whatsoever is of course implemented on top of that array (or slices thereof) plus "fancy pointer math": double-ended queues, multi-dimensional arrays, binary trees, you name it, the underlying implementation always boils down to that, just as all fancy control structures boil down to conditional and unconditional jumps at machine level. So what?! Those are implementation details.
numpy, just like Fortran and other languages and libraries, provides N-dimensional arrays no matter how it implements them "deep inside" -- and in fact numpy is pretty up-front about that, since you can easily flatten and reshape arrays. It's quite typical of Python to offer higher-level abstractions with good "hooks" into how they relate to the lower-level ones;-).
E.g.,
>>> import numpy
>>> x = numpy.arange(12)
>>> x
array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])
>>> x.reshape((3,4))
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
>>> x.reshape((4,3))
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 9, 10, 11]])
>>> x.reshape((4,3))[::2, ::2]
array([[0, 2],
       [6, 8]])
>>> x.reshape((4,3))[(0,1,3), ::2]
array([[ 0,  2],
       [ 3,  5],
       [ 9, 11]])
>>>
You can reshape, index, slice and mold the data in the N-dimensional array with great flexibility and excellent performance -- all while knowing that the underlying data block is just that one-dimensional array (here x is born and stays 1-D, but even if it weren't, you could still get at the underlying 1-D array by flattening). The quick sketch below makes that view/buffer relationship concrete.
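For instance, continuing the session above, a reshape gives you a view onto the very same 1-D buffer rather than a copy (a minimal sketch; numpy.shares_memory needs a reasonably recent numpy, and y is just a name picked here for the reshaped view):
>>> y = x.reshape((3, 4))        # a 3x4 "shape" laid over the same data
>>> y.base is x                  # y is a view: it remembers x as its data owner
True
>>> numpy.shares_memory(x, y)    # same underlying 1-D block of memory
True
>>> y[1, 1] = 99                 # poke the 2-D view...
>>> x                            # ...and the flat array sees it (row 1, col 1 == flat index 5)
array([ 0,  1,  2,  3,  4, 99,  6,  7,  8,  9, 10, 11])
>>> y.ravel()                    # flattening gets you the 1-D view back
array([ 0,  1,  2,  3,  4, 99,  6,  7,  8,  9, 10, 11])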
This is what "support for N-dimensional arrays" means (though in most other languages and frameworks offering such support you may well get less transparency, less functionality, or both;-).
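And the "fancy pointer math" isn't even hidden: every numpy array exposes a strides tuple saying how many bytes to step along each dimension, which is exactly how slicing can hand back views without copying. A quick sketch (the byte counts assume an explicit 8-byte integer dtype, chosen here so they come out the same on any platform):
>>> z = numpy.arange(12, dtype=numpy.int64).reshape((4, 3))
>>> z.strides                    # bytes to step per row, per column
(24, 8)
>>> w = z[::2, ::2]              # every other row, every other column
>>> w.strides                    # stepping by 2 just doubles each stride
(48, 16)
>>> numpy.shares_memory(z, w)    # no data was copied -- w is another view
True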