views: 912

answers: 6

Hey everyone,

When implementing a Matrix construct using arrays, which would be more efficient? Using a 1D array, or an array of arrays (2D)?

I would think a 2D array is more efficient, as you already have the X and Y coordinates of an element, whereas in a 1D implementation you have to calculate the index.

Any thoughts?

Thank you

Edit: it is being implemented using Java

+9  A: 

"Efficient" is not a catch-all term.

An array-of-arrays solution is more efficient in terms of storage when the matrix may be sparse (i.e., you can use a null pointer to represent a matrix row of all zeroes). This would be (in C):

int *x[9];

where each "int *" would be allocated separately.

A 2D array (which is not necessarily an array of arrays) will generally be faster (efficient in terms of speed) since it works out memory locations with math, without having to de-reference memory locations. I'm talking of the construct:

int x[9][9];

A 1D array of the form:

int x[81];

is unlikely to be any faster than the equivalent 2D version since you still have to do the calculations at some point to find the correct cell (manually in your code rather than letting the compiler do it).

After edit where Java was added as a requirement:

I believe Java 2D arrays are of the array-of-arrays variety (which requires two memory accesses, as opposed to the one required for a 1D array), so the 1D array with manual index calculation may well be faster. So, instead of declaring and using:

int[][] x = new int[width][height];
x[a][b] = 2;

you may get more speed with:

int[] x = new int[width * height];
x[a * height + b] = 2;

You just need to be careful that you don't get the formula mixed up anywhere (i.e., don't swap a and b inadvertently).
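One way to reduce that risk is to keep the index calculation in a single helper. Here is a minimal sketch of that idea (the class and method names are mine, not from the answer):

// Sketch: a matrix backed by one flat array, with the index arithmetic
// kept in a single place instead of being repeated at every call site.
class FlatMatrix {
    private final int width;
    private final int height;
    private final int[] cells;

    FlatMatrix(int width, int height) {
        this.width = width;
        this.height = height;
        this.cells = new int[width * height];
    }

    int get(int a, int b) {
        return cells[a * height + b];    // same formula as x[a*height+b]
    }

    void set(int a, int b, int value) {
        cells[a * height + b] = value;
    }
}

With a wrapper like this, m.set(a, b, 2) performs the same arithmetic as the hand-written version, but the formula lives in exactly one place.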

This speed difference is based on how I think Java is coded under the covers, so I could be wrong (but I doubt it :-). My advice, as always for optimisation questions, is: measure, don't guess!

paxdiablo
+1 for "efficient is not a catch-all term". Remember that .NET has optimizations for jagged arrays that don't work for multidimensional ones, for example.
Jimmy
In Java the 1D array is probably faster, since the 2D array requires at least two memory reads (no calculation, though).
Joachim Sauer
I would be careful making assumptions. For example, if you are looping over a row of the 2D array, Java might optimize out the index lookups, whereas it might not if you are performing a calculation in the index with a 1D array.
Laplie
+1  A: 

Depending on the language, there may be no difference. The real question is how the 2D matrix is allocated: is it a single contiguous block of X*Y elements, or is it allocated as Y independent arrays of size X? The latter is usually done when creating sparse matrices.
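In Java specifically, a 2D array is the latter kind: the outer array can be created on its own and each row allocated independently, with null rows standing in for all-zero rows. A minimal sketch of that idea (names are illustrative only):

// Sketch: the outer array is allocated up front, but each row array is
// created only on first write; a null row stands for "all zeroes".
class LazyRowMatrix {
    private final int cols;
    private final int[][] rows;

    LazyRowMatrix(int rowCount, int colCount) {
        this.cols = colCount;
        this.rows = new int[rowCount][];   // rows stay null until needed
    }

    int get(int r, int c) {
        int[] row = rows[r];
        return row == null ? 0 : row[c];
    }

    void set(int r, int c, int value) {
        if (rows[r] == null) {
            rows[r] = new int[cols];       // allocate this row on demand
        }
        rows[r][c] = value;
    }
}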

MadCoder
It is being implemented in Java.
barfoon
+1  A: 

I'm going to break ranks with the answers to date and suggest the following reason that a 1D array is quite possibly faster.

A 2D array involves 2 memory accesses. A[x][y], for instance, must first look up A[x], and then do another lookup into that array at [y].

A 1D implementation traditionally would be A[x + (width * y)]. When width is in a register (or is a literal), this means 2 math ops and 1 lookup instead of 2 lookups. Lookups are orders of magnitude slower than math ops, so if width is in a register even a small percentage of the time, or is a literal, it will be faster.
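One related pattern, added here purely as an illustration rather than as part of the answer above: when a whole row is scanned in a loop, the outer lookup of the array-of-arrays form can be hoisted out so it is paid only once per row. A minimal Java sketch:

// Sketch: hoist the row reference so the outer lookup happens once per row.
class RowSum {
    static long sumRow(int[][] a, int x) {
        int[] row = a[x];            // single outer dereference
        long sum = 0;
        for (int y = 0; y < row.length; y++) {
            sum += row[y];           // only the inner lookup remains in the loop
        }
        return sum;
    }
}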

Of course the standard caveats apply. Always profile, and avoid premature optimizations.

patros
Implementation of a 2D array depends on the language, it's not necessarily an array of arrays. x[9][9] may be x[81] under the covers which will NOT be any different than you declaring x[81]. The difference is just whether your code or the compiler does the math to find the cell.
paxdiablo
Actually, thought I'd better test that. Under gcc, "int x[9][9];x[5][4] = 7;" compiles down to "movl $7,-148(%ebp)", a fixed location in the stack frame. So it at least doesn't do two memory references. Again, it depends on the language and compiler.
paxdiablo
Agreed, it's highly language/compiler dependent. I'm assuming, though, that these matrices would be dynamically allocated, so your test isn't what I imagine the OP had in mind.
patros
+1  A: 

I don't think this question can be answered without actually writing sample code and testing the results. For example, this question found the following results:

sum(double[], int): 2738 ms (100%)
sum(double[,]):     5019 ms (183%)
sum(double[][]):    2540 ms ( 93%)

Jagged arrays were the fastest, followed by 1-dimensional arrays, followed by multidimensional arrays. Jagged arrays being the fastest is probably not what people would have predicted. These results are probably useless for Java, since Java has different optimizations (and no true multidimensional arrays).

I would be very careful making assumptions. For example, if you are looping over a row of the 2D array, Java might optimize out the index lookups or out-of-bounds checking, whereas it might not be able to if you are using a 1D array with inline index calculations.

I suggest writing a simple program to test the speeds on the desired platform.
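A rough sketch of what such a test could look like in Java follows (numbers and structure are illustrative only; a real measurement should use a harness such as JMH and account for JIT warm-up and dead-code elimination):

// Rough sketch only: compare summing a flat 1D array against a jagged 2D array.
public class ArraySumTest {
    public static void main(String[] args) {
        int n = 2000;
        double[] flat = new double[n * n];
        double[][] jagged = new double[n][n];

        long t0 = System.nanoTime();
        double s1 = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                s1 += flat[i * n + j];     // 1D with inline index calculation

        long t1 = System.nanoTime();
        double s2 = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                s2 += jagged[i][j];        // array-of-arrays access

        long t2 = System.nanoTime();
        System.out.printf("1D: %d ms, jagged: %d ms (sums: %f, %f)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, s1, s2);
    }
}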

Laplie
A: 

The commercial finite element package that I used during my career as a mechanical engineer used a 1D array as the basis for its linear algebra calculations. Finite element methods result in matrices that are large, sparse, and banded. Storing all those zero elements outside the band made no sense.

The only time you'll see 2D arrays used is for small, academic problems or those that are not sparse (e.g., boundary element methods).
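For illustration, here is a rough sketch of what banded 1D storage can look like (this is my own sketch, not the package's actual scheme): each row stores only the 2*halfBand + 1 positions inside the band, packed into a single flat array.

// Sketch: a banded matrix kept in one 1D array.
// Only entries with |i - j| <= halfBand are stored; everything else is zero.
class BandedMatrix {
    private final int halfBand;
    private final double[] data;   // 2*halfBand + 1 slots per row

    BandedMatrix(int n, int halfBand) {
        this.halfBand = halfBand;
        this.data = new double[n * (2 * halfBand + 1)];
    }

    // Map (i, j) inside the band onto the flat 1D array.
    private int index(int i, int j) {
        return i * (2 * halfBand + 1) + (j - i + halfBand);
    }

    double get(int i, int j) {
        if (Math.abs(i - j) > halfBand) return 0.0;   // outside the band: always zero
        return data[index(i, j)];
    }

    void set(int i, int j, double value) {
        if (Math.abs(i - j) > halfBand)
            throw new IllegalArgumentException("entry lies outside the band");
        data[index(i, j)] = value;
    }
}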

duffymo
A: 

In the general case, the most efficient implementation for any algorithm is the one which has the least amount of code. This is for many reasons:

  • Less code -> less time to write it
  • Fewer lines of code mean fewer bugs (since bugs per KLOC is pretty constant for a given programmer), which are also easier to find (since they can't hide well in a few lines of code)
  • If your algorithm isn't fit for the task, it's easier to replace when it's 10 lines instead of 100
  • A compact implementation usually leads to a clean interface, which in turn keeps the code that uses the library clean (so the effect multiplies). This can also justify a more complex library implementation if that makes the client code more efficient (i.e., you concentrate the effort in a single place to make everything else simpler).

It also depends a lot on the access patterns. Do you always walk the whole matrix? Is it sparse? Do you prefer to process rows or columns?

In the extreme case (a matrix with a billion rows and columns but only 10 cells used), a HashMap can be more efficient than any array implementation. For other problems, it can be more efficient to mix the approaches depending on the problem (for example, a HashMap of arrays of mini-matrices when cells "clump" in a gigantic empty space).
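A minimal sketch of the HashMap idea (the class and the coordinate-packing scheme here are mine and purely illustrative):

import java.util.HashMap;
import java.util.Map;

// Sketch: only non-zero cells are stored, keyed by a packed (row, col) pair.
class SparseMatrix {
    private final long cols;
    private final Map<Long, Double> cells = new HashMap<>();

    SparseMatrix(long cols) {          // only the column count is needed to pack coordinates
        this.cols = cols;
    }

    private long key(long row, long col) {
        return row * cols + col;       // still fits in a long for a billion rows and columns
    }

    double get(long row, long col) {
        return cells.getOrDefault(key(row, col), 0.0);
    }

    void set(long row, long col, double value) {
        if (value == 0.0) {
            cells.remove(key(row, col));   // keep the map limited to non-zero entries
        } else {
            cells.put(key(row, col), value);
        }
    }
}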

If your problem asks you to locate a row/column and then process those values, it might be more efficient to use a 2D approach, so the first access returns an array which you can then process without bothering about boundaries, off-by-one errors, etc.

Aaron Digulla