Hello, I have 1000 data series with 1500 points each.

They form a NumPy array of shape (1500, 1000), created using np.zeros((1500, 1000)) and then filled with the data (one series per column).

Now what if I want the array to grow to, say, 1600 x 1100? Do I have to build a new array using hstack and vstack, or is there a better way?

I want the data already in the 1500x1000 piece of the array left unchanged, with only blank data (zeros) added to the bottom and right, basically.

Thanks.

A: 

You should look at resize(); note that reshape() only rearranges the existing elements into a new shape and cannot add new ones, so it will not grow the array on its own.

If you want chapter and verse from the authors, you are probably better off posting on the NumPy discussion list.

Simon
+1  A: 

If you want zeros in the added elements, my_array.resize((1600, 1000)) should work: it grows the array in place and zero-fills the new entries. One caveat: resize operates on the flattened data, so it only leaves the existing values in place when the number of columns is unchanged; resizing straight to (1600, 1100) would shift your data across rows. Also note that this differs from numpy.resize(my_array, (1600, 1000)), in which the old data is repeated to fill the new array, which is probably not what you want.
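A toy illustration of the two behaviors, with small shapes standing in for the real array:

```python
import numpy as np

# Small stand-in for the (1500, 1000) array in the question.
a = np.arange(6).reshape(2, 3)

# In-place resize: zero-fills the new entries. refcheck=False avoids the
# "cannot resize an array that references ..." error raised when other
# references to the array exist (e.g. in an interactive session).
b = a.copy()
b.resize((3, 3), refcheck=False)
# b[2] is now [0, 0, 0]; the first two rows are unchanged.

# numpy.resize: fills the new array by repeating the old data instead.
c = np.resize(a, (3, 3))
# c[2] is now [0, 1, 2], a repeated copy of the start of the data.
```

Note that both calls keep the columns at 3; growing the column count with resize would reflow the flattened data across rows.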

Otherwise (for instance if you want to skip the zero initialization, which could be unnecessary), you can indeed use hstack and vstack to append arrays containing the new elements; numpy.concatenate() (see pydoc numpy.concatenate) works too, and is just the more general form (hstack and vstack are thin wrappers around it).
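For instance, a minimal sketch of growing both dimensions this way, again with toy shapes:

```python
import numpy as np

# Toy 2x3 array standing in for the existing data.
a = np.arange(6).reshape(2, 3)

# Grow to 3x4: pad a zero column on the right, then a zero row on the bottom.
grown = np.hstack([a, np.zeros((2, 1), dtype=a.dtype)])
grown = np.vstack([grown, np.zeros((1, 4), dtype=grown.dtype)])
# The original data sits untouched in the top-left 2x3 block.
```

The same two steps with np.concatenate would use axis=1 and then axis=0.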

In either case, I would guess that a new memory block has to be allocated in order to extend the array, and that all these methods take about the same time.

EOL
A: 

This should do what you want (using a 3x3 array and a 4x4 array to stand in for the two arrays in your question):

import numpy as NP
a = NP.arange(9).reshape(3, 3)   # stand-in for your filled 1500x1000 array
b = NP.zeros((4, 4))             # the larger array, pre-filled with zeros
b[:3, :3] = a                    # copy the old data into the top-left corner

# b is now:
# array([[ 0.,  1.,  2.,  0.],
#        [ 3.,  4.,  5.,  0.],
#        [ 6.,  7.,  8.,  0.],
#        [ 0.,  0.,  0.,  0.]])
doug
+1  A: 

No matter what, you'll be stuck reallocating a chunk of memory, so it doesn't matter much whether you use arr.resize(), np.concatenate, hstack/vstack, etc. Note that if you're accumulating a lot of data sequentially, appending to a Python list and converting to an array once at the end is usually more efficient than growing the array repeatedly.
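A small sketch of that accumulate-then-convert pattern (hypothetical data, just to show the shape of the idiom):

```python
import numpy as np

# Accumulate rows in a Python list (amortized O(1) appends), then convert
# to an array once at the end, instead of reallocating the array each time.
rows = []
for i in range(5):
    rows.append([i, i * i])

arr = np.array(rows)  # single allocation at the end
```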

dwf