I have a large set of data (a data cube of 250,000 x 1,000 doubles, about a 4 GB file) and I want to manipulate it using a set of OOP classes I have previously written in Python. The data set is already so large that I have to split it at least in half just to read it into memory, so computing overhead is a concern. My OOP classes create new objects to handle the data (in this case I will need 250,000 new objects, each holding an array of 1,000 doubles). What is the memory and computing overhead of creating objects in a generic OOP language? In Python? What about in C++?
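To make the question concrete, here is a minimal sketch of how I would measure the per-object cost in Python (`PixelRow` is a hypothetical stand-in for my real classes, and the numbers are just what `sys.getsizeof` reports, not a full accounting):

```python
import sys
from array import array

class PixelRow:
    """Hypothetical stand-in for one of my data-holding classes."""
    def __init__(self, values):
        self.values = array('d', values)  # 1,000 C doubles, ~8 KB of payload

row = PixelRow([0.0] * 1000)

# Raw storage for the doubles themselves.
payload = row.values.buffer_info()[1] * row.values.itemsize

# Everything else: the instance, its __dict__, and the array object's header.
overhead = (sys.getsizeof(row) + sys.getsizeof(row.__dict__)
            + sys.getsizeof(row.values) - payload)

print("payload bytes per object: ", payload)
print("overhead bytes per object:", overhead)
print("overhead for 250,000 objects (MB):", overhead * 250_000 / 2**20)
```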
Yes, I realize I could make a new class that is an array. But 1) I already have these classes finished, and 2) I put each object I create back into an array for later access anyway. The question is pedagogical.
*Update: I want to be efficient with time, both my own and the computer's. I don't want to rewrite a program I already have if I don't have to, and spending time optimizing the code wastes my time; I don't care as much about wasting the computer's time. I do in fact have a 64-bit machine with 4 GB of RAM. The data is an image, and I need to apply several filters to each pixel.*
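For context, this is roughly how I split the read today. The sketch below assumes the cube is a flat binary file of doubles (the filename `datacube.bin` and the use of `numpy.memmap` are illustrative, not what my classes actually do):

```python
import numpy as np

# Hypothetical layout: a flat binary file of 250,000 rows x 1,000 doubles.
# memmap lets the OS page data in on demand instead of loading the whole file.
cube = np.memmap("datacube.bin", dtype=np.float64, mode="r",
                 shape=(250_000, 1_000))

# Process in chunks that comfortably fit in RAM, one filter pass at a time.
chunk_rows = 50_000
for start in range(0, cube.shape[0], chunk_rows):
    chunk = np.asarray(cube[start:start + chunk_rows])  # copies only this slice
    # ... hand each row to one of my existing objects, or filter it in place ...
```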