views:

61

answers:

2

Dear All,

I am looking for buffering code to process huge numbers of records held in a tuple / CSV file / SQLite DB / numpy.ndarray. The buffer should behave rather like the Linux command "more".

The requirement comes from processing huge data sets (maybe 100,000,000 rows); the records may look like this:

0.12313 0.231312 0.23123 0.152432
0.22569 0.311312 0.54549 0.224654
0.33326 0.654685 0.67968 0.168749
...
0.42315 0.574575 0.68646 0.689596

I want to process them as a numpy.ndarray: for example, find particular rows, process them, and store them back, or process two columns at a time. However, the data is so big that if numpy reads the file directly it gives me a MemoryError.

So I think an adapter that behaves like a memory cache page, or like the Linux "more file" command, could keep memory usage bounded during processing.
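
For the plain-text case, the kind of behaviour I have in mind is roughly the following chunked read (just a sketch; it assumes whitespace-separated float columns, and the file name and chunk size are made up):

import itertools
import numpy as np

CHUNK_ROWS = 100  # hypothetical buffer size: rows held in memory at once

def iter_chunks(path, chunk_rows=CHUNK_ROWS):
    """Yield the file as successive numpy arrays of at most chunk_rows rows."""
    with open(path) as f:
        while True:
            lines = list(itertools.islice(f, chunk_rows))
            if not lines:
                break
            # np.loadtxt accepts any iterable of text lines
            yield np.loadtxt(lines, ndmin=2)

for chunk in iter_chunks("huge_records.txt"):
    cols = chunk[:, 1:3]   # e.g. work on two columns of the current window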

Because the raw data may come in different formats (CSV / SQLite DB / HDF5 / XML), I want this adapter to be normalized; using "[]" to address a "row" seems the most common way, because I think every record can be represented as a list.

So the adapter I want might look like this:

fd = "a opend big file" # or a tuple of objects, whatever, it is an iterable object can access all the raw rows 

page = pager(fd)

page.page_buffer_size = 100    # buffer 100 lines, or 100 objects from a tuple

page.seek_to(0)        # move to start
page.seek_to(120)      # move to line #120
page.seek_to(-10)      # relative seek: move back 10 lines from #120

page.next_page()        
page.prev_page()

page1 = page.copy()

page.remove(0)

page.sync()

Can someone give me some hints so that I don't reinvent the wheel?

By the way, ATpy (http://atpy.sourceforge.net/) is a module that can sync a numpy array with raw data sources in different formats; however, it also reads all the data into memory in one go.

And PyTables is not suitable for me so far, because it does not support SQL, and HDF5 files may not be as popular as SQLite DBs (forgive me if this is wrong).

   My plan is to write these tools in this way (a rough skeleton of the adapter is sketched after this outline):
    1. helper.py        <-- defines all the house-keeping work for the different file formats
                            |- load_file()
                            |- seek_backward()
                            |- seek_forward()
                            | ...
    2. adapter.py       <-- defines the interface and imports the helper to interact
                            with the raw data, providing some way to work with a numpy.ndarray.
                            |- load()
                            |- seek_to()
                            |- next_page()
                            |- prev_page()
                            |- sync()
                            |- self.page_buffer_size
                            |- self.abs_index_in_raw_for_this_page = []
                            |- self.index_for_this_page = []
                            |- self.buffered_rows = []
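
For illustration, here is a minimal, untested skeleton of what I imagine the adapter could look like for the plain-text case (the class and method names are only my working assumptions; sync() and the format-specific helpers are left out):

import numpy as np

class Pager(object):
    """A fixed-size window ("page") over a huge, seekable text file - sketch only."""

    def __init__(self, fd, page_buffer_size=100):
        self.fd = fd                      # an already opened text file
        self.page_buffer_size = page_buffer_size
        self.current_start = 0            # absolute index of the first buffered row
        self.buffered_rows = []           # current page, as lists of floats

    def seek_to(self, lineno):
        # a negative lineno means "relative to the current page start"
        if lineno < 0:
            lineno = max(0, self.current_start + lineno)
        self.fd.seek(0)                   # crude: rewind and skip; good enough for a sketch
        for _ in range(lineno):
            if not self.fd.readline():
                break
        self.current_start = lineno
        self._fill()

    def _fill(self):
        self.buffered_rows = []
        for _ in range(self.page_buffer_size):
            line = self.fd.readline()
            if not line:
                break
            self.buffered_rows.append([float(x) for x in line.split()])

    def next_page(self):
        self.seek_to(self.current_start + self.page_buffer_size)

    def prev_page(self):
        self.seek_to(max(0, self.current_start - self.page_buffer_size))

    def as_array(self):
        # hand the current page to numpy for processing
        return np.array(self.buffered_rows)

Usage would then be something like: page = Pager(open("huge_records.txt")); page.seek_to(120); arr = page.as_array().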

Thanks,

Rgs,

KC

A: 

The linecache module may be helpful — you can call getline(filename, lineno) to efficiently retrieve lines from the given file.

You'll still have to figure out how high and wide the screen is. A quick glance at Google suggests that there are about 14 different ways to do this, some of which are probably outdated. The curses module may be your best bet, and I think it will be necessary if you want to be able to scroll backwards smoothly.
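
For example, a quick sketch (the file name here is made up):

import linecache

# linecache caches the file's lines, so repeated lookups are cheap
line = linecache.getline("huge_records.txt", 120)   # line numbers are 1-based
values = [float(x) for x in line.split()]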

intuited
A: 

Ummmm.... You're not really talking about anything more than a list.

fd = open( "some file", "r" )
data =  fd.readlines()

page_size = 100

data[0:0+page_size] # move to start
data[120:120+page_size] # move to line 120
here= 120
data[here-10:here-10+page_size] # move back 10 from here
here -= 10
data[here:here+page_size]
here += page_size
data[here:here+page_size]

I'm not sure that you actually need to invent anything.

S.Lott
Thanks. I am writing an adapter between numpy.ndarray <=> sqlite, for processing very big arrays without memory overflow.
K. C
@K. C: What does this "adapter between numpy.ndarray <=> sqlite" mean? Is this more information that belongs in the question?
S.Lott
@S.Lott, suppose I have a very big (1000000000 records) table stored in an SQLite DB. I can use Python's sqlite module to get data from the database as iterable row objects, but if I want to load the whole table into a numpy.ndarray it gives me a MemoryError because there are too many records. I googled and found two ways to solve this problem: 1. upgrade the platform to x64, 2. use the PyTables module.
K. C
@K. C: What is this "suppose I have a very big (1000000000 records) table"? Is this more information that belongs in the question? Please **update** the question. Please make the question complete and consistent. Please don't add random requirements as a comment to an answer. Please **update** the question.
S.Lott
@S.Lott, option 1 is not possible for me right now (it's not about cost), and option 2 does not support SQL. Also, I want to store (sync) the numpy.ndarray into (with) an SQLite DB in an easy way; the existing approach is the ATpy module, but it just reads all the records of a table into a numpy.ndarray. To solve the memory problem, I may need a buffer-based records adapter that works like a memory cache page or the Linux "more" command: it would give those big records a fixed window buffer, and read and process the data only when necessary.
K. C
@K. C What is "I want to store (sync) the numpy.ndarray into (with) an SQLite DB in an easy way"? Is this more information that belongs in the question? Please **update** the question. Please make the question complete and consistent. Please don't add random requirements as a comment to an answer. Please update the question.
S.Lott
@S.Lott, thanks for the help. The question has been upgraded.
K. C
@K. C: Don't include status ("the question has been upgraded"). There's no point. Stack Overflow records the changes with timestamps for you. Your status messages are redundant. Please delete status-oriented comments.
S.Lott