views: 462
answers: 6

I have some JSON files that are 500 MB each. If I use the "trivial" json.load() to load the whole content at once, it consumes a lot of memory.

Is there a way to read the file partially? If it were a text, line-delimited file, I would be able to iterate over the lines. I am looking for an analogue to that.

Any suggestions? Thanks

+3  A: 

Short answer: no.

Properly dividing a json file would take intimate knowledge of the json object graph to get right.

However, if you have this knowledge, then you could implement a file-like object that wraps the json file and spits out proper chunks.

For instance, if you know that your json file is a single array of objects, you could create a generator that wraps the json file and returns chunks of the array.

You would have to do some string content parsing to get the chunking of the json file right.
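Here is a minimal sketch of that idea, assuming the top level of the file is a single JSON array. It still reads the raw text into memory, but it decodes the elements one at a time with json.JSONDecoder.raw_decode, so it never builds the full Python object graph for the whole array at once:

    import json

    def iter_array_items(path):
        """Yield the elements of a JSON file whose top level is a single array."""
        decoder = json.JSONDecoder()
        with open(path) as f:
            text = f.read()
        idx = text.index('[') + 1          # position just after the opening bracket
        while True:
            # skip whitespace and the commas between elements
            while idx < len(text) and text[idx] in ' \t\r\n,':
                idx += 1
            if idx >= len(text) or text[idx] == ']':
                break
            obj, idx = decoder.raw_decode(text, idx)
            yield obj

You would then iterate over it like any other generator, e.g. `for item in iter_array_items('big.json'): ...` (the file name is a placeholder).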

I don't know what generates your json content. If possible, I would consider generating a number of manageable files instead of one huge file.

codeape
Unfortunately, I can't post the file here, and it's not generated by me either. I was thinking about reading the json file with the regular json.load and generating a new text, line-delimited file to iterate over. The problem I am facing is that I have 195 files like that to process, and it seems that Python's garbage collector is not doing a good job: after the 10th file, I run out of memory. I'm using Python 2.6.4 on Windows 7.
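A rough sketch of that one-off conversion, assuming the top level of each file is an array (the file names are placeholders):

    import json

    # Decode the big file once, then write each element of the top-level array
    # on its own line so later passes can iterate line by line.
    with open('big.json') as src:
        items = json.load(src)

    with open('big.ndjson', 'w') as dst:
        for item in items:
            dst.write(json.dumps(item) + '\n')

    del items    # drop the decoded list before moving on to the next file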
duduklein
+1  A: 

Since you mention running out of memory, I have to ask whether you're actually managing memory. Are you using the "del" keyword to remove your old object before trying to read a new one? Python should never silently retain something in memory if you remove it.
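For example, something along these lines, where process() and list_of_files are hypothetical placeholders for your own code:

    import json

    for path in list_of_files:        # list_of_files is assumed to be defined elsewhere
        with open(path) as f:
            data = json.load(f)
        process(data)                 # hypothetical per-file processing
        del data                      # explicitly drop the reference before the next file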

Aea
I'm not using the del statement, since I thought it happened automatically once there were no more references to the object.
duduklein
Since it wasn't removed, you still have references. Global variables are the usual problem.
S.Lott
+1  A: 

In addition to @codeape's answer:

I would try writing a custom JSON parser to help you figure out the structure of the JSON blob you are dealing with: print out only the key names, build a hierarchical tree, and decide for yourself how you can chunk it. This way you can do what @codeape suggests and break the file up into smaller chunks.
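A shortcut sketch of that exploration step: instead of a hand-written parser, this just decodes one file with the stock json module and prints the key structure, so it assumes at least one of the files fits in memory (the file name is a placeholder):

    import json

    def print_key_tree(obj, indent=0, max_depth=4):
        """Print only the key names of a decoded JSON structure, as a rough tree."""
        if indent // 2 >= max_depth:
            return
        if isinstance(obj, dict):
            for key, value in obj.items():
                print(' ' * indent + str(key))
                print_key_tree(value, indent + 2, max_depth)
        elif isinstance(obj, list) and obj:
            # look only at the first element and assume the list is homogeneous
            print_key_tree(obj[0], indent, max_depth)

    with open('sample.json') as f:
        print_key_tree(json.load(f))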

George
+2  A: 

So the problem is not that each file is too big, but that there are too many of them, and they seem to be adding up in memory. Python's garbage collector should be fine, unless you are keeping around references you don't need. It's hard to tell exactly what's happening without any further information, but some things you can try:

  1. Modularize your code. Do something like:

    for json_file in list_of_files:
        process_file(json_file)
    

    If you write process_file() in such a way that it doesn't rely on any global state, and doesn't change any global state, the garbage collector should be able to do its job (see the first sketch after this list).

  2. Deal with each file in a separate process. Instead of parsing all the JSON files at once, write a program that parses just one, and pass each one in from a shell script, or from another Python process that calls your script via subprocess.Popen (see the second sketch after this list). This is a little less elegant, but if nothing else works, it will ensure that you're not holding on to stale data from one file to the next.
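A minimal sketch of the first option; do_something_useful() and list_of_files are placeholders for your own code:

    import json

    def process_file(path):
        """Parse one JSON file and process it, touching no global state."""
        with open(path) as f:
            data = json.load(f)
        do_something_useful(data)     # hypothetical per-file work
        # `data` goes out of scope when the function returns, so the
        # garbage collector can reclaim it before the next file is read.

    for json_file in list_of_files:
        process_file(json_file)

And a sketch of the second option, assuming a hypothetical parse_one.py that takes a single file name on the command line and processes just that file:

    import subprocess
    import sys

    # Run a fresh interpreter per file, so all memory is returned to the OS
    # when each child process exits.
    for json_file in list_of_files:
        child = subprocess.Popen([sys.executable, 'parse_one.py', json_file])
        child.wait()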

Hope this helps.

jcdyer
+1  A: 

Another idea is to try loading it into a document-store database like MongoDB. It deals with large blobs of JSON well, although you might run into the same problem loading the JSON; avoid it by loading the files one at a time.

If that path works for you, then you can interact with the JSON data via their client and potentially not have to hold the entire blob in memory.

http://www.mongodb.org/
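A minimal sketch of the loading step, assuming a reasonably recent pymongo and that each file holds either an object or an array of objects; the database and collection names are placeholders:

    import json
    from pymongo import MongoClient

    client = MongoClient('localhost', 27017)
    collection = client['scratch']['json_dump']   # hypothetical db/collection names

    for path in list_of_files:                    # load the files one at a time
        with open(path) as f:
            data = json.load(f)
        docs = data if isinstance(data, list) else [data]
        if docs:
            collection.insert_many(docs)          # each element becomes a document
        del data                                  # release before the next file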

George
A: 

"the garbage collector should free the memory"

Correct.

Since it doesn't, something else is wrong. Generally, the cause of unbounded memory growth is global variables.

Remove all global variables.

Make all module-level code into smaller functions.
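For instance, a shape along these lines keeps the parsed data in locals only; report() is a placeholder for the real processing:

    import json
    import sys

    def load_one(path):
        with open(path) as f:
            return json.load(f)

    def main(paths):
        for path in paths:
            data = load_one(path)     # local name: released after each iteration
            report(data)              # hypothetical per-file processing

    if __name__ == '__main__':
        main(sys.argv[1:])            # no module-level variable ever holds parsed data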

S.Lott