Kind of the opposite of this question.

Is there a way to tell Python, "Do not write to disk until I tell you to" (by closing or flushing the file)? I'm writing to a file on the network and would rather write the entire file at once.

In the meantime, I'm writing to a StringIO buffer, and then writing that to the disk at the end.
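The StringIO workaround described above can be sketched roughly like this (a minimal sketch; the output path is hypothetical):

```python
import io

# Accumulate everything in an in-memory buffer; nothing touches
# the destination file until the very end.
buf = io.StringIO()
for i in range(3):
    buf.write("line %d\n" % i)

# One single write to the (hypothetical) destination at the end.
with open("/tmp/output.txt", "w") as f:
    f.write(buf.getvalue())
```

Over a slow network share, this trades one long sequence of small remote writes for a single large one.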

+3  A: 

No, a glance at the Python manual does not indicate an option to set the buffer size to infinity.

Your current solution is basically the same concept.

You could use Alex's idea, but I would advise against it for the following reasons:

  1. The buffer size passed to open is limited to 2^31-1, i.e. about 2 GiB. Anything larger raises "OverflowError: long int too large to convert to int".
  2. It doesn't seem to work:

    # Open with the maximum allowed buffer size, write, and do NOT close
    a = open("blah.txt", "w", 2 ** 31 - 1)
    for i in xrange(10000):
        a.write("a")


Open the file in another program without closing Python, and you will see that the text has already been written, despite the huge buffer.

Unknown
+3  A: 

You can open your file with as large a buffer as you want. For example, to use up to a billion bytes for buffering: x = open('/tmp/za', 'w', 1000*1000*1000) -- and if you have a hundred billion bytes of memory and want to use them all, just add another *100... ;-). Memory is only consumed in the amount actually needed, so no worry...

Alex Martelli
+1  A: 

I would say this partly depends on what you're trying to do.

I ran into this issue when my application was a bit slow creating a file that was consumed by another application; the other application would get incomplete versions of the file.

I solved it by writing the file to a different place, then renaming it into the correct place once I'd finished writing.
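The write-then-rename approach can be sketched like this (a minimal sketch; the paths are hypothetical, and the rename is only atomic when the temporary file and the destination live on the same filesystem):

```python
import os

final_path = "/tmp/data.txt"     # hypothetical destination path
temp_path = final_path + ".tmp"  # temp file on the same filesystem

# Write (possibly slowly) to a temporary file that readers never open.
with open(temp_path, "w") as f:
    f.write("all of the output\n")

# Move it into place in one step; on POSIX, rename is atomic, so
# readers see either the old file or the complete new one, never a
# half-written version. (On Windows, rename fails if the destination
# already exists.)
os.rename(temp_path, final_path)
```

The same pattern works for the original question: write locally, then move or copy the finished file to the network location in one operation.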

If you want this for other reasons then maybe that doesn't help.

Colin Coghill
This is worth testing. Writing the file locally and using the OS to copy it to the network might be faster. Thanks.
jcoon