In my application I'm dealing with uploads of really big image files. They will be stored on a remote server, so from what I've learned I need to write a custom Storage system (probably using Python's poster module). Because of the size, I would like to send the files directly to the media server without storing them in memory first (which poster enables). But all uploaded files are handled by an UploadHandler class, which forces files to be stored locally in some way (on disk, in a temporary file, or in memory). How can I get around this?

A: 

According to the docs, the UploadedFile class has a chunks() method which returns a generator. The chunk size is configurable (2.5 MB by default), so you can do something like this (adapted from the docs):

with open('some/file/name.txt', 'wb+') as destination:
    for chunk in f.chunks():
        destination.write(chunk)

This reads one chunk at a time into memory and writes it to the file, so only one chunk is ever held in memory. You might want to point the open() path at an NFS volume; every call to write() would then send just the current chunk to the NFS server, since NFS exposes file operations (open/write/read/seek/close) as RPCs. Samba works similarly.
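To make the streaming behaviour concrete, here is a minimal sketch of the same loop as a standalone function. It is not Django-specific: with Django you would pass `uploaded_file.chunks()`, but any iterable of byte chunks works, and the destination path could sit on an NFS or Samba mount. The names `save_chunks` and `upload.bin` are illustrative, not from the question.

```python
import os
import tempfile

def save_chunks(chunks, destination_path):
    """Write an iterable of byte chunks to destination_path.

    Only one chunk is held in memory at a time. If destination_path
    lives on an NFS/Samba mount, each write() ships just that chunk
    to the remote server.
    """
    with open(destination_path, 'wb') as destination:
        for chunk in chunks:
            destination.write(chunk)

# Usage with two fake 1 KB chunks standing in for file.chunks():
path = os.path.join(tempfile.mkdtemp(), 'upload.bin')
save_chunks([b'a' * 1024, b'b' * 1024], path)
print(os.path.getsize(path))  # 2048
```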

Alternatively, you could implement such a mechanism yourself by running another service on the media server that offers a way to append a chunk to a file. (Using NFS or Samba would be the better choice in my opinion, though.)
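As a rough sketch of that append-service idea (my own illustration, not the author's design): the media server exposes an HTTP endpoint where each POST body is appended to the named file, and the Django side POSTs one chunk at a time. Here the "files" are an in-memory dict so the sketch is self-contained; a real service would append to files on disk.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

STORE = {}  # filename -> bytearray; stands in for real files in this sketch

class AppendHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Append the request body to the "file" named by the URL path.
        name = self.path.lstrip('/')
        length = int(self.headers['Content-Length'])
        STORE.setdefault(name, bytearray()).extend(self.rfile.read(length))
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

server = HTTPServer(('127.0.0.1', 0), AppendHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = 'http://127.0.0.1:%d/upload.bin' % server.server_port

# On the Django side you would iterate over file.chunks() instead:
for chunk in (b'first-', b'second'):
    urllib.request.urlopen(urllib.request.Request(url, data=chunk))

server.shutdown()
print(bytes(STORE['upload.bin']))  # b'first-second'
```

Each chunk crosses the network as its own request, so neither side ever buffers the whole upload, which is the same property the NFS approach gives you for free.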

tux21b
But a Storage backend will still be run after I send this data, or am I wrong?
mizou
A: 

You might find X-Sendfile (snippet) or similar web-server extensions useful, but then you have to initiate the connection from the requesting machine.

artificialidiot
This is not an option, unfortunately.
mizou