views:

67

answers:

3

We have a server-side (Java EE) application that does image processing jobs based on user requests, such as converting the image format (e.g. TIFF to JPEG), converting the image color (e.g. RGB to gray to black-and-white), and resampling (resizing) the image. Some customers from the printing industry use very large images, such as 2000 dpi, 6 * 8 inch, 4 color components, which takes 6 * 2000 * 8 * 2000 * 4 bytes = 768 MB of memory. The server cannot hold an image that large in memory, so we decided to do the processing stripe by stripe. The problem is that this still does not work, because there may be many customers at the same time. Do you have any ideas about how to implement memory-limited image processing? Or do you know of any papers/articles that could provide us with solutions?

Thanks,
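
For concreteness, here is a minimal sketch of the stripe-by-stripe reading we have in mind, using the standard javax.imageio API. The StripeReader class and the processStripe callback are hypothetical placeholders for the real conversion step, and whether memory actually stays bounded depends on how the format plugin implements setSourceRegion (some decoders still buffer the whole image):

    import java.awt.Rectangle;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.util.Iterator;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageReadParam;
    import javax.imageio.ImageReader;
    import javax.imageio.stream.ImageInputStream;

    public class StripeReader {
        // Read and process a large image one horizontal stripe at a time.
        public static void processInStripes(File input, int stripeHeight) throws Exception {
            try (ImageInputStream iis = ImageIO.createImageInputStream(input)) {
                Iterator<ImageReader> readers = ImageIO.getImageReaders(iis);
                if (!readers.hasNext()) throw new IllegalArgumentException("Unsupported format");
                ImageReader reader = readers.next();
                reader.setInput(iis);
                int width = reader.getWidth(0);
                int height = reader.getHeight(0);
                for (int y = 0; y < height; y += stripeHeight) {
                    int h = Math.min(stripeHeight, height - y);
                    ImageReadParam param = reader.getDefaultReadParam();
                    param.setSourceRegion(new Rectangle(0, y, width, h));
                    BufferedImage stripe = reader.read(0, param); // only this stripe is decoded
                    processStripe(stripe, y);                     // your conversion step
                }
                reader.dispose();
            }
        }

        static void processStripe(BufferedImage stripe, int yOffset) {
            // placeholder: convert/resample/write the stripe here
        }
    }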

A: 

Well, the first thing you should know is that the amount of memory you can allocate does not depend on how much RAM you have. The OS manages virtual memory, and any allocation is done in virtual memory; the chunks are then paged between RAM and disk by the OS. You have no control over this. You may think you're allocating 5 MB in RAM, but it's up to the OS whether to keep it in RAM or fetch it from disk when your program needs it. On 32-bit Windows you have 2 GB of user space to work with (the rest is reserved for kernel space). On a 64-bit OS that amount is much larger.

However, just because you have 2 GB of user space to allocate from (on a 32-bit OS, or more on 64-bit) doesn't mean you can always allocate a chunk that big, because an allocation requires a contiguous block of memory. So memory fragmentation may become a problem.

The third thing is that the JVM limits the amount of memory you can allocate. You can raise these limits with the -Xms and -Xmx parameters, which control the initial and maximum heap size.
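
For example, starting the JVM with java -Xms256m -Xmx2g (the values here are just illustrative) caps the heap at 2 GB. A small sketch to verify the configured limits from inside the application:

    // Launch with explicit heap bounds, e.g.:
    //   java -Xms256m -Xmx2g -jar imageserver.jar
    public class HeapCheck {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.printf("max heap:   %d MB%n", rt.maxMemory() / (1024 * 1024));
            System.out.printf("total heap: %d MB%n", rt.totalMemory() / (1024 * 1024));
            System.out.printf("free heap:  %d MB%n", rt.freeMemory() / (1024 * 1024));
        }
    }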

Other than these comments, I don't really have a solution for you.

Budric
Yes, I fully understand the VM mechanism. My problem is that we want a solution that can process an image within a limited memory size, like 64 MB. Simply enlarging memory is not our first choice. What we actually did was use a 64-bit machine and set a very large value for -Xmx, but that doesn't work on 32-bit machines.
Xinwang
Yes, I don't think there's really any easy solution. Like you said, you could break it down into processing by chunks, but then time becomes important for other customers. You could look at implementing the algorithms in OpenCL (or CUDA) and use GPU arrays to get more speed from parallel processing. Things like image encoding are usually done in independent blocks, so those algorithms scale nicely.
Budric
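
To illustrate the "independent blocks" point on the CPU side (this is plain java.util.concurrent, not OpenCL; the BlockProcessor class and names are hypothetical): capping the pool size also caps how many blocks are in memory at once.

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class BlockProcessor {
        // Process independent blocks with at most maxParallel in flight,
        // so peak pixel memory is roughly maxParallel * blockSize.
        // Each job should decode its block lazily inside run(), so that
        // queued jobs hold no pixel data yet.
        public static void processBlocks(List<Runnable> blockJobs, int maxParallel)
                throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(maxParallel);
            for (Runnable job : blockJobs) {
                pool.execute(job);
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }
    }
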
A: 

You can use "tiled" image processing techniques. IPP (Intel Integrated Performance Primitives), for example, supports tiled image processing.

Ross
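
A generic tiling helper, independent of IPP (a hypothetical sketch; note that neighborhood operations such as resampling filters need tiles padded with some overlap, which this does not handle):

    import java.awt.Rectangle;
    import java.util.ArrayList;
    import java.util.List;

    public class Tiler {
        // Split a width x height image into tiles no larger than tileW x tileH.
        public static List<Rectangle> tiles(int width, int height, int tileW, int tileH) {
            List<Rectangle> out = new ArrayList<>();
            for (int y = 0; y < height; y += tileH) {
                for (int x = 0; x < width; x += tileW) {
                    out.add(new Rectangle(x, y,
                            Math.min(tileW, width - x),
                            Math.min(tileH, height - y)));
                }
            }
            return out;
        }
    }
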
+2  A: 

I would suggest considering moving the image processing part to a separate JVM which you communicate with from your main application using RMI or similar.

This allows you to tune the processing JVM separately from the main JVM, and perhaps even create a distributed system across multiple machines if needed. It might also allow you to manage the conversions so that only a few happen at the same time, allowing bigger individual images.
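
A minimal sketch of that throttling idea with java.util.concurrent.Semaphore (the ConversionGate class is hypothetical):

    import java.util.concurrent.Semaphore;

    public class ConversionGate {
        // Allow at most maxConcurrent conversions at once; extra requests block.
        private final Semaphore permits;

        public ConversionGate(int maxConcurrent) {
            this.permits = new Semaphore(maxConcurrent, true); // fair: FIFO ordering
        }

        public void convert(Runnable conversion) throws InterruptedException {
            permits.acquire();
            try {
                conversion.run(); // the actual image conversion
            } finally {
                permits.release();
            }
        }
    }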

Are there any restrictions that would prevent you from doing this?

As a last resort, I would suggest moving the actual image conversions to native programs like ImageMagick on Linux, which your program then calls to do the conversion, letting your JVM dish out the result to the user. This would arguably be faster CPU-wise and require less memory.

Thorbjørn Ravn Andersen
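
A sketch of that last suggestion, invoking ImageMagick's convert from Java; the -limit value is just an example of capping the external process's memory use, and the ExternalConvert class is a hypothetical name:

    import java.io.File;
    import java.io.IOException;

    public class ExternalConvert {
        // Shell out to ImageMagick's convert; the conversion's memory lives in
        // the external process, not in the JVM heap.
        public static void tiffToJpeg(File in, File out)
                throws IOException, InterruptedException {
            ProcessBuilder pb = new ProcessBuilder(
                    "convert",
                    "-limit", "memory", "64MiB",  // spill to disk past 64 MiB
                    in.getAbsolutePath(),
                    out.getAbsolutePath());
            pb.inheritIO();
            Process p = pb.start();
            if (p.waitFor() != 0) {
                throw new IOException("convert failed with exit code " + p.exitValue());
            }
        }
    }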