The Hadoop map-reduce configuration provides the mapred.task.limit.maxvmem and mapred.task.default.maxvmem properties. According to the documentation, both are values of type long: a number, in bytes, that represents the upper/default VMEM limit associated with a task. It appears that "long" in this context effectively means 32-bit, and setting a value higher than 2 GB may lead to a negative value being used as the limit. I am running on a 64-bit system, and 2 GB is a much lower limit than I actually want to impose.
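To illustrate what I mean by the wrap-around (this is only a sketch; the class name is mine, and I have not verified where the narrowing actually happens in Hadoop's code), a byte count above Integer.MAX_VALUE goes negative as soon as it is treated as a 32-bit int, while the Configuration long accessors themselves seem to keep the full value:

    import org.apache.hadoop.conf.Configuration;

    public class VmemLimitDemo {
        public static void main(String[] args) {
            long eightGiB = 8L * 1024 * 1024 * 1024;  // 8589934592 bytes

            // If the value is ever narrowed to a 32-bit int, anything above
            // Integer.MAX_VALUE (2147483647, roughly 2 GiB) wraps around:
            System.out.println((int) eightGiB);                   // prints 0
            System.out.println((int) (3L * 1024 * 1024 * 1024));  // prints -1073741824

            // Reading/writing the property through the 64-bit accessors
            // keeps the full value intact:
            Configuration conf = new Configuration();
            conf.setLong("mapred.task.limit.maxvmem", eightGiB);
            System.out.println(conf.getLong("mapred.task.limit.maxvmem", -1)); // 8589934592
        }
    }

So the truncation presumably happens somewhere in the framework after the value is read, not in Configuration itself.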
Is there any way around this limitation?
I am using Hadoop version 0.20.1.