Okay, so I am currently attempting to debug a problem on a Windows Server 2003 machine. Basically we have an ancient program that calls C's system() function. I have tracked it down to the fact that when system() runs the application, the application only has access to around ~500 MB of memory. Yet if I run the application manually myself, it runs fine.
Is there some sort of limiting factor on system() and the memory available to the process it launches?
EDIT
Okay, to elaborate: we have an automatic processing system that basically takes an input file and runs a bunch of applications on it. This has worked fine for the last 12 years or so, but now we are dealing with larger and larger images (remote sensing) using ArcEngine (ESRI's image manipulation SDK). The flow is basically the following:
Input file comes in -> gets picked up by the processing system -> a set of predefined tasks is executed one after another by calling system().
Now when it comes to the ESRI application, the program crashes when attempting to read the image into memory. I cannot work around this by partially reading the image file, because that is simply how their SDK works. I also did a test with a very simple C program that forcefully allocates memory until it fails (a sketch of the idea is below); it fails almost exactly at 512 MB. I've looked around on the internet and cannot find anything. =/
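For reference, the allocation test was along these lines. This is a reconstruction of the idea rather than the exact program: grab 1 MB at a time and touch each block so the pages are actually committed.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const size_t chunk = 1024 * 1024;   /* allocate 1 MB at a time */
    unsigned long mb = 0;

    for (;;) {
        char *p = malloc(chunk);
        if (p == NULL)
            break;                      /* out of memory / address space */
        memset(p, 0xAA, chunk);         /* touch the block so it is really committed */
        mb++;
    }
    printf("malloc failed after ~%lu MB\n", mb);
    return 0;
}
```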
EDIT 2
I just did some funky tests. I wrote a small C program that calls the application via system() (a sketch is below) and ran that: it crashes in exactly the same place. When I ran this program the machine had 2.5 GB of free memory (out of 3 GB). I then wrote a Python script that launches the application with subprocess.Popen instead, and it worked fine. Adding the Python script to the automatic processing system also runs fine.
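The C wrapper was nothing more than this; esri_task.exe input.img is a placeholder for whatever command the pipeline actually runs:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Launch the task the same way the processing system does.
       The command line below is a placeholder, not the real one. */
    int rc = system("esri_task.exe input.img");
    printf("system() returned %d\n", rc);
    return rc;
}
```

So the same child dies under system() (which goes through cmd.exe) but survives under subprocess.Popen (which, with the default shell=False, launches the child directly via CreateProcess).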
What could possibly be doing this?
EDIT 3
The Python script and the processing system both run as the same user. The only difference is that the processing system runs as a service, logged in as that user.
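In case it helps narrow down where the cap comes from, a small diagnostic like the one below (just a suggestion, not something from the tests above; GlobalMemoryStatusEx is standard Win32) could be dropped into the pipeline to print the address-space limits each launch context actually hands the child:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);

    if (GlobalMemoryStatusEx(&ms)) {
        /* %I64u is the MSVC format specifier for unsigned 64-bit values */
        printf("total virtual : %I64u MB\n", ms.ullTotalVirtual / (1024 * 1024));
        printf("avail virtual : %I64u MB\n", ms.ullAvailVirtual / (1024 * 1024));
        printf("avail physical: %I64u MB\n", ms.ullAvailPhys   / (1024 * 1024));
    } else {
        printf("GlobalMemoryStatusEx failed: %lu\n", GetLastError());
    }
    return 0;
}
```

Running it once from the service-driven system() path and once from the Python path should show whether the service context is what caps the child near 512 MB.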