I am trying to diagnose this exception:

System.Runtime.InteropServices.COMException (0x80070008): Not enough storage is available to process this command. (Exception from HRESULT: 0x80070008)
at System.Runtime.Remoting.RemotingServices.AllocateUninitializedObject(RuntimeType objectType)
at System.Runtime.Remoting.RemotingServices.AllocateUninitializedObject(Type objectType)
at System.Runtime.Remoting.Activation.ActivationServices.CreateInstance(Type serverType)
at System.Runtime.Remoting.Activation.ActivationServices.IsCurrentContextOK(Type serverType, Object[] props, Boolean bNewObj)
at Oracle.DataAccess.Client.CThreadPool..ctor()
at Oracle.DataAccess.Client.OracleCommand.set_CommandTimeout(Int32 value)
...

It does not look like any of the normal types of "storage" have hit any limits. The application is using about 400 MB of memory, 70 threads and 2,000 handles, and the hard drive has many GB free. The machine is running Windows Server 2003 Enterprise 32-bit with 16 GB of RAM, so memory shouldn't be an issue.
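
For reference, figures like these can be sampled from inside the process with System.Diagnostics; a minimal sketch (logging to the console here just for illustration):

    using System;
    using System.Diagnostics;

    class ResourceSnapshot
    {
        static void Main()
        {
            // Snapshot of the current process, matching the figures quoted above.
            using (Process p = Process.GetCurrentProcess())
            {
                Console.WriteLine("Private bytes : {0:N0} MB", p.PrivateMemorySize64 / (1024 * 1024));
                Console.WriteLine("Working set   : {0:N0} MB", p.WorkingSet64 / (1024 * 1024));
                Console.WriteLine("Threads       : {0}", p.Threads.Count);
                Console.WriteLine("Handles       : {0}", p.HandleCount);
            }
        }
    }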

The application is running as a Windows service, so no GDI objects are in use; running out of GDI handles is a common cause of this exception.

Database connections, commands and readers are all wrapped in using blocks, so they should be getting cleaned up correctly.
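
For context, the pattern in use looks roughly like this; the connection string, SQL text and helper name are hypothetical placeholders, not code from the actual application:

    using Oracle.DataAccess.Client;

    class DataAccessExample
    {
        // Hypothetical helper illustrating the using-block pattern described above.
        static void ReadRows(string connectionString)
        {
            using (OracleConnection conn = new OracleConnection(connectionString))
            using (OracleCommand cmd = new OracleCommand("SELECT id FROM some_table", conn))
            {
                cmd.CommandTimeout = 30;   // the setter that throws in the stack trace above
                conn.Open();
                using (OracleDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // process each row
                    }
                }
            }
        }
    }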

UPDATE: Reducing the number of threads we were using from 12 to 4 seems to have fixed the issue. We managed to run with no errors for over 24 hours; before that we were lasting between 4 and 8 hours.

UPDATE 2: I never figured out which resource we were running out of, but reducing the number of threads seems to have fixed the problem. Or at least hidden it.
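
For what it's worth, the workaround amounts to capping how many jobs hit the database at once; a sketch of one way to do that with a Semaphore (the cap of 4 and the worker method are illustrative, not the actual code):

    using System.Threading;

    class ThrottledWorkers
    {
        // Allow at most 4 concurrent jobs, mirroring the thread-count reduction above.
        static readonly Semaphore Throttle = new Semaphore(4, 4);

        static void RunJob(object state)
        {
            Throttle.WaitOne();
            try
            {
                // DoOracleWork(state);  // hypothetical per-job method
            }
            finally
            {
                Throttle.Release();
            }
        }

        static void QueueJobs(object[] jobs)
        {
            foreach (object job in jobs)
            {
                ThreadPool.QueueUserWorkItem(RunJob, job);
            }
        }
    }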

A: 

It looks like something is looping and instantiating too many objects, so you are running out of heap memory. Look for loops in the calling code.

Richard Hein
Actually, forget it; Googling makes it look like this is an Oracle memory issue on 32-bit machines, so I doubt I can help at the moment.
Richard Hein
+2  A: 

Another factor you need to consider is memory fragmentation.

The largest single allocation you can perform is limited by the largest contiguous block of memory available to the process. This is almost always less than the total amount of free memory in the process because of fragmentation: an allocated block sitting between two free blocks "fragments" the space.

The more fragmentation you have in the process, the smaller the largest available contiguous block of memory will be. I've seen situations where there was close to 1 GB of memory free but the largest contiguous block was around 10 MB.

Have you checked for memory fragmentation in this process?
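
One way to check from inside the process is to walk the address space with VirtualQuery and track the largest MEM_FREE region; a rough sketch, assuming a 32-bit process (tools like VMMap report the same thing):

    using System;
    using System.Runtime.InteropServices;

    class FragmentationCheck
    {
        [StructLayout(LayoutKind.Sequential)]
        struct MEMORY_BASIC_INFORMATION
        {
            public IntPtr BaseAddress;
            public IntPtr AllocationBase;
            public uint AllocationProtect;
            public IntPtr RegionSize;
            public uint State;
            public uint Protect;
            public uint Type;
        }

        const uint MEM_FREE = 0x10000;

        [DllImport("kernel32.dll")]
        static extern IntPtr VirtualQuery(IntPtr lpAddress, out MEMORY_BASIC_INFORMATION lpBuffer, IntPtr dwLength);

        // Walk the user-mode address space and report the largest contiguous free region.
        static void Main()
        {
            long address = 0;
            long largestFree = 0;
            MEMORY_BASIC_INFORMATION mbi;

            while (address < 0x7FFF0000 &&
                   VirtualQuery((IntPtr)address, out mbi,
                                (IntPtr)Marshal.SizeOf(typeof(MEMORY_BASIC_INFORMATION))) != IntPtr.Zero)
            {
                long regionSize = mbi.RegionSize.ToInt64();
                if (mbi.State == MEM_FREE && regionSize > largestFree)
                {
                    largestFree = regionSize;
                }
                address = mbi.BaseAddress.ToInt64() + regionSize;
            }

            Console.WriteLine("Largest free block: {0:N0} MB", largestFree / (1024 * 1024));
        }
    }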

JaredPar
Any advice on how to detect memory fragmentation? I am currently trying DebugDiag's memory leak detection against the process running on a test server.
Darryl Braaten
Fragmentation doesn't seem to be the issue; the largest free block was 500 MB while the application was having trouble. This was gathered using the Sysinternals VMMap tool.
Darryl Braaten
+1  A: 

You did eliminate all the obvious sources of this error for your process, which makes it likely that this is actually an operating system issue. The one resource that is always under pressure on a server is the kernel memory pool. It is easy to check: TaskMgr.exe displays it on the Performance tab.
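
The same pool figures can also be read programmatically from the "Memory" performance counter category; a small sketch (assuming the standard English counter names):

    using System;
    using System.Diagnostics;

    class PoolUsage
    {
        static void Main()
        {
            // Kernel pool sizes, the same figures Task Manager shows on the Performance tab.
            using (PerformanceCounter nonPaged = new PerformanceCounter("Memory", "Pool Nonpaged Bytes"))
            using (PerformanceCounter paged = new PerformanceCounter("Memory", "Pool Paged Bytes"))
            {
                Console.WriteLine("Pool Nonpaged Bytes: {0:N0} MB", nonPaged.NextValue() / (1024 * 1024));
                Console.WriteLine("Pool Paged Bytes   : {0:N0} MB", paged.NextValue() / (1024 * 1024));
            }
        }
    }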

It somewhat matches your call stack; it looks like the Oracle provider is creating threads for a thread pool. A thread takes a megabyte in your process and 24 KB in the kernel memory pool, used as a stack when the thread switches to kernel mode.

Background info is available here.

Hans Passant
This doesn't appear to be the issue, as there are over 40 MB of free non-paged pool and 100 MB of paged pool. I also looked at the perfmon counters for all the "Objects" (events, semaphores, etc.) and we don't seem to be leaking those either.
Darryl Braaten
Well, time to give Oracle a call.
Hans Passant