views: 10324
answers: 4
I have a process in linux that's getting a segmentation fault. How can I tell it to generate a core dump when it fails?

+24  A: 

This depends on what shell you are using. If you are using bash, then the ulimit command controls several settings relating to program execution, such as whether your programs may dump core. If you type

ulimit -c unlimited

then that will tell bash that its programs can dump cores of any size. You can specify a numeric limit instead of unlimited if you want (bash interprets the number in 1024-byte blocks), but in practice this shouldn't be necessary, since the size of core files will probably never be an issue for you.

In tcsh, you'd type

limit coredumpsize unlimited
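
A minimal bash session showing the idea might look like this (the program name is illustrative; the core file name depends on your kernel's core pattern settings):

```shell
# Raise the core-size limit for this shell and its children
ulimit -c unlimited

# Confirm the new limit; should now report "unlimited"
ulimit -c

# Run the crashing program (hypothetical name); on a segfault
# the kernel writes a core file in its working directory
./myprog

# Look for the dump, e.g. "core" or "core.<pid>"
ls core*
```

Note that the limit only applies to processes started from this shell, so you need to run it before launching the program.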
Eli Courtwright
Do you know how to do this in tcsh?
Nathan Fellman
I've updated my answer to include a tcsh example
Eli Courtwright
+3  A: 

By default you will get a core file. Check to see that the current directory of the process is writable, or no core file will be created.
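
One quick way to check this on Linux, using the /proc filesystem (here we inspect the current shell itself; substitute the pid of the process you care about):

```shell
# Resolve a process's current working directory via the /proc
# cwd symlink (Linux-specific), then test whether it is writable.
pid=$$                            # placeholder: this shell's own pid
cwd=$(readlink "/proc/$pid/cwd")  # where a core file would be written
if [ -w "$cwd" ]; then
    echo "core files can be written to $cwd"
else
    echo "core files cannot be written to $cwd"
fi
```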

Mark Harrison
By "current directory of the process" do you mean the $cwd at the time the process was run? For example, given ~/abc> /usr/bin/cat def, if cat crashes, is the current directory in question ~/abc or /usr/bin?
Nathan Fellman
~/abc. Hmm, comments have to be 15 characters long!
Mark Harrison
This would be the current directory at the time of the SEGV. Also, processes running with a different effective user and/or group than the real user/group will not write core files.
Darron
+2  A: 

What I did in the end was attach gdb to the process before it crashed, and when it got the segfault I executed the generate-core-file command. That forced the generation of a core dump.
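
The same thing can be done non-interactively from the command line, roughly like this (1234 is a placeholder pid; you need permission to ptrace the target, e.g. the same user or root):

```shell
# Attach gdb to a running process in batch mode, dump its state
# with generate-core-file, then detach so the process keeps running.
gdb -p 1234 \
    -batch \
    -ex generate-core-file \
    -ex detach
# gdb writes a file named core.1234 in the current directory
```

Unlike the ulimit approach, this works even while the process is still healthy, which makes it handy for snapshotting a hung process.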

Nathan Fellman
+3  A: 
tommieb75