views:

300

answers:

5

How do we determine where the mistake is in our code that causes a segmentation fault?

After writing some code, how do I determine where the segmentation fault happens? Can gcc point me to the mistake (or typing error) at a particular line number -- that is, to the line that caused the segmentation fault?

+6  A: 

Gcc can't do that, but gdb sure can. Compile your program using the -g switch, like this:

gcc -g -Wall -o program program.c

Then use gdb:

gdb ./program
run
[..segfault happens here..]
where
[offending code is shown here]

Here is a nice tutorial to get you started with gdb.

nc3b
Using gdb's backtrace command (`bt`) will quickly find where it happened.
nategoose
@nategoose: True :)
nc3b
Note that where the segfault occurs is generally only a clue as to where "the mistake which causes" it is in the code. An important clue, but it's not necessarily where the problem resides.
mpez0
You can also use `bt full` to get more details.
robUK
A: 

This is a bit of a brute-force method that you would want to control with preprocessor debug flags, but you could put a flushed cout after each line and match the output up against the code. It's simple, and reusable.

glowcoder
Better than doing this to each line (what if you have a MLoC project?), do it selectively, using binary approximation to narrow it down.
sbi
How can we use flush? For example, should I use it like `flush(cout)`?
gcc
If you don't want to flush it every time, use `cerr` instead; `cerr` is not buffered.
nc3b
@nc3b: `cerr` is only for C++. This is C, so use the `stderr` `FILE` pointer.
nategoose
@nategoose: you are right. He mentioned `cout` in his comment so I guess he's already using c++.
nc3b
+1  A: 

You could also use a core dump and then examine it with gdb. To get useful information you also need to compile with the -g flag.

Whenever you get the message

 Segmentation fault (core dumped)

a core file is written into your current directory. And you can examine it with the command

 gdb your_program core_file

The file contains the state of the memory when the program crashed. A core dump can be useful during the deployment of your software.

Make sure your system doesn't set the core dump file size to zero. You can change the size with `ulimit -c`. Be aware that core dumps can become huge.

Lucas
A: 

Lucas's answer about core dumps is good. In my .cshrc I have:

alias core 'ls -lt core; echo where | gdb -core=core -silent; echo "\n"'

to display the backtrace by entering 'core', along with the date stamp, to ensure I am looking at the right file :(.

Added: If there is a stack corruption bug, then the backtrace applied to the core dump is often garbage. In this case, running the program within gdb can give better results, as per the accepted answer (assuming the fault is easily reproducible). And also beware of multiple processes dumping core simultaneously; some OS's add the PID to the name of the core file.

Joseph Quinsey
and don't forget `ulimit -c unlimited` to enable core dumps in the first place.
James Morris
@James: Correct. Lucas already mentioned this. And for those of us who are still stuck in the csh, use 'limit'. And I've never been able to read the CYGWIN stackdumps (but I haven't tried for 2 or 3 years).
Joseph Quinsey
+2  A: 

Also, you can give Valgrind a try: if you install Valgrind and run `valgrind --leak-check=full ./your_program`, it will run your program and display stack traces for any segfaults, as well as any invalid memory reads or writes and memory leaks. It's really quite useful.

jwkpiano1
+1 , Valgrind is so much faster / easier to use to spot memory errors. On non-optimized builds with debugging symbols, it tells you _exactly_ where a segfault happened and why.
Tim Post