I am an intern who was offered the task of porting a test application from Solaris to Red Hat. The application is written in Ada. It works just fine on the Unix side. I compiled it on the linux side, but now it is giving me a seg fault. I ran the debugger to see where the fault was and got this:

Warning: In non-Ada task, selecting an Ada task.
=> runtime tasking structures have not yet been initialized.
<non-Ada task> with thread id 0b7fe46c0
process received signal "Segmentation fault" [11]
task #1 stopped in _dl_allocate_tls
at 0870b71b: mov edx, [edi] ;edx := [edi]

This seg fault happens before any calls are made or anything is initialized. I have been told that tasks in Ada get started before the rest of the program, and that the problem could be with a task that is running.

But here is the kicker. This program just generates some code for another program to use. The OTHER program, when compiled under linux gives me the same kind of seg fault with the same kind of error message. This leads me to believe there might be some little tweak I can use to fix all of this, but I just don't have enough knowledge about Unix, Linux, and Ada to figure this one out all by myself.

A: 

Off the top of my head: if the code was used on SPARC machines and you're now running on an x86 machine, you may be running into endianness problems.

It's not much help, but it is a common gotcha when going multi-platform.

Alan
I was literally just dealing with a segfault in an Ada app on Linux that was caused by blowing out the task stack. In early tests only a few KB were going on the stack (from an array slice) and all was well; in later tests a few MB were going on the stack from the array. The short-term workaround was to increase the task stack size; the longer-term fix is to not put the data on the stack :-)
Marc C
+1  A: 

This is a total shot in the dark, but tasks can blow up like this at startup if they try to allocate too much local memory on the stack. Your main program can safely use the system stack, but each task's stack has to be allocated at startup from dynamic memory, so typically your runtime has a default stack size for tasks. If a task declares a large local array, it can easily blow past that limit. I've had it happen to me before.

There are multiple ways to fix this. One way is to move all your task-local data into package global areas. Another is to dynamically allocate it all.
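To make the failure mode concrete, here's a minimal sketch (all names and the array size are hypothetical) of a task whose local array would overflow a typical default task stack, along with the heap-allocation fix described above:

```ada
procedure Stack_Demo is
   --  Roughly 40 MB of Integers; far more than a typical
   --  default task stack.
   type Big_Array is array (1 .. 10_000_000) of Integer;
   type Big_Array_Access is access Big_Array;

   task Worker;

   task body Worker is
      --  Declaring "Data : Big_Array;" here would try to put the
      --  whole array on the task's stack and could crash at task
      --  elaboration, before the main program body even runs.
      --  Allocating from the heap avoids that:
      Data : constant Big_Array_Access := new Big_Array;
   begin
      Data (1) := 42;
   end Worker;
begin
   null;  --  Worker runs (and dies or survives) before we get here.
end Stack_Demo;
```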

If you can figure out how much memory would be enough, you have a couple more options. You can make the task a task type, and then use a

for My_Task_Type_Name'Storage_Size use Some_Huge_Number;

statement. You can also use pragma Storage_Size, which goes inside the task type's definition and takes the size expression (not the task type name) as its argument, but I think the "for" statement is preferred.
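For reference, a sketch of both spellings in context (the task type names and the 16 MB figure are placeholders; pick a size appropriate for your data):

```ada
--  Attribute definition clause, following the task type declaration:
task type My_Task_Type_Name;
for My_Task_Type_Name'Storage_Size use 16 * 1024 * 1024;

--  Pragma form, written inside the task definition itself:
task type My_Other_Task_Type is
   pragma Storage_Size (16 * 1024 * 1024);
end My_Other_Task_Type;
```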

Lastly, with GNAT you can also change the default task stack size with the -d switch to gnatbind.
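For example, assuming a GNAT toolchain, the binder switch can be passed either through gnatmake or to gnatbind directly (the 8 MB size and file names are placeholders):

```shell
# Pass -d through gnatmake to the binder; -d8m sets an 8 MB default
# stack for tasks that have no explicit Storage_Size of their own.
gnatmake main.adb -bargs -d8m

# Or invoke the binder and linker steps yourself:
gnatbind -d8m main.ali
gnatlink main.ali
```

Note this only changes the default; a per-task Storage_Size clause still takes precedence.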

T.E.D.
A: 

Hunch: the linking step didn't go right. Perhaps the wrong run-time startup library got linked in?

(How likely to find out what the real trouble was, months after the question was asked?)

DarenW