I have a project with a huge amount of auto-generated code, which we build into a static library before linking into the final executable. We use gcc/gnat 5.04a. There are so many files that we have to break the job into batches and invoke ar multiple times to construct the library (in order to avoid the command-line length limitation), e.g.:

 [echo] Archiving codegen                   
 [echo] Deleting old codegen archive                     
   [ar] Using ar found in /usr/bin          
   [ar] Batch 1 of 7 completed in 37.871 s  
   [ar] Batch 2 of 7 completed in 55.796 s  
   [ar] Batch 3 of 7 completed in 89.709 s  
   [ar] Batch 4 of 7 completed in 256.894 s 
   [ar] Batch 5 of 7 completed in 196.704 s 
   [ar] Batch 6 of 7 completed in 248.334 s 
   [ar] Batch 7 of 7 completed in 243.759 s 
   [ar] Archiving took: 1129.067 s          
   [ar] Using ranlib found in /usr/bin      
   [ar] Indexing took: 247.223 s            
 [echo] Done with codegen

We are looking for potential speed improvements. It appears that, as the archive grows, each batch takes longer and longer, presumably because ar has more to search (for updates) before adding objects. This appears to be why deleting the archive first is quicker than just updating the old archive in place. In our quest for more speed, we use the flags "qcS" with the ar command. According to the man page, "q" (which should be quick append) is really a synonym for "r" ("insert with replacement"), "c" creates the archive (nothing special there), and "S" skips generating an index (which we make up for by running ranlib once at the end).
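Roughly, the operation looks like the sketch below (simplified; the paths, batch size, and find pattern are illustrative rather than our actual build script, which drives ar from our build tool):

    rm -f libcodegen.a                      # delete the old archive first;
                                            # updating in place is slower
    find gen/ -name '*.o' -print0 |
      xargs -0 -n 500 ar qcS libcodegen.a   # append objects in batches of
                                            # ~500, skipping the index (S)
    ranlib libcodegen.a                     # build the symbol index once,
                                            # at the end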

Is there any convenient way, using built-in tools, to make this operation faster? If "quick append" mode actually worked, that would probably be what we want, but alas.

+1  A: 

We found that a huge part of the timing issue was the location of the files being archived. The numbers above are for object and archive files located on a NAS device. Doing the same operation on a local hard disk (temporary storage) reduces the time to ~20-40 seconds. Copying all the files over, archiving locally, and copying the result back takes longer than archiving directly on the NAS, but we're looking at moving our entire build process to local temporary storage, which should improve performance substantially.

Schamp