I am reading Nicolai Josuttis's book on the C++ STL algorithms. For many algorithms such as stable_sort(), he mentions that the complexity of the algorithm is n * log(n) if enough memory is available, and n * log(n) * log(n) otherwise. My question is: how does memory availability affect the complexity, and how does the STL detect such a situation?

+12  A: 

Looking at gcc's STL, you'll find inplace_merge in stl_algo.h. This is the traditional merge step of merge sort, running in O(N) and using a buffer the same size as the input. That buffer is allocated through _Temporary_buffer (from stl_tempbuf.h), which invokes get_temporary_buffer, which ultimately invokes new. Should that throw an exception, the exception gets caught and the buffer is null - which is the "not sufficient memory" case. The merge then falls back to __merge_without_buffer, which is O(N lg N). Since the recursion depth of merge sort is O(lg N), you get O(N lg N) in the "traditional" buffered mergesort, and O(N lg N lg N) in the version without the buffer.

Martin v. Löwis
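To make the mechanism concrete, here is a minimal sketch of that allocate-or-degrade pattern. It is not the libstdc++ source: `merge_with_fallback` and `merge_by_rotation` are made-up names, and a try/catch around a `std::vector` allocation stands in for `get_temporary_buffer` catching the exception from `new`.

```cpp
#include <algorithm>
#include <iterator>
#include <new>
#include <vector>

// Bufferless merge by rotations, in the spirit of __merge_without_buffer:
// O(N lg N) per merge, but no extra memory beyond the recursion stack.
template <typename Iter>
void merge_by_rotation(Iter first, Iter mid, Iter last) {
    auto n1 = std::distance(first, mid);
    auto n2 = std::distance(mid, last);
    if (n1 == 0 || n2 == 0) return;
    if (n1 + n2 == 2) {               // two elements: swap if out of order
        if (*mid < *first) std::iter_swap(first, mid);
        return;
    }
    // Split the larger run, binary-search the matching cut in the other
    // run, rotate the middle section into place, then recurse on both halves.
    Iter cut1, cut2;
    if (n1 > n2) {
        cut1 = std::next(first, n1 / 2);
        cut2 = std::lower_bound(mid, last, *cut1);
    } else {
        cut2 = std::next(mid, n2 / 2);
        cut1 = std::upper_bound(first, mid, *cut2);
    }
    Iter new_mid = std::rotate(cut1, mid, cut2);
    merge_by_rotation(first, cut1, new_mid);
    merge_by_rotation(new_mid, cut2, last);
}

// Buffered merge when memory allows, rotation merge when it doesn't.
template <typename Iter>
void merge_with_fallback(Iter first, Iter mid, Iter last) {
    using T = typename std::iterator_traits<Iter>::value_type;
    try {
        std::vector<T> buf;                        // reserve() may throw
        buf.reserve(std::distance(first, last));   // bad_alloc, just as new
                                                   // does inside the library
        std::merge(first, mid, mid, last, std::back_inserter(buf));
        std::copy(buf.begin(), buf.end(), first);  // O(N) buffered path
    } catch (const std::bad_alloc&) {
        merge_by_rotation(first, mid, last);       // O(N lg N) fallback
    }
}
```

Applied at each of the O(lg N) levels of the sort's recursion, the buffered branch gives the O(N lg N) total and the fallback branch gives O(N lg N lg N).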
+1. The standard guarantees the complexity for both enough-memory and not-enough-memory cases in 25.3.4/7, so all implementations are obliged to determine if there is enough memory somehow. Although they are not bound to do so by catching exceptions thrown by `new`, that seems like a sensible way to do it. :)
j_random_hacker
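For reference, here is a small usage sketch of std::get_temporary_buffer itself, the standard-facing API the answer mentions (it lives in `<memory>`, was deprecated in C++17, and was removed in C++20, so compile with an earlier standard). Note that it reports how much memory it actually obtained rather than throwing, which gives an implementation a second, exception-free way to make the decision:

```cpp
#include <cstddef>
#include <cstdio>
#include <memory>

int main() {
    // Ask for room for one million ints. The library may hand back
    // fewer elements - or none at all - instead of throwing bad_alloc.
    std::pair<int*, std::ptrdiff_t> buf =
        std::get_temporary_buffer<int>(1000000);

    if (buf.second < 1000000)
        std::printf("got only %td elements; a stable_sort would take "
                    "the O(N lg N lg N) path\n", buf.second);

    if (buf.first)
        std::return_temporary_buffer(buf.first);
}
```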