Which operations in Common Lisp programs should be considered sufficiently primitive to count as a single "step" in algorithmic analysis? How widely do modern Lisps vary in their implementations?

Certainly arithmetic with small integers would count as a single step, but what about larger numbers? And what about the difference between reverse and nreverse? Specifically, is nreverse theta of reverse? What about all of the array and sequence operations? Also, how do macros figure in? How should I think about macros when analyzing complexity?

+8  A: 
  • Don't try to find a bottom layer of uniform "single steps"; just know what's O(1) and what's not.
  • Addition of "larger numbers" (bignums) should be O(log n), where n is the larger of the integers; that is, time linear in the number of digits. There are many different algorithms for multiplication; I'm not an expert in the field, and this is likely to be implementation-specific.
  • reverse and nreverse can both be implemented O(n) (reverse by a cdr-input-and-cons-output strategy; nreverse by exchanging pointers while cdring). If I understand the unfamiliar term “theta of” correctly, then yes. Note, however, that the CL standard does not make any guarantees about time complexity.
  • Built-in sequence operations are generally implemented by choosing an array- or list-specific implementation depending on the type of the argument, and so should be expected to be the ordinary efficient algorithm for that data type.
  • Macros merely expand into other code. You can look at their expansion, or see what they're documented to do, or make an educated guess about the algorithm their expansion uses. The complexity of the macroexpander is completely irrelevant (unless you're using eval/compile, in which case you have to think about the complexity of the implementation's compiler as well) since it runs at compile time, once; only the expanded code matters.
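
A minimal sketch of the two strategies named above (an illustration, not the actual implementation in any particular Lisp; the function names are made up to avoid shadowing the standard ones):

```lisp
(defun my-reverse (list)
  ;; cdr down the input, consing each element onto an accumulator:
  ;; O(n) time, O(n) fresh conses.
  (let ((result '()))
    (dolist (x list result)
      (push x result))))

(defun my-nreverse (list)
  ;; Destructively re-point each cell's CDR at the previous cell
  ;; while cdring down the list: O(n) time, no new conses.
  (let ((prev '()))
    (loop while list
          ;; (cdr list) <- prev, prev <- list, list <- old (cdr list)
          do (rotatef (cdr list) prev list))
    prev))

;; For macros, inspect the expansion rather than the macro itself;
;; the exact expansion is implementation-dependent:
(macroexpand-1 '(dolist (x xs) (f x)))
;; The DO/TAGBODY loop this produces is what runs at runtime,
;; not the macroexpander.
```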
Kevin Reid
+1 for the 1st bullet.
Pavel Shved
Very thorough, I appreciate it.
Aoriste
Theta-notation is similar to big-O notation, except that it implies that the cost of the operation is bounded both above *and below* by the given expression. While big-O says "the cost of this operation does not grow asymptotically more quickly than the given function", big-theta says "the cost of this operation does not grow asymptotically more quickly than the given function, nor does the given function grow asymptotically more quickly than the cost." In other words, big-theta describes a lower bound on performance as well as an upper bound.
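In symbols (the standard textbook definition, added here for reference):

```latex
f(n) \in \Theta(g(n)) \;\Longleftrightarrow\;
\exists\, c_1, c_2 > 0,\ \exists\, n_0
\ \text{such that}\ \forall n \ge n_0:\quad
c_1\, g(n) \;\le\; f(n) \;\le\; c_2\, g(n)
```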
Joe Carnahan
@Pavel - Did you mean to vote-up the answer and instead voted up a comment?
Joe Carnahan
+1 for the dedicated effort.
Vijay Mathew