The trick to analyzing the time complexity of Euclid's algorithm is to follow what happens over two iterations:
a', b' := a % b, b % (a % b)
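For concreteness, here's a minimal sketch of the loop being analyzed (Python; the name euclid_gcd is just an illustrative choice):

    def euclid_gcd(a, b):
        # One iteration is the step (a, b) -> (b, a % b).
        # Two consecutive iterations give the double-step above:
        # (a, b) -> (a % b, b % (a % b)).
        while b != 0:
            a, b = b, a % b
        return a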
Now a and b will both decrease, instead of only one, which makes the analysis easier. You can divide it into cases:
- Tiny A: 2*a <= b
- Tiny B: 2*b <= a
- Small A: 2*a > b but a < b
- Small B: 2*b > a but b < a
- Equal: a == b
Now show that each case decreases a+b by at least a quarter (a brute-force check follows the list):
- Tiny A: since a < b, a % b = a, so b' = b % a < a <= b/2. So b is cut at least in half, and b is at least half of a+b, so a+b decreases by at least 25%.
- Tiny B: a' = a % b < b <= a/2. So a is cut at least in half, and a is at least half of a+b, so a+b decreases by at least 25%.
- Small A: since b/2 < a < b, b will become b % a = b - a, which is less than b/2. So b is cut by more than half, and b is more than half of a+b, so a+b decreases by at least 25%.
- Small B: since a/2 < b < a, a will become a % b = a - b, which is less than a/2. So a is cut by more than half, and a is more than half of a+b, so a+b decreases by at least 25%.
- Equal: a % b == 0, so the next step produces the pair (b, 0) and the algorithm terminates immediately; the bound holds trivially.
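As referenced above, here's a quick brute-force sanity check of the case analysis (a sketch, not part of the proof; the range 300 is arbitrary). It verifies that whenever the double-step is defined (i.e., a % b != 0), it removes at least 25% of a+b:

    # Check: wherever the double-step actually runs,
    # (a, b) -> (a % b, b % (a % b)) leaves at most 75% of a+b.
    for a in range(1, 301):
        for b in range(1, 301):
            if a % b == 0:
                continue  # Equal case or b divides a: next step hits 0
            a2, b2 = a % b, b % (a % b)
            assert a2 + b2 <= 0.75 * (a + b), (a, b)
    print("every double-step decreases a+b by at least 25%")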
Therefore, by case analysis, every double-step decreases a+b by at least 25%, i.e., leaves at most 3/4 of it. Since a+b stays at least 1 until we hit 0, the total number of double-steps (S) must satisfy (4/3)^S <= A+B. Now just work it:
(4/3)^S <= A+B
S <= lg[4/3](A+B)
S is O(lg[4/3](A+B))
S is O(lg(A+B))
S is O(lg(A*B)) //because A*B >= A+B for A, B >= 2
S is O(lg(A)+lg(B))
//Input size N is lg(A) + lg(B)
S is O(N)
Therefore, because the number of iterations is linear in the input size, and the running time is linear in the number of iterations (assuming each mod takes constant time), the total running time is linear in the input size.
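To see the bound in action, here's a small empirical check (a sketch; count_steps and fib are hypothetical helpers). S counts double-steps, so the single-step count is at most about 2*lg[4/3](A+B); consecutive Fibonacci numbers are used because they're the classic worst case for Euclid's algorithm:

    import math

    def count_steps(a, b):
        # Count single steps (a, b) -> (b, a % b) until b hits 0.
        steps = 0
        while b != 0:
            a, b = b, a % b
            steps += 1
        return steps

    def fib(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    for n in range(10, 51, 10):
        a, b = fib(n + 1), fib(n)
        bound = 2 * math.log(a + b, 4 / 3)  # ~2S single steps
        print(n, count_steps(a, b), round(bound, 1))

The bound is loose but grows linearly in the bit length of the inputs, which matches the analysis above.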