I have a merge function which combines two trees into one in O(log n) time, and a listToTree function which converts an initial list of elements into singleton trees and repeatedly calls merge on each successive pair of trees until only one tree remains. The function signatures and relevant implementations are as follows:
merge :: Tree a -> Tree a -> Tree a  -- O(log n), where n is the size of the input trees
singleton :: a -> Tree a             -- O(1)
empty :: Tree a                      -- O(1)

listToTree :: [a] -> Tree a          -- supposedly O(n)
listToTree = listToTreeR . map singleton
listToTreeR :: [Tree a] -> Tree a
listToTreeR []  = empty
listToTreeR [x] = x
listToTreeR xs  = listToTreeR (mergePairs xs)

mergePairs :: [Tree a] -> [Tree a]
mergePairs []       = []
mergePairs [x]      = [x]
mergePairs (x:y:xs) = merge x y : mergePairs xs
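For anyone who wants to experiment, here is a minimal self-contained instantiation of the snippet above. The Tree type and merge here are hypothetical stand-ins (merge just pairs its two arguments into a node, which is O(1), not the book's O(log n) merge), chosen only so the code compiles and the shape of the recursion can be inspected:

```haskell
-- Stand-in Tree so the skeleton above is runnable; NOT the book's implementation.
data Tree a = Empty | Leaf a | Node (Tree a) (Tree a)
  deriving (Show, Eq)

empty :: Tree a
empty = Empty

singleton :: a -> Tree a
singleton = Leaf

-- Stand-in merge: just pair the trees up (O(1) here, O(log n) in the book).
merge :: Tree a -> Tree a -> Tree a
merge = Node

listToTree :: [a] -> Tree a
listToTree = listToTreeR . map singleton

listToTreeR :: [Tree a] -> Tree a
listToTreeR []  = empty
listToTreeR [x] = x
listToTreeR xs  = listToTreeR (mergePairs xs)

mergePairs :: [Tree a] -> [Tree a]
mergePairs []       = []
mergePairs [x]      = [x]
mergePairs (x:y:xs) = merge x y : mergePairs xs
```

Running listToTree [1..4] produces a balanced tree, Node (Node (Leaf 1) (Leaf 2)) (Node (Leaf 3) (Leaf 4)), which makes the level-by-level halving of the list visible.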
This is a slightly simplified version of exercise 3.3 in Purely Functional Data Structures by Chris Okasaki. According to the exercise, I should now show that listToTree takes O(n) time. Which I can't. :-(
There are trivially ceil(log n) recursive calls to listToTreeR, and hence ceil(log n) calls to mergePairs.
The running time of mergePairs depends on both the length of the list and the sizes of the trees in it. At recursive step h (where h = log n is the first step and h = 1 is the last), the list has length 2^h and each tree has size n/2^h, so mergePairs performs 2^(h-1) merges, each costing log(n/2^h). Each call to mergePairs at step h thus takes time 2^(h-1) * log(n/2^h).
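As a numeric sanity check of that expression (not a proof, and with the O(1) overhead of each merge dropped), one can sum the per-level costs 2^(h-1) * log(n/2^h) over all levels and compare the total against n. The helper names below are my own:

```haskell
-- Per-level cost model for listToTree on n = 2^k singletons:
-- at level h (h = k down to 1) mergePairs performs 2^(h-1) merges,
-- each modeled as costing log2 (n / 2^h) = k - h.
-- Helper names are hypothetical, not from the book.
levelCosts :: Int -> [Double]
levelCosts k =
  [ 2 ^^ (h - 1) * fromIntegral (k - h)  -- 2^(h-1) merges at cost (k - h) each
  | h <- [1 .. k] ]

totalCost :: Int -> Double
totalCost = sum . levelCosts
```

In GHCi, totalCost k stays below 2^k for every k, consistent with the total work being bounded by n rather than n log n.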
I'm having trouble taking this analysis any further. Can anyone give me a hint in the right direction?