You might consider graphing the data and deriving its mathematical function. Run a few trials with, say, 10, 100, 1,000, 10,000, and 100,000 iterations. Using the number of iterations as your x variable and the resulting time as your y variable, plot a graph. From this you can determine the function that describes the code's performance using regression (also known as curve fitting) - I use Graphical Analysis for this.
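As a rough sketch of the trial-collection step, here is one way it could look in Python (the workload `append_n` is just a hypothetical placeholder for whatever code you are measuring; substitute your own):

```python
import time

def time_trials(func, sizes=(10, 100, 1_000, 10_000, 100_000)):
    """Run func(n) once per iteration count and record (n, seconds) pairs."""
    results = []
    for n in sizes:
        start = time.perf_counter()
        func(n)
        results.append((n, time.perf_counter() - start))
    return results

def append_n(n):
    # Toy workload for illustration: append n items to a list.
    xs = []
    for i in range(n):
        xs.append(i)

data = time_trials(append_n)
for n, t in data:
    print(f"{n:>7} iterations: {t:.6f} s")
```

The `(n, t)` pairs are exactly the x and y values you would feed into your plotting/regression tool. In practice you would average several runs per size to smooth out timer noise.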
Repeat the trials with your other data structures or implementations and follow the same graphing procedure, then compare the resulting graphs and functions.
You can also use this data to determine the effective Big O time-complexity for each data structure.
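One common way to get the complexity from such data (an assumption on my part, not something specific to Graphical Analysis) is to fit a line to log(time) versus log(n): for a polynomial-time routine the slope approximates the exponent, so ~1 suggests O(n) and ~2 suggests O(n²). A minimal sketch, here fed synthetic quadratic timings so the expected slope is known:

```python
import math

def estimate_exponent(data):
    """Least-squares slope of log(time) vs log(n).

    For t ~ c * n^k the slope approximates k (1 => O(n), 2 => O(n^2)).
    """
    xs = [math.log(n) for n, _ in data]
    ys = [math.log(t) for _, t in data]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic timings for a quadratic algorithm: t = 1e-9 * n^2.
synthetic = [(n, 1e-9 * n * n) for n in (10, 100, 1_000, 10_000)]
print(round(estimate_exponent(synthetic), 2))  # slope of 2.0 -> O(n^2)
```

Real measurements are noisier than this, so expect the fitted slope to land near, not exactly on, an integer; constant factors and cache effects can also distort the smallest trial sizes.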