I am currently developing a program that solves (if possible) any given labyrinth of dimensions from 3x4 to 26x30. I represent the graph using both an adjacency matrix (sparse) and an adjacency list. I would like to know how to output the total time taken by the DFS to find the solution using one method and then the other. Programmatically, how could I produce such a benchmark?
Assuming you have suitable methods, this should be fairly simple. Just wrap both methods in a timer and repeat them many times for statistical significance.
-- test method with adjacency matrix
start = TimeNow()
repeat 1000 times
    DepthFirstSearchWithAdjMatrix()
timePerSearch = (TimeNow() - start) / 1000.0

-- test method with adjacency list
start = TimeNow()
repeat 1000 times
    DepthFirstSearchWithAdjList()
timePerOtherSearch = (TimeNow() - start) / 1000.0

if timePerSearch < timePerOtherSearch
    print "Adj matrix is better than adj list"
else
    print "Adj matrix is worse than adj list"
A useful table for comparing various graph implementations:
OPERATION            EDGE LIST   ADJ LIST              ADJ MATRIX
degree(v)            O(m)        O(d(v))               O(n)
incidentEdges(v)     O(m)        O(d(v))               O(n)
areAdjacent(v1,v2)   O(m)        O(min(d(v1),d(v2)))   O(1)
addVertex(v)         O(1)        O(1)                  O(n²)
addEdge(v1,v2)       O(1)        O(1)                  O(1)
removeVertex(v)      O(m)        O(m)                  O(n²)
removeEdge(e)        O(m)        O(d(v1)+d(v2))        O(1)
memory               O(m+n)      O(m+n)                O(n²)
where m is the number of edges, n is the number of vertices, and d(v) is the number of elements in the adjacency list of vertex v. The adjacency-matrix implementation costs O(n²) to add and remove vertices because you have to reallocate the matrix.
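To make that O(n²) concrete: adding a vertex to a dense matrix means allocating a bigger matrix and copying every existing cell. A minimal sketch, assuming the matrix is stored as a plain boolean[][] (your sparse representation will differ):

// Growing an n x n boolean adjacency matrix by one vertex.
// Copying every existing cell is what makes addVertex O(n²).
static boolean[][] addVertex(boolean[][] adj) {
    int n = adj.length;
    boolean[][] bigger = new boolean[n + 1][n + 1];
    for (int i = 0; i < n; i++) {
        System.arraycopy(adj[i], 0, bigger[i], 0, n); // O(n) per row
    }
    return bigger; // n rows of length n copied => O(n²) total
}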
I just prepared this for an article; that's why I have it ready :)
This is not directly related to benchmarking: usually you study which operations you will need most and choose the right implementation for your needs, so it's a sort of "theoretical" benchmark that you do by studying what you are going to implement. Otherwise, you can simply measure the time your code takes to do the whole job with both implementations and compare.
EDIT: since you use a sparse matrix implementation, the complexity of the operations could change slightly (mostly getting a little worse, since you are trading speed for memory).
EDIT2: OK, now that I know this is Java it will be fairly simple:
long before = System.nanoTime();
/* execution of your algorithm */
long elapsed = System.nanoTime() - before;  // elapsed time in nanoseconds
The answer is in nanoseconds and accuracy is not guaranteed, so use it carefully. Averaging many runs (checking that the variance is low and discarding the results farthest from the average) will make your results more reliable.
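For example, here is a sketch of that averaging-and-discarding step; the two-standard-deviation cutoff is an arbitrary choice of mine, not a fixed rule:

// Time many runs, then average only the samples within two
// standard deviations of the mean, discarding outliers.
static double robustAverageNanos(Runnable algorithm, int runs) {
    long[] samples = new long[runs];
    for (int i = 0; i < runs; i++) {
        long before = System.nanoTime();
        algorithm.run();
        samples[i] = System.nanoTime() - before;
    }

    double mean = 0;
    for (long s : samples) mean += s;
    mean /= runs;

    double variance = 0;
    for (long s : samples) variance += (s - mean) * (s - mean);
    double stdDev = Math.sqrt(variance / runs);

    double sum = 0;
    int kept = 0;
    for (long s : samples) {
        if (Math.abs(s - mean) <= 2 * stdDev) {
            sum += s;
            kept++;
        }
    }
    return kept > 0 ? sum / kept : mean;
}

You would call it once per implementation, e.g. robustAverageNanos(() -> depthFirstSearchWithAdjMatrix(), 1000), and compare the two averages.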