views: 62
answers: 2
I am currently developing a program that solves (if possible) any given labyrinth of dimensions from 3x4 to 26x30. I represent the graph using both an adjacency matrix (sparse) and an adjacency list. I would like to know how to output the total time taken by the DFS to find the solution with one method and then the other. Programmatically, how could I produce such a benchmark?

+3  A: 

Assuming you have suitable methods, this should be fairly simple. Just wrap both methods in a timer, and repeat them many times for statistical significance.

--test method with adjacency matrix
start = TimeNow()
repeat 1000 times
    DepthFirstSearchWithAdjMatrix()
timePerSearch = (TimeNow() - start) / 1000.0

--test method with adjacency list
start = TimeNow()
repeat 1000 times
    DepthFirstSearchWithAdjList()
timePerOtherSearch = (TimeNow() - start) / 1000.0

if timePerSearch < timePerOtherSearch
    print "Adj Matrix is better than adj list"
else
    print "Adj Matrix is worse than adj list"
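A hedged Java sketch of the pseudocode above (the harness is mine; the DFS bodies are placeholders where your real matrix- and list-based searches would go):

```java
// Sketch of the timing harness from the pseudocode, in Java.
// The Runnable lambdas stand in for your real DFS methods.
public class DfsBenchmark {

    /** Runs task `runs` times and returns the average wall-clock time in ms. */
    static double timeAverage(Runnable task, int runs) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < runs; i++) {
            task.run();
        }
        return (System.currentTimeMillis() - start) / (double) runs;
    }

    public static void main(String[] args) {
        // Replace these lambdas with your searches, e.g.
        // () -> depthFirstSearchWithAdjMatrix(maze)
        double matrixMs = timeAverage(() -> { /* DFS over adjacency matrix */ }, 1000);
        double listMs   = timeAverage(() -> { /* DFS over adjacency list */ }, 1000);

        if (matrixMs < listMs) {
            System.out.println("Adj matrix is faster than adj list");
        } else {
            System.out.println("Adj matrix is slower than adj list");
        }
    }
}
```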
Martin
@Martin: thanks for your answer, I understand perfectly. Please bear with me, as I have never used anything like that. Do you happen to know how TimeNow() is called in Java?
Carlucho
found it! import java.util.Date; -- DateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss")....
Carlucho
A better thing to use in Java is the system time: http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#currentTimeMillis%28%29
Martin
+5  A: 

A useful table for working with various graph implementations:

OPERATION               EDGE LIST      ADJ LIST             ADJ MATRIX

degree(v)                 O(m)         O(d(v))                O(n)
incidentEdges(v)          O(m)         O(d(v))                O(n)
areAdjacent(v1,v2)        O(m)         O(min(d(v1),d(v2)))    O(1)
addVertex(v)              O(1)         O(1)                   O(n²)
addEdge(e)                O(1)         O(1)                   O(1)
removeVertex(v)           O(m)         O(m)                   O(n²)
removeEdge(e)             O(m)         O(d(v1)+d(v2))         O(1)

memory                    O(m+n)       O(m+n)                 O(n²)

where m is the number of edges, n is the number of vertices, and d(vertex) is the number of elements in that vertex's adjacency list. The adjacency matrix implementation takes O(n²) to add and remove vertices because you have to reallocate the matrix.
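To make the areAdjacent row concrete, here is an illustrative Java sketch (class and method names are mine, not from the table): with a matrix the check is a single array lookup, while with lists you scan the shorter of the two adjacency lists.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of why areAdjacent is O(min(d(v1),d(v2))) with
// adjacency lists but O(1) with an adjacency matrix.
public class GraphReps {
    boolean[][] matrix;        // matrix[v][w] == true iff edge v-w exists
    List<List<Integer>> lists; // lists.get(v) holds v's neighbours

    GraphReps(int n) {
        matrix = new boolean[n][n];
        lists = new ArrayList<>();
        for (int i = 0; i < n; i++) lists.add(new ArrayList<>());
    }

    void addEdge(int v, int w) {          // O(1) in both representations
        matrix[v][w] = matrix[w][v] = true;
        lists.get(v).add(w);
        lists.get(w).add(v);
    }

    boolean areAdjacentMatrix(int v, int w) {
        return matrix[v][w];              // single array lookup: O(1)
    }

    boolean areAdjacentList(int v, int w) {
        // Scan the shorter of the two lists: O(min(d(v), d(w)))
        List<Integer> shorter = lists.get(v).size() <= lists.get(w).size()
                ? lists.get(v) : lists.get(w);
        int other = (shorter == lists.get(v)) ? w : v;
        return shorter.contains(other);
    }
}
```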

I just prepared this for an article; that's why I have it ready :)

This is not directly related to benchmarking: usually you study which operations you will need most and choose the right implementation for your needs, so it's a sort of "theoretical" benchmark that you do by studying what you are going to implement. Otherwise you can just measure the time your code needs to do the whole job with both implementations and compare.

EDIT: since you use a sparse matrix implementation, the complexity of the operations could change slightly (mostly getting a little worse, since you are trading away some speed to save memory).

EDIT2: OK, now that I know this is Java, it is fairly simple:

long before = System.nanoTime();

/* execution of your algorithm */

long elapsed = System.nanoTime() - before;

The answer is in nanoseconds and accuracy is not guaranteed, so use it carefully. Averaging many runs (and checking that the variance is low, discarding the results farthest from the average) will give coherence to your results.
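A hedged sketch of that averaging idea (the 2-standard-deviation cutoff is an arbitrary choice of mine for illustration): collect many nanoTime samples, then average only the ones close to the overall mean.

```java
// Sketch: average repeated System.nanoTime() measurements, discarding
// outliers more than two standard deviations from the mean.
public class RobustTimer {

    static double robustAverageNanos(Runnable task, int runs) {
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) {
            long before = System.nanoTime();
            task.run();
            samples[i] = System.nanoTime() - before;
        }

        double mean = 0;
        for (long s : samples) mean += s;
        mean /= runs;

        double var = 0;
        for (long s : samples) var += (s - mean) * (s - mean);
        double sd = Math.sqrt(var / runs);

        // Keep only samples within two standard deviations of the mean.
        double sum = 0;
        int kept = 0;
        for (long s : samples) {
            if (Math.abs(s - mean) <= 2 * sd) { sum += s; kept++; }
        }
        return kept > 0 ? sum / kept : mean;
    }
}
```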

Jack
very nice jack, thanks a lot. Good luck with your article
Carlucho
once again thanks buddy
Carlucho
I've never thought about eliminating variance in benchmarks, that's probably a pretty good idea...
Martin
@Jack how do you read this line, I mean what does "O(min(d(v1),d(v2)))" mean? <-- areAdjacent <-- List
Carlucho