The problem with storing a graph directly in HDFS is that you have no way to perform random reads of the data. To find all the neighbors of a node you have to process the whole edge list in HDFS and pick out the nodes that are connected to it.
So to compute a clustering coefficient you would need two passes over the data: the first to find the nodes connected to the starting node, the second to find out how those nodes are connected to each other.
Each time you want to go out another level in your graph you will therefore need to process the whole graph to find the new connections.
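To make the access pattern concrete, here is a minimal sketch of that two-pass LCC calculation as a plain Java scan over an edge list. The file path, the tab-separated "src dst" line format, and the assumption that each undirected edge appears exactly once are all my own choices for illustration; on a real cluster each pass would be a MapReduce job over the HDFS files rather than a local read, but the "scan everything twice" shape is the same.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;

    /**
     * Two-pass local clustering coefficient (LCC) for a single node,
     * scanning a flat edge list twice -- the access pattern you are
     * forced into when the graph is just files with no random reads.
     * Assumes one undirected "src<TAB>dst" edge per line, each edge listed once.
     */
    public class TwoPassLcc {

        public static double lcc(String edgeFile, String node) throws IOException {
            // Pass 1: collect the neighbors of the target node.
            Set<String> neighbors = new HashSet<>();
            try (BufferedReader in = new BufferedReader(new FileReader(edgeFile))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] e = line.split("\t");
                    if (e.length < 2) continue;
                    if (e[0].equals(node)) neighbors.add(e[1]);
                    if (e[1].equals(node)) neighbors.add(e[0]);
                }
            }
            int k = neighbors.size();
            if (k < 2) return 0.0;

            // Pass 2: count edges that connect two of those neighbors.
            long links = 0;
            try (BufferedReader in = new BufferedReader(new FileReader(edgeFile))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] e = line.split("\t");
                    if (e.length < 2) continue;
                    if (neighbors.contains(e[0]) && neighbors.contains(e[1])) links++;
                }
            }
            // Undirected LCC: 2 * links / (k * (k - 1)).
            return 2.0 * links / (k * (double) (k - 1));
        }

        public static void main(String[] args) throws IOException {
            System.out.println(lcc(args[0], args[1]));
        }
    }

Note that pass 2 already depends on the output of pass 1, which is why going out one more hop always costs another full scan.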
Is this an easy thing to do? Well, yes it is. Is it time efficient? That really depends on how quickly you need to calculate things like the LCC and how large your graph actually is. Either way, it won't be anywhere near real time.
Another approach would be to use HBase to store your edges; this would give you random access to nodes while still being distributed. After all, HBase is part of the Hadoop ecosystem.
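As a rough sketch of what that could look like, the snippet below assumes an adjacency-list schema I made up (a table called "edges", row key = node id, one column per neighbor in an "nbr" column family) and uses the standard HBase client API. With that layout a neighbor lookup becomes a single Get instead of a scan over the whole edge list.

    import java.io.IOException;
    import java.util.Collections;
    import java.util.NavigableMap;
    import java.util.Set;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    /**
     * Adjacency-list lookup against HBase: row key = node id, one column per
     * neighbor in the "nbr" column family (schema assumed for this sketch).
     * A single Get replaces the full scan of the edge list.
     */
    public class HBaseNeighbors {

        public static Set<byte[]> neighbors(Table edges, String node) throws IOException {
            Get get = new Get(Bytes.toBytes(node));
            Result result = edges.get(get);
            // Column qualifiers in the "nbr" family are the neighbor ids.
            NavigableMap<byte[], byte[]> family = result.getFamilyMap(Bytes.toBytes("nbr"));
            return family == null ? Collections.emptySet() : family.keySet();
        }

        public static void main(String[] args) throws IOException {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table edges = conn.getTable(TableName.valueOf("edges"))) {
                for (byte[] nbr : neighbors(edges, args[0])) {
                    System.out.println(Bytes.toString(nbr));
                }
            }
        }
    }

The nice property of a wide-row adjacency list like this is that the second hop for an LCC calculation is just one more round of Gets (one per neighbor) rather than another pass over the whole graph.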
Something else that might be of interest if you want to store large graphs in a distributed fashion is FlockDB. It's a distributed graph database recently released by Twitter. I haven't used it, but it might be worth a look.