I am writing a Python script that will perform performance tests on a Linux file system. Besides deadlocks, race conditions, and the time taken to perform an action (delete, read, write, and create), what other variables/parameters should the test cover?
Can you be a little more clear?
I tried doing something like this once before, using Python itself, and I need time to try it out again. I used time.time() to get the time since the epoch; I think the time difference can suffice for timing file operations (see the sketch below this comment).
Update: Check this GSoC idea, which the PSF had pledged to sponsor: http://allmydata.org/trac/tahoe/wiki/GSoCIdeas
I am reading through that page to get more information.
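Here is a minimal sketch of the timing approach mentioned above, assuming Python 3. The path and data size are placeholders; note that without os.fsync() you would mostly be measuring the page cache rather than the disk, which is one of the classic pitfalls:

```python
import os
import time

PATH = "/tmp/perftest.bin"       # placeholder test file
DATA = b"x" * (1024 * 1024)      # 1 MiB of dummy data

start = time.time()
with open(PATH, "wb") as f:
    f.write(DATA)
    f.flush()
    os.fsync(f.fileno())  # force the data to disk, not just the page cache
elapsed = time.time() - start

print(f"write took {elapsed:.6f} s")
os.remove(PATH)
```

For short operations, time.perf_counter() (Python 3.3+) gives better resolution than time.time(), since it is a monotonic clock intended for benchmarking.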
You might be interested in looking at tools like collectd and iotop. Then again, you might also be interested in just using them instead of reinventing the wheel; as far as I can see, this kind of performance analysis is not learned in a day, and these people invested significant amounts of time and knowledge in building these tools.
You should try to use the software that already exists. You can use iozone for this; for a tutorial, refer to this blog post on nixcraft.
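If you still want to drive the test from your Python script, one option is to invoke iozone as a subprocess and collect its output. This is just a sketch, assuming iozone is installed and on the PATH; "-a" runs its automatic mode over a range of file and record sizes:

```python
import subprocess

# Run iozone in automatic mode and capture its report.
result = subprocess.run(
    ["iozone", "-a"],
    capture_output=True,
    text=True,
    check=True,  # raise CalledProcessError on a non-zero exit code
)
print(result.stdout)
```

This way the heavy lifting (read/write/rewrite patterns, varying record sizes) is done by a well-tested tool, and your script only handles orchestration and result parsing.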
File system performance testing is a very complex topic. You can easily make a lot of mistakes that render your whole test worthless.
Stony Brook University and IBM Watson Labs have published a highly recommended journal paper in ACM Transactions on Storage about file system benchmarking, in which they survey different benchmarks along with their strengths and weaknesses: A nine year study of file system and storage benchmarking.
They give lots of advice on how to design and implement a good file system benchmark. As I said: it is not an easy task.