views: 53
answers: 2
I will soon be writing a dissertation as part of my degree course, based on enhancing an open source product. The main body of the dissertation will cover my research into how to enhance the product, how I implemented the enhancement, and how I evaluated it. I also want to briefly discuss the factors involved in my work being accepted into the project, and how I took them into consideration while developing. At the moment, I have been relying on intuition to form the acceptance factors. These would include things like:

  • Matching the conventions of the existing code
  • Functionality matching the philosophy of the project
  • Level of documentation within the contribution
  • Size of entirely new code which has been developed "in the dark"
  • Reputation of the contributor
  • ... etc.

I am slightly uncomfortable discussing these factors when I only have intuition to rely on (and it really is intuition: I have no personal experience of contributing to open source). I would prefer to reference scientifically valid, peer-reviewed research into these factors. For example:

"A study has shown (Bloggs, 2008) that the largest factor in an open source contribution being accepted is the code matching the conventions of the project."

Are there any published studies on the acceptance factors of open source contributions?

+2  A: 

The Cathedral and the Bazaar is a good source of social/psychological commentary in open-source/hacker circles:

  1. http://catb.org/esr/writings/cathedral-bazaar/

Specifically, Homesteading the Noosphere and The Magic Cauldron.

Aiden Bell
+1 how could I have forgotten about Eric S. Raymond? Thanks!
Grundlefleck
+2  A: 

Interesting question. Maybe these are helpful:

Patch Review Processes in Open Source Software Development Communities: A Comparative Case Study - http://www.computer.org/portal/web/csdl/doi/10.1109/HICSS.2007.426

Detecting Patch Submission and Acceptance in OSS Projects - http://portal.acm.org/citation.cfm?id=1269040

A preliminary examination of code review processes in open source projects - http://foss.mit.edu/papers/Rigby2006TR.pdf

The last says:

To determine why 56% of pre-commit reviewed patches were rejected, we intend to do more detailed data analysis including the use of more sophisticated message threading techniques and manual classification of a smaller time period.

The authors may have a later paper that looks at that:

Open source software peer review practices: a case study of the Apache server - http://portal.acm.org/citation.cfm?id=1368162

Specifically, we measure the frequency of review, the level of participation in reviews, the size of the artifact under review, the calendar time to perform a review, and the number of reviews that find defects.

+1 These look like the kind of thing I'm after. Thanks!
Grundlefleck
+1 nice links.
Aiden Bell