There's a big hullabaloo about hacked source code from the University of East Anglia's Climatic Research Unit, which global warming skeptics say proves that the scientists who wrote these IDL and Fortran programs were intentionally fudging data.
http://wattsupwiththat.com/2009/12/05/the-smoking-code-part-2/
To rebut the claim, one commenter said: "I've been programming in IDL to produce plots for scientific papers for about 10 years, and this pro clearly isn't producing a plot for publication. The pro just plots to screen; if it were for publication, it would write the graph out as a PostScript file to submit to the publisher. To me, this looks like someone experimenting with the data, which doesn't really mean anything."
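For readers who don't know IDL, the distinction the commenter is drawing looks roughly like this (a minimal sketch; the variable and file names here are invented for illustration). Plotting to the screen is one line; a publication figure requires switching to the PostScript device and writing a file:

    ; Exploratory plot: draws to the current graphics window (screen only)
    plot, years, densities, title='Draft plot'

    ; Publication plot: switch to the PostScript device and write a file
    set_plot, 'PS'                    ; select the PostScript graphics device
    device, filename='figure1.ps'     ; name the output file for the journal
    plot, years, densities, title='Figure 1'
    device, /close                    ; finish writing the PostScript file
    set_plot, 'X'                     ; return to on-screen graphics (Unix)

So a pro that only ever calls plot, with no set_plot/device bookkeeping, is at least consistent with the commenter's "experimenting at the screen" reading.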
Is that true? Coders draft and iterate their code and leave imperfect drafts in their directories all the time. Or is there a more insidious explanation?
I haven't worked on any open source projects or in scientific research, so I'd like your input on this. Has anyone in the open source or scientific computing communities ever heard of someone intentionally programming garbage output, or 'fudging' their data, just to validate good data? Is this a known practice or a taboo?
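To make the question concrete: one legitimate practice I can imagine being mistaken for 'fudging' is feeding an analysis program synthetic data with known, made-up properties to check that the code recovers them. A minimal IDL sketch of that idea (every name and number below is invented for illustration):

    ; Fabricate a series with a KNOWN trend, then check the code finds it.
    n = 100
    years = findgen(n) + 1900.
    known_slope = 0.02
    synthetic = known_slope * findgen(n) + 0.1 * randomn(seed, n)

    coeffs = linfit(years, synthetic)      ; fit y = coeffs[0] + coeffs[1]*x
    print, 'Recovered slope: ', coeffs[1]  ; should come out close to 0.02

Is that the kind of thing the leaked pros could be, or does deliberately fabricated output ever serve some other accepted purpose?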
Can anyone think of any major projects they've seen or worked on where there's code that intentionally gives garbage output?