I have a function that processes some data and finds the threshold that classifies the data with the lowest error. It looks like this:
void find_threshold(FeatureVal* fvals, sampledata* data, unsigned int num_samples, double* thresh, double* err, int* pol) {
    // code to calculate minThresh, minErr, minPol omitted
    printf("minThresh: %f, minErr: %f, minPol: %d\n", minThresh, minErr, minPol);
    *thresh = minThresh;
    *err = minErr;
    *pol = minPol;
}
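For context, the omitted part is a brute-force search for the threshold and polarity with the lowest weighted classification error. It has roughly this shape (a simplified sketch only, not my actual code; the struct fields fval, label, and weight are placeholders, my real layout differs):

#include <float.h>

/* Placeholder layouts, for illustration only. */
typedef struct { double fval; } FeatureVal;
typedef struct { int label; double weight; } sampledata;

/* Simplified version of the omitted search: try every sample's feature
   value as a candidate threshold, with both polarities, and keep the
   combination with the lowest weighted error. */
static void stump_search(const FeatureVal* fvals, const sampledata* data,
                         unsigned int num_samples,
                         double* thresh, double* err, int* pol) {
    double minThresh = 0.0, minErr = DBL_MAX;
    int minPol = 1;
    for (unsigned int i = 0; i < num_samples; i++) {
        double candidate = fvals[i].fval;
        for (int p = -1; p <= 1; p += 2) {   /* polarity: -1 or +1 */
            double e = 0.0;
            for (unsigned int j = 0; j < num_samples; j++) {
                /* predict +1 when p * value falls below p * threshold */
                int predicted = (p * fvals[j].fval < p * candidate) ? 1 : -1;
                if (predicted != data[j].label)
                    e += data[j].weight;
            }
            if (e < minErr) {
                minErr = e;
                minThresh = candidate;
                minPol = p;
            }
        }
    }
    *thresh = minThresh;
    *err = minErr;
    *pol = minPol;
}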
Then in my test file I have this:
void test_find_threshold() {
    // code to set up test data omitted
    find_threshold(fvals, sdata, 6, &thresh, &err, &pol);
    printf("Expected 5 got %f\n", thresh);
    assert(eq(thresh, 5.0));
    printf("Expected 1 got %d\n", pol);
    assert(pol == 1);
    printf("Expected 0 got %f\n", err);
    assert(eq(err, 0.0));
}
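Here eq() is just a tolerance comparison for two doubles, along the lines of this sketch (the exact epsilon is illustrative and not the issue here):

#include <math.h>

/* Tolerance comparison for doubles; the epsilon is illustrative. */
static int eq(double a, double b) {
    return fabs(a - b) < 1e-6;
}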
This runs and the test passes with the following output:
minThresh: 5.000000, minErr: 0.000000, minPol: 1
Expected 5 got 5.000000
Expected 1 got 1
Expected 0 got 0.000000
However, if I remove the call to printf() from find_threshold, the test suddenly fails! Commenting out the asserts so that I can see what gets returned, the output is:
Expected 5 got -15.000000
Expected 1 got -1
Expected 0 got 0.333333
I cannot make any sense of this whatsoever.