I'm working on an assignment that has a plain text file of data.
Each line of the file represents a car race and contains four strings, delimited by commas.
Each string is a racer's name: the first string is the racer who came first, the second string came second, and so on.
The task we've been given is to read in this file and sort the racers based on their success. We've been given a comparison algorithm to use:
int compareTo(Racer r1, Racer r2)
{
    for (int i = 0; i < r1.positions.length; i++)
    {
        int diff = r1.positions[i] - r2.positions[i];
        if (diff == 0)
        {
            continue;
        }
        return diff;
    }
    return 0;
}
So essentially first-place counts take precedence over second-place counts, second over third, and so on.
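To make the file format and the comparator's behaviour concrete, here's a minimal self-contained sketch. The `Racer` class, the `rank` helper, and the sample race lines are my own inventions for illustration; the real code would read the lines from the assignment's text file.

```java
import java.util.*;

class Racer {
    String name;
    int[] positions = new int[4]; // positions[0] = number of first places, etc.
    Racer(String name) { this.name = name; }
}

public class RaceSort {
    // The comparison algorithm from the assignment, unchanged.
    static int compareTo(Racer r1, Racer r2) {
        for (int i = 0; i < r1.positions.length; i++) {
            int diff = r1.positions[i] - r2.positions[i];
            if (diff == 0) {
                continue;
            }
            return diff;
        }
        return 0;
    }

    // Tally each racer's finishing positions, then sort best-first.
    static List<String> rank(String[] raceLines) {
        Map<String, Racer> racers = new LinkedHashMap<>();
        for (String line : raceLines) {
            String[] names = line.split(",");
            for (int pos = 0; pos < names.length; pos++) {
                racers.computeIfAbsent(names[pos], Racer::new).positions[pos]++;
            }
        }
        List<Racer> ranked = new ArrayList<>(racers.values());
        // compareTo returns a positive value when r1 is better,
        // so reverse the arguments to get a best-first order.
        ranked.sort((a, b) -> compareTo(b, a));
        List<String> result = new ArrayList<>();
        for (Racer r : ranked) result.add(r.name);
        return result;
    }

    public static void main(String[] args) {
        // Sample lines standing in for the assignment's text file.
        String[] races = {
            "Alice,Bob,Carol,Dave",
            "Alice,Carol,Bob,Dave",
            "Bob,Alice,Carol,Dave"
        };
        System.out.println(rank(races)); // prints [Alice, Bob, Carol, Dave]
    }
}
```

Alice wins on two firsts, Bob on one first, and Carol (one second) beats Dave (three fourths), which matches the "firsts take precedence" rule.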
But there's an issue with this approach: intuitively, a racer with 100 second places and no firsts deserves a better rank than a racer with a single first place and nothing else, yet this comparator ranks them the other way around.
So this got me thinking: wouldn't it make more sense to have a weighting for each position count?
The weighting would be calculated for each Racer:
int weight = w1*positions[0]
+ w2*positions[1]
+ w3*positions[2]
+ w4*positions[3];
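Here's a runnable sketch of that weighted alternative. The weight values are arbitrary placeholders (choosing good ones is exactly my question), but they show how the weighting changes the outcome compared to the given comparator.

```java
public class WeightedRank {
    // Placeholder weights; finding the optimal values is the open question.
    static final int[] WEIGHTS = { 5, 3, 2, 1 };

    // Weighted score for one racer's position counts.
    static int weight(int[] positions) {
        int total = 0;
        for (int i = 0; i < positions.length; i++) {
            total += WEIGHTS[i] * positions[i];
        }
        return total;
    }

    public static void main(String[] args) {
        // One first place (5 points) vs. two second places (6 points):
        // with these weights the consistent runner-up now ranks higher,
        // whereas the assignment's comparator would rank the one-time winner first.
        System.out.println(weight(new int[] { 1, 0, 0, 0 })); // prints 5
        System.out.println(weight(new int[] { 0, 2, 0, 0 })); // prints 6
    }
}
```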
But I ran into an issue: how do I calculate the optimal weight for each position count?
Presumably I can derive the weights from the existing data? My instinct is to base them on something like the ratio of unique winners per position.
Is there a known method or theorem for deriving the weights? I figure I can get a couple of bonus points if I can show a better algorithm ;)
Thanks in advance.