Hello,

I'm attempting to design an algorithm that does the following.

Input:

I have a set of keys (n in total), each mapped to a set of properties. Each property has a weight and a value.

Output:

Identify a set of qualified keys (k in total) based on the properties and their respective weights and values.

Additionally, after every cycle of choosing winners, the data should be modified so that the chances of anyone who was not chosen go up in the next cycle, whereas a winner's chances reset as if they were completely new in the system.

Hopefully the problem is clear. Basically, a property's value and its weight determine which keys are more likely to win (a higher value with a higher weight increases that key's probability of winning), and we will eventually end up choosing everyone.

Any input on how this can be done would be greatly appreciated.

Thanks! - Azeem

A: 

An unoptimized but effective method would be to build a list of entries, appending additional entries for each contestant in proportion to its weight.

The pseudocode below is totally out of context of any real implementation, but you should get the idea.

const int DEFAULT_WEIGHT = 1;
public List<Contestant> items = new List<Contestant>();
public List<Guid> LotteryPool = new List<Guid>();

public Contestant Roll()
{
    // new Random() is seeded from the system clock by default;
    // Random has no settable Seed property.
    Random rnd = new Random();

    // Rebuild the lottery pool: each contestant appears once per unit
    // of weight, so heavier contestants are more likely to be drawn.
    LotteryPool.Clear();
    foreach (Contestant item in items)
    {
        for (int i = 0; i < item.Weight; i++)
        {
            LotteryPool.Add(item.Id);
        }

        // Every contestant gains weight, so losers are more likely next time.
        item.Weight++;
    }

    // Find the contestant matching a random id from the pool.
    Contestant result = FindContestant(LotteryPool[rnd.Next(0, LotteryPool.Count)]);

    // Reset the winner to the default weight, as if they were new.
    result.Weight = DEFAULT_WEIGHT;

    return result;
}
entens
entens, thanks. This looks like it would in fact work. We thought about doing this earlier but gave up on it because we were not sure how the probability of an unselected contestant would behave over multiple iterations. Now that I think about it, we could adjust the weight increment based on another set of conditions, so the increase in probability from one iteration to the next can be controlled as well (e.g., certain preferred keys could be guaranteed selection within three rounds). Just thinking out loud...
Azeem
A: 

Consider your weights as segments of a line, with the overall line length equal to the sum of the weights. Pick a uniform random number between 0 and that length. The winner is the candidate whose segment the number falls into.

Remove that winner, and reduce the overall line length accordingly. Then repeat the process with the remaining candidates until you've chosen your k.
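The pick-and-remove steps above can be sketched as follows. This is a rough illustration in Java (the thread's own code is C#, but the idea carries over directly); the `weights` array and the method name `pickWinners` are assumptions for the example, not part of the original answer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class SegmentLottery {
    // Pick k distinct winners. Each pick lands on a candidate with
    // probability proportional to its weight (sampling without replacement).
    static List<Integer> pickWinners(double[] weights, int k, Random rnd) {
        List<Integer> remaining = new ArrayList<>();
        double total = 0;
        for (int i = 0; i < weights.length; i++) {
            remaining.add(i);
            total += weights[i];
        }

        List<Integer> winners = new ArrayList<>();
        for (int round = 0; round < k; round++) {
            double r = rnd.nextDouble() * total; // uniform point on the line
            double cursor = 0;
            for (int j = 0; j < remaining.size(); j++) {
                int cand = remaining.get(j);
                cursor += weights[cand];
                if (r < cursor) {               // the point falls in this segment
                    winners.add(cand);
                    total -= weights[cand];     // shrink the line
                    remaining.remove(j);        // drop the winner from the pool
                    break;
                }
            }
        }
        return winners;
    }
}
```

This is O(n) per pick; for large n you could keep cumulative sums and binary-search for the segment instead.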

After the cycle, rescale the losers to occupy most of the original length and add back the winners with the remaining small chunk divided evenly between them.
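The rescaling step might look like the sketch below (again Java rather than the thread's C#). The `LOSER_SHARE` fraction of 0.9 is an assumed tuning parameter, not something the answer specifies; it controls how strongly losers are favored in the next cycle.

```java
import java.util.List;

public class WeightRescale {
    // Assumed parameter: losers jointly occupy 90% of the original line
    // length after rescaling; winners evenly split the remaining 10%.
    static final double LOSER_SHARE = 0.9;

    static void rescale(double[] weights, List<Integer> winners) {
        double originalTotal = 0;
        for (double w : weights) originalTotal += w;

        boolean[] won = new boolean[weights.length];
        double winnerTotal = 0;
        for (int w : winners) {
            won[w] = true;
            winnerTotal += weights[w];
        }

        // Scale factor that stretches the losers' segments to LOSER_SHARE
        // of the original line length.
        double loserScale = (originalTotal * LOSER_SHARE) / (originalTotal - winnerTotal);

        for (int i = 0; i < weights.length; i++) {
            if (won[i]) {
                // Winners split the remaining small chunk evenly.
                weights[i] = originalTotal * (1 - LOSER_SHARE) / winners.size();
            } else {
                // Losers grow to fill most of the line.
                weights[i] *= loserScale;
            }
        }
    }
}
```

The overall line length stays the same, so the selection code from the previous step can be reused unchanged in the next cycle.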

walkytalky
Thanks walkytalky. This is an excellent way to look at it. Let me work out an algorithm that does this.
Azeem
And when the losers from the first round win the second, how do you restore their original weights? You have to store that information somewhere.
Beta
@Beta, yes, the permanent data about the weights are stored in a db separately... so we'll have access to it.
Azeem