tags:
views: 82
answers: 3

Stock allocation problem

I have a problem where each product in a known set, each with its own rate of sale, must be allocated to one or more of a fixed number of buckets. Each product must be in at least one bucket, no bucket can hold more than one product, and all buckets must be filled; products will usually occupy more than one bucket. My problem is to optimize the allocation of products to buckets so as to maximise the time before any one product sells out.

To complicate matters, each type of bucket may hold differing amounts of each type of product. This is not necessarily related to the size of the product (which is not known) and may be arbitrary. E.g.,

  • Bucket A holds 10 of Product 1 and Bucket B holds 20 of Product 2; however,
  • Bucket A holds only 5 of Product 2 and Bucket B only 8 of Product 1.

So, as inputs we have a set of products and their sales velocities, e.g.

  • Product 1 Sells 6 per day
  • Product 2 Sells 5 per day
  • Product 3 Sells 4 per day
  • Product 4 Sells 7 per day

A set of Buckets

  • Bucket A
  • Bucket B
  • Bucket C
  • Bucket D
  • Bucket E
  • Bucket F
  • Bucket G

And a Product-Bucket lookup table giving each bucket's capacity for each product, e.g.

  • Prod 1 Bucket A = 40;
  • Prod 1 Bucket B = 45;
  • Prod 1 Bucket C = 40;
  • ...
  • Prod 2 Bucket A = 35;
  • ...
  • Prod 2 Bucket E = 20;
  • ...
  • etc

Approaches I have tried so far include:

  1. Reduce the products per bucket to a common factor. This worked until I realised the product-bucket capacity relationship was arbitrary.

  2. Place products into buckets at random, then iterate through each product, swapping it for an existing product in a bucket and testing whether the swap improves the time until sell-out. My concern with this approach is that it may take a path that is optimal at decision time but obscures a later, more optimal choice; or the optimal move may require multiple product changes that will never happen, because the individual swaps are not improvements.

  3. An exhaustive search. This turns out to produce a very large number of combinations for even a modest set of products and buckets.

I initially thought the optimal solution would allocate products in the same ratio as their sale rates, but discovered this is not true: a configuration holding very little stock that matches the sales ratios perfectly is less desirable than a configuration holding much more stock, which lasts longer before the first sell-out.
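Whatever search strategy ends up being used, the scoring function is the same: given a candidate allocation (one product per bucket), compute the number of days until the first product sells out. A minimal Python sketch; the shape of `capacity` and `velocity`, and the sample figures for Product 2 in buckets B and C, are illustrative assumptions rather than data from the question:

```python
from collections import defaultdict

def days_until_first_sellout(allocation, capacity, velocity):
    """allocation: {bucket: product chosen for that bucket}
    capacity: {(product, bucket): units of that product the bucket holds}
    velocity: {product: units sold per day}"""
    stock = defaultdict(float)
    for bucket, product in allocation.items():
        stock[product] += capacity[(product, bucket)]
    # every product must appear in at least one bucket
    if set(stock) != set(velocity):
        return 0.0
    # each product sells out after stock/velocity days; the score is the
    # earliest such moment across all products
    return min(stock[p] / velocity[p] for p in velocity)

# tiny worked example: two products, three buckets
velocity = {1: 6, 2: 5}  # Product 1 sells 6/day, Product 2 sells 5/day
capacity = {(1, 'A'): 40, (1, 'B'): 45, (1, 'C'): 40,
            (2, 'A'): 35, (2, 'B'): 30, (2, 'C'): 20}
print(days_until_first_sellout({'A': 1, 'B': 2, 'C': 1}, capacity, velocity))
# Product 1 has 40+40=80 units lasting 80/6 days; Product 2 has 30 units
# lasting 30/5 = 6 days, so this allocation scores 6.0
```

Comparing candidate allocations then reduces to comparing this single number, which is what approaches 2 and 3 both need.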

Any C# or pseudo-code appreciated.

A: 

I suggest a variant of approach 2 based on simulated annealing -- a great approach to optimization where your underlying strategy is based on steepest descent or the like. Wikipedia does a good job of explaining the idea; the crucial conceptual part is:

each step of the SA algorithm replaces the current solution by a random "nearby" solution, chosen with a probability that depends on the difference between the corresponding function values and on a global parameter T (called the temperature), that is gradually decreased during the process
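A bare-bones sketch of that loop, assuming a `score(allocation)` callable that returns the days until first sell-out (higher is better) and using "reassign one random bucket" as the neighbour move; the step count and temperature schedule are placeholders you would tune:

```python
import math
import random

def simulated_annealing(buckets, products, score, steps=10000,
                        t_start=10.0, t_end=0.01):
    """Return the best {bucket: product} allocation found."""
    current = {b: random.choice(products) for b in buckets}
    best = dict(current)
    for step in range(steps):
        # geometric cooling from t_start down towards t_end
        t = t_start * (t_end / t_start) ** (step / steps)
        # "nearby" solution: reassign one randomly chosen bucket
        candidate = dict(current)
        candidate[random.choice(buckets)] = random.choice(products)
        delta = score(candidate) - score(current)
        # always accept improvements; accept a worse move with
        # probability exp(delta / t), which shrinks as t cools
        if delta >= 0 or random.random() < math.exp(delta / t):
            current = candidate
            if score(current) > score(best):
                best = dict(current)
    return best
```

Running it several times and keeping the best result guards against an unlucky single pass.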

Alex Martelli
Thinking about this some more: essentially what I have is a matrix of buckets × products, and I have to make one selection per bucket such that it meets my goal. The simulated annealing looks good; it doesn't guarantee the best solution, but any solution it does come up with is guaranteed to be better than the previous one. One thing I'm not completely sure of is why avoiding the risk of getting stuck in a local minimum early on is necessarily better than a local minimum in later iterations. Is there not also a counter-risk that we could be jumping away from a path that leads to a better minimum?
flesk
@flesk, it's possible, and you can probabilistically guard against it by doing more than one pass and remembering the best result; if you know something about the N-dimensional shape of your target function, and the cost of computing it at one point, you can generally estimate the best compromise (offhand, with your discontinuous set of allowed points, and target computation being a cheap sum, I suspect you'd be well advised to try iterating).
Alex Martelli
A: 

I think this problem may be NP-complete, and you may have to resort to the usual methods (GA, SA, breadth/depth-first searches) and/or settle for a non-optimal solution, depending on how many buckets and products you have.

Assuming that you have enough product to fill all your buckets (which you don't say), you may be able to brute-force a single product against every bucket to determine which product is best for each bucket. I somehow doubt that this is the case, but in case it is, here is the general algorithm.

(Python sketch; `buckets`, `products`, `sales_velocity` and the helper `find_capacity` are assumed to exist.)

index = {}  # maps each bucket to {product: days until that bucket sells out}

for bucket in buckets:
    index[bucket] = {}
    for product in products:
        capacity = find_capacity(bucket, product)
        days_per_unit = 1.0 / sales_velocity[product]  # days to sell one unit
        longevity = capacity * days_per_unit           # days this bucket lasts
        index[bucket][product] = longevity

for bucket in buckets:
    # pick the product this bucket can keep in stock the longest
    product = max(index[bucket], key=index[bucket].get)
    print(bucket, product)
Unknown
Thanks. I have an infinite supply of any particular product, and never more types of product than buckets; most often there will be more buckets than product types, so a given product type will usually go into more than one bucket.
flesk
A: 

Simulated annealing sounds good, although you have to be careful choosing the parameters and the mutation functions to get a good solution.

You could also specify the problem as a series of equations and call an Integer Programming (IP) package such as http://www.coin-or.org/ to find an optimal or near-optimal solution.
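For reference, the IP formulation would be: maximise T subject to, for every product p, (total capacity of the buckets assigned to p) ≥ T × (daily velocity of p), with each bucket assigned exactly one product and each product assigned at least one bucket, over binary assignment variables. On a small instance the same max-min objective can be checked by plain enumeration; a sketch (names are assumptions, and this is exactly the exhaustive search the asker found too slow at scale):

```python
from itertools import product as cartesian

def best_allocation(buckets, products, capacity, velocity):
    """Exhaustively maximise the min over products of
    allocated capacity / daily velocity (days to first sell-out)."""
    best_score, best_alloc = -1.0, None
    for assignment in cartesian(products, repeat=len(buckets)):
        if set(assignment) != set(products):
            continue  # every product needs at least one bucket
        alloc = dict(zip(buckets, assignment))
        stock = {p: 0.0 for p in products}
        for b, p in alloc.items():
            stock[p] += capacity[(p, b)]
        score = min(stock[p] / velocity[p] for p in products)
        if score > best_score:
            best_score, best_alloc = score, alloc
    return best_alloc, best_score
```

An IP solver reaches the same optimum without enumerating all |products|^|buckets| assignments, which is why it scales where approach 3 did not.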

jcd