I have a problem involving a collection of continuous probability distribution functions, most of which are determined empirically (e.g. departure times, transit times). What I need is some way of taking two of these PDFs and doing arithmetic on them. E.g. if I have two values x taken from PDF X, and y taken from PDF Y, I need to get the PDF for (x+y), or any other operation f(x,y).

An analytical solution is not possible, so what I'm looking for is some representation of PDFs that allows such things. An obvious (but computationally expensive) solution is Monte Carlo: generate lots of values of x and y, and then just measure f(x, y). But that takes too much CPU time.

I did think about representing the PDF as a list of ranges where each range has a roughly equal probability, effectively representing the PDF as the union of a list of uniform distributions. But I can't see how to combine them.

Does anyone have any good solutions to this problem?

Edit: The goal is to create a mini-language (aka Domain Specific Language) for manipulating PDFs. But first I need to sort out the underlying representation and algorithms.

Edit 2: dmckee suggests a histogram implementation. That is what I was getting at with my list of uniform distributions. But I don't see how to combine them to create new distributions. Ultimately I need to find things like P(x < y) in cases where this may be quite small.

Edit 3: I have a bunch of histograms. They are not evenly distributed because I'm generating them from occurrence data, so basically if I have 100 samples and I want ten bars in the histogram then I allocate 10 samples to each bar, and make the bars variable width but constant area.
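For concreteness, that construction looks roughly like this in Haskell (a simplified sketch: it assumes at least n samples and glosses over ties and exact boundary handling):

import Data.List (sort)

-- Sketch: boundaries of an equal-area histogram with n bars, built
-- from raw samples.  Each bar covers roughly (length samples / n)
-- observations, so the bars have approximately equal area but
-- variable width.
equalAreaBars :: Int -> [Double] -> [(Double, Double)]   -- (low, high) per bar
equalAreaBars n samples = zip edges (tail edges)
  where
    sorted = sort samples
    step   = length sorted `div` n
    edges  = [sorted !! (i * step) | i <- [0 .. n - 1]] ++ [last sorted]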

I've figured out that to add PDFs you convolve them, and I've boned up on the maths for that. When you convolve two uniform distributions you get a new distribution with three sections: the wider uniform distribution is still there, but with a triangle stuck on each side, each as wide as the narrower one. So if I convolve each element of X and Y I'll get a bunch of these, all overlapping. Now I'm trying to figure out how to sum them all and then get a histogram that is the best approximation to it.
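Written out as code, the density of the sum of two unit-area uniform distributions is exactly that three-section shape. This is just a reference sketch (not part of the final code below), and it assumes the first input is the narrower one:

-- Sketch: density of Uniform(p1,p2) + Uniform(q1,q2), assuming
-- p2-p1 <= q2-q1 and both inputs have unit area.  It ramps up over
-- the width of the narrower input, is flat at height 1/(q2-q1), then
-- ramps back down.
uniformSumDensity :: (Double, Double) -> (Double, Double) -> Double -> Double
uniformSumDensity (p1, p2) (q1, q2) z
  | z <= p1 + q1 || z >= p2 + q2 = 0
  | z < p2 + q1                  = (z - (p1 + q1)) / (p * q)   -- rising ramp
  | z <= p1 + q2                 = 1 / q                       -- flat top
  | otherwise                    = ((p2 + q2) - z) / (p * q)   -- falling ramp
  where
    p = p2 - p1
    q = q2 - q1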

I'm beginning to wonder if Monte-Carlo wasn't such a bad idea after all.

Edit 4: This paper discusses convolutions of uniform distributions in some detail. In general you get a "trapezoid" distribution. Since each "column" in the histograms is a uniform distribution, I had hoped that the problem could be solved by convolving these columns and summing the results.

However, the result is considerably more complex than the inputs, and also includes triangles. Edit 5: [Wrong stuff removed]. But if these trapezoids are approximated by rectangles with the same area then you get the Right Answer, and reducing the number of rectangles in the result looks pretty straightforward too. This might be the solution I've been trying to find.

Edit 6: Solved! Here is the final Haskell code for this problem:

-- Imports needed by the code below.
import Data.Function (on)
import Data.List (groupBy, sortBy)
import Data.Ord (comparing)

-- | Continuous distributions of scalars are represented as a
-- histogram where each bar has approximately constant area but
-- variable width and height.  A histogram with N bars is stored as
-- a list of N+1 values.
data Continuous = C {
      cN :: Int,
      -- ^ Number of bars in the histogram.
      cAreas :: [Double],
      -- ^ Areas of the bars.  @length cAreas == cN@
      cBars :: [Double]
      -- ^ Boundaries of the bars.  @length cBars == cN + 1@
    } deriving (Show, Read)


{- | Add distributions.  If two random variables @vX@ and @vY@ are
taken from distributions @x@ and @y@ respectively then the
distribution of @(vX + vY)@ will be @(x .+. y)@.

This is implemented as the convolution of distributions x and y.
Each is a histogram, which is to say the sum of a collection of
uniform distributions (the "bars").  Therefore the convolution can be
computed as the sum of the convolutions of the cross product of the
components of x and y.

When you convolve two uniform distributions of unequal size you get a
trapezoidal distribution. Let p = p2-p1, q = q2-q1.  Then we get:


>   |                              |
>   |     ______                   |
>   |     |    |           with    |  _____________
>   |     |    |                   |  |           |
>   +-----+----+-------            +--+-----------+-
>         p1   p2                     q1          q2
> 
>  gives    h|....... _______________
>            |       /:             :\
>            |      / :             : \                1
>            |     /  :             :  \     where h = -
>            |    /   :             :   \              q
>            |   /    :             :    \
>            +--+-----+-------------+-----+-----
>             p1+q1  p2+q1       p1+q2   p2+q2

However we cannot keep the trapezoid in the final result because our
representation is restricted to uniform distributions.  So instead we
store a uniform approximation to the trapezoid with the same area:

>           h|......___________________
>            |     | /               \ |
>            |     |/                 \|
>            |     |                   |
>            |    /|                   |\
>            |   / |                   | \
>            +-----+-------------------+--------
>               p1+q1+p/2          p2+q2-p/2

-}
(.+.) :: Continuous -> Continuous -> Continuous
c .+. d = C {cN     = length bars - 1,
             cBars  = map fst bars, 
             cAreas = zipWith barArea bars (tail bars)}
    where
      -- The convolve function returns a list of two (x, deltaY) pairs.
      -- These can be sorted by x and then sequentially summed to get
      -- the new histogram.  The "b" parameter is the product of the
      -- areas of the input bars, a factor which was omitted from the
      -- diagrams above.
      convolve b c1 c2 d1 d2 =
          if (c2-c1) < (d2-d1)
          then convolve1 b c1 c2 d1 d2
          else convolve1 b d1 d2 c1 c2
      convolve1 b p1 p2 q1 q2 = 
          [(p1+q1+halfP, h), (p2+q2-halfP, (-h))]
               where 
                 halfP = (p2-p1)/2
                 h = b / (q2-q1)
      outline = map sumGroup $ groupBy ((==) `on` fst) $
                sortBy (comparing fst) $ concat
                [convolve (areaC*areaD) c1 c2 d1 d2 |
                 (c1, c2, areaC) <- zip3 (cBars c) (tail $ cBars c) (cAreas c),
                 (d1, d2, areaD) <- zip3 (cBars d) (tail $ cBars d) (cAreas d)
                ]
      sumGroup pairs = (fst $ head pairs, sum $ map snd pairs)

      bars = tail $ scanl (\(_,y) (x2,dy) -> (x2, y+dy)) (0, 0) outline
      barArea (x1, h) (x2, _) = (x2 - x1) * h

Other operators are left as an exercise for the reader.
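As one such exercise, here is a rough sketch of how P(vX < vY) (the quantity from Edit 2) can be computed in the same representation. It is untested, assumes the bar areas of each distribution sum to 1, and assumes every bar has positive width:

-- | Sketch: P(vX < vY) where vX and vY are drawn independently from x
-- and y.  Sum, over every pair of bars, the probability mass of the
-- pair times the exact probability that a uniform draw from the first
-- bar falls below a uniform draw from the second.
probLess :: Continuous -> Continuous -> Double
probLess x y =
    sum [ areaX * areaY * uniformLess (a, b) (c, d)
        | (a, b, areaX) <- zip3 (cBars x) (tail $ cBars x) (cAreas x)
        , (c, d, areaY) <- zip3 (cBars y) (tail $ cBars y) (cAreas y)
        ]
    where
      -- P(U < V) for U ~ Uniform(a,b), V ~ Uniform(c,d): integrate
      -- P(U < v) against the density of V.
      uniformLess (a, b) (c, d) = (g d - g c) / ((b - a) * (d - c))
          where
            -- g t = integral from -infinity to t of max 0 (min v b - a) dv
            g t | t <= a    = 0
                | t <= b    = (t - a) ^ 2 / 2
                | otherwise = (b - a) ^ 2 / 2 + (b - a) * (t - b)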

+1  A: 

Is there anything that stops you from employing a mini-language for this?

By that I mean, define a language that lets you write f = x + y and evaluates f for you just as written. And similarly for g = x * z, h = y(x), etc. ad nauseam. (The semantics I'm suggesting call for the evaluator to select a random number from each innermost PDF appearing on the RHS at evaluation time, and not to try to understand the composite form of the resulting PDFs. This may not be fast enough...)
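For illustration only (toy types and made-up names, and it needs the random package), the evaluator could be as small as this:

import System.Random (randomRIO)

-- A tiny expression tree over empirical PDFs, evaluated by drawing
-- one sample per leaf each time it is used.
data Expr = Leaf [Double]          -- empirical samples for one PDF
          | Add Expr Expr
          | Mul Expr Expr

sample :: Expr -> IO Double
sample (Leaf xs) = fmap (xs !!) (randomRIO (0, length xs - 1))
sample (Add a b) = (+) <$> sample a <*> sample b
sample (Mul a b) = (*) <$> sample a <*> sample b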


Assuming that you understand the precision limits you need, you can represent a PDF fairly simply with a histogram or spline (the former being a degenerate case of the latter). If you need to mix analytically defined PDFs with experimentally determined ones, you'll have to add a type mechanism.


A histogram is just an array, the contents of which represent the incidence in a particular region of the input range. You haven't said if you have a language preference, so I'll assume something C-like. You need to know the bin structure (uniform sizes are easy, but not always best) including the high and low limits and possibly the normalization:

struct histogram_struct {
  int bins;                  /* Number of bins; uniform bin widths assumed */
  double low;
  double high;
  /* double normalization; */
  /* double *errors; */      /* if using, initialize with enough space,
                              * and store _squared_ errors
                              */
  double contents[];
};

This kind of thing is very common in scientific analysis software, and you might want to use an existing implementation.

dmckee
I do indeed want to write a mini-language. But I want the underlying semantics to be something more efficient than Monte Carlo.
Paul Johnson
Sorry, I wasn't clear. Do you really need to know what the composite PDFs are, or do you simply need to be able to draw numbers from them efficiently? The latter case really only requires a good interpreter for PDF composition inside your code.
dmckee
I want to know what the composite PDFs are. Ultimately I need to be able to determine things like P(x<y).
Paul Johnson
> you can represent a PDF fairly simply with a histogram
Yes. Do you know of any algorithms for doing this?
Paul Johnson
::casts about for a way to save this idea:: Uh, er, um. How about a cluster. Yeah, you need a cluster...
dmckee
+1  A: 

Autonomous mobile robotics deals with a similar issue in localization and navigation, in particular Markov localization and the Kalman filter (sensor fusion). See "An experimental comparison of localization methods continued" for an example.

Another approach you could borrow from mobile robots is path planning using potential fields.

eed3si9n
A: 

If you want some fun, try representing them symbolically, like Maple or Mathematica would do. Maple uses directed acyclic graphs, while Mathematica uses a list/Lisp-like approach (I believe, but it's been a loooong time since I even thought about this).

Do all your manipulations symbolically, then at the end push through numerical values. (Or just find a way to launch off in a shell and do the computations).

Paul.

Paul W Homer
+1  A: 

A couple of responses:

1) If you have empirically determined PDFs then either you have histograms or you have an approximation to a parametric PDF. A PDF is a continuous function and you don't have infinite data...

2) Let's assume that the variables are independent. If you make the PDFs discrete, then the probability that f(x,y) meets your target is p(x,y) = p(x)p(y) summed over all the combinations of x and y such that f(x,y) meets the target (see the sketch at the end of this answer).

If you are going to fit the empirical PDFs to standard PDFs, e.g. the normal distribution, then you can use already-determined functions to figure out the sum, etc.

If the variables are not independent, then you have more trouble on your hands and I think you have to use copulas.

I think that defining your own mini-language, etc., is overkill. You can do this with arrays...
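Here is what point 2 looks like as a sketch (made-up names, with each discrete PDF given as a list of (value, probability) pairs):

-- Probability that f(x,y) satisfies a predicate, for independent x
-- and y with discrete PDFs px and py: a double sum of p(x)*p(y) over
-- the combinations where the predicate holds.
probEvent :: (Double -> Double -> Double)   -- f
          -> (Double -> Bool)               -- target predicate on f(x,y)
          -> [(Double, Double)]             -- discrete PDF of x
          -> [(Double, Double)]             -- discrete PDF of y
          -> Double
probEvent f target px py =
    sum [ pX * pY | (x, pX) <- px, (y, pY) <- py, target (f x y) ]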

af
+1  A: 

Some initial thoughts:

First, Mathematica has a nice facility for doing this with exact distributions.

Second, representation as histograms (i.e., empirical PDFs) is problematic since you have to make choices about bin size. That can be avoided by storing a cumulative distribution instead, i.e., an empirical CDF. (In fact, you then retain the ability to recreate the full data set of samples that the empirical distribution is based on.)

Here's some ugly Mathematica code to take a list of samples and return an empirical CDF, namely a list of value-probability pairs. Run the output of this through ListPlot to see a plot of the empirical CDF.

empiricalCDF[t_] := Flatten[{{#[[2,1]],#[[1,2]]},#[[2]]}&/@Partition[Prepend[Transpose[{#[[1]], Rest[FoldList[Plus,0,#[[2]]]]/Length[t]}&[Transpose[{First[#],Length[#]}&/@ Split[Sort[t]]]]],{Null,0}],2,1],1]
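Here is roughly the same idea as a Haskell sketch, in case that's more readable (not a line-for-line translation; it just returns (value, cumulative probability) pairs):

import Data.List (group, sort)

-- Empirical CDF of a list of samples: for each distinct value, the
-- fraction of samples less than or equal to it.
empiricalCdf :: [Double] -> [(Double, Double)]
empiricalCdf samples = zip values (map (/ n) (scanl1 (+) counts))
  where
    grouped = group (sort samples)
    values  = map head grouped
    counts  = map (fromIntegral . length) grouped
    n       = fromIntegral (length samples)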

Finally, here's some information on combining discrete probability distributions:

http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter7.pdf

dreeves
I agree with your point about bin size, but the volume of historical data is large, so I'm going to stick with bins. Useful to see the stuff about empirical CDFs though. I'd already read the chapter you point to. Thanks.
Paul Johnson
+1  A: 

I think the histograms or the list of 1/N area regions is a good idea. For the sake of argument, I'll assume that you'll have a fixed N for all distributions.

Use the paper you linked in edit 4 to generate the new distribution. Then approximate it with a new N-element distribution.

If you don't want N to be fixed, it's even easier. Take each convex polygon (trapezoid or triangle) in the newly generated distribution and approximate it with a uniform distribution.

Mr Fooz
+1  A: 

Another suggestion is to use kernel densities. Gaussian kernels in particular can be relatively easy to work with... except that the distributions quickly explode in size without care. Depending on the application, there are additional approximation techniques like importance sampling that can be used.
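For example, a plain Gaussian kernel density estimate is only a few lines (a sketch, with the bandwidth left to the caller):

-- Gaussian kernel density estimate: the density at a point is the
-- average of Gaussian bumps of the given bandwidth centred on the
-- samples.
gaussianKde :: Double -> [Double] -> Double -> Double
gaussianKde bandwidth samples x =
    sum (map bump samples) / fromIntegral (length samples)
  where
    bump s = exp (-0.5 * u * u) / (bandwidth * sqrt (2 * pi))
      where u = (x - s) / bandwidth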

Mr Fooz