If P(c_j | x_i) are already known, where i = 1,2,...,n and j = 1,2,...,k,

how do I calculate/estimate P(c_j | x_l, x_m, x_n), where j = 1,2,...,k and l, m, n belong to {1,2,...,n}?

A: 

Maybe this site can help? I'm assuming you're trying to implement Bayes' rule in Matlab.

aduric
+4  A: 

EDIT2 (following the OP's comment)

From Bayes' rule we know that P(C|x1,x2,x3) ~ P(C)*P(x1,x2,x3|C), so for classification you compute that expression for every class C = j and predict the most likely one (the MAP estimate).

Now, to compute P(x1,x2,x3|C): for i.i.d. observations this can be written as P(x1,x2,x3|C) = P(x1|C)*P(x2|C)*P(x3|C), and each factor is easy to compute given a parametric model.
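
For concreteness, here is a minimal Matlab sketch of that computation (the numbers, variable names, and number of classes are hypothetical; it assumes the class priors P(C) and the per-feature likelihoods P(xi|C) are already available):

```
% Naive Bayes MAP classification from class priors and per-feature
% likelihoods. All values below are hypothetical placeholders.
prior  = [0.5 0.3 0.2];      % P(C = j) for k = 3 classes
lik_x1 = [0.6 0.3 0.1];      % P(x1 | C = j)
lik_x2 = [0.2 0.5 0.3];      % P(x2 | C = j)
lik_x3 = [0.7 0.2 0.1];      % P(x3 | C = j)

% P(C|x1,x2,x3) ~ P(C) * P(x1|C) * P(x2|C) * P(x3|C)
unnorm = prior .* lik_x1 .* lik_x2 .* lik_x3;
posterior = unnorm / sum(unnorm);   % normalize over the k classes

[~, mapClass] = max(posterior);     % MAP prediction
```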

Amro
No, it seems this is not what I'm doing. `C_i` denotes categories, while `X_i` denotes samples. So my question is how to classify different samples.
given how little detail you're sharing, no wonder both @aduric and I misunderstood the question!
Amro
Sorry man, now that you understand what I mean, do you have a solution? Can I just use **P(c_j | x_l) * P(c_j | x_m) * P(c_j | x_n)** to approximate **P(c_j | x_l, x_m, x_n)**?
Seems BNT can also be used to do this job by setting `sizeNodes` to `[2 2 2 2]`?
No, that was a different thing. See my edit above.
Amro
It seems to me `independent` is enough; why do you require it to be `identically distributed` too?
This doesn't look like an answer to the question. He didn't ask about Bayes Nets, nor do I think he needs them.
ziggystar
there was some confusion at first ;)
Amro
A: 

What you want to do is not possible without further information or simplifying assumptions.

The conditional probability P(A|B,C) is not determined (not even approximately) by P(A|B) and P(A|C) alone.
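
One common simplifying assumption is that x_l, x_m, x_n are conditionally independent given the class (the naive Bayes assumption from the answer above). Under that assumption, Bayes' rule gives P(c_j | x_l, x_m, x_n) ~ P(c_j|x_l) * P(c_j|x_m) * P(c_j|x_n) / P(c_j)^2, so the known posteriors can be combined whenever the class priors P(c_j) are also known. A minimal Matlab sketch with hypothetical numbers:

```
% Combining known per-sample posteriors under a conditional-independence
% (naive Bayes) assumption. All values below are hypothetical placeholders.
prior  = [0.5 0.3 0.2];    % P(c_j), assumed known
post_l = [0.6 0.3 0.1];    % P(c_j | x_l), given
post_m = [0.2 0.5 0.3];    % P(c_j | x_m), given
post_n = [0.7 0.2 0.1];    % P(c_j | x_n), given

% P(c_j | x_l, x_m, x_n) ~ P(c_j|x_l) * P(c_j|x_m) * P(c_j|x_n) / P(c_j)^2
unnorm = post_l .* post_m .* post_n ./ prior.^2;
posterior = unnorm / sum(unnorm);    % normalize over the k classes

[~, mapClass] = max(posterior);      % most likely class
```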

ziggystar