PROBABILISTIC INFERENCE

A GENERAL METHOD 
The process of computing the posterior distribution of variables given evidence is called probabilistic inference.
Let V be a set of propositional variables A1, A2, ..., Ak, and suppose we are given evidence that the variables in a subset ε of V have definite values ε = e (each true or false).
Probabilistic inference is the process of calculating the conditional probability P(Ai = ai | ε = e), that is, the probability that some variable Ai has the value ai given the evidence e.
Since Ai may be true or false, two such conditional probabilities exist, one for Ai = True and one for Ai = False. If we calculate either of them, the other follows from the rule P(Ai = False | ε = e) + P(Ai = True | ε = e) = 1.
Let us use the 'brute-force' method to calculate the conditional probability of Ai = True given the evidence e:
        P(A_i = \text{True} \mid \varepsilon = e) = \frac{P(A_i = \text{True},\ \varepsilon = e)}{P(\varepsilon = e)}

The numerator can be obtained by marginalization, the rule which specifies that lower-order joint probabilities are computed by summing over higher-order ones:
        P(A_i = \text{True},\ \varepsilon = e) = \sum_{A_i = \text{True},\ \varepsilon = e} P(A_1, \ldots, A_k)
where Ai, i = 1, 2, ..., k, are the propositional variables and the sum runs over all joint assignments consistent with Ai = True and ε = e.
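To make the brute-force method concrete, here is a minimal Python sketch. It assumes the joint distribution is stored as a dictionary mapping tuples of truth values (one per variable, in a fixed order) to probabilities; the function name query and its parameters are illustrative choices, not part of the original text.

        def query(joint, var, evidence):
            # joint    : dict mapping a tuple of truth values (one per
            #            variable, in a fixed order) to its joint probability
            # var      : index of the query variable Ai in that order
            # evidence : dict mapping variable indices to observed values
            # Returns P(var = True | evidence) by brute-force marginalization.
            def consistent(world):
                return all(world[i] == v for i, v in evidence.items())
            # P(Ai = True, evidence): sum entries consistent with both.
            numerator = sum(p for world, p in joint.items()
                            if world[var] and consistent(world))
            # P(evidence): sum entries consistent with the evidence alone.
            denominator = sum(p for world, p in joint.items()
                              if consistent(world))
            return numerator / denominator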
Example: Suppose we are given the joint probabilities
                P(A, B, C) = 0.3
                P(A, B, ¬C) = 0.1
                P(A, ¬B, ¬C) = 0.1
                P(A, ¬B, C) = 0.2
                P(¬A, B, C) = 0.1
                P(¬A, B, ¬C) = 0.16
                P(¬A, ¬B, C) = 0.04
                P(¬A, ¬B, ¬C) = 0.0
Suppose ¬B is given as evidence and we want to calculate P(A | ¬B). We can proceed as follows:
        P(A \mid \neg B) = \frac{P(A, \neg B)}{P(\neg B)} = \frac{P(A, \neg B, C) + P(A, \neg B, \neg C)}{P(\neg B)} = \frac{0.2 + 0.1}{P(\neg B)} = \frac{0.3}{P(\neg B)}
As P(¬B) is not known, we can obtain it either from the rule P(¬A | ¬B) + P(A | ¬B) = 1 or by computing the marginal P(¬B) directly. Taking the first method, we must calculate P(¬A | ¬B) first:
        P(\neg A \mid \neg B) = \frac{P(\neg A, \neg B)}{P(\neg B)} = \frac{P(\neg A, \neg B, C) + P(\neg A, \neg B, \neg C)}{P(\neg B)} = \frac{0.04 + 0.0}{P(\neg B)} = \frac{0.04}{P(\neg B)}
Now, since P(A | ¬B) and P(¬A | ¬B) must sum to 1, we have

        \frac{0.3}{P(\neg B)} + \frac{0.04}{P(\neg B)} = 1

        \frac{0.34}{P(\neg B)} = 1

        P(\neg B) = 0.34
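The second method mentioned above gives the same value directly: P(¬B) = P(A, ¬B, C) + P(A, ¬B, ¬C) + P(¬A, ¬B, C) + P(¬A, ¬B, ¬C) = 0.2 + 0.1 + 0.04 + 0.0 = 0.34.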

From this we can calculate P(A | ¬B):
        P(A \mid \neg B) = \frac{0.3}{P(\neg B)} = \frac{0.3}{0.34} \approx 0.88
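For reference, this hand computation can be checked against the query sketch given earlier, assuming the variable order (A, B, C) for the tuple keys:

        # The joint table from the example above, keyed by (A, B, C).
        joint = {
            (True,  True,  True):  0.30,
            (True,  True,  False): 0.10,
            (True,  False, False): 0.10,
            (True,  False, True):  0.20,
            (False, True,  True):  0.10,
            (False, True,  False): 0.16,
            (False, False, True):  0.04,
            (False, False, False): 0.00,
        }
        # P(A = True | B = False): prints approximately 0.8824
        print(query(joint, var=0, evidence={1: False}))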
In this case, with only three variables, the probabilistic inference was easy to carry out. But if there are k variables, 2^k joint probabilities have to be provided, so this kind of evaluation is not practical for larger problems.
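For instance, a domain with just 20 propositional variables would already require a table of 2^20 = 1,048,576 joint probabilities.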
Conditional independencies are therefore used to formulate our knowledge about the domain efficiently and to simplify the computation of the conditional probabilities of variables when evidence about others is given.