I know this is only partly a programming question, but at the moment I'm doing a little bit of logic programming. One thing I still don't properly understand is unification in first-order logic.

I read the Wikipedia article, and it is more or less clear that the purpose is to find a substitution that unifies two sentences... There are also examples in the article, but I just don't see why this should be useful. Can anyone give an example with real-world objects instead of A, B, C, etc.? I hope this will help me understand. Thanks

+1  A: 

Logic programming is, AFAIK, pretty much all unification. You provide a statement to the interpreter, and the interpreter tries to unify it with something it knows to be "true", i.e. something that is in its database.

e.g.

cat(tom).

Asserts the fact that tom is a cat. (Writing cat(tom). is shorthand for cat(tom) :- true.)

Then you can query

?- cat(X).

and Prolog will return

 X = tom

Prolog looks in its database and tries to unify your provided statement (cat(X)) with a fact it already "knows". In this case, it finds cat(tom) and can therefore tell you that X = tom.
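
To see how unification drives the whole search, here is a slightly larger sketch; the extra fact cat(felix), the owner/2 facts, and the rule has_pet_cat/1 are hypothetical additions, just for illustration:

 cat(tom).
 cat(felix).
 owner(john, tom).

 % X has a pet cat if X owns C and cat(C) unifies with a known fact.
 has_pet_cat(X) :- owner(X, C), cat(C).

Now ?- cat(X). unifies with two facts, so Prolog offers both bindings in turn, and compound queries are answered by chaining unifications:

 ?- cat(X).
 X = tom ;
 X = felix.

 ?- has_pet_cat(john).
 true.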

TJ Ellis
cite my sources: prolog example comes from wikipedia's prolog page.
TJ Ellis
+1  A: 

If you are looking for real-world examples where unification is used and useful, take a look at unification-based grammars, which are used in computational linguistics, for example HPSG and LFG. On the surface this looks like another flavour of unification, but it is really the same operation.

A unification-based grammar can be thought of as a CFG whose productions are extended with unification. Every term in the CFG gets an AVM (attribute-value matrix), which is a directed acyclic graph of features and values. The idea is somewhat akin to the attribute grammars used in compilers.

Imagine this toy grammar:

 S -> NP VP  
 NP -> Kim  
 NP -> The cats  
 VP -> V NP  
 V -> see  
 V -> sees

This grammar overgenerates slightly, because it ignores agreement:

*The cats sees Kim, parsed as [S [NP The cats] [VP [V sees] [NP Kim]]] (the asterisk marks the sentence as ungrammatical)

In order to fix this we could refine the CFG to include the notion of agreement:

 S -> NP_sg VP_sg  
 S -> NP_pl VP_pl  
 NP_sg -> Kim  
 NP_pl -> The cats  
 VP_sg -> V_sg NP_sg  
 VP_sg -> V_sg NP_pl  
 V_sg -> sees  
 V_pl -> see  
 VP_pl -> V_pl NP_pl  
 VP_pl -> V_pl NP_sg

Here we reject the overgeneration from before, but this leads to combinatorial explosion very quickly. We could instead augment each term with an AVM and unify these together as we parse:

 S -> NP VP , where the AVM of S is the AVM of the NP unified with the AVM of the VP.  
 NP -> Kim / [ AGR sg ]  (we mark Kim as being singular)  
 NP -> The cats / [ AGR pl ]  
 VP [ AGR #1 ] -> V [ AGR #1 ] NP 

The #1 notation marks reentrancies, which means that the values of these features must be the same; in fact they will point to the same node in the graph after unification, if it succeeds. In practice this rule says that the agreement feature of a verb phrase is the same as the agreement of the verb inside the phrase.

 V -> see / [ AGR pl ]  
 V -> sees / [ AGR sg ]

With our augmented toy grammar, "Kim see the cats" is rejected because the NP and the VP will not unify, as they have different values for their AGR feature. As we parse we unify the AVMs together, and therefore gain a great deal of expressiveness, making it easy for grammar engineers to write grammars. Typically a broad-coverage UBG has on the order of a hundred rules, while the equivalent CFG, which may not even exist (CFGs with unification are Turing complete), would need thousands of rules or more.
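
Since the question came up in a logic-programming context, it may help to see that Prolog's DCG notation expresses exactly this idea: the AGR feature becomes a shared logic variable, and agreement falls out of unification. A minimal sketch of the toy grammar above (the encoding and predicate names are my own, not standard HPSG/LFG):

 % Agreement is a shared variable: a reentrancy in AVM terms.
 s --> np(Agr), vp(Agr).      % subject NP and VP must unify on Agr
 vp(Agr) --> v(Agr), np(_).   % the object NP's agreement is unconstrained
 np(sg) --> [kim].
 np(pl) --> [the, cats].
 v(sg)  --> [sees].
 v(pl)  --> [see].

Parsing then accepts and rejects exactly as the AVM grammar does:

 ?- phrase(s, [kim, sees, the, cats]).
 true.

 ?- phrase(s, [kim, see, the, cats]).
 false.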

For more details see HPSG and LFG.

johanbev
A: 

Thanks to you both for these detailed answers; now I really get it. However, I would like to write down here an example I found in the book Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig, in case somebody has the same question again. I think this answer takes a very practical approach:

Lifted inference rules require finding substitutions that make different logical expressions look identical. This process is called unification and is a key component of all first-order inference algorithms. The UNIFY algorithm takes two sentences and returns a unifier for them if one exists.

Let us look at some examples of how UNIFY should behave. Suppose we have a query Knows(John, x): whom does John know? Some answers to this query can be found by finding all sentences in the knowledge base that unify with Knows(John, x). Here are the results of unification with four different sentences that might be in the knowledge base:

UNIFY(Knows(John, x), Knows(John, Jane)) = {x/Jane}
UNIFY(Knows(John, x), Knows(y, Bill)) = {x/Bill, y/John}
UNIFY(Knows(John, x), Knows(y, Mother(y))) = {y/John, x/Mother(John)}
UNIFY(Knows(John, x), Knows(x, Elizabeth)) = fail

The last unification fails because x cannot take on the values John and Elizabeth at the same time.
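
You can replay these examples directly in Prolog, whose =/2 operator performs exactly this unification (constants lower-cased and variables upper-cased, following Prolog convention):

 ?- knows(john, X) = knows(john, jane).
 X = jane.

 ?- knows(john, X) = knows(Y, bill).
 X = bill,
 Y = john.

 ?- knows(john, X) = knows(Y, mother(Y)).
 X = mother(john),
 Y = john.

 ?- knows(john, X) = knows(X, elizabeth).
 false.

As the book notes, the clash in the last case can be avoided by renaming the variables in one of the sentences ("standardizing apart").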

Roflcoptr