I'm trying to use the Kowalski graph algorithm for resolution theorem proving. The description of the algorithm at http://www.doc.ic.ac.uk/~rak/ is silent on what to do about the large number of duplicate clauses it generates. Is there a well-known technique for dealing with them?

In particular, you can't simply suppress the generation of duplicate clauses, because the links that come with them are relevant.

It seems to me that it's probably necessary to track the set of all clauses generated so far, and when a duplicate is generated, add the new links to the existing instance instead of keeping a second copy. That record probably needs to be kept even when a clause is nominally deleted, in case the clause is later regenerated.
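Roughly the kind of bookkeeping I have in mind, as a Python sketch (Clause, links and register are made-up names, not taken from any existing implementation):

    class Clause:
        def __init__(self, literals):
            self.literals = literals   # tuple of literal objects
            self.links = set()         # resolution links attached to this clause

    class ClauseRegistry:
        """Remembers every clause ever generated, even nominally deleted ones."""
        def __init__(self):
            self._by_key = {}          # canonical key -> the one surviving instance

        def register(self, clause, key):
            existing = self._by_key.get(key)
            if existing is None:
                self._by_key[key] = clause
                return clause
            # Duplicate: merge the new links into the existing instance
            # instead of keeping a second copy of the clause.
            existing.links |= clause.links
            return existing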

Duplication probably needs to be defined in terms of textual representation, rather than object equality, because literals of different clauses are distinct objects even when they are identical.
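For the textual comparison, I was picturing something along these lines (again only a sketch; str(lit) stands in for however literals end up being printed):

    def canonical_key(clause):
        # Order-insensitive textual form: two clauses containing the same
        # literals get the same key even though their literal objects differ.
        return tuple(sorted(str(lit) for lit in clause.literals))

    # Used with the registry above:
    #   clause = registry.register(clause, canonical_key(clause))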

Can anyone confirm whether I'm on the right track here? Also, the only significant online reference I could find to this algorithm was the link above; does anyone know of any others, or of any existing code implementing it?

A: 

This mostly looks perfectly plausible; some Googling doesn't turn up any obvious implementations either. I agree that you want to compare the representations for equality rather than relying on object identity.

Charlie Martin
A: 

It turns out the Kowalski algorithm just isn't as useful as I thought it might be. Basically you need to keep everything you generate, so as not to spend practically all your CPU time generating the same clauses over and over again. Keeping everything means you want to spot duplicates, which means you want to hash everything. That has the useful side effect that identity can be checked by simple pointer comparison, since you only ever hold one copy of each expression.
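In case it helps anyone else, the hashing scheme amounts to hash-consing (interning). A minimal Python sketch of the idea, with made-up names rather than my actual code:

    class Expr:
        def __init__(self, op, args):
            self.op = op
            self.args = args   # tuple of already-interned sub-expressions

    _interned = {}

    def make(op, *args):
        # Sub-expressions are already unique objects, so (op, args) identifies
        # the expression; the default identity hash/equality of Expr suffices.
        key = (op, args)
        expr = _interned.get(key)
        if expr is None:
            expr = _interned[key] = Expr(op, args)
        return expr

    # Each expression exists exactly once, so equality is pointer comparison:
    assert make("P", make("x")) is make("P", make("x"))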

rwallace