Hi, you budding logicians! Here is a bit of plain old English for you to ponder:

Premises:
   The customs officials searched everyone who entered this country who was
   not a Very Important Person (VIP). Some of the drug pushers entered this
   country and they were only searched by drug pushers. No drug pusher was
   a VIP.

Can we conclude: Some of the officials were drug pushers? Do the premises
entail the statement that some of the officials were drug pushers?

Well, let's see... (I SUGGEST YOU PRINT THIS OFFLINE AND SIT DOWN AND STUDY
IT IN DETAIL. IT'S RATHER LENGTHY, BUT HOPEFULLY FUN.)

Ok, let's answer this question logically. Note that we seek to prove that
"some of the officials were drug pushers" is a logical consequence of the
premises. We thus need to convert all these statements to FOPC and prove a
theorem! Let me help you get started by first defining some predicates that
you can use to convert this ambiguous English into precise and crisp logical
formulae. Let

   E(x)   mean "x entered the country"
   V(x)   mean "x was a VIP"
   S(x,y) mean "y searched x"
   C(x)   mean "x was a customs official"
   P(x)   mean "x was a drug pusher"

Now, look at each sentence carefully and construct its precise logical
formula. The very first sentence is the tricky one:

1) The customs officials searched everyone who entered this country who was
   not a Very Important Person (VIP).

This translates to

1) (forall x)( ( E(x) ^ ~V(x) ) --> (exists y) ( S(x,y) ^ C(y) ) )

Notice the scope of x. (Sorry about the symbols. I have no universal,
existential and implication symbols on my xterm.) The logical expression
says: for everyone in the universe (x), if they entered the country and they
are NOT a VIP, then there is some thing in the universe (y) that searched the
entity that was entering the country, and that thing is a customs official.
Believe me, it is the correct logical statement derived from the English.
See how badly English conveys meaning! And how elegant and clean the logical
expression is!

Now, let's look at the second English sentence:

2) Some of the drug pushers entered this country and they were only searched
   by drug pushers.

This really means:

   (exists x) ( P(x) ^ E(x) ^ (forall y) ( S(x,y) --> P(y) ) )

This means that there is some thing in the universe (x) such that the thing
is a drug pusher, and it entered the country, and for all the entities (y)
that searched the thing (x), the thing that did the searching (y) is a drug
pusher! Simple, huh?

Now look at the last premise in English.

3) No drug pusher was a VIP.

What this really means is:

   (forall x) ( P(x) --> ~V(x) )

That is, for all things that are drug pushers, they are NOT VIPs.

We seek to conclude:

G) Some of the officials were drug pushers?

   (exists x) ( P(x) ^ C(x) )

We really mean that there is some thing that is a drug pusher and a customs
official. Is this true? Can we conclude G from 1) ^ 2) ^ 3) ??????????????????

Well, let's do it mechanically and forget about all those bad feelings about
drugs, and customs officials, and all that stuff. We seek to prove that the
formula

   S = ( (premise-1 ^ premise-2 ^ premise-3) --> G )

is valid, i.e. true under all interpretations. Alternatively, we seek to
prove that ~S is inconsistent, i.e. false under all interpretations. Recall,

   ~S = ~( (premise-1 ^ premise-2 ^ premise-3) --> G )
      = ~( ~(premise-1 ^ premise-2 ^ premise-3) -or- G )
      = ( (premise-1 ^ premise-2 ^ premise-3) ^ ~G )

Ok, take this last form of the statement, and rewrite it to clause form.
Luckily, we can work on each premise individually and convert each of them
to skolem normal form.
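Before we do, here for quick reference are the standard equivalences (in the
same ASCII notation) that the conversion below leans on again and again:

   (A --> B)       =  (~A -or- B)                      eliminate -->
   ~(A ^ B)        =  (~A -or- ~B)                     De Morgan
   ~(A -or- B)     =  (~A ^ ~B)                        De Morgan
   ~(forall x) A   =  (exists x) ~A
   ~(exists x) A   =  (forall x) ~A
   A -or- (B ^ C)  =  (A -or- B) ^ (A -or- C)          distribute -or- over ^

and, for skolemization: an (exists y) lying inside the scope of a (forall x)
is eliminated by replacing y with f(x), where f is a brand new function
symbol; an (exists x) inside no universal scope is replaced by a brand new
constant.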
Let's start with the first premise.

premise-1:
   (forall x)( ( E(x) ^ ~V(x) ) --> (exists y) ( S(x,y) ^ C(y) ) )
 = (forall x)( ( ~E(x) -or- V(x) ) -or- (exists y) ( S(x,y) ^ C(y) ) )
 = (forall x)(exists y)( ( ~E(x) -or- V(x) ) -or- ( S(x,y) ^ C(y) ) )
 = (forall x)(exists y)( (~E(x) -or- V(x) -or- S(x,y)) ^
                         (~E(x) -or- V(x) -or- C(y)) )
 = (forall x)( (~E(x) -or- V(x) -or- S(x,f(x))) ^
               (~E(x) -or- V(x) -or- C(f(x))) )

There we are! Note we introduced the "skolem function" f(x), where f is a
completely NEW function symbol not appearing anywhere in our formula S, and
y was within the scope of the universal variable x.

Now, for the second premise.

premise-2:
   (exists x) ( P(x) ^ E(x) ^ (forall y) ( S(x,y) --> P(y) ) )
 = (exists x) ( P(x) ^ E(x) ^ (forall y) ( ~S(x,y) -or- P(y) ) )
 = (exists x)(forall y) ( P(x) ^ E(x) ^ ( ~S(x,y) -or- P(y) ) )
 = (forall y) ( P(c) ^ E(c) ^ ( ~S(c,y) -or- P(y) ) )

That's it. We introduced the skolem constant "c".

Now, for the third premise:

premise-3:
   (forall x) ( P(x) --> ~V(x) )
 = (forall x) ( ~P(x) -or- ~V(x) )

Now, we are about to put all the skolem NF expressions derived from the
premises together again. But we have to make sure the universal variables do
not conflict. I'll rewrite them here but with new fresh non-conflicting
variables:

premise-1-SNF: (forall x) ( (~E(x) -or- V(x) -or- S(x,f(x))) ^
                            (~E(x) -or- V(x) -or- C(f(x))) )
premise-2-SNF: (forall y) ( P(c) ^ E(c) ^ ( ~S(c,y) -or- P(y) ) )
premise-3-SNF: (forall z) ( ~P(z) -or- ~V(z) )

Now, we look at G.

G: (exists x) ( P(x) ^ C(x) )

Recall, we negated G to perform our proof by refutation:

~G = ~(exists x) ( P(x) ^ C(x) )
   = (forall x) ~( P(x) ^ C(x) )
   = (forall x) ( ~P(x) -or- ~C(x) )

Renaming its variable, we finally get:

   = (forall w) ( ~P(w) -or- ~C(w) )

Ok, now let's put it all together back as ~S in skolem NF:

(forall x)(forall y)(forall z)(forall w)
  [ (~E(x) -or- V(x) -or- S(x,f(x))) ^
    (~E(x) -or- V(x) -or- C(f(x)))   ^
    (P(c))                           ^
    (E(c))                           ^
    (~S(c,y) -or- P(y))              ^
    (~P(z) -or- ~V(z))               ^
    (~P(w) -or- ~C(w)) ]

(Folks, this can be done by a machine. But let's keep going.)

Now, we can presume to know that all the variables are universally
quantified, and that we have CNF, so we can erase the quantifiers and the
conjunction and disjunction symbols, getting a final derived SET OF CLAUSES
(numbered for convenience, showing which premise derived the corresponding
clause):

{ 1a: (~E(x)  V(x)  S(x,f(x)))
  1b: (~E(x)  V(x)  C(f(x)))
  2a: ( P(c))
  2b: ( E(c))
  2c: (~S(c,y)  P(y))
  3:  (~P(z)  ~V(z))
  4:  (~P(w)  ~C(w)) }

For the PROOF that S was a theorem (i.e. the proof that ~S is inconsistent),
we simply now have to show that this set of clauses derives a falsehood.
Recall RESOLUTION. This means we resolve these clauses until we derive an
empty clause. Here is the proof:

  1b + 4  =  5: (~E(x)  V(x)  ~P(f(x)))
  5  + 2c =  6: (~S(c,f(x))  ~E(x)  V(x))
  6  + 1a =  7: (~E(c)  V(c))
  7  + 2b =  8: (V(c))
  8  + 3  =  9: (~P(c))
  9  + 2a = 10: ()

QED.

Yep. We can conclude "Some of the officials were drug pushers"!!!! Without a
shadow of a doubt, given the premises.

There are a couple of other items to consider here. First, note that
resolving required that we convert the clauses to a form that allowed us to
annihilate complementary literals. The second is: how exactly did I know
which clauses to resolve anyway? Hmmmmmm.

sal
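Looking ahead to the implementation notes and LISP code that follow: there, a
clause set like this one is written as a list of lists, with logic variables
as ?-prefixed symbols and negative literals led by the symbol ~. In that
notation the clause set above would look roughly like this (it is exactly
what the test0 example near the end of this handout feeds to ProveDriver):

   ( ((~ E ?x) (V ?x) (S ?x (f ?x)))      ; 1a
     ((~ E ?u) (V ?u) (C (f ?u)))         ; 1b
     ((P c))                              ; 2a
     ((E c))                              ; 2b
     ((~ S c ?y) (P ?y))                  ; 2c
     ((~ P ?z) (~ V ?z)) )                ; 3

with the clause derived from the negated goal,

     ((~ P ?w) (~ C ?w))                  ; 4

supplied separately so that the driver can conjoin it last.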
Notes on Implementing a Mechanical Theorem Prover
CSW4701 Artificial Intelligence

A Mechanical Theorem Prover is a computer program that proves theorems.
Recall that given a set of statements (taken as axioms) F1, F2, ..., Fn, and
another statement G, we seek to establish a proof that

   S = (F1 ^ F2 ^ ... ^ Fn --> G)

is VALID. This means that for all interpretations of S, S always evaluates
to True and hence G is a logical consequence of (or is deducible from) the
axioms. Proving that S is valid is equivalent to proving that ~S is
inconsistent, i.e. ~S is always false for all interpretations. Proving that
~S is inconsistent is called "proof by refutation".

Now, how might we go about mechanically proving that G is a logical
consequence of the Fi's? As we have seen in class, the method of choice is
Resolution. Resolution is a "complete" rule of inference that one applies to
the Clause representation of ~S in order to prove that ~S is inconsistent.

One may choose to build a mechanical theorem prover that accepts arbitrary
statements in FOL and then proceeds to convert the statements to clause
form, and then invoke a resolution theorem prover subsystem. For our
purposes, we assume that users of our mechanical theorem prover will convert
the statements first and input clauses to the resolution theorem prover.
Thus, we assume:

- Our user defines their own set of predicate, variable, constant and
  function symbols and proceeds to write down each of the statements Fi
  and G.

- Then, the user writes down the formula ~S in the following way:

     ~S = (F1 ^ F2 ^ ... ^ Fn ^ ~G)

  (Note how we simply conjoin ~G to the list of axioms.)

- Then, the user converts this formula to Clause form producing a set of
  Clauses, CL. Each clause is of the form:

     ( literal1 literal2 ... literalm )

  where literali = (Predicate term1 ... termj)
                or (not Predicate term1 ... termj)

  where Predicate = a predicate symbol (a LISP atom declared as a predicate)
        termk     = a constant symbol (a LISP atom declared as a constant)
                 or a logic variable (a LISP atom declared as a variable)
                 or (function term1 ... termj)

  where function = a function symbol (a LISP atom declared as a function)

Now we invoke our resolution theorem prover on the LISP list of clauses, CL.
The algorithm to execute in order to prove the theorem (i.e. refute ~S) is
as follows:

1. Choose two clauses C1 and C2 from CL such that there is a literal L1
   contained in C1 that is complementary to and unifiable with literal L2
   contained in C2:

   a. Consider the first clause in CL, call it C1.

   b. For each clause remaining in CL (appearing later in the list), call
      it C2:

      i.   Scan down C1, consider its first literal, call it L1.

      ii.  Scan down C2, and determine whether there is a literal that is
           both complementary to and unifiable with L1. If so, call this
           literal L2. IMPORTANT: while determining that L1 and L2 are
           complementary and unifiable, REMEMBER the substitution list that
           makes them unifiable. Call this substitution list SUBST.

           {To determine that literals are complementary, one of the
           literals will start with the symbol "not" followed by a predicate
           symbol. The other will simply start with the same predicate
           symbol but not have a leading "not" symbol. To determine that
           literals are unifiable, call the unification algorithm, making
           sure that the "not" symbol is stripped out from the negative
           literal.}

      iii. If you don't find complementary and unifiable literals, advance
           C2 to the next clause in CL.

   c. If CL is exhausted, and you have not found another clause with a
      complementary and unifiable literal with C1, then advance C1 to the
      next clause in CL, and continue the search for C2.
2. Now, if we have not found C1 and C2, then the set of clauses cannot lead
   to a proof, so we leave the theorem prover with "failure".

3. Otherwise, we proceed to resolve:

4. RESOLVE (a short sketch of this step in LISP appears right after these
   notes):

   a. DELETE (or remove) L1 from C1 and DELETE L2 from C2. IMPORTANT, DO NOT
      DESTROY C1 and C2 in any way. They must remain on the list of clauses,
      UNCHANGED. Thus, apply this step to COPIES of C1 and C2.

   b. APPLY the substitutions appearing in SUBST to ALL the literals in BOTH
      C1 and C2. REPEAT, DO NOT DESTROY C1 and C2 in any way. They must
      remain on the list of clauses, UNCHANGED. {LISP's SUBLIS function
      should prove very handy here.}

   c. Now, APPEND the new versions of C1 and C2 together, producing a new
      Clause, call it RESOLVENT.

   d. If RESOLVENT is Null, then exit the theorem prover WITH A SUCCESSFUL
      PROOF. You're done!

   e. Otherwise, PUT RESOLVENT on CL {somewhere convenient and smart!} and
      repeat the steps above!

It would be convenient for your theorem prover to not only tell your user
that the theorem has been proven, but also WHAT SUBSTITUTIONS or WHAT
DEDUCTIONS actually led to the proof. {Imagine handing in your math homework
with each theorem proved by pronouncement but without the details of the
proof! What would your math professor say?} Thus, it would be wise to
somehow record the steps taken to prove each theorem, much in the same
manner as recording the steps taken in finding a solution path of a
state-space search. To do this, you may consider defining a LISP data
structure, called node, of the following form:

   node = ( resolvent-clause C1 C2 SUBST )

In the case of axioms, or input clauses:

   node = ( input-clause nil nil nil )

When printing out the trace of the proof, trace the nodes!

Now, for the interesting part. Note that CL continues to grow, larger and
larger. No previous clause ever produced is thrown away. This is because FOL
is MONOTONIC. Anything proven or deduced from a set of statements continues
to persist. What we are trying to do, however, is find a series of
resolution deductions producing a series of clauses that QUICKLY produces
the empty clause. This means we have to be intelligent about what choices of
clauses we make. In the algorithm above, we simply scan down CL linearly.
Perhaps you can devise a smarter way of finding clauses C1 and C2 that will
be more likely to quickly resolve to the empty clause. What "heuristics" can
you think of? Here are some hints:

- When the empty clause is generated, what do C1 and C2 look like? Thus,
  choose clauses to resolve that "look like" C1 and C2.

- We assume that our axioms, Fi, i=1,...,n, are consistent. Thus, clauses
  emanating from the axioms can never lead to the empty clause. The clauses
  emanating from G, however, introduce the inconsistency into our set of
  clauses. Thus, you may want to restrict C1 to be a clause chosen from some
  appropriate subset of CL, and likewise, restrict C2 to a different subset
  of clauses on CL!

- Let's assume that we choose clause C1 and it contains j literals. Now,
  suppose we find two clauses that have complementary and unifiable literals
  with C1. These two clauses are candidates for C2. Say the first clause has
  k literals, while the second clause has m literals. If k > m (i.e. the
  first candidate has more literals than the second candidate), which of the
  two do you think might have a higher chance of leading to the empty clause
  more quickly? Which of the two candidates would you choose for C2?

Enjoy!
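As promised above, here is a minimal Common Lisp sketch of step 4 (RESOLVE),
assuming the list representation of clauses and literals described earlier,
and assuming L1 and L2 are the actual literal objects located in step 1. The
name resolve-step and its argument order are only illustrative; your own
prover may organize this differently.

(defun resolve-step (c1 c2 l1 l2 subst)
  ;; Step 4a: work on copies so that C1 and C2 stay on CL unchanged,
  ;; and drop the two complementary literals.
  (let ((rest1 (remove l1 (copy-tree c1) :test #'equal :count 1))
        (rest2 (remove l2 (copy-tree c2) :test #'equal :count 1)))
    ;; Step 4b: apply SUBST to every remaining literal (SUBLIS does the work).
    ;; Step 4c: append the two results to form the RESOLVENT.
    (append (sublis subst rest1) (sublis subst rest2))))

If the returned list is NIL, you have derived the empty clause (step 4d);
otherwise add it to CL (step 4e).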
Helpful Common Lisp Code for a First Order Logic Theorem Prover
Computer Science CSW4701

;For simplicity, we assume that literals are represented as
;follows:
;  A positive literal is of the form (P t1 t2 ... tn) where ti is
;  a first order term. Similarly, a negative literal is of the
;  form (~ P t1 t2 ... tn). Note that the leading symbol is the
;  symbol '~.

;The following function tests whether two literals are complementary.

(Defun complementary-lits (L1 L2)
  (cond ((Or (atom L1) (atom L2)) nil)
        ((eql '~ (car L1)) (eql (cadr L1) (car L2)))
        ((eql '~ (car L2)) (eql (cadr L2) (car L1)))
        (t nil)))

;The next function determines whether a logic variable appears
;anywhere within some arbitrary first order term. The first
;argument is assumed to be the variable. Recall that terms
;are represented as (f t1 t2 ... tm) where each ti is itself a term.

(Defun occurs (v term)
  (cond ((null term) nil)                 ;an empty term cannot contain v
        ((atom term) (eql v term))
        ((atom (car term))
         (cond ((eql v (car term)) t)
               (t (occurs v (cdr term)))))
        (t (or (occurs v (car term))
               (occurs v (cdr term))))))

;The following function determines whether or not
;two literals are UNIFIABLE and returns the substitution list
;making them unified if so.
;This function assumes that the literals are complementary.

(Defun Unifiable (L1 L2)
  (cond ((Or (atom L1) (atom L2)) nil)
        ((eql '~ (car L1)) (Unify (cdr L1) L2))
        (t (Unify L1 (cdr L2)))))

;The LISP function sublis takes two arguments: a substitution list of
;the form ( (vbl . term) (vbl . term) ...) and a literal (or any
;s-expression). It applies each substitution by replacing each occurrence
;of the logical variable by the appropriate term specified in the
;binding list. Note that the substitution list is a list of
;"dotted pairs" so that we can utilize LISP's sublis function.

;The following function returns a substitution list if two
;literals are unifiable.

(Defun unify (L1 L2)
  ;recall that the leading ~ symbol has been stripped.
  ;Note that we follow the unification algorithm almost verbatim. This could
  ;be written to work a bit faster.
  (prog (Subst S i)
    (cond ((or (Atom L1) (Atom L2))               ;Step 1: If L1 or L2 are atoms
           (cond ((eql L1 L2) (return Nil))       ;Step 1.1: are they identical?
                 ((Variablep L1)                  ;L1 is a logic variable so
                  (cond ((Occurs L1 L2) (return 'F))   ;if L1 occurs in L2 we fail
                        (T (return (List (cons L1 L2)))))) ;otherwise we create a binding
                 ((Variablep L2)                  ;Step 1.3: ditto for L2
                  (cond ((Occurs L2 L1) (return 'F))
                        (T (return (List (cons L2 L1))))))
                 (T (return 'F))))                ;two different non-variable atoms: fail
          ((Not (eql (length L1) (length L2))) (return 'F)))  ;Step 2
    ;Ok, we handled the simple cases. Now for the complex case, Steps 3 and 4
    (setq Subst nil i -1)    ;Step 3. Note i is set to -1 so that it
                             ;iterates from 0 to (length L1)-1.
   Step4
    (setq i (+ 1 i))
    (cond ((= i (length L1)) (return Subst)))     ;Termination of loop
    (setq S (Unify (nth i L1) (nth i L2)))        ;Unify i'th terms
    (cond ((eql S 'F) (return 'F))                ;Determine what happened
          ((not (null S))
           (setf L1 (sublis S L1))                ;apply the new bindings to L1
           (setf L2 (sublis S L2))                ;and to L2
           ;If you wish, here you can apply the substitutions of S
           ;to Subst. But you need not do so.
           (setq Subst (append S Subst))))
    (go Step4)))

;Remember that you need to define your own implementation of
;variablep, i.e. your own representation of logic variables.
;The representation of clauses is rather simple.
;Have fun.
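To see how these pieces fit together, here is a small illustrative check
using literals from the drug pusher problem (clauses 2c and 1a), assuming
variablep recognizes symbols whose names begin with ?, as in the prover
below. This is not part of the required code, just something to try at the
listener:

(complementary-lits '(~ S c ?y) '(S ?x (f ?x)))   ;=> T
(Unifiable          '(~ S c ?y) '(S ?x (f ?x)))
;=> something like ((?y f c) (?x . c))
;   i.e. ?x is bound to c and ?y to the term (f c); remember that the
;   dotted pair (?y . (f c)) prints as (?y f c).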
;; TP.LISP : A FOL Theorem Prover.
;;
;; Implemented by Ya Yang for W4701, homework #3.
;; (and slightly modified by the TA)
;;
;;  - Mauricio
;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;This lisp program is a First Order Logic Theorem Prover.
;The input is in clause form and the output is either the
;steps of the proof or the claim that it can't be proved.
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;This function makes a copy of a list L into Lcp
;using recursion
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(Defun copy (L)
  (let ((Lcp Nil))
    (cond ((null L) (setf Lcp Nil))
          ((atom L) (setf Lcp L))
          (t (setf Lcp (cons (first L) (copy (rest L))))))))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;Define the structure of a node which records one step of resolving
;that leads to the proof
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(defstruct Node
  resolvent     ;the clause derived from resolution
  clause1
  clause2
  subst)        ;the substitution list used by the resolution

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;This function decides whether or not a clause is one of the
;resolvents in the record list
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(Defun inrecord (clause)
  (let ((thenode nil))
    (do ((i (- (length record) 1) (setf i (- i 1))))
        ((= i -1))
      (setf thenode (make-Node :resolvent (Node-resolvent (nth i record))
                               :clause1   (Node-clause1 (nth i record))
                               :clause2   (Node-clause2 (nth i record))
                               :subst     (Node-subst (nth i record))))
      (if (equal clause (Node-resolvent thenode))
          (return-from inrecord thenode)))
    nil))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;This function traces the record list and prints out every
;step taken in the proof
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(Defun trace-solution (record clause)
  (let ((thenode Nil))
    (cond ((setf thenode (inrecord clause))
           ;if the clause is a resolvent, recurse on its two parent clauses
           (trace-solution record (Node-clause1 thenode))
           (trace-solution record (Node-clause2 thenode))
           (setf step (+ step 1))
           (format T " ===step ~d ===: ~% ~s + ~s using ~s = ~s ~%"
                   step
                   (Node-clause1 thenode)
                   (Node-clause2 thenode)
                   (Node-subst thenode)
                   (Node-resolvent thenode)))
          ;if the clause is an input clause, go back
          (t 'back))))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;This function tests whether two literals are complementary,
;i.e. one positive and one negative literal with the same predicate.
;A positive literal is of the form (P t1 .. tn), and a negative
;literal is of the form (~ P t1 .. tn), where ti is a FO term.
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(Defun complementary-lits (L1 L2)
  (cond ((or (atom L1) (atom L2)) Nil)
        ((eql '~ (car L1)) (eql (cadr L1) (car L2)))
        ((eql '~ (car L2)) (eql (cadr L2) (car L1)))
        (t Nil)))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;This function determines whether a logic variable "v" appears
;anywhere within some arbitrary first order term "term".
;A term is of the form (f t1 .. tm) where ti is itself a FO term.
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(Defun occurs (v term)
  (cond ((null term) Nil)
        ((atom term) (eql v term))
        ((atom (car term))
         (cond ((eql v (car term)) t)
               (t (occurs v (cdr term)))))
        (t (or (occurs v (car term))
               (occurs v (cdr term))))))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;This function determines whether or not two literals are unifiable
;and returns the substitution list making them unified if so.
;This function assumes that the literals are complementary.
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(Defun Unifiable (L1 L2)
  (cond ((or (atom L1) (atom L2)) 'Fail)
        ((eql '~ (car L1)) (Unify (cdr L1) L2))
        (t (Unify L1 (cdr L2)))))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;This function returns a substitution list if two literals are
;unifiable, and returns 'Fail if they are not unifiable. Note that the
;substitution list is a list of "dotted pairs".
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(Defun Unify (L1 L2)
  (prog (Subst S i)
    (cond ((or (atom L1) (atom L2))               ; if L1 or L2 are atoms
           (cond ((eql L1 L2) (return Nil))       ; identical, substitution is nil
                 ((variablep L1)                  ; L1 is a logic variable
                  (cond ((Occurs L1 L2) (return 'Fail)) ; L1 occurs in L2, can't unify
                        (t (return (list (cons L1 L2)))))) ; otherwise, bind L1 to L2
                 ((variablep L2)                  ; ditto for L2
                  (cond ((Occurs L2 L1) (return 'Fail))
                        (t (return (list (cons L2 L1))))))
                 (t (return 'Fail))))             ; otherwise, not unifiable
          ((Not (eql (length L1) (length L2))) (return 'Fail)))
    (setq Subst Nil i -1)                         ; i iterates from 0 to (length L1)-1
   Step4
    (setq i (+ 1 i))
    (cond ((= i (length L1)) (return Subst)))     ; Termination of loop
    (setq S (Unify (nth i L1) (nth i L2)))        ; Unify the ith terms
    (cond ((eql S 'Fail) (return 'Fail))
          ((not (null S))
           (setf L1 (sublis S L1))                ; substitute into L1 using S
           (setf L2 (sublis S L2))                ; substitute into L2 using S
           (setq Subst (append S Subst))))        ; Add S to Subst
    (go Step4)))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;This function determines whether an atom is a variable.
;We define the form of a variable as ?x
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(defun variablep (x)
  (and (symbolp x)
       (eql (char (symbol-name x) 0) '#\?)))
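;A few quick sanity checks for this representation of variables, the occurs
;check and Unify (purely illustrative, not part of the prover):
;
;  (variablep '?x)              ;=> T    the name starts with ?
;  (variablep 'marcus)          ;=> NIL  an ordinary constant symbol
;  (occurs '?x '(f (g ?x) c))   ;=> T    ?x appears inside the term
;  (occurs '?y '(f (g ?x) c))   ;=> NIL
;  (Unify '(f ?x b) '(f a ?y))  ;=> ((?y . b) (?x . a))
;                               ;   the most recent binding appears first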
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;This function resolves two clauses C1 and C2 having complementary and
;unifiable literals at positions index1 and index2, with substitution
;list Subst. It returns the resolvent C.
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(Defun Resolve (C1 C2 index1 index2 Subst)
  (setf C nil)                                  ;initialize the resolvent
  (do ((i1 0 (setf i1 (+ 1 i1))))               ;apply the substitution to C1
      ((= i1 (length C1)))
    (if (/= i1 index1)                          ;if not the complementary literal
        (let ()
          (setf (nth i1 C1) (sublis Subst (nth i1 C1)))
          (setf C (cons (nth i1 C1) C)))))
  (do ((i2 0 (setf i2 (+ 1 i2))))               ;apply the substitution to C2
      ((= i2 (length C2)))
    (if (/= i2 index2)
        (let ()
          (setf (nth i2 C2) (sublis Subst (nth i2 C2)))
          (setf C (cons (nth i2 C2) C)))))
  C)                                            ;return the resolvent clause

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;This function checks whether or not there is a pair of literals
;in the two clauses that are complementary and unifiable. If not,
;it returns 'unsolved; otherwise it resolves the two clauses.
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(Defun Check (C1 C2)
  (let ((i1 0) (i2 0) (L1 Nil) (L2 Nil))
    (setf resolvent 'unsolved)
    (do ((i1 0 (setf i1 (+ 1 i1))))             ;check every literal of C1
        ((= i1 (length C1)))
      (if (Not (eq resolvent 'unsolved))        ;complementary and unifiable
          (return resolvent))                   ;literals found and resolved:
                                                ;get out of the loop
      (setf L1 (nth i1 C1))
      (do ((i2 0 (setf i2 (+ 1 i2))))           ;check every literal of C2
          ((= i2 (length C2)))
        (setf L2 (nth i2 C2))
        (if (and (complementary-lits L1 L2)     ;these two clauses are resolvable
                 (Not (eql (setf Subst (Unifiable L1 L2)) 'Fail)))
            (let ()
              (setf C1cp (copy C1))             ;to keep C1, C2 unchanged,
              (setf C2cp (copy C2))             ;resolve on their copies
              (setf resolvent (Resolve C1cp C2cp i1 i2 Subst))
              ;Record the resolution by making a new node
              (setf newnode (make-Node :resolvent resolvent
                                       :clause1 C1
                                       :clause2 C2
                                       :subst Subst))
              ;Add the new node to the record list
              (setf record (append record (list newnode)))
              (return resolvent)))))
    resolvent))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;This is the main procedure which proves the theorem by using
;resolution.
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(Defun Prove (Clauses)
  (let ((i1 0) (i2 0) (C1 Nil) (C2 Nil))
    ;choose the first clause from the end of the list
    (do ((i1 (- (length Clauses) 1) (setf i1 (- i1 1))))
        ((= i1 (- NumOfPre 1)))                 ;outer loop over Clauses for
                                                ;the choice of C1
      (setf C1 (nth i1 Clauses))                ;C1 is from the set of newly
                                                ;generated clauses (and the
                                                ;negated conclusion)
      ;choose the second clause from the beginning of the list
      (do ((i2 0 (setf i2 (+ 1 i2))))
          ((= i2 NumOfPre))                     ;inner loop over Clauses for
                                                ;the choice of C2
        (setf C2 (nth i2 Clauses))              ;C2 is from the set of axioms
        ;check the two clauses to see whether we can resolve them
        (setf result (Check C1 C2))
        ;if the newly derived resolvent is identical to one already in
        ;Clauses, we just go on to the next clause without adding it
        ;to Clauses and start all over again
        (if (and (Not (eq 'unsolved result))
                 (Not (member result Clauses :test 'equal)))
            ;get out of the function
            (return-from Prove result))))
    (return-from Prove 'Fail)))                 ;no clauses can resolve; can't prove
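;To tie this code back to the hand proof of the drug pusher example at the
;top of this handout: the first step there resolved clause 1b against clause
;4. The same step can be reproduced by calling Check directly, provided
;record is initialized first (Check logs every resolution into it); most
;Lisps will also warn about the other undeclared globals the code relies on.
;The expected result, modulo letter case and the order of the literals, is
;clause 5 of the hand proof:
;
;  (setf record nil)
;  (Check '((~ P ?w) (~ C ?w))              ; clause 4, from the negated goal
;         '((~ E ?x) (V ?x) (C (f ?x))))    ; clause 1b
;  ;=> ((V ?x) (~ E ?x) (~ P (f ?x)))       ; clause 5: ~E(x) v V(x) v ~P(f(x))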
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;This function drives the whole theorem prover.
;premise is the set of input clauses (axioms);
;negconclusion is the negated conclusion.
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(Defun ProveDriver (premise negconclusion)
  ;sort the premises so that they are in increasing order of their lengths
  (setf premise (sort premise
                      #'(lambda (cl1 cl2) (< (length cl1) (length cl2)))))
  ;append the negated conclusion clause to the premises
  (setf CL (append premise (list negconclusion)))
  (setf done 0)
  (setf NumOfPre (- (length CL) 1))   ;remember the number of axioms input
  (setf record Nil)                   ;initialize the record list
  (loop
    (setf done (Prove CL))
    (cond ((eq done 'Fail)            ;terminate with failure to prove
           (return "The theorem can't be proved"))
          ((eq done Nil)              ;terminate with the proof
           (print "I found the proof")
           (setf step 0)
           (trace-solution record Nil)
           (return))
          (t                          ;a new resolvent was derived:
           (setf CL (append CL (list done)))))))  ;add it to CL and keep going

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;Four test examples
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(Defun test0 ()
  (ProveDriver '(((~ E ?x) (V ?x) (S ?x (f ?x)))
                 ((~ E ?u) (V ?u) (C (f ?u)))
                 ((P c))
                 ((E c))
                 ((~ S c ?y) (P ?y))
                 ((~ P ?z) (~ V ?z)))
               '((~ P ?w) (~ C ?w))))

(Defun Test1 ()
  (ProveDriver '(((A))
                 ((M))
                 ((~ A) (~ C) (D))
                 ((~ M) (C)))
               '((~ D))))

(Defun Test2 ()
  (ProveDriver '(((mother amity betsy))
                 ((~ daughter ?u ?w) (mother ?w ?u))
                 ((daughter cindy betsy))
                 ((~ mother ?x ?y) (~ mother ?y ?z) (grandmother ?x ?z)))
               '((~ grandmother ?g cindy))))

(Defun Test3 ()
  (ProveDriver '(((man marcus))
                 ((pompeian marcus))
                 ((~ pompeian ?x) (roman ?x))
                 ((ruler caesar))
                 ((~ roman ?x) (loyalto ?x caesar) (hate ?x caesar))
                 ((~ man ?x) (~ ruler ?y) (~ trytoassassinate ?x ?y)
                  (~ loyalto ?x ?y))
                 ((trytoassassinate marcus caesar)))
               '((~ hate marcus caesar))))

Here is a (possibly) helpful little function for you to use for the theorem
proving assignment.

;Function to copy a literal or clause and RENAME logic variables to
;completely distinct new variable names
(defun copy-and-rename (expression variable-assoc)
  (if (or (atom expression) (variablep expression))
      (renamed expression variable-assoc)
      (cons (copy-and-rename (first expression) variable-assoc)
            (if (rest expression)
                (copy-and-rename (rest expression) variable-assoc)))))

;variablep is true if expression is a logic variable.
;variable-assoc is an association list of old-variable-names with new
;variable names.
;renamed is a function that replaces an old variable name with its
;new version, either generated by "gensym" or found on the variable
;association list. If renamed is sent an atom that is NOT a logic
;variable, it simply returns that atom and makes no change to variable-assoc.
;YOU MUST ENSURE THAT variable-assoc WILL BE CORRECTLY UPDATED WITH EACH
;CALL TO renamed WHEN A NEW LOGIC VARIABLE NAME IS CREATED!
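;The interesting part is renamed, which you must write yourself. As one
;illustration of the idea (not the required interface: the helper name
;rename-clause and its use of gensym-generated ?-symbols are just the
;assumptions of this sketch), here is a compact, self-contained way to give
;a clause its own fresh variables:

(defun rename-clause (clause)
  ;; Return a copy of CLAUSE in which every logic variable is replaced by a
  ;; brand new ?-prefixed symbol, the same old variable always mapping to the
  ;; same new one within this clause.
  (let ((assoc-list nil))                     ;old-variable -> new-variable pairs
    (labels ((walk (e)
               (cond ((variablep e)
                      (or (cdr (assoc e assoc-list))
                          (let ((new (gensym "?V")))
                            (push (cons e new) assoc-list)
                            new)))
                     ((atom e) e)              ;constants, predicate and function symbols
                     (t (mapcar #'walk e)))))  ;descend into literals and terms
      (walk clause))))

;Renaming like this before a clause is used in a resolution step keeps the
;variables of different clauses from accidentally being treated as the same
;variable.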
A quick word about the heuristics: There were many possible things you could
have done to improve the strategy of selecting two clauses from the clause
list. Here I describe very briefly some strategies and provide you with some
references if you are interested in the details. Most of you implemented two
of these. The refinements to resolution refutation can be divided into three
categories:

1) Ordering Strategies - order the clauses in the list in such a way that we
   select certain clauses first.

2) Pruning Strategies - eliminate certain clauses from further consideration.

3) Restriction Strategies - regulate the application of the resolution.

ORDERING STRATEGIES:
-------------------

1) Breadth-first search - you know this one.

2) Fewest literals first: The number of literals in a clause is called the
   length of the clause; the fewer the literals in it, the shorter the
   clause. Recall that resolving a clause of length i with a clause of
   length j produces a resolvent clause of length i+j-2. In this strategy,
   select the clauses that *will resolve* into the shortest resolvent. The
   idea is that this strategy could lead to NIL faster.

3) Weights: Give weights to each clause based on your understanding of the
   particular problem. Use the weights to select clauses to resolve. This
   particular strategy needs your direct intervention and wasn't necessary
   for this HW.

PRUNING STRATEGIES:
------------------

1) Delete chaste clauses: If the set of clauses contains the literal (M) but
   nowhere in the set of clauses does its complementary unifiable literal
   (~M) occur, then the clause containing (M) can be removed from the set of
   clauses, since it can't be resolved (well, it can, but its descendants
   will not lead to NIL).

2) Delete tautologies: Delete clauses of the form [P(x) v ... v ~P(x)].
   (A small sketch of this check appears after the references below.)

3) Delete subsumed clauses: Let's say you have the following set of clauses:

      C1 = [ A B ]
      C2 = [ A B C ]
      C3 = [ P(?x) Q(?y) ]
      C4 = [ P(a)  Q(?z) ]

   You can reduce this set to:

      C1' = [ A B ]          ;; C2 is redundant: C1 subsumes it.
      C3' = [ P(?x) Q(?y) ]  ;; with ?x/a and ?y/?z, C3 generalizes C4.

RESTRICTION STRATEGIES:
----------------------

Here you first divide the set of clauses into:

   positive clauses: clauses containing only positive literals
   negative clauses: clauses containing only negative literals
   mixed clauses:    clauses containing at least one positive and at least
                     one negative literal.

There are many strategies you can now follow. Here I only describe the "set
of support" strategy, which is the one many of you followed when you
initially selected the Goal clause and resolved with its descendants. The
idea is to select a subset of the set of clauses and then use this subset to
"drive" the rest of the resolution procedure. We call this subset the "set
of support". To perform the resolution, you always select a clause from the
"set of support" and a complementary unifiable clause from the total set. In
other words, we always select at least ONE clause from the "set of support".
Some customary approaches to selecting the set of support are:

   a) The set of clauses derived from the negation of the goal.
   b) The set of negative clauses.
   c) The set of positive clauses.

References:

o Shinghal, Rajjan. Formal Concepts in Artificial Intelligence:
  Fundamentals. Chapman & Hall Computing, 1992.

o Rich, Elaine. Artificial Intelligence. McGraw-Hill, 1983.
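As promised under pruning strategy 2, here is a small sketch of a tautology
check in the clause representation used throughout this handout. The names
complement-of and tautologyp are not part of the assignment code; they are
only an illustration, and they only catch literals that are exact
(syntactic) complements of each other:

(defun complement-of (lit)
  ;; (~ P t1 ... tn)  <->  (P t1 ... tn)
  (if (eql '~ (car lit))
      (cdr lit)
      (cons '~ lit)))

(defun tautologyp (clause)
  ;; true if the clause contains some literal together with its exact
  ;; complement, e.g. ((P ?x) (Q a) (~ P ?x))
  (some #'(lambda (lit) (member (complement-of lit) clause :test #'equal))
        clause))

A clause for which tautologyp returns a true value can be deleted from CL
before it ever participates in a resolution.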