2.1 Propositional clausal logic
Informally, a proposition is any statement which is either true or false, such as ‘\(2 + 2 = 4\)’ or ‘the moon is made of green cheese’. These are the building blocks of propositional logic, the weakest form of logic.
Syntax
Propositions are abstractly denoted by atoms, which are single words starting with a lowercase character. For instance, married is an atom denoting the proposition ‘he/she is married’; similarly, man denotes the proposition ‘he is a man’. Using the special symbols ‘:-’ (if), ‘;’ (or) and ‘,’ (and), we can combine atoms to form clauses. For instance,
married;bachelor:-man,adult.
is a clause, with intended meaning: ‘somebody is married or a bachelor if he is a man and an adult’1. The part to the left of the if-symbol ‘:-’ is called the head of the clause, and the right part is called the body of the clause. The head of a clause is always a disjunction (or) of atoms, and the body of a clause is always a conjunction (and).
Tip
This example, which is used throughout Chapters 2 and 3, assumes that bachelors are unmarried men. While this is still logically correct, it is perhaps time to look for a slightly less dated running example…
Translate the following statements into clauses, using the atoms person, sad and happy:
persons are happy or sad;
no person is both happy and sad;
sad persons are not happy;
non-happy persons are sad.
A program is a set of clauses, each of them terminated by a period. The clauses are to be read conjunctively; for example, the program
woman;man:-human.
human:-woman.
human:-man.
has the intended meaning ‘(if someone is human then she/he is a woman or a man) and (if someone is a woman then she is human) and (if someone is a man then he is human)’, or, in other words, ‘someone is human if and only if she/he is a woman or a man’.
Semantics
The Herbrand base of a program \(P\) is the set of atoms occurring in \(P\). For the above program, the Herbrand base is { woman, man, human }. A Herbrand interpretation (or interpretation for short) for \(P\) is a mapping from the Herbrand base of \(P\) into the set of truth values { true, false }. For example, the mapping
{ woman → true, man → false, human → true }
is a Herbrand interpretation for the above program.
A Herbrand interpretation can be viewed as describing a possible state of affairs in the Universe of Discourse (in this case: ‘she is a woman, she is not a man, she is human’). Since there are only two possible truth values in the semantics we are considering, we could abbreviate such mappings by listing only the atoms that are assigned the truth value true; by definition, the remaining ones are assigned the truth value false. Under this convention, which we will adopt in this book, a Herbrand interpretation is simply a subset of the Herbrand base. Thus, the previous Herbrand interpretation would be represented as { woman, human }.
Since a Herbrand interpretation assigns truth values to every atom in a clause, it also assigns a truth value to the clause as a whole. The rules for determining the truth value of a clause from the truth values of its atoms are not so complicated, if you keep in mind that the body of a clause is a conjunction of atoms, and the head is a disjunction. Consequently, the body of a clause is true if every atom in it is true, and the head of a clause is true if at least one atom in it is true. In turn, the truth value of the clause is determined by the truth values of head and body. There are four possibilities:
the body is true, and the head is true;
the body is true, and the head is false;
the body is false, and the head is true;
the body is false, and the head is false.
The intended meaning of the clause is ‘if body then head’, which is obviously true in the first case, and false in the second case.
What about the remaining two cases? They cover statements like ‘if the moon is made of green cheese then \(2 + 2 = 4\)’, in which there is no connection at all between body and head. One would like to say that such statements are neither true nor false. However, our semantics is not sophisticated enough to deal with this: it simply insists that clauses should be assigned a truth value in every possible interpretation. Therefore, we consider the clause to be true whenever its body is false. It is not difficult to see that under these truth conditions a clause is equivalent with the statement ‘head or not body’. For example, the clause married;bachelor:-man,adult can also be read as ‘someone is married or a bachelor or not a man or not an adult’. Thus, a clause is a disjunction of atoms, which are negated if they occur in the body of the clause. Therefore, the atoms in the body of the clause are often called negative literals, while those in the head of the clause are called positive literals.
To summarise: a clause is assigned the truth value true in an interpretation, if and only if at least one of the following conditions is true: (a) at least one atom in the body of the clause is false in the interpretation (cases 3 and 4), or (b) at least one atom in the head of the clause is true in the interpretation (cases 1 and 3). If a clause is true in an interpretation, we say that the interpretation is a model for the clause. An interpretation is a model for a program if it is a model for each clause in the program. For example, the above program has the following models: \(\emptyset\) (the empty model, assigning false to every atom), { woman, human }, { man, human }, and { woman, man, human }. Since there are eight possible interpretations for a Herbrand base with three atoms, this means that the program contains enough information to rule out half of these.
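These truth conditions are easy to mechanise. The following SWI-Prolog sketch (an illustration, not part of the original text) assumes a hypothetical representation in which a clause is a term cl(Head,Body), with Head and Body lists of atoms, and an interpretation is the list of atoms assigned true:
% Illustrative representation: a clause is cl(Head,Body), where Head and Body
% are lists of atoms; an interpretation is the list of atoms assigned true.
% clause_true(+Clause, +Interpretation)
clause_true(cl(_Head,Body), I) :-
    member(B, Body), \+ member(B, I), !.   % some body atom is false (cases 3 and 4)
clause_true(cl(Head,_Body), I) :-
    member(H, Head), member(H, I).         % some head atom is true (cases 1 and 3)
% is_model(+Interpretation, +Program): the interpretation satisfies every clause
is_model(I, Program) :-
    forall(member(C, Program), clause_true(C, I)).
With this representation the query is_model([woman,human], [cl([woman,man],[human]), cl([human],[woman]), cl([human],[man])]) should succeed, while replacing the interpretation by [human] should make it fail, in line with the models listed above.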
Adding more clauses to the program means restricting its set of models. For instance, if we add the clause woman (a clause with an empty body) to the program, we rule out the first and third model, which leaves us with the models { woman, human } and { woman, man, human }. Note that in both of these models, human is true. We say that human is a logical consequence of the set of clauses. In general, a clause \(C\) is a logical consequence of a program \(P\) if every model of the program is also a model of the clause; we write \(P \models C\).
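Continuing the same illustrative representation (and assuming clause_true/2 and is_model/2 from the previous sketch are loaded), logical consequence can be decided by brute force: enumerate all subsets of the Herbrand base and check that every one of them that is a model of the program is also a model of the clause. This is only a sketch; the names herbrand_base/2, subset_of/2 and consequence/2 are ours, not the book’s.
% herbrand_base(+Clauses, -Base): the set of atoms occurring in the clauses
herbrand_base(Clauses, Base) :-
    findall(A, (member(cl(H,B), Clauses), (member(A, H) ; member(A, B))), As),
    sort(As, Base).
% subset_of(+Set, -Subset): enumerate all subsets on backtracking
subset_of([], []).
subset_of([X|Xs], [X|Ys]) :- subset_of(Xs, Ys).
subset_of([_|Xs], Ys)     :- subset_of(Xs, Ys).
% consequence(+Program, +Clause): every model of Program is a model of Clause
consequence(Program, Clause) :-
    herbrand_base([Clause|Program], Base),
    forall(( subset_of(Base, I), is_model(I, Program) ),
           clause_true(Clause, I)).
For example, consequence([cl([woman,man],[human]), cl([human],[woman]), cl([human],[man]), cl([woman],[])], cl([human],[])) should succeed, mirroring the fact that human is a logical consequence of the program extended with woman.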
Given the program
married;bachelor:-man,adult.
man.
:-bachelor.
determine which of the following clauses are logical consequences of this program:
married:-adult;
married:-bachelor;
bachelor:-man;
bachelor:-bachelor.
Of the two remaining models, obviously { woman, human } is the intended one; but the program does not yet contain enough information to distinguish it from the non-intended model { woman, man, human }. We can add yet another clause, to make sure that the atom man is mapped to false. For instance, we could add
:-man.
(it is not a man) or
:-man,woman.
(nobody is both a man and a woman). However, explicitly stating everything that is false in the intended model is not always feasible. Consider, for example, an airline database consulted by travel agencies: we simply want to say that if a particular flight (i.e., a combination of plane, origin, destination, date and time) is not listed in the database, then it does not exist, instead of listing all the dates that a particular plane does not fly from Amsterdam to London.
So, instead of adding clauses until a single model remains, we want to add a rule to our semantics which tells us which of the several models is the intended one. The airline example shows us that, in general, we only want to accept something as true if we are really forced to, i.e. if it is true in every possible model. This means that we should take the intersection of all models of a program in order to construct the intended model. In the example, this is { woman, human }. Note that this model is minimal in the sense that no subset of it is also a model. Therefore, this semantics is called a minimal model semantics.
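Under the same illustrative representation (reusing herbrand_base/2, subset_of/2 and is_model/2 from the sketches above), the construction just described amounts to intersecting all models; as the next paragraph shows, the result is only guaranteed to be a model itself for a restricted class of programs.
% model(+Program, -Interpretation): enumerate the models of a program
model(Program, I) :-
    herbrand_base(Program, Base),
    subset_of(Base, I),
    is_model(I, Program).
% intended_model(+Program, -Model): intersection of all models (minimal model
% semantics); fails if the program has no model at all
intended_model(Program, Intended) :-
    findall(M, model(Program, M), [M0|Ms]),
    foldl(intersection, Ms, M0, Intended).
For the program above (the three clauses plus the fact woman), intended_model/2 should return a list containing exactly woman and human.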
Unfortunately, this approach is only applicable to a restricted class of programs. Consider the following program:
woman;man:-human.
human.
This program has three models: { woman, human }, { man, human }, and { woman, man, human }. The intersection of these models is { human }, but this interpretation is not a model of the first clause! The program has in fact not one, but two minimal models, which is caused by the fact that the first clause has a disjunctive head. Such a clause is called indefinite, because it does not permit definite conclusions to be drawn.
On the other hand, if we only allow definite clauses, i.e. clauses with a single positive literal, minimal models are guaranteed to be unique. We will deal with definite clauses in Section 2.4, because Prolog is based on definite clause logic. In principle, this means that clauses like woman;man:-human are not expressible in Prolog. However, such a clause can be transformed into a ‘pseudo-definite’ clause by moving one of the literals in the head to the body, where it is negated. This gives the following two possibilities:
woman:-human,not(man).
man:-human,not(woman).
In Prolog, we have to choose between these two clauses, which means that we have only an approximation of the original indefinite clause. Negation in Prolog is an important subject with many aspects. In Chapter 3, we will show how Prolog handles negation in the body of clauses. In Chapter 8, we will discuss particular applications of this kind of negation.
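In current SWI-Prolog syntax, negation as failure is usually written \+ rather than not(...). A minimal sketch of the first approximation (the dynamic declaration is only there so that querying man, which has no clauses yet, does not raise an error):
:- dynamic man/0.        % man has no clauses, so \+ man succeeds
human.
woman :- human, \+ man.  % approximates woman;man:-human by choosing woman
The query ?- woman. then succeeds because man cannot be proved; after assert(man), the same query fails. Chapter 3 examines this behaviour in detail.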
Proof theory
Recall that a clause \(C\) is a logical consequence of a program \(P\) (\(P \models C\)) if every model of \(P\) is a model of \(C\). Checking this condition is, in general, unfeasible. Therefore, we need a more efficient way of computing logical consequences, by means of inference rules. If \(C\) can be derived from \(P\) by means of a number of applications of such inference rules, we say that \(C\) can be proved from \(P\). Such inference rules are purely syntactic, and do not refer to any underlying semantics.
The proof theory for clausal logic consists of a single inference rule called resolution. Resolution is a very powerful inference rule. Consider the following program:
married;bachelor:-man,adult.
has_wife:-man,married.
This simple program has no less than 26 models, each of which needs to be considered if we want to check whether a clause is a logical consequence of it.
The following clause is a logical consequence of this program:
has_wife;bachelor:-man,adult.
By means of resolution, it can be produced in a single step. This step represents the following line of reasoning: ‘if someone is a man and an adult, then he is a bachelor or married; but if he is married, he has a wife; therefore, if someone is a man and an adult, then he is a bachelor or he has a wife’. In this argument, the two clauses in the program are related to each other by means of the atom married, which occurs in the head of the first clause (a positive literal) and in the body of the second (a negative literal). The derived clause, which is called the resolvent, consists of all the literals of the two input clauses, except married (the literal resolved upon). The negative literal man, which occurs in both input clauses, appears only once in the derived clause. This process is depicted in Figure 2.1.
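As an illustration, a single propositional resolution step can be implemented over the hypothetical cl(Head,Body) representation used in the Semantics section; the literal resolved upon must occur in the head of the first clause and in the body of the second. This is our own sketch, not the book’s code.
:- use_module(library(lists)).   % member/2, subtract/3, union/3; sort/2 is built in
% resolve(+Clause1, +Clause2, ?Literal, -Resolvent)
% Literal occurs positively in Clause1 and negatively in Clause2; the resolvent
% collects the remaining literals of both clauses, each appearing only once.
resolve(cl(H1,B1), cl(H2,B2), L, cl(H,B)) :-
    member(L, H1), member(L, B2),
    subtract(H1, [L], H1Rest),
    subtract(B2, [L], B2Rest),
    union(H1Rest, H2, H0), sort(H0, H),
    union(B1, B2Rest, B0), sort(B0, B).
For the two program clauses above, resolve(cl([married,bachelor],[man,adult]), cl([has_wife],[man,married]), married, R) should bind R to cl([bachelor,has_wife],[adult,man]), i.e. to has_wife;bachelor:-man,adult, with the shared negative literal man appearing only once.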
Resolution is most easily understood when applied to definite clauses. Consider the following program:
square:-rectangle,equal_sides.
rectangle:-parallelogram,right_angles.
Applying resolution yields the clause
square:-parallelogram,right_angles,equal_sides.
That is, the atom rectangle in the body of the first clause is replaced by the body of the second clause (which has rectangle as its head). This process is also referred to as unfolding the second clause into the first one (Figure 2.2).
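With the resolve/4 sketch above, this unfolding step corresponds to a query along the following lines (the clause with rectangle in its head is passed as the first argument):
% unfolding rectangle: one resolution step with the resolve/4 sketch above
?- resolve(cl([rectangle],[parallelogram,right_angles]),
           cl([square],[rectangle,equal_sides]),
           rectangle, R).
% expected binding: R = cl([square],[equal_sides,parallelogram,right_angles]),
% i.e. square:-parallelogram,right_angles,equal_sides (body atoms sorted)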
A resolvent resulting from one resolution step can be used as input for the next. A proof or derivation of a clause \(C\) from a program \(P\) is a sequence of clauses such that each clause is either in the program, or the resolvent of two previous clauses, and the last clause is \(C\). If there is a proof of \(C\) from \(P\), we write \(P \vdash C\).
Meta-theory
It is easy to show that propositional resolution is sound: you have to establish that every model for the two input clauses is a model for the resolvent. In our earlier example, every model of married;bachelor:-man,adult and has_wife:-man,married must be a model of has_wife;bachelor:-man,adult. Now, the literal resolved upon (in this case married) is either assigned the truth value true or false. In the first case, every model of has_wife:-man,married is also a model of has_wife:-man; in the second case, every model of married;bachelor:-man,adult is also a model of bachelor:-man,adult. In both cases, these models are models of a subclause of the resolvent, which means that they are also models of the resolvent itself.
In general, proving completeness is more complicated than proving soundness. Still worse, proving completeness of resolution is impossible, because resolution is not complete at all! For instance, consider the clause a:-a. This clause is a so-called tautology: it is true under any interpretation. Therefore, any model of an arbitrary program \(P\) is a model for it, and thus \(P \models\) a:-a for any program \(P\). If resolution were complete, it would be possible to derive the clause a:-a from some program \(P\) in which the literal a doesn’t even occur! It is clear that resolution is unable to do this.
However, this is not necessarily bad, because although tautologies follow from any set of clauses, they are not very interesting. Resolution makes it possible to guide the inference process, by implementing the question ‘is \(C\) a logical consequence of \(P\)?’ rather than ‘what are the logical consequences of \(P\)?’. We will see that, although resolution is unable to generate every logical consequence of a set of clauses, it is complete in the sense that resolution can always determine whether a specific clause is a logical consequence of a set of clauses.
The idea is analogous to a proof technique in mathematics called ‘reduction to the absurd’. Suppose for the moment that \(C\) consists of a single positive literal a; we want to know whether \(P \models\) a, i.e. whether every model of \(P\) is also a model of a. It is easily checked that an interpretation is a model of a if, and only if, it is not a model of :-a. Therefore, every model of \(P\) is a model of a if, and only if, there is no interpretation which is a model of both :-a and \(P\). In other words, a is a logical consequence of \(P\) if, and only if, :-a and \(P\) are mutually inconsistent (don’t have a common model). So, checking whether \(P \models\) a is equivalent to checking whether \(P \cup\) { :-a } is inconsistent.
Resolution provides a way to check this condition. Note that, since an inconsistent set of clauses doesn’t have a model, it trivially satisfies the condition that any model of it is a model of any other clause; therefore, an inconsistent set of clauses has every possible clause as its logical consequence. In particular, the absurd or empty clause, denoted by \(\square\)2, is a logical consequence of an inconsistent set of clauses. Conversely, if \(\square\) is a logical consequence of a set of clauses, we know it must be inconsistent. Now, resolution is complete in the sense that if a set of clauses is inconsistent, it is always possible to derive \(\square\) by resolution. Since resolution is sound, we already know that if we can derive \(\square\) then the input clauses must be inconsistent. So we conclude: a is a logical consequence of \(P\) if, and only if, the empty clause can be deduced by resolution from \(P\) augmented with :-a. This process is called proof by refutation, and resolution is called refutation complete.
This proof method can be generalised to the case where \(C\) is not a single atom. For instance, let us check by resolution that a:-a is a tautology, i.e. a logical consequence of any set of clauses. Logically speaking, this clause is equivalent to ‘a or not a’, the negation of which is ‘not a and a’, which is represented by the two separate clauses :-a and a. Since we can derive the empty clause from these two clauses in a single resolution step without using any other clauses, we have in fact proved that a:-a is a logical consequence of the empty set of clauses, hence a tautology.
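In the cl(Head,Body) representation, the fact a is cl([a],[]) and the denial :-a is cl([],[a]); one application of the resolve/4 sketch on a then yields the empty clause:
% deriving the empty clause from a and :-a in a single step
?- resolve(cl([a],[]), cl([],[a]), a, R).
% expected binding: R = cl([],[])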
Prove by refutation that friendly:-has_friends is a logical consequence of the following clauses:
happy:-has_friends.
friendly:-happy.
Finally, we mention that although resolution can always be used to prove inconsistency of a set of clauses, it is not always fit to prove the opposite, i.e. consistency of a set of clauses. For instance, a is not a logical consequence of a:-a; yet, if we try to prove the inconsistency of :-a and a:-a (which should fail) we can go on applying resolution forever! The reason, of course, is that there is a loop in the system: applying resolution to :-a and a:-a again yields :-a. In this simple case it is easy to check for loops: just maintain a list of previously derived clauses, and do not proceed with clauses that have been derived previously.
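This loop check (keep a list of clauses seen so far and only continue with new resolvents) is enough to make a naive refutation procedure terminate on propositional programs, because only finitely many distinct clauses can be built from a finite set of atoms. A possible sketch, building on the resolve/4 predicate from the Proof theory section; the predicate name refute/1 is ours:
% refute(+Clauses): succeeds if the empty clause can be derived by resolution.
% Clauses are cl(Head,Body) terms, preferably with sorted atom lists so that
% the membership test below recognises previously seen clauses.
refute(Clauses) :-
    member(cl([],[]), Clauses), !.        % the empty clause has been derived
refute(Clauses) :-
    member(C1, Clauses),
    member(C2, Clauses),
    resolve(C1, C2, _L, R),
    \+ member(R, Clauses),                % loop check: only proceed with new clauses
    refute([R|Clauses]).
To check by refutation that human follows from the Semantics program extended with woman, one would call refute([cl([],[human]), cl([man,woman],[human]), cl([human],[woman]), cl([human],[man]), cl([woman],[])]), which should succeed; with just cl([],[a]) and cl([a],[a]) it fails finitely instead of looping.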
However, as we will see, this is not possible in the general case of full clausal logic, which is semi-decidable with respect to the question ‘is \(B\) a logical consequence of \(A\)’: there is an algorithm which derives, in finite time, a proof if one exists, but there is no algorithm which, for any \(A\) and \(B\), halts and returns ‘no’ if no proof exists. The reason for this is that interpretations for full clausal logic are in general infinite. As a consequence, some Prolog programs may loop forever (just like some Pascal programs). One might suggest that it should be possible to check, just by examining the source code, whether a program is going to loop or not, but, as Alan Turing showed, this is, in general, impossible (the Halting Problem). That is, you can write programs for checking termination of programs, but for any such termination checking program you can write a program on which it will not terminate itself!
Tip
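The following SWI-Prolog fragment simply lists the propositional atoms of the running example as facts; since a propositional atom contains no variables, it is trivially ground.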
prop_atom(married).
prop_atom(bachelor).
prop_atom(man).
prop_atom(adult).
% Propositional atoms are trivially ground
ground_atom(A):-prop_atom(A).