I

Logic and
Logic Programming

Logic Programming is the name of a programming paradigm which was developed in the 70s. Rather than viewing a computer program as a step-by-step description of an algorithm, the program is conceived as a logical theory, and a procedure call is viewed as a theorem of which the truth needs to be established. Thus, executing a program means searching for a proof. In traditional (imperative) programming languages, the program is a procedural specification of how a problem needs to be solved. In contrast, a logic program concentrates on a declarative specification of what the problem is. Readers familiar with imperative programming will find that Logic Programming requires quite a different way of thinking. Indeed, their knowledge of the imperative paradigm will be partly incompatible with the logic paradigm.

This is certainly true with regard to the concept of a program variable. In imperative languages, a variable is a name for a memory location which can store data of certain types. While the contents of the location may vary over time, the variable always points to the same location. In fact, the term ‘variable’ is a bit of a misnomer here, since it refers to a value that is well-defined at every moment. In contrast, a variable in a logic program is a variable in the mathematical sense, i.e. a placeholder that can take on any value. In this respect, Logic Programming is therefore much closer to mathematical intuition than imperative programming.

Imperative programming and Logic Programming also differ with respect to the machine model they assume. A machine model is an abstraction of the computer on which programs are executed. The imperative paradigm assumes a dynamic, state-based machine model, where the state of the computer is given by the contents of its memory. The effect of a program statement is a transition from one state to another. Logic Programming does not assume such a dynamic machine model. Computer plus program represent a certain amount of knowledge about the world, which is used to answer queries.

The first three chapters of the book are devoted to an introduction to Logic Programming. Chapter 1, A brief introduction to clausal logic, is an introductory chapter, introducing many concepts in Logic Programming by means of examples. These concepts get a more formal treatment in Chapter 2, Clausal logic and resolution: theoretical backgrounds. In Chapter 3, Logic Programming and Prolog, we take a closer look at Prolog as a logic programming language, explaining its main features and describing some common programming techniques.


1

A brief introduction to clausal logic

In this chapter, we will introduce clausal logic as a formalism for representing and reasoning with knowledge. The aim of this chapter is to acquaint the reader with the most important concepts, without going into too much detail. The theoretical aspects of clausal logic, and the practical aspects of Logic Programming, will be discussed in Chapters 2 and 3.

Our Universe of Discourse in this chapter will be the London Underground, of which a small part is shown in fig. 1.1. Note that this picture contains a wealth of information, about lines, stations, transit between lines, relative distance, etc. We will try to capture this information in logical statements. Basically, fig. 1.1 specifies which stations are directly connected by which lines. If we follow the lines from left to right (Northern downwards), we come up with the following 11 formulas:

connected(bond_street,oxford_circus,central).
connected(oxford_circus,tottenham_court_road,central).
connected(bond_street,green_park,jubilee).
connected(green_park,charing_cross,jubilee).
connected(green_park,piccadilly_circus,piccadilly).
connected(piccadilly_circus,leicester_square,piccadilly).
connected(green_park,oxford_circus,victoria).
connected(oxford_circus,piccadilly_circus,bakerloo).
connected(piccadilly_circus,charing_cross,bakerloo).
connected(tottenham_court_road,leicester_square,northern).
connected(leicester_square,charing_cross,northern).

Let’s define two stations to be nearby if they are on the same line, with at most one station in between. This relation can also be represented by a set of logical formulas:

nearby(bond_street,oxford_circus).
nearby(oxford_circus,tottenham_court_road).
nearby(bond_street,tottenham_court_road).
nearby(bond_street,green_park).
nearby(green_park,charing_cross).
nearby(bond_street,charing_cross).
nearby(green_park,piccadilly_circus).
nearby(piccadilly_circus,leicester_square).
nearby(green_park,leicester_square).
nearby(green_park,oxford_circus).
nearby(oxford_circus,piccadilly_circus).
nearby(piccadilly_circus,charing_cross).
nearby(oxford_circus,charing_cross).
nearby(tottenham_court_road,leicester_square).
nearby(leicester_square,charing_cross).
nearby(tottenham_court_road,charing_cross).

Figure 1.1. Part of the London Underground. Reproduced by permission of London Regional Transport (LRT Registered User No. 94/1954).

These 16 formulas have been derived from the previous 11 formulas in a systematic way. If X and Y are directly connected via some line L, then X and Y are nearby. Alternatively, if there is some Z in between, such that X and Z are directly connected via L, and Z and Y are also directly connected via L, then X and Y are also nearby. We can formulate this in logic as follows:

nearby(X,Y):-connected(X,Y,L).
nearby(X,Y):-connected(X,Z,L),connected(Z,Y,L).

In these formulas, the symbol ‘ :- ’ should be read as ‘if’, and the comma between connected(X,Z,L) and connected(Z,Y,L) should be read as ‘and’. The uppercase letters stand for universally quantified variables, such that, for instance, the second formula means:

For any values of X, Y, Z and L, X is nearby Y if X is directly connected to Z via L, and Z is directly connected to Y via L.

We now have two definitions of the nearby-relation, one which simply lists all pairs of stations that are nearby each other, and one in terms of direct connections. Logical formulas of the first type, such as

nearby(bond_street,oxford_circus)

will be called facts, and formulas of the second type, such as

nearby(X,Y):-connected(X,Z,L),connected(Z,Y,L)

will be called rules. Facts express unconditional truths, while rules denote conditional truths, i.e. conclusions which can only be drawn when the premises are known to be true. Obviously, we want these two definitions to be equivalent: for each possible query, both definitions should give exactly the same answer. We will make this more precise in the next section.

Exercise 1.1. Two stations are ‘not too far’ if they are on the same or a different line, with at most one station in between. Define rules for the predicate not_too_far.

not_too_far(X,Y):-true. % replace 'true' with your definition
not_too_far(X,Y):-true. % add more clauses as needed

1.1   Answering queries

A query like ‘which station is nearby Tottenham Court Road?’ will be written as

?-nearby(tottenham_court_road,W).

where the prefix ‘ ?- ’ indicates that this is a query rather than a fact. An answer to this query, e.g. ‘Leicester Square’, will be written { W → leicester_square }, indicating a substitution of values for variables, such that the statement in the query, i.e.

?-nearby(tottenham_court_road,leicester_square).

is true. Now, if the nearby-relation is defined by means of a list of facts, answers to queries are easily found: just look for a fact that matches the query, by which is meant that the fact and the query can be made identical by substituting values for variables in the query. Once we have found such a fact, we also have the substitution which constitutes the answer to the query.

If rules are involved, query-answering can take several of these steps. For answering the query ?-nearby(tottenham_court_road,W), we match it with the conclusion of the rule

nearby(X,Y):-connected(X,Y,L)

yielding the substitution { X → tottenham_court_road, Y → W }. We then try to find an answer for the premises of the rule under this substitution, i.e. we try to answer the query

?-connected(tottenham_court_road,W,L).

That is, we can find a station nearby Tottenham Court Road, if we can find a station directly connected to it. This second query is answered by looking at the facts for direct connections, giving the answer { W → leicester_square, L → northern }. Finally, since the variable L does not occur in the initial query, we just ignore it in the final answer, which becomes { W → leicester_square } as above. In fig. 1.2, we give a graphical representation of this process. Since we are essentially proving that a statement follows logically from some other statements, this graphical representation is called a proof tree.

Figure 1.2. A proof tree for the query ?-nearby(tottenham_court_road,W).

The steps in fig. 1.2 follow a very general reasoning pattern:

to answer a query ?-Q1,…,Qn, find a rule A:-B1,…,Bm such that A matches with Q1, and answer the query ?-B1,…,Bm,Q2,…,Qn.

This reasoning pattern is called resolution, and we will study it extensively in Chapters 2 and 3. Resolution adds a procedural interpretation to logical formulas, besides their declarative interpretation (they can be either true or false). Due to this procedural interpretation, logic can be used as a programming language. In an ideal logic programming system, the procedural interpretation would exactly match the declarative interpretation: everything that is calculated procedurally is declaratively true, and vice versa. In such an ideal system, the programmer would just bother about the declarative interpretation of the formulas she writes down, and leave the procedural interpretation to the computer. Unfortunately, in current logic programming systems the procedural interpretation does not exactly match the declarative interpretation: for example, some things that are declaratively true are not calculated at all, because the system enters an infinite loop. Therefore, the programmer should also be aware of the procedural interpretation given by the computer to her logical formulas.
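As a simple illustration, consider adding to our nearby program a clause stating that nearness is symmetric (this clause is not part of the program above, and is shown only to make the point):

nearby(X,Y):-nearby(Y,X).   % declaratively reasonable, procedurally dangerous

Declaratively, this clause seems entirely reasonable; procedurally, however, if it is placed before the other two rules, any nearby query loops forever, since Prolog keeps resolving the query against this clause without ever reaching the others.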

The resolution proof process makes use of a technique that is known as reduction to the absurd: suppose that the formula to be proved is false, and show that this leads to a contradiction, thereby demonstrating that the formula to be proved is in fact true. Such a proof is also called a proof by refutation. For instance, if we want to know which stations are nearby Tottenham Court Road, we negate this statement, resulting in ‘there are no stations nearby Tottenham Court Road’. In logic, this is achieved by writing the statement as a rule with an empty conclusion, i.e. a rule for which the truth of its premises would lead to falsity:

:-nearby(tottenham_court_road,W)

Thus, the symbols ‘ ?- ’ and ‘ :- ’ are in fact equivalent. A contradiction is found if resolution leads to the empty rule, of which the premises are always true (since there are none), but the conclusion is always false. Conventionally, the empty rule is written as ‘□’.

At the beginning of this section, we posed the question: can we show that our two definitions of the nearby-relation are equivalent? As indicated before, the idea is that to be equivalent means to provide exactly the same answers to the same queries. To formalise this, we need some additional definitions. A ground fact is a fact without variables. Obviously, if G is a ground fact, the query ?-G never returns a substitution as answer: either it succeeds (G does follow from the initial assumptions), or it fails (G does not). The set of ground facts G for which the query ?-G succeeds is called the success set. Thus, the success set for our first definition of the nearby-relation consists simply of those 16 formulas, since they are ground facts already, and nothing else is derivable from them. The success set for the second definition of the nearby-relation is constructed by applying the two rules to the ground facts for connectedness. Thus we can say: two definitions of a relation are (procedurally) equivalent if they have the same success set (restricted to that relation).
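For instance, once either definition has been loaded into a Prolog system (together with the connected facts, in the case of the second definition), the query

?-nearby(X,Y).

should enumerate on backtracking exactly the 16 pairs of stations listed earlier, which illustrates that the two definitions have the same success set for the nearby relation.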

Exercise 1.2. Construct the proof trees for the query
                                          ?-nearby(W,charing_cross).

1.2   Recursion

Until now, we have encountered two types of logical formulas: facts and rules. There is a special kind of rule which deserves special attention: the rule which defines a relation in terms of itself. This idea of ‘self-reference’, which is called recursion, is also present in most procedural programming languages. Recursion is a bit difficult to grasp, but once you’ve mastered it, you can use it to write very elegant programs, e.g.

IF N=0
THEN FAC:=1
ELSE FAC:=N*FAC(N-1).

is a recursive procedure for calculating the factorial of a given number, written in a Pascal-like procedural language. However, in such languages iteration (looping a pre-specified number of times) is usually preferred over recursion, because it uses memory more efficiently.
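For comparison, the same definition can be written recursively in Prolog as well. A minimal sketch, using Prolog’s built-in arithmetic (discussed in Chapter 3) and the illustrative predicate name fac, might look as follows:

fac(0,1).                                 % the factorial of 0 is 1
fac(N,F):-N>0,N1 is N-1,fac(N1,F1),F is N*F1.  % otherwise multiply N by the factorial of N-1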

In Prolog, however, recursion is the only looping structure [1] . (This does not necessarily mean that Prolog is always less efficient than a procedural language, because there are ways to write recursive loops that are just as efficient as iterative loops, as we will see in section 3.6.) Perhaps the easiest way to think about recursion is the following: an arbitrarily large chain is described by describing how one link in the chain is connected to the next. For instance, let us define the relation of reachability in our underground example, where a station is reachable from another station if they are connected by one or more lines. We could define it by the following 20 ground facts:

reachable(bond_street,charing_cross).
reachable(bond_street,green_park).
reachable(bond_street,leicester_square).
reachable(bond_street,oxford_circus).
reachable(bond_street,piccadilly_circus).
reachable(bond_street,tottenham_court_road).
reachable(green_park,charing_cross).
reachable(green_park,leicester_square).
reachable(green_park,oxford_circus).
reachable(green_park,piccadilly_circus).
reachable(green_park,tottenham_court_road).
reachable(leicester_square,charing_cross).
reachable(oxford_circus,charing_cross).
reachable(oxford_circus,leicester_square).
reachable(oxford_circus,piccadilly_circus).
reachable(oxford_circus,tottenham_court_road).
reachable(piccadilly_circus,charing_cross).
reachable(piccadilly_circus,leicester_square).
reachable(tottenham_court_road,charing_cross).
reachable(tottenham_court_road,leicester_square).

Since any station is reachable from any other station by a route with at most two intermediate stations, we could instead use the following (non-recursive) definition:

reachable(X,Y):-connected(X,Y,L).

reachable(X,Y):-connected(X,Z,L1),connected(Z,Y,L2).

reachable(X,Y):-connected(X,Z1,L1),connected(Z1,Z2,L2),
connected(Z2,Y,L3).

Of course, if we were to define the reachability relation for the entire London underground, we would need a lot more, longer and longer rules. Recursion is a much more convenient and natural way to define such chains of arbitrary length:

reachable(X,Y):-connected(X,Y,L).
reachable(X,Y):-connected(X,Z,L),reachable(Z,Y).

The reading of the second rule is as follows: ‘ Y is reachable from X if Z is directly connected to X via line L, and Y is reachable from Z ’.
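To see how this definition behaves procedurally, consider the query

?-reachable(bond_street,W).

Assuming the connected facts are stored in the order given at the start of this chapter, the first two answers { W → oxford_circus } and { W → green_park } are found by means of the non-recursive clause, after which the recursive clause produces { W → tottenham_court_road }, { W → piccadilly_circus }, and so on.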

Figure 1.3. A proof tree for the query ?-reachable(bond_street,W).

We can now use this recursive definition to prove that Leicester Square is reachable from Bond Street (fig. 1.3). However, just as there are several routes from Bond Street to Leicester Square, there are several alternative proofs of the fact that Leicester Square is reachable from Bond Street. An alternative proof is given in fig. 1.4. The difference between these two proofs is that in the first proof we use the fact

connected(oxford_circus,tottenham_court_road,central)

while in the second proof we use

connected(oxford_circus,piccadilly_circus,bakerloo)

There is no reason to prefer one over the other, but since Prolog searches the given formulas top-down, it will find the first proof before the second. Thus, the order of the clauses determines the order in which answers are found. As we will see in Chapter 3, it sometimes even determines whether any answers are found at all.

Exercise 1.3. Give a third proof tree for the answer { W → leicester_square }, and change the order of the facts for connectedness, such that this proof tree is constructed first.

Figure 1.4. Alternative proof tree for the query ?-reachable(bond_street,W).

In other words, Prolog’s query-answering process is a search process, in which the answer depends on all the choices made earlier. An important point is that some of these choices may lead to a dead-end later. For example, if the recursive formula for the reachability relation had been tried before the non-recursive one, the bottom part of fig. 1.3 would have been as in fig. 1.5. This proof tree cannot be completed, because there are no answers to the query ?-reachable(charing_cross,W), as can easily be checked. Prolog has to recover from this failure by climbing up the tree, reconsidering previous choices. This search process, which is called backtracking, will be detailed in Chapter 5.

1.3   Structured terms

Finally, we illustrate the way Prolog can handle more complex datastructures, such as a list of stations representing a route. Suppose we want to redefine the reachability relation, such that it also specifies the intermediate stations. We could adapt the non-recursive definition of reachable as follows:

reachable0(X,Y):-connected(X,Y,L).

reachable1(X,Y,Z):-connected(X,Z,L1),
connected(Z,Y,L2).

reachable2(X,Y,Z1,Z2):-connected(X,Z1,L1),
connected(Z1,Z2,L2),
connected(Z2,Y,L3).

The suffix of reachable indicates the number of intermediate stations; it is added to stress that relations with different numbers of arguments are really different relations, even if their names are the same. The problem now is that we have to know the number of intermediate stations in advance, before we can ask the right query. This is, of course, unacceptable.

Figure 1.5. A failing proof tree.

We can solve this problem by means of functors. A functor looks just like a mathematical function, but the important difference is that functor expressions are never evaluated to determine a value. Instead, they provide a way to name a complex object composed of simpler objects. For instance, a route with Oxford Circus and Tottenham Court Road as intermediate stations could be represented by

route(oxford_circus,tottenham_court_road)

Note that this is not a ground fact, but rather an argument for a logical formula. The reachability relation can now be defined as follows:

reachable(X,Y,noroute):-connected(X,Y,L).
reachable(X,Y,route(Z)):-connected(X,Z,L1),
                         connected(Z,Y,L2).
reachable(X,Y,route(Z1,Z2)):-connected(X,Z1,L1),
                             connected(Z1,Z2,L2),
                             connected(Z2,Y,L3).

The query ?-reachable(oxford_circus,charing_cross,R) now has three possible answers:

{ R → route(piccadilly_circus) }
{ R → route(tottenham_court_road,leicester_square) }
{ R → route(piccadilly_circus,leicester_square) }

Figure 1.6. A complex object as a tree.

As argued in the previous section, we prefer the recursive definition of the reachability relation, in which case we use functors in a somewhat different way.

reachable(X,Y,noroute):-connected(X,Y,L).
reachable(X,Y,route(Z,R)):-connected(X,Z,L),
                           reachable(Z,Y,R).

At first sight, there does not seem to be a big difference between this and the use of functors in the non-recursive program. However, the query

?-reachable(oxford_circus,charing_cross,R)

now has the following answers:

{ R → route(tottenham_court_road,
       route(leicester_square,noroute)) }

{ R → route(piccadilly_circus,noroute) }

{ R → route(piccadilly_circus,
       route(leicester_square,noroute)) }

The functor route is now also recursive in nature: its first argument is a station, but its second argument is again a route. For instance, the object

route(tottenham_court_road,route(leicester_square,noroute))

can be pictured as in fig. 1.6. Such a figure is called a tree (we will have a lot more to say about trees in Chapter 4). In order to find out the route represented by this complex object, we read the leaves of this tree from left to right, until we reach the ‘terminator’ noroute. This would result in a linear notation like

[tottenham_court_road,leicester_square].

Figure 1.7. The list [a,b,c] as a tree.

For user-defined functors, such a linear notation is not available. However, Prolog provides a built-in ‘datatype’ called lists, for which both the tree-like notation and the linear notation may be used. The functor for lists is . (dot), which takes two arguments: the first element of the list (which may be any object), and the rest of the list (which must be a list). The list terminator is the special symbol [], denoting the empty list. For instance, the term

.(a,.(b,.(c,[])))

denotes the list consisting of a followed by b followed by c (fig. 1.7). Alternatively, we may use the linear notation, which uses square brackets:

[a,b,c]

To increase readability of the tree-like notation, instead of

.(First,Rest)

one can also write

[First|Rest]

Note that Rest is a list: e.g., [a,b,c] is the same list as [a|[b,c]]. a is called the head of the list, and [b,c] is called its tail. Finally, to a certain extent the two notations can be mixed: at the head of the list, you can write any number of elements in linear notation. For instance,

[First,Second,Third|Rest]

denotes a list with three or more elements.
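The relationship between the two notations can be checked by means of the built-in predicate ‘=’, which matches its two arguments. For instance, the query

?-[First,Second|Rest]=[a,b,c].

yields the answer { First → a, Second → b, Rest → [c] }.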

Exercise 1.4. A list is either the empty list [], or a non-empty list [First|Rest] where Rest is a list. Define a relation list(L), which checks whether L is a list. Adapt it such that it succeeds only for lists of (i) even length and (ii) odd length.
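list(L):-true. % replace 'true' with your definition
list(L):-true. % add more clauses as needed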

The recursive nature of such datastructures makes it possible to ignore the size of the objects, which is extremely useful in many situations. For instance, the definition of a route between two underground stations does not depend on the length of the route; all that matters is whether there is an intermediate station or not. For both cases, there is a clause. Expressing the route as a list, we can state the final definition of the reachability relation:

reachable(X,Y,[]):-connected(X,Y,L).
reachable(X,Y,[Z|R]):-connected(X,Z,L),
                      reachable(Z,Y,R).

The query ?-reachable(oxford_circus,charing_cross,R) now results in the following answers:

{ R → [tottenham_court_road,leicester_square] }
{ R → [piccadilly_circus] }
{ R → [piccadilly_circus,leicester_square] }

Note that Prolog writes out lists of fixed length in the linear notation.

Should we for some reason want to know from which station Charing Cross can be reached via a route with four intermediate stations, we should ask the query

?-reachable(X,charing_cross,[A,B,C,D])

which results in two answers:

{ X → bond_street, A → green_park, B → oxford_circus, C → tottenham_court_road, D → leicester_square }

{ X → bond_street, A → green_park, B → oxford_circus, C → piccadilly_circus, D → leicester_square }.

Exercise 1.5. Construct a query asking for a route from Bond Street to Piccadilly Circus with at least two intermediate stations.

1.4   What else is there to know about clausal logic?

The main goal of this chapter has been to introduce the most important concepts in clausal logic, and how it can be used as a reasoning formalism. Needless to say, a subject like this needs a much more extensive and precise discussion than has been attempted here, and many important questions remain. To name a few:

• what are the limits of expressiveness of clausal logic, i.e. what can and what cannot be expressed?

• what are the limits of reasoning with clausal logic, i.e. what can and what cannot be (efficiently) computed?

• how are these two limits related: is it for instance possible to enhance reasoning by limiting expressiveness?

In order to start answering such questions, we need to be more precise in defining what clausal logic is, what expressions in clausal logic mean, and how we can reason with them. That means that we will have to introduce some theory in the next chapter. This theory will not only be useful for a better understanding of Logic Programming, but it will also be the foundation for most of the topics in Part III (Advanced reasoning techniques).

Another aim of Part I of this book is to teach the skill of programming in Prolog. For this, theory alone, however important, will not suffice. Like any programming language, Prolog has a number of built-in procedures and datastructures that you should know about. Furthermore, there are of course numerous programming techniques and tricks of the trade, with which the Prolog programmer should be familiar. These subjects will be discussed in Chapter 3. Together, Chapters 2 and 3 will provide a solid foundation for the rest of the book.


2

Clausal logic and resolution:
theoretical backgrounds

In this chapter we develop a more formal view of Logic Programming by means of a rigorous treatment of clausal logic and resolution theorem proving. Any such treatment has three parts: syntax, semantics, and proof theory. Syntax defines the logical language we are using, i.e. the alphabet, different kinds of ‘words’, and the allowed ‘sentences’. Semantics defines, in some formal way, the meaning of words and sentences in the language. As with most logics, semantics for clausal logic is truth-functional, i.e. the meaning of a sentence is defined by specifying the conditions under which it is assigned certain truth values (in our case: true or false). Finally, proof theory specifies how we can obtain new sentences (theorems) from assumed ones (axioms) by means of pure symbol manipulation (inference rules).

Of these three, proof theory is most closely related to Logic Programming, because answering queries is in fact no different from proving theorems. In addition to proof theory, we need semantics for deciding whether the things we prove actually make sense. For instance, we need to be sure that the truth of the theorems is assured by the truth of the axioms. If our inference rules guarantee this, they are said to be sound. But this will not be enough, because sound inference rules can be actually very weak, and unable to prove anything of interest. We also need to be sure that the inference rules are powerful enough to eventually prove any possible theorem: they should be complete.

Concepts like soundness and completeness are called meta-theoretical, since they are not expressed in the logic under discussion, but rather belong to a theory about that logic (‘meta’ means above). Their significance is not merely theoretical, but extends to logic programming languages like Prolog. For example, if a logic programming language is unsound, it will give wrong answers to some queries; if it is incomplete, it will give no answer to some other queries. Ideally, a logic programming language should be sound and complete; in practice, this will not be the case. For instance, in the next chapter we will see that Prolog is both unsound and incomplete. This has been a deliberate design choice: a sound and complete Prolog would be much less efficient. Nevertheless, any Prolog programmer should know exactly the circumstances under which Prolog is unsound or incomplete, and avoid these circumstances in her programs.

The structure of this chapter is as follows. We start with a very simple (propositional) logical language, and enrich this language in two steps to full clausal logic. For each of these three languages, we discuss syntax, semantics, proof theory, and meta-theory. We then discuss definite clause logic, which is the subset of clausal logic used in Prolog. Finally, we relate clausal logic to Predicate Logic, and show that they are essentially equal in expressive power.

2.1   Propositional clausal logic

Informally, a proposition is any statement which is either true or false, such as ‘2 + 2 = 4’ or ‘the moon is made of green cheese’. These are the building blocks of propositional logic, the weakest form of logic.

Syntax.   Propositions are abstractly denoted by atoms, which are single words starting with a lowercase character. For instance, married is an atom denoting the proposition ‘he/she is married’; similarly, man denotes the proposition ‘he is a man’. Using the special symbols ‘ :- ’ (if), ‘ ; ’ (or) and ‘ , ’ (and), we can combine atoms to form clauses. For instance,

married;bachelor:-man,adult

is a clause, with intended meaning: ‘somebody is married or a bachelor if he is a man and an adult’ [2] . The part to the left of the if-symbol ‘ :- ’ is called the head of the clause, and the right part is called the body of the clause. The head of a clause is always a disjunction (or) of atoms, and the body of a clause is always a conjunction (and).

Exercise 2.1. Translate the following statements into clauses, using the atoms person, sad and happy:
(a)    persons are happy or sad;
(b)    no person is both happy and sad;
(c)    sad persons are not happy;
(d)    non-happy persons are sad.

A program is a set of clauses, each of them terminated by a period. The clauses are to be read conjunctively; for example, the program

woman;man:-human.
human:-woman.
human:-man.

has the intended meaning ‘(if someone is human then she/he is a woman or a man) and (if someone is a woman then she is human) and (if someone is a man then he is human)’, or, in other words, ‘someone is human if and only if she/he is a woman or a man’.

Semantics.   The Herbrand base of a program P is the set of atoms occurring in P. For the above program, the Herbrand base is { woman, man, human }. A Herbrand interpretation (or interpretation for short) for P is a mapping from the Herbrand base of P into the set of truth values { true, false }. For example, the mapping { woman → true, man → false, human → true } is a Herbrand interpretation for the above program. A Herbrand interpretation can be viewed as describing a possible state of affairs in the Universe of Discourse (in this case: ‘she is a woman, she is not a man, she is human’). Since there are only two possible truth values in the semantics we are considering, we could abbreviate such mappings by listing only the atoms that are assigned the truth value true; by definition, the remaining ones are assigned the truth value false. Under this convention, which we will adopt in this book, a Herbrand interpretation is simply a subset of the Herbrand base. Thus, the previous Herbrand interpretation would be represented as { woman, human }.

Since a Herbrand interpretation assigns truth values to every atom in a clause, it also assigns a truth value to the clause as a whole. The rules for determining the truth value of a clause from the truth values of its atoms are not so complicated, if you keep in mind that the body of a clause is a conjunction of atoms, and the head is a disjunction. Consequently, the body of a clause is true if every atom in it is true, and the head of a clause is true if at least one atom in it is true. In turn, the truth value of the clause is determined by the truth values of head and body. There are four possibilities:

(i)   the body is true, and the head is true;

(ii)  the body is true, and the head is false;

(iii) the body is false, and the head is true;

(iv) the body is false, and the head is false.

The intended meaning of the clause is ‘ if body then head’, which is obviously true in the first case, and false in the second case.

What about the remaining two cases? They cover statements like ‘ if the moon is made of green cheese then 2 + 2 = 4’, in which there is no connection at all between body and head. One would like to say that such statements are neither true nor false. However, our semantics is not sophisticated enough to deal with this: it simply insists that clauses should be assigned a truth value in every possible interpretation. Therefore, we consider the clause to be true whenever its body is false. It is not difficult to see that under these truth conditions a clause is equivalent with the statement ‘head or not body’. For example, the clause married;bachelor:-man,adult can also be read as ‘someone is married or a bachelor or not a man or not an adult’. Thus, a clause is a disjunction of atoms, which are negated if they occur in the body of the clause. Therefore, the atoms in the body of the clause are often called negative literals, while those in the head of the clause are called positive literals.

To summarise: a clause is assigned the truth value true in an interpretation, if and only if at least one of the following conditions is true: (a) at least one atom in the body of the clause is false in the interpretation (cases (iii) and (iv)), or (b) at least one atom in the head of the clause is true in the interpretation (cases (i) and (iii)). If a clause is true in an interpretation, we say that the interpretation is a model for the clause. An interpretation is a model for a program if it is a model for each clause in the program. For example, the above program has the following models: (the empty model, assigning false to every atom), { woman, human }, { man, human }, and { woman, man, human }. Since there are eight possible interpretations for a Herbrand base with three atoms, this means that the program contains enough information to rule out half of these.

Adding more clauses to the program means restricting its set of models. For instance, if we add the clause woman (a clause with an empty body) to the program, we rule out the first and third model, which leaves us with the models { woman, human }, and { woman, man, human }. Note that in both of these models, human is true. We say that human is a logical consequence of the set of clauses. In general, a clause C is a logical consequence of a program P if every model of the program is also a model of the clause; we write P ⊨ C.

Exercise 2.2. Given the program
                                        married;bachelor:-man,adult.
                man.
                :-bachelor.

determine which of the following clauses are logical consequences of this program:
(a)    married:-adult;
(b)    married:-bachelor;
(c)    bachelor:-man;
(d)    bachelor:-bachelor.

Of the two remaining models, obviously { woman, human } is the intended one; but the program does not yet contain enough information to distinguish it from the non-intended model { woman, man, human }. We can add yet another clause, to make sure that the atom man is mapped to false. For instance, we could add

:-man

(it is not a man) or

:-man,woman

(nobody is both a man and a woman). However, explicitly stating everything that is false in the intended model is not always feasible. Consider, for example, an airline database consulted by travel agencies: we simply want to say that if a particular flight (i.e., a combination of plane, origin, destination, date and time) is not listed in the database, then it does not exist, instead of listing all the dates that a particular plane does not fly from Amsterdam to London.

So, instead of adding clauses until a single model remains, we want to add a rule to our semantics which tells us which of the several models is the intended one. The airline example shows us that, in general, we only want to accept something as true if we are really forced to, i.e. if it is true in every possible model. This means that we should take the intersection of all models of a program in order to construct the intended model. In the example, this is { woman, human }. Note that this model is minimal in the sense that no subset of it is also a model. Therefore, this semantics is called a minimal model semantics.

Unfortunately, this approach is only applicable to a restricted class of programs. Consider the following program:

woman;man:-human.
human.

This program has three models: { woman, human }, { man, human }, and { woman, man, human }. The intersection of these models is { human }, but this interpretation is not a model of the first clause! The program has in fact not one, but two minimal models, which is caused by the fact that the first clause has a disjunctive head. Such a clause is called indefinite, because it does not permit definite conclusions to be drawn.

On the other hand, if we would only allow definite clauses, i.e. clauses with a single positive literal, minimal models are guaranteed to be unique. We will deal with definite clauses in section 2.4, because Prolog is based on definite clause logic. In principle, this means that clauses like woman;man:-human are not expressible in Prolog. However, such a clause can be transformed into a ‘pseudo-definite’ clause by moving one of the literals in the head to the body, extended with an extra negation. This gives the following two possibilities:

woman:-human,not(man).
man:-human,not(woman).

In Prolog, we have to choose between these two clauses, which means that we have only an approximation of the original indefinite clause. Negation in Prolog is an important subject with many aspects. In Chapter 3, we will show how Prolog handles negation in the body of clauses. In Chapter 8, we will discuss particular applications of this kind of negation.

Proof theory.   Recall that a clause C is a logical consequence of a program P (P ⊨ C) if every model of P is a model of C. Checking this condition is, in general, unfeasible. Therefore, we need a more efficient way of computing logical consequences, by means of inference rules. If C can be derived from P by means of a number of applications of such inference rules, we say that C can be proved from P. Such inference rules are purely syntactic, and do not refer to any underlying semantics.

The proof theory for clausal logic consists of a single inference rule called resolution. Resolution is a very powerful inference rule. Consider the following program:

married;bachelor:-man,adult.
has_wife:-man,married.

This simple program has no less than 26 models, each of which needs to be considered if we want to check whether a clause is a logical consequence of it.

Exercise 2.3. Write down the six Herbrand interpretations that are not models of the program.

The following clause is a logical consequence of this program:

has_wife;bachelor:-man,adult

By means of resolution, it can be produced in a single step. This step represents the following line of reasoning: ‘if someone is a man and an adult, then he is a bachelor or married; but if he is married, he has a wife; therefore, if someone is a man and an adult, then he is a bachelor or he has a wife’. In this argument, the two clauses in the program are related to each other by means of the atom married, which occurs in the head of the first clause (a positive literal) and in the body of the second (a negative literal). The derived clause, which is called the resolvent, consists of all the literals of the two input clauses, except married (the literal resolved upon). The negative literal man, which occurs in both input clauses, appears only once in the derived clause. This process is depicted in fig. 2.1.

Figure 2.1. A resolution step.

Resolution is most easily understood when applied to definite clauses. Consider the following program:

square:-rectangle,equal_sides.
rectangle:-parallelogram,right_angles.

Applying resolution yields the clause

square:-parallelogram,right_angles,equal_sides

Figure 2.2. Resolution with definite clauses.

That is, the atom rectangle in the body of the first clause is replaced by the body of the second clause (which has rectangle as its head). This process is also referred to as unfolding the second clause into the first one (fig. 2.2).

A resolvent resulting from one resolution step can be used as input for the next. A proof or derivation of a clause C from a program P is a sequence of clauses such that each clause is either in the program, or the resolvent of two previous clauses, and the last clause is C. If there is a proof of C from P, we write P ⊢ C.

Exercise 2.4. Give a derivation of friendly from the following program:
                                        happy;friendly:-teacher.
                friendly:-teacher,happy.
                teacher;wise.
               teacher:-wise.

Meta-theory.   It is easy to show that propositional resolution is sound: you have to establish that every model for the two input clauses is a model for the resolvent. In our earlier example, every model of married;bachelor:-man,adult and has_wife:-man,married must be a model of has_wife;bachelor:-man,adult. Now, the literal resolved upon (in this case married) is either assigned the truth value true or false. In the first case, every model of has_wife:-man,married is also a model of has_wife:-man; in the second case, every model of married;bachelor:-man,adult is also a model of bachelor:-man,adult. In both cases, these models are models of a subclause of the resolvent, which means that they are also models of the resolvent itself.

In general, proving completeness is more complicated than proving soundness. Still worse, proving completeness of resolution is impossible, because resolution is not complete at all! For instance, consider the clause a:-a. This clause is a so-called tautology: it is true under any interpretation. Therefore, any model of an arbitrary program P is a model for it, and thus P ⊨ a:-a for any program P. If resolution were complete, it would be possible to derive the clause a:-a from some program P in which the literal a doesn’t even occur! It is clear that resolution is unable to do this.

However, this is not necessarily bad, because although tautologies follow from any set of clauses, they are not very interesting. Resolution makes it possible to guide the inference process, by implementing the question ‘is C a logical consequence of P?’ rather than ‘what are the logical consequences of P?’. We will see that, although resolution is unable to generate every logical consequence of a set of clauses, it is complete in the sense that resolution can always determine whether a specific clause is a logical consequence of a set of clauses.

The idea is analogous to a proof technique in mathematics called ‘reduction to the absurd’. Suppose for the moment that C consists of a single positive literal a; we want to know whether P ⊨ a, i.e. whether every model of P is also a model of a. It is easily checked that an interpretation is a model of a if, and only if, it is not a model of :-a. Therefore, every model of P is a model of a if, and only if, there is no interpretation which is a model of both :-a and P. In other words, a is a logical consequence of P if, and only if, :-a and P are mutually inconsistent (don’t have a common model). So, checking whether P ⊨ a is equivalent to checking whether P ∪ { :-a } is inconsistent.

Resolution provides a way to check this condition. Note that, since an inconsistent set of clauses doesn’t have a model, it trivially satisfies the condition that any model of it is a model of any other clause; therefore, an inconsistent set of clauses has every possible clause as its logical consequence. In particular, the absurd or empty clause, denoted by □ [3], is a logical consequence of an inconsistent set of clauses. Conversely, if □ is a logical consequence of a set of clauses, we know it must be inconsistent. Now, resolution is complete in the sense that if a set of clauses is inconsistent, it is always possible to derive □ by resolution. Since resolution is sound, we already know that if we can derive □ then the input clauses must be inconsistent. So we conclude: a is a logical consequence of P if, and only if, the empty clause □ can be deduced by resolution from P augmented with :-a. This process is called proof by refutation, and resolution is called refutation complete.

This proof method can be generalised to the case where C is not a single atom. For instance, let us check by resolution that a:-a is a tautology, i.e. a logical consequence of any set of clauses. Logically speaking, this clause is equivalent to ‘ a or not a ’, the negation of which is ‘ not a and a ’, which is represented by two separate clauses :-a and a. Since we can derive the empty clause □ from these two clauses in a single resolution step without using any other clauses, we have in fact proved that a:-a is a logical consequence of an empty set of clauses, hence a tautology.

Exercise 2.5. Prove by refutation that friendly:-has_friends is a logical consequence of the following clauses:
                                        happy:-has_friends.
                friendly:-happy.

Finally, we mention that although resolution can always be used to prove inconsistency of a set of clauses it is not always fit to prove the opposite, i.e. consistency of a set of clauses. For instance, a is not a logical consequence of a:-a; yet, if we try to prove the inconsistency of :-a and a:-a (which should fail) we can go on applying resolution forever! The reason, of course, is that there is a loop in the system: applying resolution to :-a and a:-a again yields :-a. In this simple case it is easy to check for loops: just maintain a list of previously derived clauses, and do not proceed with clauses that have been derived previously.

However, as we will see, this is not possible in the general case of full clausal logic, which is semi-decidable with respect to the question ‘is B a logical consequence of A ’: there is an algorithm which derives, in finite time, a proof if one exists, but there is no algorithm which, for any A and B, halts and returns ‘no’ if no proof exists. The reason for this is that interpretations for full clausal logic are in general infinite. As a consequence, some Prolog programs may loop forever (just like some Pascal programs). One might suggest that it should be possible to check, just by examining the source code, whether a program is going to loop or not, but, as Alan Turing showed, this is, in general, impossible (the Halting Problem). That is, you can write programs for checking termination of programs, but for any such termination checking program you can write a program on which it will not terminate itself!

2.2   Relational clausal logic

Propositional clausal logic is rather coarse-grained, because it takes propositions (i.e. anything that can be assigned a truth value) as its basic building blocks. For example, it is not possible to formulate the following argument in propositional logic:

Peter likes all his students

Maria is one of Peter’s students

Therefore, Peter likes Maria

In order to formalise this type of reasoning, we need to talk about individuals like Peter and Maria, sets of individuals like Peter’s students, and relations between individuals, such as ‘likes’. This refinement of propositional clausal logic leads us into relational clausal logic.

Syntax.   Individual names are called constants; we follow the Prolog convention of writing them as single words starting with a lowercase character (or as arbitrary strings enclosed in single quotes, like 'this is a constant'). Arbitrary individuals are denoted by variables, which are single words starting with an uppercase character. Jointly, constants and variables are denoted as terms. A ground term is a term without variables [4] .

Relations between individuals are abstractly denoted by predicates (which follow the same notational conventions as constants). An atom is a predicate followed by a number of terms, enclosed in brackets and separated by commas, e.g. likes(peter,maria). The terms between brackets are called the arguments of the predicate, and the number of arguments is the predicate’s arity. The arity of a predicate is assumed to be fixed, and predicates with the same name but different arity are assumed to be different. A ground atom is an atom without variables.

All the remaining definitions pertaining to the syntax of propositional clausal logic, in particular those of literal, clause and program, stay the same. So, the following clauses are meant to represent the above statements:

likes(peter,S):-student_of(S,peter).
student_of(maria,peter).
likes(peter,maria).

The intended meanings of these clauses are, respectively, ‘ if S is a student of Peter then Peter likes S ’, ‘Maria is a student of Peter’, and ‘Peter likes Maria’. Clearly, we want our logic to be such that the third clause follows logically from the first two, and we want to be able to prove this by resolution. Therefore, we must extend the semantics and proof theory in order to deal with variables.

Semantics.   The Herbrand universe of a program P is the set of ground terms (i.e. constants) occurring in it. For the above program, the Herbrand universe is { peter, maria }. The Herbrand universe is the set of all individuals we are talking about in our clauses. The Herbrand base of P is the set of ground atoms that can be constructed using the predicates in P and the ground terms in the Herbrand universe. This set represents all the things we can say about the individuals in the Herbrand universe.

The Herbrand base of the above program is

{ likes(peter,peter) , likes(peter,maria) ,
likes(maria,peter) , likes(maria,maria) ,
student_of(peter,peter) , student_of(peter,maria) ,
student_of(maria,peter) , student_of(maria,maria) }

As before, a Herbrand interpretation is the subset of the Herbrand base whose elements are assigned the truth value true. For instance,

{ likes(peter,maria) , student_of(maria,peter) }

is an interpretation of the above program.

Logical variables

Variables in clausal logic are very similar to variables in mathematical formulas: they are placeholders that can be substituted by arbitrary ground terms from the Herbrand universe. It is very important to notice that logical variables are global within a clause (i.e. if the variable occurs at several positions within a clause, it should be substituted everywhere by the same term), but not within a program. This can be clearly seen from the semantics of relational clausal logic, where grounding substitutions are applied to clauses rather than programs. As a consequence, variables in two different clauses are distinct by definition, even if they have the same name. It will sometimes be useful to rename the variables in clauses, such that no two clauses share a variable; this is called standardising the clauses apart.

Clearly, we want this interpretation to be a model of the program, but now we have to deal with the variables in the program. A substitution is a mapping from variables to terms. For example, { S → maria } and { S → X } are substitutions. A substitution can be applied to a clause, which means that all occurrences of a variable occurring on the lefthand side in a substitution are replaced by the term on the righthand side. For instance, if C is the clause

likes(peter,S):-student_of(S,peter)

then the above substitutions yield the clauses

likes(peter,maria):-student_of(maria,peter)

likes(peter,X):-student_of(X,peter)

Notice that the first clause is ground; it is said to be a ground instance of C, and the substitution { S → maria } is called a grounding substitution. All the atoms in a ground clause occur in the Herbrand base, so reasoning with ground clauses is just like reasoning with propositional clauses. An interpretation is a model for a non-ground clause if it is a model for every ground instance of the clause. Thus, in order to show that

M = { likes(peter,maria) , student_of(maria,peter) }

is a model of the clause C above, we have to construct the set of the ground instances of C over the Herbrand universe { peter, maria }, which is

{ likes(peter,maria):-student_of(maria,peter) ,
likes(peter,peter):-student_of(peter,peter) }

and show that M is a model of every element of this set.

Exercise 2.6. How many models does C have over the Herbrand universe
{ peter, maria }?

Proof theory.   Because reasoning with ground clauses is just like reasoning with propositional clauses, a naive proof method in relational clausal logic would apply grounding substitutions to every clause in the program before applying resolution. Such a method is naive, because a program has many different grounding substitutions, most of which do not lead to a resolution proof. For instance, if the Herbrand universe contains four constants, then a clause with two distinct variables has 4² = 16 different grounding substitutions, and a program consisting of three such clauses has 16³ = 4096 different grounding substitutions.

Instead of applying arbitrary grounding substitutions before trying to apply resolution, we will derive the required substitutions from the clauses themselves. Recall that in order to apply propositional resolution, the literal resolved upon should occur in both input clauses (positive in one clause and negative in the other). In relational clausal logic, atoms can contain variables. Therefore, we do not require that exactly the same atom occurs in both clauses; rather, we require that there is a pair of atoms which can be made equal by substituting terms for variables. For instance, let P be the following program:

likes(peter,S):-student_of(S,peter).
student_of(maria,T):-follows(maria,C),teaches(T,C).

The second clause is intended to mean: ‘Maria is a student of any teacher who teaches a course she follows’. From these two clauses we should be able to prove that ‘Peter likes Maria if Maria follows a course taught by Peter’. This means that we want to resolve the two clauses on the student_of literals.

The two atoms student_of(S,peter) and student_of(maria,T) can be made equal by replacing S by maria and T by peter, by means of the substitution { S → maria, T → peter }. This process is called unification, and the substitution is called a unifier. Applying this substitution yields the following two clauses:

likes(peter,maria):-student_of(maria,peter).
student_of(maria,peter):-follows(maria,C),
                        teaches(peter,C).

(Note that the second clause is not ground.) We can now construct the resolvent in the usual way, by dropping the literal resolved upon and combining the remaining literals, which yields the required clause

likes(peter,maria):-follows(maria,C),teaches(peter,C).

Exercise 2.7. Write a clause expressing that Peter teaches all the first-year courses, and apply resolution to this clause and the above resolvent.

Consider the following two-clause program P′:

likes(peter,S):-student_of(S,peter).
student_of(X,T):-follows(X,C),teaches(T,C).

which differs from the previous program P in that the constant maria in the second clause has been replaced by a variable. Since this generalises the applicability of this clause from Maria to any of Peter’s students, it follows that any model for P′ over a Herbrand universe including maria is also a model for P, and therefore P′ ⊨ P. In particular, this means that all the logical consequences of P are also logical consequences of P′. For instance, we can again derive the clause

likes(peter,maria):-follows(maria,C),teaches(peter,C).

from P′ by means of the unifier { S → maria, X → maria, T → peter }.

Unifiers are not necessarily grounding substitutions: the substitution { X → S, T → peter } also unifies the two student_of literals, and the two clauses then resolve to

likes(peter,S):-follows(S,C),teaches(peter,C).

The first unifier replaces more variables by terms than strictly necessary, while the second contains only those substitutions that are needed to unify the two atoms in the input clauses. As a result, the first resolvent is a special case of the second resolvent, which can be obtained by means of the additional substitution { S → maria }. Therefore, the second resolvent is said to be more general than the first [5] . Likewise, the second unifier is called a more general unifier than the first.

As it were, more general resolvents summarise a lot of less general ones. It therefore makes sense to derive only those resolvents that are as general as possible, when applying resolution to clauses with variables. This means that we are only interested in a most general unifier (mgu) of two literals. Such an mgu, if it exists, is always unique, apart from an arbitrary renaming of variables (e.g. we could decide to keep the variable X, and replace S by X). If a unifier does not exist, we say that the two atoms are not unifiable. For instance, the atoms student_of(maria,peter) and student_of(S,maria) are not unifiable.
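Unification can be tried out directly in most Prolog systems, again by means of the built-in predicate ‘=’, which unifies its two arguments. For instance, the query

?-student_of(S,peter)=student_of(maria,T).

succeeds with the mgu { S → maria, T → peter }, whereas the query ?-student_of(maria,peter)=student_of(S,maria). simply fails.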

As we have seen before, the actual proof method in clausal logic is proof by refutation. If we succeed in deriving the empty clause, then we have demonstrated that the set of clauses is inconsistent under the substitutions that are needed for unification of literals. For instance, consider the program

likes(peter,S):-student_of(S,peter).
student_of(S,T):-follows(S,C),teaches(T,C).
teaches(peter,ai_techniques).
follows(maria,ai_techniques).

If we want to find out if there is anyone whom Peter likes, we add to the program the negation of this statement, i.e. ‘Peter likes nobody’ or :-likes(peter,N); this clause is called a query or a goal. We then try to refute this query by finding an inconsistency by means of resolution. A refutation proof is given in fig. 2.3. In this figure, which is called a proof tree, two clauses on a row are input clauses for a resolution step, and they are connected by lines to their resolvent, which is then again an input clause for a resolution step, together with another program clause. The mgu’s are also shown. Since the empty clause is derived, the query is indeed refuted, but only under the substitution { N→maria }, which constitutes the answer to the query.

Figure 2.3. A refutation proof which finds someone whom Peter likes.

In general, a query can have several answers. For instance, suppose that Peter does not only like his students, but also the people his students like (and the people those people like, and …):

likes(peter,S):-student_of(S,peter).
likes(peter,Y):-likes(peter,X),likes(X,Y).
likes(maria,paul).
student_of(S,T):-follows(S,C),teaches(T,C).
teaches(peter,ai_techniques).
follows(maria,ai_techniques).

The query

?-likes(peter,N).

will now have two answers.
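
Anticipating Prolog’s proof procedure (Chapter 3), a session with this program would produce both answers in turn; the following is a sketch (asking for further alternatives after the second answer would cause the recursive second clause to loop forever, an issue we return to in Chapter 3):

?-likes(peter,N).
N = maria ;
N = paul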

Exercise 2.8. Draw the proof tree for the answer { N→paul }.

Meta-theory.   As with propositional resolution, relational resolution is sound (i.e. it always produces logical consequences of the input clauses), refutation complete (i.e. it always detects an inconsistency in a set of clauses), but not complete (i.e. it does not always generate every logical consequence of the input clauses). An important characteristic of relational clausal logic is that the Herbrand universe (the set of individuals we can reason about) is always finite. Consequently, models are finite as well, and there are a finite number of different models for any program. This means that, in principle, we could answer the question ‘is C a logical consequence of P?’ by enumerating all the models of P, and checking whether they are also models of C. The finiteness of the Herbrand universe will ensure that this procedure always terminates. This demonstrates that relational clausal logic is decidable, and therefore it is (in principle) possible to prevent resolution from looping if no more answers can be found. As we will see in the next section, this does not hold for full clausal logic.

2.3   Full clausal logic

Relational logic extends propositional logic by means of the logical variable, which enables us to talk about arbitrary un-named individuals. However, consider the following statement:

Everybody loves somebody.

The only way to express this statement in relational clausal logic, is by explicitly listing every pair of persons such that the first loves the second, e.g.

loves(peter,peter).
loves(anna,paul).
loves(paul,anna).

First of all, this is not a precise translation of the above statement into logic, because it is too explicit (e.g. the fact that Peter loves himself does not follow from the original statement). Secondly, this translation works only for finite domains, while the original statement also allows infinite domains. Many interesting domains are infinite, such as the set of natural numbers. Full clausal logic allows us to reason about infinite domains by introducing more complex terms besides constants and variables. The above statement translates into full clausal logic as

loves(X,person_loved_by(X))

The fact loves(peter,person_loved_by(peter)) is a logical consequence of this clause. Since we know that everybody loves somebody, there must exist someone whom Peter loves. We have given this person the abstract name

person_loved_by(peter)

without explicitly stating whom it is that Peter loves. As we will see, this way of composing complex names from simple names also gives us the possibility to reflect the structure of the domain in our logical formulas.

Exercise 2.9. Translate to clausal logic:
(a)    every mouse has a tail;
(b)    somebody loves everybody;
(c)    every two numbers have a maximum.

Syntax.   A term is either simple or complex. Constants and variables are simple terms. A complex term is a functor (which follows the same notational conventions as constants and predicates) followed by a number of terms, enclosed in brackets and separated by commas, e.g. eldest_child_of(anna,paul). The terms between brackets are called the arguments of the functor, and the number of arguments is the functor’s arity. Again, a ground term is a term without variables. All the other definitions (atom, clause, literal, program) are the same as for relational clausal logic.

Semantics.   Although there is no syntactic difference in full clausal logic between terms and atoms, their meaning and use is totally different, a fact which should be adequately reflected in the semantics. A term always denotes an individual from the domain, while an atom denotes a proposition about individuals, which can get a truth value. Consequently, we must change the definition of the Herbrand universe in order to accommodate complex terms: given a program P, the Herbrand universe is the set of ground terms that can be constructed from the constants and functors in P (if P contains no constants, choose an arbitrary one). For instance, let P be the program

plus(0,X,X).
plus(s(X),Y,s(Z)):-plus(X,Y,Z).

then the Herbrand universe of P is { 0, s(0), s(s(0)), s(s(s(0))), …}. Thus, as soon as a program contains a functor, the Herbrand universe (the set of individuals we can reason about) is an infinite set.
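
This infinite universe can itself be enumerated by a simple recursive predicate. The following sketch uses term/1, a predicate name introduced here purely for illustration; the query ?-term(T) then generates 0, s(0), s(s(0)), … on backtracking:

term(0).              % 0 is a ground term
term(s(X)):-term(X).  % if X is a ground term, then so is s(X)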

Exercise 2.10. Determine the Herbrand universe of the following program:

listlength([],0).
listlength([_X|Y],s(L)):-listlength(Y,L).

(Hint: recall that [] is a constant, and that [X|Y] is an alternative notation for the complex term .(X,Y) with binary functor ‘ . ’!)

The Herbrand base of P remains the set of ground atoms that can be constructed using the predicates in P and the ground terms in the Herbrand universe. For the above program, the Herbrand base is

{ plus(0,0,0) , plus(s(0),0,0) , …,
plus(0,s(0),0) , plus(s(0),s(0),0) , …,
…,
plus(s(0),s(s(0)),s(s(s(0)))) , …}

As before, a Herbrand interpretation is a subset of the Herbrand base, whose elements are assigned the truth value true. For instance,

{ plus(0,0,0) , plus(s(0),0,s(0)) , plus(0,s(0),s(0)) }

is an interpretation of the above program.

Unification vs. evaluation

Functors should not be confused with mathematical functions. Although both can be viewed as mappings from objects to objects, an expression containing a functor is not evaluated to determine the value of the mapping, as in mathematics. Rather, the outcome of the mapping is a name, which is determined by unification. For instance, given the complex term person_loved_by(X), if we want to know the name of the object to which Peter is mapped, we unify X with peter to get person_loved_by(peter); this ground term is not evaluated any further.

This approach has the disadvantage that we introduce different names for individuals that might turn out to be identical, e.g. person_loved_by(peter) might be the same as peter. Consequently, reasoning about equality (of different names for the same object) is a problem in clausal logic. Several possible solutions exist, but they fall outside the scope of this book.

Is this interpretation also a model of the program? As in the propositional case, we define an interpretation to be a model of a program if it is a model of every ground instance of every clause in the program. But since the Herbrand universe is infinite, there are an infinite number of grounding substitutions, hence we must generate the ground clauses in a systematic way, e.g.

plus(0,0,0)
plus(s(0),0,s(0)):-plus(0,0,0)
plus(s(s(0)),0,s(s(0))):-plus(s(0),0,s(0))
plus(s(s(s(0))),0,s(s(s(0)))):-plus(s(s(0)),0,s(s(0)))

plus(0,s(0),s(0))
plus(s(0),s(0),s(s(0))):-plus(0,s(0),s(0))
plus(s(s(0)),s(0),s(s(s(0)))):-plus(s(0),s(0),s(s(0)))

plus(0,s(s(0)),s(s(0)))
plus(s(0),s(s(0)),s(s(s(0)))):-plus(0,s(s(0)),s(s(0)))
plus(s(s(0)),s(s(0)),s(s(s(s(0))))):-
                             plus(s(0),s(s(0)),s(s(s(0))))

Now we can reason as follows: according to the first ground clause, plus(0,0,0) must be in any model; but then the second ground clause requires that plus(s(0),0,s(0)) must be in any model, the third ground clause requires plus(s(s(0)),0,s(s(0))) to be in any model, and so on. Likewise, the second group of ground clauses demands that

plus(0,s(0),s(0))
plus(s(0),s(0),s(s(0)))
plus(s(s(0)),s(0),s(s(s(0))))

are in the model; the third group of ground clauses requires that

plus(0,s(s(0)),s(s(0)))
plus(s(0),s(s(0)),s(s(s(0))))
plus(s(s(0)),s(s(0)),s(s(s(s(0)))))

are in the model, and so forth.

In other words, every model of this program is necessarily infinite. Moreover, as you should have guessed by now, it contains every ground atom such that the number of s ’s in the third argument equals the number of s ’s in the first argument plus the number of s ’s in the second argument. The way we generated this infinite model is particularly interesting, because it is essentially what was called the naive proof method in the relational case: generate all possible ground instances of program clauses by applying every possible grounding substitution, and then apply (propositional) resolution as long as you can. While, in the case of relational clausal logic, there inevitably comes a point where applying resolution will not give any new results (i.e. you reach a fixpoint), in the case of full clausal logic with infinite Herbrand universe you can go on applying resolution forever. On the other hand, as we saw above, we get a clear idea of what the infinite model [6] we’re constructing looks like, which means that it is still a fixpoint in some sense. There are mathematical techniques to deal with such infinitary fixpoints, but we will not dive into this subject here.

Although the introduction of only a single functor already results in an infinite Herbrand universe, models are not necessarily infinite. Consider the following program:

reachable(oxford_circus,charing_cross,piccadilly_circus).
reachable(X,Y,route(Z,R)):-
	connected(X,Z,_L),
	reachable(Z,Y,R).
connected(bond_street,oxford_circus,central).

with intended meaning ‘Charing Cross is reachable from Oxford Circus via Piccadilly Circus’, ‘if X is connected to Z by line L and Y is reachable from Z via R, then Y is reachable from X via a route consisting of Z and R’ and ‘Bond Street is connected to Oxford Circus by the Central line’. The minimal model of this program is the finite set

{ connected(bond_street,oxford_circus,central) ,
reachable(oxford_circus,charing_cross,piccadilly_circus) ,
reachable(bond_street,charing_cross,route(oxford_circus,piccadilly_circus)) }

A Prolog program for constructing models of a given set of clauses (or submodels if the models are infinite) can be found in section 5.4.

Proof theory.   Resolution for full clausal logic is very similar to resolution for relational clausal logic: we only have to modify the unification algorithm in order to deal with complex terms. For instance, consider the atoms

plus(s(0),X,s(X))

and

plus(s(Y),s(0),s(s(Y)))

Their mgu is { Y→0, X→s(0) }, yielding the atom

plus(s(0),s(0),s(s(0)))

In order to find this mgu, we first of all have to make sure that the two atoms do not have any variables in common; if needed, some of the variables should be renamed. Then, after making sure that both atoms contain the same predicate (with the same arity), we scan the atoms from left to right, searching for the first subterms at which the two atoms differ. In our example, these are 0 and Y. If neither of these subterms is a variable, then the two atoms are not unifiable; otherwise, substitute the other term for all occurrences of the variable in both atoms, and remember this partial substitution (in the above example: { Y→0 }), because it is going to be part of the unifier we are constructing. Then, proceed with the next subterms at which the two atoms differ. Unification is finished when no such subterms can be found (the two atoms are made equal).
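
Looking ahead to Prolog once more, this algorithm is essentially what =/2 implements (minus the occur check discussed below), so the mgu above can be verified directly; a sketch of a session:

?-plus(s(0),X,s(X)) = plus(s(Y),s(0),s(s(Y))).
X = s(0),
Y = 0.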

Although the two atoms initially have no variables in common, this may change during the unification process. Therefore, it is important that, before a variable is replaced by a term, we check whether the variable already occurs in that term; this is called the occur check. If the variable does not occur in the term by which it is to be replaced, everything is in order and we can proceed; if it does, the unification should fail, because it would lead to circular substitutions and infinite terms. To illustrate this, consider again the clause

loves(X,person_loved_by(X))

We want to know whether this implies that someone loves herself; thus, we add the query :-loves(Y,Y) to this clause and try to apply resolution. To this end, we must unify the two atoms. The first subterms at which they differ are the first arguments, so we apply the partial substitution { Y→X } to the two atoms, resulting in

loves(X,person_loved_by(X))

and

loves(X,X)

The next subterms at which these atoms differ are their second arguments, one of which is a variable. Suppose that we ignore the fact that this variable, X, already occurs in the other term; we construct the substitution { X→person_loved_by(X) }. Now, we have reached the end of the two atoms, so unification has succeeded, we have derived the empty clause, and the answer to the query is

X→person_loved_by(person_loved_by(person_loved_by(…)))

which is an infinite term.

Now we have two problems. The first is that we did not define any semantics for infinite terms, because there are no infinite terms in the Herbrand base. But even worse, the fact that there exists someone who loves herself is not a logical consequence of the above clause! That is, this clause has models in which nobody loves herself. So, unification without occur check would make resolution unsound.

Exercise 2.11. If possible, unify the following pairs of terms:
(a)    plus(X,Y,s(Y)) and plus(s(V),W,s(s(V)));
(b)    length([X|Y],s(0)) and length([V],V);
(c)    larger(s(s(X)),X) and larger(V,s(V)).

The disadvantage of the occur check is that it can be computationally very costly. Suppose that you need to unify X with a list of a thousand elements: the complete list has to be searched in order to check whether X occurs somewhere in it. Moreover, cases in which the occur check is needed often look somewhat exotic. Since the developers of Prolog were also taking the efficiency of the Prolog interpreter into consideration, they decided to omit the occur check from Prolog’s unification algorithm. On the whole, this makes Prolog unsound; but this unsoundness only manifests itself in very specific cases, and it is the duty of the programmer to avoid such cases. In case you really need sound unification, most available Prolog implementations provide it as a library routine; to have it applied in every resolution step, however, you must build your own Prolog interpreter. In Chapter 3, we will see that this is in fact amazingly simple: it can even be done in Prolog!
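
For instance, ISO standard Prolog offers sound unification as the built-in predicate unify_with_occurs_check/2, while plain =/2 omits the check. A sketch (the behaviour of the second query varies between systems):

?-unify_with_occurs_check(Y,person_loved_by(Y)).
false.

?-Y = person_loved_by(Y).
% system-dependent: may loop, or succeed with a cyclic term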

Meta-theory.   Most meta-theoretical results concerning full clausal logic have already been mentioned. Full clausal resolution is sound (as long as unification is performed with the occur check), refutation complete but not complete. Moreover, due to the possibility of infinite interpretations full clausal logic is only semi-decidable: that is, if A is a logical consequence of B, then there is an algorithm that will check this in finite time; however, if A is not a logical consequence of B, then there is no algorithm which is guaranteed to check this in finite time for arbitrary A and B. Consequently, there is no general way to prevent Prolog from looping if no (further) answers to a query can be found.

2.4   Definite clause logic

In the foregoing three sections, we introduced and discussed three variants of clausal logic, in order of increasing expressiveness. In this section, we will show how an additional restriction on each of these variants will significantly improve the efficiency of a computational reasoning system for clausal logic. This is the restriction to definite clauses, on which Prolog is based. On the other hand, this restriction also means that definite clause logic is less expressive than full clausal logic, the main difference being that clausal logic can handle negative information. If we allow negated literals in the body of a definite clause then we obtain a so-called general clause, which is probably the closest we can get to full clausal logic without having to sacrifice efficiency.

Consider the following program:

married(X);bachelor(X):-man(X),adult(X).
man(peter).
adult(peter).
:-married(maria).
:-bachelor(maria).
man(paul).
:-bachelor(paul).

There are many clauses that are logical consequences of this program. In particular, the following three clauses can be derived by resolution:

married(peter);bachelor(peter)
:-man(maria),adult(maria)
married(paul):-adult(paul)

Exercise 2.12. Draw the proof tree for each of these derivations.

In each of these derivations, the first clause in the program is used in a different way. In the first one, only literals in the body are resolved away; one could say that the clause is used from right to left. In the second derivation the clause is used from left to right, and in the third one literals from both the head and the body are resolved away. The way in which a clause is used in a resolution proof cannot be fixed in advance, because it depends on the thing we want to prove (the query in refutation proofs).

On the other hand, this indeterminacy substantially increases the time it takes to find a refutation. Let us decide for the moment to use clauses only in one direction, say from right to left. That is, we can only resolve the negative literals away in a clause, as in the first derivation above, but not the positive literals. But now we have a problem: how are we going to decide whether Peter is married or a bachelor? We are stuck with a clause with two positive literals, representing a disjunctive or indefinite conclusion.

This problem can in turn be solved by requiring that clauses have exactly one positive literal, which leads us into definite clause logic. Consequently, a definite clause 

A:-B1,…,Bn

will always be used in the following way: A is proved by proving each of B1,…,Bn. This is called the procedural interpretation of definite clauses, and its simplicity makes the search for a refutation much more efficient than in the indefinite case. Moreover, it allows for an implementation which limits the amount of memory needed, as will be explained in more detail in Chapter 5.

But how do we express in definite clause logic that adult men are bachelors or married? Even if we read the corresponding indefinite clause from right to left only, it basically has two different procedural interpretations:

(i)   to prove that someone is married, prove that he is a man and an adult, and prove that he is not a bachelor;

(ii)  to prove that someone is a bachelor, prove that he is a man and an adult, and prove that he is not married.

We should first choose one of these procedural interpretations, and then convert it into a ‘pseudo-definite’ clause. In case (i), this would be

married(X):-man(X),adult(X),not bachelor(X)

and case (ii) becomes

bachelor(X):-man(X),adult(X),not married(X)

These clauses do not conform to the syntax of definite clause logic, because of the negation symbol not. We will call them general clauses.

If we want to extend definite clause logic to cover general clauses, we should extend resolution in order to deal with negated literals in the body of a clause. In addition, we should extend the semantics. This topic will be addressed in section 8.2. Without going into too much detail here, we will demonstrate that preferring a certain procedural interpretation corresponds to preferring a certain minimal model. Reconsider the original indefinite clause

married(X);bachelor(X):-man(X),adult(X)

Supposing that john is the only individual in the Herbrand universe, and that man(john) and adult(john) are both true, then the models of this clause are

{ man(john) , adult(john) , married(john) }
{ man(john) , adult(john) , bachelor(john) }
{ man(john) , adult(john) , married(john) , bachelor(john) }

Note that the first two models are minimal, as is characteristic for indefinite clauses. If we want to make the clause definite, we should single out one of these two minimal models as the intended model. If we choose the first model, in which John is married but not a bachelor, we are actually preferring the general clause

married(X):-man(X),adult(X),not bachelor(X)

Likewise, the second model corresponds to the general clause

bachelor(X):-man(X),adult(X),not married(X)

Exercise 2.13. Write a clause for the statement ‘somebody is innocent unless proven guilty’, and give its intended model (supposing that john is the only individual in the Herbrand universe).

An alternative approach to general clauses is to treat not as a special Prolog predicate, as will be discussed in the next chapter. This has the advantage that we need not extend the proof theory and semantics to incorporate general clauses. However, a disadvantage is that in this way not can only be understood procedurally.

2.5   The relation between clausal logic and Predicate Logic

Clausal logic is a formalism especially suited for automated reasoning. However, the form of logic usually presented in courses on Symbolic Logic is (first-order) Predicate Logic. Predicate logic is more expressive in the sense that statements expressed in Predicate Logic often result in shorter formulas than would result if they were expressed in clausal logic. This is due to the larger vocabulary and less restrictive syntax of Predicate Logic, which includes quantifiers (‘for all’ (∀) and ‘there exists’ (∃)), and various logical connectives (conjunction (∧), disjunction (∨), negation (¬), implication (→), and equivalence (↔)) which may occur anywhere within a formula.

Being syntactically quite different, clausal logic and Predicate Logic are semantically equivalent in the following sense: every set of clauses is, after minor modifications, a formula in Predicate Logic, and conversely, every formula in Predicate Logic can be rewritten to an ‘almost’ equivalent set of clauses. Why then bother about Predicate Logic at all in this book? The main reason is that in Chapter 8, we will discuss an alternative semantics of logic programs, defined in terms of Predicate Logic. In this section, we will illustrate the semantic equivalence of clausal logic and Predicate Logic. We will assume a basic knowledge of the syntax and semantics of Predicate Logic.

We start with the propositional case. Any clause like

married;bachelor:-man,adult

can be rewritten by reversing head and body and replacing the ‘ :- ’ sign by an implication ‘→’, replacing ‘ , ’ by a conjunction ‘∧’, and replacing ‘ ; ’ by a disjunction ‘∨’, which yields

man∧adult → married∨bachelor

By using the logical laws A→B ≡ ¬A∨B and ¬(C∧D) ≡ ¬C∨¬D, this can be rewritten into the logically equivalent formula

¬man∨¬adult∨married∨bachelor

which, by the way, clearly demonstrates the origin of the terms negative literal and positive literal!

A set of clauses can be rewritten by rewriting each clause separately, and combining the results into a single conjunction, e.g.

married;bachelor:-man,adult.
has_wife:-man,married.

becomes

(¬man∨¬adult∨married∨bachelor) ∧
(¬man∨¬married∨has_wife)

Formulas like these, i.e. conjunctions of disjunctions of atoms and negated atoms, are said to be in conjunctive normal form (CNF).

The term ‘normal form’ here indicates that every formula of Predicate Logic can be rewritten into a unique equivalent formula in conjunctive normal form, and therefore to a unique equivalent set of clauses. For instance, the formula

(married∨¬child) → (adult∧(man∨woman))

can be rewritten into CNF as (replace A→B by ¬A∨B, push negations inside by means of De Morgan’s laws: ¬(C∧D) ≡ ¬C∨¬D and ¬(C∨D) ≡ ¬C∧¬D, and distribute ∨ over ∧ by means of (A∧B)∨C ≡ (A∨C)∧(B∨C)):

(¬married∨adult) ∧ (¬married∨man∨woman) ∧
(child∨adult) ∧ (child∨man∨woman)

and hence into clausal form as

adult:-married.
man;woman:-married.
child;adult.
child;man;woman.

Using a normal form has the advantage that the language contains no redundancy: formulas are only equivalent if they are identical (up to the order of the subformulas). A slight disadvantage is that normal forms are often longer and less understandable (the same objection can be made against resolution proofs).

The order of logics

A logic with propositions (statements that can be either true or false) as basic building blocks is called a propositional logic; a logic built on predicates is called a Predicate Logic. Since propositions can be viewed as nullary predicates (i.e. predicates without arguments), any propositional logic is also a Predicate Logic.

A logic may or may not have variables for its basic building blocks. If it does not include such variables, both the logic and its building blocks are called first-order; this is the normal case. Thus, in first-order Predicate Logic, there are no predicate variables, but only first-order predicates.

Otherwise, an nth-order logic has variables (and thus quantifiers) for its (n−1)th-order building blocks. For instance, the statement

∀X∀Y: equal(X,Y) ↔ (∀P: P(X) ↔ P(Y))

defining two individuals to be equal if they have the same properties, is a statement from second-order Predicate Logic, because P is a variable ranging over first-order predicates.

Another example of a statement from second-order Predicate Logic is

∀P: transitive(P) ↔ (∀X∀Y∀Z: P(X,Y)∧P(Y,Z) → P(X,Z))

This statement defines the transitivity of binary relations. Since transitive has a second-order variable as argument, it is called a second-order predicate.

For rewriting clauses from full clausal logic to Predicate Logic, we use the same rewrite rules as for propositional clauses. Additionally, we have to add universal quantifiers for every variable in the clause. For example, the clause

reachable(X,Y,route(Z,R)):-
connected(X,Z,L),
reachable(Z,Y,R).

becomes

∀X∀Y∀Z∀R∀L: ¬connected(X,Z,L) ∨ ¬reachable(Z,Y,R)
∨ reachable(X,Y,route(Z,R))

The reverse process of rewriting a formula of Predicate Logic into an equivalent set of clauses is somewhat complicated if existential quantifiers are involved (the exact procedure is given as a Prolog program in Appendix B.1). An existential quantifier allows us to reason about individuals without naming them. For example, the statement ‘everybody loves somebody’ is represented by the Predicate Logic formula

∀X∃Y: loves(X,Y)

Recall that we translated this same statement into clausal logic as

loves(X,person_loved_by(X))

These two formulas are not logically equivalent! That is, the Predicate Logic formula has models like { loves(paul,anna) } which are not models of the clause. The reason for this is, that in clausal logic we are forced to introduce abstract names, while in Predicate Logic we are not (we use existential quantification instead). On the other hand, every model of the Predicate Logic formula, if not a model of the clause, can always be converted to a model of the clause, like { loves(paul,person_loved_by(paul)) }. Thus, we have that the formula has a model if and only if the clause has a model (but not necessarily the same model).

So, existential quantifiers are replaced by functors. The arguments of the functor are given by the universal quantifiers in whose scope the existential quantifier occurs. In the above example, ∃Y occurs within the scope of ∀X, so we replace Y everywhere in the formula by person_loved_by(X), where person_loved_by should be a new functor, not occurring anywhere else in the clause (or in any other clause). This new functor is called a Skolem functor, and the whole process is called Skolemisation. Note that, if the existential quantifier does not occur inside the scope of a universal quantifier, the Skolem functor does not get any arguments, i.e. it becomes a Skolem constant. For example, the formula

∃X∀Y: loves(X,Y)

(‘somebody loves everybody’) is translated to the clause

loves(someone_who_loves_everybody,X)

Finally, we illustrate the whole process of converting from Predicate Logic to clausal logic by means of an example. Consider the sentence ‘Everyone has a mother, but not every woman has a child’. In Predicate Logic, this can be represented as

∀Y∃X: mother_of(X,Y) ∧ ¬∀Z∃W: woman(Z) → mother_of(Z,W)

First, we push the negation inside by means of the equivalences ¬∀ X: F ≡ ∃ X: ¬ F and ¬∃ Y: G ≡ ∀ Y: ¬ G, and the previously given propositional equivalences, giving

∀Y∃X: mother_of(X,Y) ∧ ∃Z∀W: woman(Z) ∧ ¬mother_of(Z,W)

The existential quantifiers are Skolemised: X is replaced by mother(Y), because it is in the scope of the universal quantifier ∀Y. Z, however, is not in the scope of a universal quantifier; therefore it is replaced by a Skolem constant childless_woman. The universal quantifiers can now be dropped:

mother_of(mother(Y),Y) ∧ woman(childless_woman) ∧ ¬mother_of(childless_woman,W)

This formula is already in CNF, so we obtain the following set of clauses:

mother_of(mother(Y),Y).
woman(childless_woman).
:-mother_of(childless_woman,W).

Exercise 2.14. Translate to clausal logic:
(a)     ∀X∃Y: mouse(X) → tail_of(Y,X);
(b)     ∀X∃Y: loves(X,Y) ∧ (∀Z: loves(Y,Z));
(c)     ∀X∀Y∃Z: number(X) ∧ number(Y) → maximum(X,Y,Z).

Further reading

Many (but not all) aspects of Artificial Intelligence are amenable to logical analysis. An early advocate of this approach is Kowalski (1979). Overviews of different types of logics used in Artificial Intelligence can be found in (Turner, 1984; Genesereth & Nilsson, 1987; Ramsay, 1988). Bläsius and Bürckert (1989) discuss more technical aspects of automated theorem proving.

The main source for theoretical results in Logic Programming is (Lloyd, 1987). Hogger (1990) gives a more accessible introduction to this theory. (Mendelson, 1987) is an excellent introduction to Predicate Logic.

K.H. Bläsius & H.J. Bürckert (eds) (1989), Deduction Systems in Artificial Intelligence, Ellis Horwood.

M.R. Genesereth & N.J. Nilsson (1987), Logical Foundations of Artificial Intelligence, Morgan Kaufmann.

C.J. Hogger (1990), Essentials of Logic Programming, Oxford University Press.

R.A. Kowalski (1979), Logic for Problem Solving, North-Holland.

J.W. Lloyd (1987), Foundations of Logic Programming, Springer-Verlag, second edition.

E. Mendelson (1987), Introduction to Mathematical Logic, Wadsworth & Brooks/Cole, third edition.

A. Ramsay (1988), Formal Methods in Artificial Intelligence, Cambridge University Press.

R. Turner (1984), Logics for Artificial Intelligence, Ellis Horwood.


3

Logic Programming and Prolog

In the previous chapters we have seen how logic can be used to represent knowledge about a particular domain, and to derive new knowledge by means of logical inference. A distinct feature of logical reasoning is the separation between model theory and proof theory: a set of logical formulas determines the set of its models, but also the set of formulas that can be derived by applying inference rules. Another way to say the same thing is: logical formulas have both a declarative meaning and a procedural meaning. For instance, declaratively the order of the atoms in the body of a clause is irrelevant, but procedurally it may determine the order in which different answers to a query are found.

Because of this procedural meaning of logical formulas, logic can be used as a programming language. If we want to solve a problem in a particular domain, we write down the required knowledge and apply the inference rules built into the logic programming language. Declaratively, this knowledge specifies what the problem is, rather than how it should be solved. The distinction between declarative and procedural aspects of problem solving is succinctly expressed by Kowalski’s equation

algorithm = logic + control  

Here, logic refers to declarative knowledge, and control refers to procedural knowledge. The equation expresses that both components are needed to solve a problem algorithmically.

In a purely declarative programming language, the programmer would have no means to express procedural knowledge, because logically equivalent programs would behave identically. However, Prolog is not a purely declarative language, and therefore the procedural meaning of Prolog programs cannot be ignored. For instance, the order of the literals in the body of a clause usually influences the efficiency of the program to a large degree. Similarly, the order of clauses in a program often determines whether a program will give an answer at all. Therefore, in this chapter we will take a closer look at Prolog’s inference engine and its built-in features (some of which are non-declarative). Also, we will discuss some common programming techniques.

3.1   SLD-resolution

Prolog’s proof procedure is based on resolution refutation in definite clause logic. Resolution refutation has been explained in the previous chapter. In order to turn it into an executable proof procedure, we have to specify how a literal to resolve upon is selected, and how the second input clause is found. Jointly, this is called a resolution strategy. Consider the following program:

student_of(X,T):-follows(X,C),teaches(T,C).
follows(paul,computer_science).
follows(paul,expert_systems).
follows(maria,ai_techniques).
teaches(adrian,expert_systems).
teaches(peter,ai_techniques).
teaches(peter,computer_science).

The query

?-student_of(S,peter).

has two possible answers: { S→paul } and { S→maria }. In order to find these answers, we first resolve the query with the first clause, yielding

?-follows(S,C),teaches(peter,C).

Now we have to decide whether we will search for a clause which resolves on follows(S,C), or for a clause which resolves on teaches(peter,C). This decision is governed by a selection rule. Prolog’s selection rule is left to right, thus Prolog will search for a clause with a positive literal unifying with follows(S,C). There are three of these, so now we must decide which one to try first. Prolog searches the clauses in the program top-down, so Prolog finds the answer { S→paul } first. Note that the second choice leads to a dead end: the resolvent is

?-teaches(peter,expert_systems).

which doesn’t resolve with any clause in the program.
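
Played out in an actual Prolog system, the search just described yields both answers, the dead end being handled by backtracking; a sketch of a session:

?-student_of(S,peter).
S = paul ;
S = maria ;
false.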

This process is called SLD-resolution: S for selection rule, L for linear resolution (which refers to the shape of the proof trees obtained), and D for definite clauses. Graphically, SLD-resolution can be depicted as in fig. 3.1. This SLD-tree should not be confused with a proof tree: first, only the resolvents are shown (no input clauses or unifiers), and second, it contains every possible resolution step. Thus, every leaf of an SLD-tree which contains the empty clause □ corresponds to a refutation and hence to a proof tree; such a leaf is also called a success branch. An underlined leaf which does not contain □ represents a failure branch.

Exercise 3.1. Draw the proof trees for the two success branches in fig. 3.1.

As remarked already, Prolog searches the clauses in the program top-down, which is the same as traversing the SLD-tree from left to right. This not only determines the order in which answers (i.e. success branches) are found: it also determines whether any answers are found at all, because an SLD-tree may contain infinite branches, if some predicates in the program are recursive. As an example, consider the following program:

brother_of(X,Y):-brother_of(Y,X).
brother_of(paul,peter).

Figure 3.1. An SLD-tree.

Figure 3.2. An SLD-tree with infinite branches.


An SLD-tree for the query

?-brother_of(peter,B).

is depicted in fig. 3.2. If we descend this tree taking the left branch at every node, we will never reach a leaf. On the other hand, if we take the right branch at every node, we almost immediately reach a success branch. Taking right branches instead of left branches in an SLD-tree corresponds to searching the clauses from bottom to top. The same effect would be obtained by reversing the order of the clauses in the program, and the SLD-tree clearly shows that this is enough to prevent Prolog from looping on this query. This is a rule of thumb that applies to most cases: put non-recursive clauses before recursive ones.
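
Concretely, the reordered program reads as follows; the query ?-brother_of(peter,B) now finds the answer { B→paul } immediately, although it still loops when further answers are requested:

brother_of(paul,peter).
brother_of(X,Y):-brother_of(Y,X).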

However, note that, even after this modification, the program still has some problems. For one thing, the query ?-brother_of(peter,B) will be answered an infinite number of times, because there are infinitely many refutations of it. But, even worse, consider a query that does not have an answer, like

?-brother_of(peter,maria).

No matter the order in which the SLD-tree is descended, Prolog will never discover that the query has in fact no answer, simply because the SLD-tree is infinite. So, one should be careful with programs like the above, which define a predicate to be symmetric.

Figure 3.3. An SLD-tree with infinite branches and expanding resolvents.

Another property of predicates which can cause similar problems is transitivity. Consider the following program:

brother_of(paul,peter).
brother_of(peter,adrian).
brother_of(X,Y):-brother_of(X,Z),brother_of(Z,Y).

The third clause ensures that ?-brother_of(paul,adrian) is a logical consequence of the program. The SLD-tree for the query

?-brother_of(paul,B).

is depicted in fig. 3.3. Not only is this SLD-tree infinite, but the resolvents get longer and longer on deeper levels in the tree.

We have encountered two problems with SLD-resolution: (i) we might never reach a success branch in the SLD-tree, because we get ‘trapped’ into an infinite subtree, and (ii) any infinite SLD-tree causes the inference engine to loop if no (more) answers are to be found. The first problem means that Prolog is incomplete: some logical consequences of a program may never be found. Note carefully that this incompleteness is not caused by the inference rule of resolution, which is refutation complete. Indeed, for any program and any query, all the possible answers will be represented by success branches in the SLD-tree. The incompleteness of SLD-resolution is caused by the way the SLD-tree is searched.

There exists a solution to this problem: if we descend the tree layer by layer rather than branch-by-branch, we will find any leaf before we descend to the next level. However, this also means that we must keep track of all the resolvents on a level, instead of just a single one. Therefore, this breadth-first search strategy needs much more memory than the depth-first strategy used by Prolog. In fact, Prolog’s incompleteness was a deliberate design choice, sacrificing completeness in order to obtain an efficient use of memory [7]. As we saw above, this problem can often be avoided by ordering the clauses in the program in a specific way (which means that we have to take the procedural meaning of the program into account).

As for the second problem, we already saw that this is due to the semi-decidability of full clausal logic, which means that there is no general solution to it.

Exercise 3.2. Draw the SLD-tree for the following program:

list([]).
list([_H|T]):-list(T).

and the query

?-list(L).

3.2   Pruning the search by means of cut

As shown in the previous section, Prolog constantly searches the clauses in a program in order to reach a success branch in the SLD-tree for a query. If a failure branch is reached (i.e., a non-empty resolvent which cannot be reduced any further), Prolog has to ‘unchoose’ the last-chosen program clause, and try another one. This amounts to going up one level in the SLD-tree, and trying the next branch to the right. This process of reconsidering previous choices is called backtracking. Note that backtracking requires that all previous resolvents are remembered for which not all alternatives have been tried yet, together with a pointer to the most recent program clause that has been tried at that point. Because of Prolog’s depth-first search strategy, we can easily record all previous resolvents in a goal stack: backtracking is then implemented by popping the upper resolvent from the stack, and searching for the next program clause to resolve with.

As an illustration, consider again the SLD-tree in fig. 3.1. The resolvent in the middle branch

:-teaches(peter,expert_systems)

cannot be reduced any further, and thus represents a failure branch. At that point, the stack contains (top-down) the previous resolvents

:-follows(S,C),teaches(peter,C)
?-student_of(S,peter)

The top one is popped from the stack; it has been most recently resolved with follows(paul,expert_systems), so we continue searching the program from that clause, finding follows(maria,ai_techniques) as the next alternative.

A node in the SLD-tree which is not a leaf is called a choice point, because the subtree rooted at that node may contain several success branches, each of which may be reached by a different choice for a program clause to resolve with. Now, suppose a subtree contains only one success branch, yielding an answer to our query. If we want to know whether there are any alternative answers, we can force Prolog to backtrack. However, since the rest of the subtree does not contain any success branches, we might as well skip it altogether, thus speeding up backtracking. But how do we tell Prolog that a subtree contains only one success branch? For this, Prolog provides a control device which is called cut (written !), because it cuts away (or prunes) part of the SLD-tree.

Figure 3.4. SLD-tree for the query
?-parent(john,C).

To illustrate the effect of cut, consider the following program.

parent(X,Y):-father(X,Y).
parent(X,Y):-mother(X,Y).
father(john,paul).
mother(mary,paul).

The SLD-tree for the query

?-parent(john,C).

is given in fig. 3.4. The answer given by Prolog is { C→paul }. By asking whether there are any other answers, we force Prolog to backtrack to the most recent choice point for which there are any alternatives left, which is the root of the SLD-tree (i.e. the original query). Prolog tries the second clause for parent, but discovers that this leads to a failure branch.

Of course, we know that this backtracking step did not make sense: if John is a father of anyone, he can’t be a mother. We can express this by adding a cut to the first parent clause:

parent(X,Y):-father(X,Y),!.
parent(X,Y):-mother(X,Y).
father(john,paul).
mother(mary,paul).

Figure 3.5. The effect of cut.

The cut says: once you’ve reached me, stick to all the variable substitutions you’ve found after you entered my clause. That is: don’t try to find any alternative solutions to the literals left of the cut, and also: don’t try any alternative clauses for the one in which the cut is found. Given this modified program, the SLD-tree for the same query is shown in fig. 3.5. Since ! is true by definition, the resolvent :-! reduces to the empty clause. The shaded part represents the part of the SLD-tree which is pruned as a result of the cut. That is: all alternatives at choice points below and including ?-parent(john,C), which are on the stack when the cut is reached, are pruned. Note carefully that a cut does not prune every choice point. First of all, pruning does not occur above the choice point containing the head of the clause in which the cut is found. Secondly, choice points created by literals to the right of the cut, which are below the cut in the SLD-tree but are not yet on the stack when the cut is reached, are not pruned either (fig. 3.6).

A cut is harmless if it does not cut away subtrees containing success branches. If a cut prunes success branches, then some logical consequences of the program are not returned as answers, resulting in a procedural meaning different from the declarative meaning. Cuts of the first kind are called green cuts, while cuts of the second kind are called red cuts. A green cut merely stresses that the conjunction of literals to its left is deterministic: it does not give alternative solutions. In addition, it signifies that if those literals give a solution, the clauses below it will not result in any alternatives.

This seems to be true for the above program: John is the father of only one child, and no-one is both a father and a mother. However, note that we only analysed the situation with regard to a particular query. We can show that the cut is in fact red by asking the query

?-parent(P,paul).

The answer { P→mary } is pruned by the cut (fig. 3.7). That is, the literal father(X,Y) to the left of the cut is only deterministic if X is instantiated (is substituted by a non-variable value).
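
In a session this shows up as Prolog returning only one answer (a sketch):

?-parent(P,paul).
P = john.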

Figure 3.6. Cut prunes away alternative solutions for s, but not for t. Also, choice points above :-q(X,Y) are not pruned.

Note that success branches are also pruned for the first query if John has several children:

parent(X,Y):-father(X,Y),!.
parent(X,Y):-mother(X,Y).
father(john,paul).
father(john,peter).
mother(mary,paul).
mother(mary,peter).

The SLD-tree for the query

?-parent(john,C).

is given in fig. 3.8. Indeed, the second answer { C→peter } is pruned by the cut. This clearly shows that the effect of a cut is not only determined by the clause in which it occurs but also by other clauses. Therefore, the effect of a cut is often hard to understand.

Figure 3.7. A success branch is pruned.

Figure 3.8. Another success branch is pruned.

Programs with cuts are not only difficult to understand; this last example also shows that their procedural interpretation (the set of answers they produce to a query) may be different from their declarative interpretation (the set of their logical consequences). Logically, cut has no meaning: it always evaluates to true, and therefore it can always be added to or removed from the body of a clause without affecting its declarative interpretation. Procedurally, cut may have many effects, as the preceding examples show. This incompatibility between declarative and procedural interpretation makes it a very problematic concept. Much research in Logic Programming aims at replacing it by higher-level constructs which have cleaner declarative meanings and which are easier to understand. The most important of these will be considered in the next two sections.

Exercise 3.3. Draw the SLD-tree for the query

?-likes(A,B).

given the following program:

likes(peter,Y):-friendly(Y).
likes(T,S):-student_of(S,T).
student_of(maria,peter).
student_of(paul,peter).
friendly(maria).

Add a cut in order to prune away one of the answers { A→peter, B→maria }, and indicate the result in the SLD-tree. Can this be done without pruning away the third answer?

3.3   Negation as failure

The following program computes the maximum of two integers:

max(M,N,M):- M >= N.
max(M,N,N):- M =< N.

>= and =< are built-in predicates with meaning ‘greater than or equal’ and ‘less than or equal’, respectively [8] . Declaratively, the program captures the intended meaning, but procedurally there are two different ways to solve queries of the form ?-max(N,N,M). The reason for this is that the bodies of the two clauses are not exclusive: they both succeed if the first two values of the max predicate are equal. We could of course remove one of the equality symbols, but suppose that we use a cut instead:

max(M,N,M):- M >= N,!.
max(_M,N,N).

With a red cut, this program can only be understood procedurally. The question is: does the procedural meaning correspond to the intended meaning? Perhaps surprisingly, the answer is no! For instance, the query

?-max(5,3,3).

succeeds: the cut is never reached, because the literal in the query does not unify with the head of the first clause. The second program is in fact a very bad program: the declarative and procedural meanings differ, and neither of them captures the intended meaning.

Exercise 3.4. Show that this cut is red, by drawing an SLD-tree in which a success branch is pruned.
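
A standard repair, not the only possible one, is to postpone the unification of the output argument until after the cut, so that the head of the first clause matches any max query:

max(M,N,Max):-M >= N,!,Max = M.
max(_M,N,N).

Now ?-max(5,3,3) fails as intended, while queries with an uninstantiated third argument behave as before.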

The procedural meaning of the original program would be correct if its use were restricted to queries with an uninstantiated third argument. It illustrates a very common use of cut: to ensure that the bodies of the clauses are mutually exclusive. In general, if we have a program of the form

p:-q,!,r.
p:-s.

its meaning is something like

p:-q,r.
p:-not_q,s.

How should not_q be defined, in order to make the second program work? If q succeeds, not_q should fail. This is expressed by the following clause:

not_q:-q,fail

where fail is a built-in predicate, which is always false. If q fails, not_q should succeed. This can be realised by the program

not_q:-q,!,fail.
not_q.

The cut in the first clause is needed to prevent backtracking to the second clause when q succeeds.

This approach is not very practical, because it only works for a single proposition symbol, without variables. We would like to treat the literal to be negated as a parameter, as in

not(Goal):- /* execute Goal, */ !,fail.
not(Goal).

The problem now is to execute a goal which is passed to the predicate not as a term. Prolog provides two facilities for this. One is the built-in predicate call, which takes a goal as argument and succeeds if and only if execution of that goal succeeds. The second facility [9] is merely a shorthand for this: instead of writing call(Goal), one may simply write Goal, as in

not(Goal):- Goal,!,fail.
not(_Goal).

This is a slight abuse of the syntax rules, because a variable (a term) occurs in a position where only atoms are allowed. As long as the variable is instantiated to a goal before it is reached, however, this causes no problem (if it is not correctly instantiated, Prolog will generate an error message). Predicates like not and call are called meta-predicates: predicates that take formulas from the same logical language in which they are written as arguments. As we will see in later chapters, meta-predicates play an important role in this book.
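
As a small illustration of call/1 (a sketch, using the built-in predicate write/1):

?-Goal = write(hello), call(Goal).
hello
Goal = write(hello).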

Figure 3.9. SLD-tree with not.


Figure 3.10. Equivalent SLD-tree with cut.


We illustrate the operation of not by means of the following propositional program:

p:-q,r.
p:-not(q),s.
s.

and the query ?-p. The SLD-tree is shown in fig. 3.9. The first clause for p leads to a failure branch, because q cannot be proved. The second clause for p is tried, and not(q) is evaluated by trying to prove q. Again, this fails, which means that the second clause for not is tried, which succeeds. Thus, not(q) is proved by failing to prove q! Therefore, this kind of negation is called negation as failure.

Fig. 3.9 shows that Prolog tries to prove q twice. Consequently, the program with not is slightly less efficient than the version with cut:

p:-q,!,r.
p:-s.
s.

which leads to the SLD-tree shown in fig. 3.10. Here, q is tried only once. However, in general we prefer the use of not, because it leads to programs of which the declarative meaning corresponds more closely to the procedural meaning.

Figure 3.11. :-not(q) fails because :-q succeeds.

In the following program, :-not(q) fails because :-q succeeds:

p:-not(q),r.
p:-q.
q.
r.

The SLD-tree for the query ?-p is shown in fig. 3.11. Since q succeeds, fail ensures that not(q) fails. The cut is needed to ensure that everything following the not is pruned, even if it contains a success branch.

The implementation of not illustrated above can lead to problems if variables are involved. Take a look at the following program:

bachelor(X):-not(married(X)),man(X).
man(fred).
man(peter).
married(fred).

Exercise 3.5. Draw the SLD-trees for the queries ?-bachelor(fred) and ?-bachelor(peter).

Figure 3.12. There are no bachelors?!

Consider the query

?-bachelor(X).

for which the SLD-tree is depicted in fig. 3.12. According to negation as failure, Prolog tries to prove not(married(X)) by trying married(X) first. Since this succeeds for X = fred, the cut is reached and the success branch to the right (representing the correct answer { X→peter }) is pruned. Thus, :-not(married(X)) fails because :-married(X) succeeds for one value of X. That is, not(married(X)) is interpreted as ‘it is false that somebody is married’, or equivalently, ‘nobody is married’. But this means that the clause

bachelor(X):-not(married(X)),man(X)

is interpreted as ‘ X is a bachelor if nobody is married and X is a man’, which is of course not as intended.

Negation as failure vs. logical negation

Negation as failure is not the same as logical negation: if we cannot prove q, we know that q is not a logical consequence of the program, but this does not mean that its negation :-q is a logical consequence of the program. Adopting negation as failure is similar to saying ‘I cannot prove that God exists, therefore I conclude God does not exist’. It is a kind of reasoning that is applicable in some contexts, but inadequate in others. Logical negation can only be expressed by
indefinite clauses, as in the following program:

                           p:-q,r.
                           p;q:-s.
                           s.

Semantically speaking, if we don’t have enough information to conclude that a formula F is true or false, the truth value of its logical negation will also be undecided, but not(F) will be true. This property of negation as failure can be very useful when dealing with exceptions to rules: if we don’t know that something is an exception to a rule, we assume that it’s not, so we only have to list the exceptions and not the normal cases. This approach will be extensively discussed in Chapter 8 on reasoning with incomplete information.

Thus, if G is instantiated to a goal containing variables at the time not(G) is called, the result may not be in accordance with negation as failure. It is the programmer’s responsibility to avoid this. A simple remedy that will often work is to ensure the grounding of G by the literals preceding not(G) in the body of the clause, i.e.

bachelor(X):-man(X),not(married(X)).
man(fred).
man(peter).
married(fred).

Exercise 3.6. Show that the modified program produces the right answer, by drawing the SLD-tree for the query ?-bachelor(X).

Thus, we see that changing the order of the literals in the body of a clause does not only affect the order in which answers to a query are found, but it may also change the set of answers! Of course, this is very much against the spirit of declarative programming, because the declarative interpretation of a clause does not depend on the order of the literals. Therefore, some Prolog interpreters provide a mechanism which defers the evaluation of not(G) until G is ground. However, with standard Prolog it is the programmer’s duty to ensure that not is never called with a non-ground argument.

Let’s summarise the points made about negation in Prolog. It is often used to ensure that only one of several possible clauses is applicable. The same effect can be achieved by means of cut, but in general we prefer the use of not, although it is somewhat less efficient [10] . not is supplied by Prolog as a meta-predicate (i.e. a predicate which takes formulas from the same logical language in which it is written as arguments). It is only a partially correct implementation of negation as failure, since it does not operate correctly when its argument is a goal containing variables.
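
In ISO standard Prolog, negation as failure is written \+, a built-in prefix operator behaving like the not/1 defined above; for example, the bachelor clause would be written

bachelor(X):-man(X),\+ married(X).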

3.4   Other uses of cut

Consider the following propositional program:

p:-q,r,s,!,t.
p:-q,r,u.
q.
r.
u.

Exercise 3.7. Show that the query ?-p succeeds, but that q and r are tried twice.

This inefficiency can be avoided by putting s,! at the beginning of the body of the first clause. However, in full clausal logic the goals preceding s might supply necessary variable bindings, which requires them to be called first. A possible solution would be the introduction of an extra proposition symbol:

p:-q,r,if_s_then_t_else_u.
if_s_then_t_else_u:-s,!,t.
if_s_then_t_else_u:-u.

Exercise 3.8. Show that q and r are now tried only once.

Just as we did with not, we can rewrite this new proposition symbol to a generally applicable meta-predicate:

if_then_else(S,T,U):-S,!,T.
if_then_else(S,T,U):-U.
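
For instance, the following query behaves as expected (a sketch, using the built-in arithmetic equality test =:=):

?-if_then_else(1 =:= 1, X = yes, X = no).
X = yes.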

Note that we can nest applications of if_then_else, for instance

if_then_else_else(P,Q,R,S,T):-
if_then_else(P,Q,if_then_else(R,S,T)).

Unfolding the definition of if_then_else yields

if_then_else_else(P,Q,R,S,T):-P,!,Q.
if_then_else_else(P,Q,R,S,T):-R,!,S.
if_then_else_else(P,Q,R,S,T):-T.

which clearly shows the meaning of the predicate: ‘if P then Q else if R then S else T ’. This resembles the CASE-statement of procedural languages, except that the above notation is much clumsier. Most Prolog interpreters provide the notation P->Q;R for if-then-else; the nested variant then becomes P->Q;(R->S;T). The parentheses are not strictly necessary, but in general the outermost if-then-else literal should be enclosed in parentheses. A useful lay-out is shown by the following program:

diagnosis(Patient,Condition):-
temperature(Patient,T),
( T=<37     -> blood_pressure(Patient,Condition)
; T>37,T<38 -> Condition=ok
; otherwise -> diagnose_fever(Patient,Condition)
).

otherwise is always assigned the truth value true, so the last rule applies if all the others fail.
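In Prologs where otherwise is not predefined, the same effect can be obtained by simply adding it as a fact (a one-line sketch):

otherwise.    % always true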

not and if-then-else show that many uses of cut can be replaced by higher-level constructs, which are easier to understand. However, this is not true for every use of cut. For instance, consider the following program:

play(Board,Player):-
lost(Board,Player).

play(Board,Player):-
find_move(Board,Player,Move),
make_move(Board,Move,NewBoard),
next_player(Player,Next),
play(NewBoard,Next).

This program plays a game by recursively looking for best moves. Suppose one game has been finished; that is, the query ?-play(Start,First) (with appropriate instantiations of the variables) has succeeded. As usual, we can ask Prolog whether there are any alternative solutions. Prolog will start backtracking, looking for alternatives for the most recent move, then for the move before that one, and so on. That is, Prolog has maintained all previous board situations, and every move made can be undone. Although this seems a desirable feature, in reality it is totally impractical because of the memory requirements: after a few moves you would get a stack overflow. In such cases, we tell Prolog not to reconsider any previous moves, by placing a cut just before the recursive call. This way, we pop the remaining choice points from the stack before entering the next recursion. In fact, this technique results in a use of memory similar to that of iterative loops in procedural languages.
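The modified recursive clause would then read as follows (a sketch of the change just described):

play(Board,Player):-
find_move(Board,Player,Move),
make_move(Board,Move,NewBoard),
next_player(Player,Next),
!,    % discard alternatives for the moves made so far
play(NewBoard,Next).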

Note that this only works if the recursive call is the last call in the body. In general, it is advisable to write your recursive predicates like play above: the non-recursive clause before the recursive one, and the recursive call at the end of the body. A recursive predicate written this way is said to be tail recursive. If in addition the literals before the recursive call are deterministic (yield only one solution), some Prolog interpreters may recognise this and change recursion into iteration. This process is called tail recursion optimisation. As illustrated above, you can force this optimisation by placing a cut before the recursive call.

3.5   Arithmetic expressions

In Logic Programming, recursion is the only looping control structure. Consequently, recursive datatypes such as lists can be expressed very naturally. Natural numbers also have a recursive nature: ‘0 is a natural number, and if X is a natural number, then the successor of X is also a natural number’. In Prolog, this is expressed as

nat(0).
nat(s(X)):-nat(X).

Addition of natural numbers is defined in terms of successors:

add(0,X,X).
add(s(X),Y,s(Z)):-add(X,Y,Z).

The following query asks for the sum of two and three:

?-add(s(s(0)),s(s(s(0))),Z).

Z = s(s(s(s(s(0)))))

We can also find an X such that the sum of X and Y is Z (i.e., subtract Y from Z):

?-add(X,s(s(s(0))),s(s(s(s(s(0)))))).

X = s(s(0))

We can even find all X and Y which add up to a given sum, as the following query demonstrates. Thus, this program is fully declarative.
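?-add(X,Y,s(s(0))).

X = 0
Y = s(s(0));

X = s(0)
Y = s(0);

X = s(s(0))
Y = 0

Similarly, multiplication is repeated addition: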

mul(0,_X,0).
mul(s(X),Y,Z):-mul(X,Y,Z1),add(Y,Z1,Z).

There are two problems with this approach to representing and manipulating natural numbers. First, naming natural numbers by means of the constant symbol 0 and the functor s is very clumsy, especially for large numbers. Of course, it would be possible to write a translator from decimal notation to successor notation, and back. However, the second problem is more fundamental: multiplication as repeated addition is extremely inefficient compared to the algorithm for multiplying numbers in decimal notation. Therefore, Prolog has built-in arithmetic facilities, which we will discuss now.

Consider the arithmetic expression 5+7-3. Prolog will view this expression as the term -(+(5,7),3), since the functors + and - are predefined as left-associative infix operators. We want to evaluate this expression, i.e. we want a single numerical value which somehow represents the same number as the expression. A program for doing this would look something like

is(V,E1+E2):-
is(V1,E1),is(V2,E2),
fast_add(V1,V2,V).

is(V,E1-E2):-
is(V1,E1),is(V2,E2),
fast_sub(V1,V2,V).

is(E,E):-
number(E).

Here, fast_add and fast_sub represent the fast, built-in procedures for addition and subtraction, which are not directly available to the user. These procedures are not reversible: their first two arguments must be instantiated. Therefore, the predicate is will include a test for groundness of its second argument (the arithmetic expression), and will quit with an error message if this test fails.

Operators

In Prolog, functors and predicates are collectively called operators. An operator is declared by the query ?-op(Priority,Type,Name), where Priority is a number between 0 and 1200 (lower priority binds stronger), and Type is fx or fy for prefix, xfx, xfy or yfx for infix, and xf or yf for postfix. The x and y determine associativity: for instance, xfx means not associative (you cannot write X op Y op Z, but must either write (X op Y) op Z or X op (Y op Z)), xfy means right-associative (X op Y op Z means op(X,op(Y,Z))), and yfx means left-associative (X op Y op Z means op(op(X,Y),Z)). Every special symbol of Prolog, such as ‘ :- ’ and ‘ , ’ (conjunction in the body of a clause), is a predefined operator. The interpretation of operators can be visualised by means of the predicate display, which writes a term without operators. For instance, the query ?-display((p:-q,r,s)) writes :-(p,','(q,','(r,s))). The extra parentheses are needed because :- binds very weakly.
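For instance, after the hypothetical declaration ?-op(800,xfy,implies) (the name implies and the priority 800 are assumptions chosen purely for illustration), the term a implies b implies c is parsed right-associatively, as display confirms:

?-display(a implies b implies c).
implies(a,implies(b,c))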

The is predicate is a built-in feature of Prolog, and is declared as an infix operator. Its behaviour is illustrated by the following queries:

?-X is 5+7-3
X = 9

?-9 is 5+7-3
Yes

?-9 is X+7-3
Error in arithmetic expression

?-X is 5*3+7/2
X = 18.5


The last example shows that arithmetic expressions obey the usual precedence rules (which can be overruled using parentheses). Also, note that the is predicate can handle real numbers.

Prolog also provides a built-in predicate =, but this predicate behaves quite differently from is, since it performs unification rather than arithmetic evaluation (see also section 2.3). The following queries illustrate the operation of =:

?-X = 5+7-3
X = 5+7-3

?-9 = 5+7-3
No

?-9 = X+7-3
No

?-X = Y+7-3
X = _947+7-3
Y = _947

?-X = f(X)
X = f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f
(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(f(
Error: term being written is too deep


The first query just unifies X with the term 5+7-3 (i.e. -(+(5,7),3)), which of course succeeds. In the second and third query, we try to unify a constant with a complex term, which fails. The fourth query succeeds, leaving Y unbound (_947 is an internal variable name, generated by Prolog).

The fifth query illustrates that Prolog indeed omits the occur check (section 2.3) in unification: the query should have failed, but instead it succeeds, resulting in the circular binding {X→f(X)}. The problem only becomes apparent when Prolog tries to write the resulting term, which is infinite. Just to stress that Prolog quite happily constructs circular bindings, take a look at the following strange program:

strange:-X=f(X).

The query ?-strange succeeds, and since there is no answer substitution, it is not apparent that there is a circular binding involved.

Exercise 3.9. Write a predicate zero(A,B,C,X) which, given the coefficients a, b and c, calculates both values of x for which ax² + bx + c = 0.

Finally, we mention that Prolog provides a number of other useful arithmetic predicates, including the inequality tests < and >, and their reflexive counterparts =< and >=. For these tests, both arguments should be instantiated to numbers.

3.6   Accumulators

The condition that the right-hand side of is should not contain variables sometimes determines the ordering of literals in the body of the clause. For instance, in the program below, which computes the length of a list, the is literal must be placed after the recursive length call, which instantiates M. This means that the resolvent first collects as many is literals as there are elements in the list, before doing the actual calculation. Each of these literals contains some ‘local’ variables that require some space in memory. The total memory requirements are thus proportional to the depth of the recursion.

naive_length([],0).
naive_length([_H|T],N):-naive_length(T,M),N is M+1.

Exercise 3.10. Draw the proof tree for the query ?-naive_length([a,b,c],N).

Programs with tail recursion need less memory because they do all the work on one recursive level before proceeding to the next. There is a common trick to transform even the length predicate above into a tail recursive program, using an auxiliary argument called an accumulator.

length_acc(L,N):-length_acc(L,0,N).

length_acc([],N,N).
length_acc([_H|T],N0,N):-N1 is N0+1,length_acc(T,N1,N).

length_acc(L,N0,N) is true if N is the number of elements in L plus N0. Initialising N0 to 0 results in N returning the length of L. Note that the actual counting is done by the second argument: only when the list is empty is the third argument unified with the second argument. The main point is that, since the accumulator is given an initial value of 0, it is always instantiated, such that the is literal can be placed before the recursive call.

Exercise 3.11. Draw the proof tree for the query ?-length_acc([a,b,c],N).

Accumulators can be used in very many programs. Suppose we want to reverse the order of elements in a list. We could do this by recursively reversing the tail of the list, and putting the head at the end of the result:

naive_reverse([],[]).
naive_reverse([H|T],R):-naive_reverse(T,R1),append(R1,[H],R).

append([],Y,Y).
append([H|T],Y,[H|Z]):-append(T,Y,Z).

This predicate is called ‘naive’ because a lot of unnecessary work is done by the append calls in the recursive clause.

Exercise 3.12. Draw the proof tree for the query ?-naive_reverse([a,b,c],R).

By using an accumulator, we can get rid of the append predicate, as follows:

reverse(X,Y):- reverse(X,[],Y).

reverse([],Y,Y).
reverse([H|T],Y0,Y):- reverse(T,[H|Y0],Y).

reverse(X,Y0,Y) is true if Y consists of the reversal of X followed by Y0. Initialising Y0 to [] results in Y returning the reversal of X.

The use of an accumulator in this more efficient program for reversing a list is closely related to another programming trick for increasing the efficiency of list handling. The idea is not to represent a list by a single term, but instead by a pair of terms L1-L2, such that the list actually represented is the difference between L1 and L2. The term L1-L2 is appropriately called a difference list; L1 is called the plus list, and L2 is called the minus list. For instance, the difference list [a,b,c,d]-[d] represents the simple list [a,b,c], as does the difference list [a,b,c,1234,5678]-[1234,5678], and even the difference list [a,b,c|X]-X. The last difference list can be seen as summarising every possible difference list representing the same simple list, by introducing a variable for the part which is not contained in the simple list.

As was remarked above, reverse(X,Y0,Y) is true if Y consists of the reversal of X followed by Y0. Another way to say the same thing is that the reversal of X is the difference between Y and Y0. That is, the reversal of X is represented by the difference list Y-Y0! We can make this explicit by a small syntactic change to reverse, resulting in the following program:

reverse_dl(X,Y):- reverse(X,Y-[]).

reverse([],Y-Y).
reverse([H|T],Y-Y0):- reverse(T,Y-[H|Y0]).

For instance, the third clause in this program says: if the reversal of T is represented by the difference list Y-[H|Y0], then adding H to the head of T is the same as removing H from the minus list in the difference list.
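A sample query, using reverse_dl as just defined:

?-reverse_dl([a,b,c],R).

R = [c,b,a]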

If the minus list is a variable, it can be used as a pointer to the end of the represented list. It is this property which makes difference lists so useful. For instance, if we unify [a,b,c|X]-X with Y-[d,e], we get Y=[a,b,c,d,e] — we have managed to append two lists together in a single unification step! In this example, the second term is not a difference list, nor is the result. If we want to append two difference lists

[a,b,c|XMinus]-XMinus

and

[d,e|YMinus]-YMinus

we must unify XMinus with [d,e|YMinus] (the plus list of the second difference list), such that the first difference list becomes

[a,b,c,d,e|YMinus]-[d,e|YMinus]

Combining the plus list of this difference list with YMinus, we get exactly what we want.

Figure 3.13. Appending two difference lists: the ‘length’ of XMinus is adjusted by unification with YPlus, the result is given by XPlus-YMinus.

In general, given two difference lists XPlus-XMinus and YPlus-YMinus, we unify XMinus with YPlus, and the result is given by XPlus-YMinus (fig. 3.13):

append_dl(XPlus-XMinus,YPlus-YMinus,XPlus-YMinus):-
	XMinus=YPlus.

or even shorter

append_dl(XPlus-YPlus,YPlus-YMinus,XPlus-YMinus).

Appending a simple list to another simple list of n elements requires n resolution steps; appending two difference lists requires no resolution at all, just one unification. Using difference lists is almost always a good idea if you have to do a lot of list processing.
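For instance, appending the two difference lists from above in a single call (the bindings follow from plain unification):

?-append_dl([a,b,c|X]-X,[d,e|Y]-Y,Z).

X = [d,e|Y]
Z = [a,b,c,d,e|Y]-Y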

Exercise 3.13. In the naive_reverse predicate, represent the reversed list by a difference list, use append_dl instead of append, and show that this results in the predicate reverse_dl by unfolding the definition of append_dl.

3.7   Second-order predicates

Suppose we need a program to determine, given two lists of persons of equal length, whether a person in the first list is the parent of the corresponding person in the second list. The following program will do the job:

parents([],[]).

parents([P|Ps],[C|Cs]):-
parent(P,C),
parents(Ps,Cs).

We can generalise this program by including the relation which must hold between corresponding elements of the two lists as a parameter:

rel(R,[],[]).

rel(R,[X|Xs],[Y|Ys]):-
R(X,Y),
rel(R,Xs,Ys).

A term like R(X,Y) is allowed at the position of an atom in the body of a clause, as long as it is correctly instantiated at the time it is called.

Some Prolog interpreters don’t allow this, in which case you must explicitly construct the literal by means of the built-in predicate ‘ =.. ’ (sometimes called univ). It is a fully declarative predicate, which can be used both to construct a term from a list of arguments preceded by a functor, and to decompose a term into its constituents:

?-Term =.. [parent,X,peter]
Term = parent(X,peter)

?-parent(maria,Y) =.. List
List = [parent,maria,Y]

‘ =.. ’ is declared as an infix operator in Prolog.
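Most modern Prolog interpreters also provide the built-in meta-predicate call/N, which applies its first argument to the remaining arguments; assuming such an interpreter, rel could be written portably as the following sketch:

rel(_R,[],[]).
rel(R,[X|Xs],[Y|Ys]):-
	call(R,X,Y),	% equivalent to R(X,Y)
	rel(R,Xs,Ys).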

Exercise 3.14. Rewrite the program for rel, using =..

Global datastructures in Prolog

Since Prolog variables do not have a scope outside the clause in which they occur (section 2.2), pure Prolog does not provide any support for global datastructures. However, Prolog provides access to its internal database where it stores the program clauses, by means of the built-in predicates assert and retract. The query ?-assert(Clause) results in the addition of Clause (which must be instantiated to a valid Prolog clause) to your program; the query ?-retract(Clause) removes the first clause which unifies with Clause from your program. These predicates are fairly low-level, and should be used with care.
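As an illustration, here is a minimal sketch of a global counter maintained in the internal database (the predicate name counter/1 is an assumption chosen for this example; most modern interpreters additionally require the dynamic declaration):

:-dynamic(counter/1).
counter(0).

increment:-
	retract(counter(N)),	% fetch and remove the current value
	N1 is N+1,
	assert(counter(N1)).	% store the new value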

The predicate rel is called a second-order predicate, because it takes a (first-order) predicate as an argument [11]. We can now define the parents predicate as

parents(Ps,Cs):-rel(parent,Ps,Cs).

Suppose now you have the following facts in your program, and you want to collect all the children of a particular parent in a list:

parent(john,peter).
parent(john,paul).
parent(john,mary).
parent(mick,davy).
parent(mick,dee).
parent(mick,dozy).

Of course, it is easy to generate all the children upon backtracking; the problem is to collect them in a global list. To this end, Prolog provides the second-order predicates findall, bagof, and setof. For instance, we could use the following program and query:

children(Parent,Children):- findall(C,parent(Parent,C),Children).
?-children(john,Children).

Children = [peter,paul,mary]

In general, the query

?-findall(X,Goal,ListofX)

generates all the possible solutions of the query ?-Goal, recording the substitutions for X for each of these solutions in the list ListofX (Goal must be instantiated to a term representing a Prolog goal).

The bagof predicate acts similarly. However, its behaviour is different when the goal contains free variables. Consider the query

?-bagof(C,parent(P,C),L)

in which the variable P is unbound. This query has two possible interpretations: ‘find a parent and a list of his children’, and ‘find the list of children that have a parent’. In the first case, we get a possible value for P and a list of P’s children, which means that there are two solutions:

?-bagof(C,parent(P,C),L).

C = _951
P = john
L = [peter,paul,mary];

C = _951
P = mick
L = [davy,dee,dozy]

In the second case, the goal to prove is ‘there exists a P such that parent(P,C) is true’, which means that the variable P is existentially quantified. This is signalled by prefixing the goal with P^:

?-bagof(C,P^parent(P,C),L).

C = _957
P = _958
L = [peter,paul,mary,davy,dee,dozy]

The query

?-findall(C,parent(P,C),L).

(without existential quantification) can only generate this second solution.
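A further difference is worth noting: if the goal has no solutions at all, findall succeeds with the empty list, whereas bagof simply fails. For instance, assuming a hypothetical person ann who does not occur in the parent facts above:

?-findall(C,parent(ann,C),L).

L = []

?-bagof(C,parent(ann,C),L).

No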

Finally, Prolog provides the predicate setof, which acts just like bagof, except that the resulting list is sorted and does not contain duplicates. Thus, setof is slightly less efficient than bagof, and the latter is preferred in cases where the list of solutions is known not to contain duplicates.
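For instance, with the same parent facts (the list is sorted according to the standard order of terms):

?-setof(C,P^parent(P,C),L).

L = [davy,dee,dozy,mary,paul,peter]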

Exercise 3.15. Write a program which sorts and removes duplicates from a list, using setof.

3.8   Meta-programs

Prolog represents a clause Head:-Body in the same way as a term :-(Head,Body). Thus, it is easy to write programs that manipulate clauses. In the first case, ‘ :- ’ is treated as a predicate, and in the second case it is treated as a functor. The combination of these two interpretations occurs frequently in Prolog programs, and can be applied to any predicate p. Such programs are called meta-programs; the interpretation of p as a predicate occurs on the object-level, and the interpretation as a functor occurs on the meta-level. (Note that the difference between meta-predicates and higher-order predicates is that meta-predicates take object-level clauses as arguments, while the latter take lower-order predicates as arguments.)

For instance, suppose we have the following biological knowledge, expressed as propositional if-then rules:

% if A and B then C means if(then(and(A,B),C))
:-op(900,fx,if).
:-op(800,xfx,then).
:-op(700,yfx,and).
% object-level rules
if has_feathers and lays_eggs then is_bird.
if has_gills and lays_eggs then is_fish.
if tweety then has_feathers.
if tweety then lays_eggs.

Suppose we want to prove that Tweety is a bird. That is, we want to show that the rule

if tweety then is_bird

follows logically from the given rules. This can be done by a meta-program, which manipulates the rules on the object-level:

% meta-program
derive(if Assumptions then Goal):-
	if Body then Goal,
	derive(if Assumptions then Body).
derive(if Assumptions then Goal1 and Goal2):-
	derive(if Assumptions then Goal1),
	derive(if Assumptions then Goal2).
derive(if Assumptions then Goal):-
	assumed(Goal,Assumptions).
	
assumed(A,A).
assumed(A,A and _As).
assumed(A,_B and As):- assumed(A,As).

The three clauses for the derive predicate represent the three possible cases:

(i)   a goal matches the head of a rule, in which case we should proceed with the body;

(ii)  a goal is a conjunction (for instance, because it was produced in the previous step), of which each conjunct is derived separately;

(iii) a goal is among the assumptions.

As explained above, if is a predicate on the object-level, and a functor on the meta-level.

Exercise 3.16. Draw the SLD-tree for the query
                                          ?-derive(if tweety then is_bird).

Since propositional definite clauses are similar to the above if-then rules, one could view this program as a propositional Prolog simulator. In fact, it is possible to push the resemblance closer, by adopting the Prolog-representation of clauses at the object-level. One minor complication is that the clause constructor ‘ :- ’ is not directly available as an object-level predicate. Instead, Prolog provides the built-in predicate clause: a query ?-clause(H,B) succeeds if H:-B unifies with a clause in the internal Prolog database (if H unifies with a fact, B is unified with true). A further modification with respect to the above program is that Prolog queries do not have the form if Assumptions then Goal; instead, the Assumptions are added to the object-level program, from which a proof of Goal is attempted.
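For instance, with the clauses of Exercise 3.17 below in the internal database (most modern interpreters additionally require them to be declared dynamic before clause can inspect them), we would get:

?-clause(is_bird(X),Body).

Body = (has_feathers(X),lays_eggs(X))

?-clause(has_feathers(tweety),Body).

Body = true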

Following these observations, the predicate derive is changed as follows:

prove(Goal):-
clause(Goal,Body),
prove(Body).

prove((Goal1,Goal2)):-
prove(Goal1),
prove(Goal2).

prove(true).

This program nicely reflects the process of constructing a resolution proof:

(i)   if the resolvent contains a single atom, find a clause with that atom in the head and proceed with its body;

(ii)  if the resolvent contains various atoms, start with the first and proceed with the rest;

(iii) if the resolvent is empty, we’re done.

Some Prolog interpreters have problems if clause is called with the first argument instantiated to true or a conjunction, because true and ‘ , ’ (comma) are built-in predicates. To avoid these problems, we should add the conditions not A=true and not A=(X,Y) to the first clause. A less declarative solution is to reorder the clauses and use cuts:

prove(true):-!.

prove((A,B)):-!,
prove(A),
prove(B).

prove(A):-
/* not A=true, not A=(X,Y) */
clause(A,B),
prove(B).

We will adopt this less declarative version for pragmatic reasons: it is the one usually found in the literature. As this program illustrates, whenever you use cuts it is normally a good idea to add a declarative description of their effect between comment brackets.

A meta-program interpreting programs in the same language in which it is written is called a meta-interpreter. In order to ‘lift’ this propositional meta-interpreter to clauses containing variables, it is necessary to incorporate unification into the third clause. Suppose we are equipped with predicates unify and apply, such that unify(T1,T2,MGU,T) is true if T is the result of unifying T1 and T2 with most general unifier MGU, and apply(T,Sub,TS) is true if TS is the term obtained from T by applying substitution Sub. The meta-interpreter would then look like this:

prove_var(true):-!.

prove_var((A,B)):-!,
prove_var(A),
prove_var(B).

prove_var(A):-
clause(Head,Body),
unify(A,Head,MGU,Result),
apply(Body,MGU,NewBody),
prove_var(NewBody).

Prolog’s own unification predicate = does not return the most general unifier explicitly, but rather unifies the two original terms implicitly. Therefore, if we want to use the built-in unification algorithm in our meta-interpreter, we do not need the apply predicate, and we can write the third clause as

prove_var(A):-
clause(Head,Body),
A=Head,
prove_var(Body).

Figure 3.14. The prove meta-interpreter embodies a declarative implementation of the resolution proof procedure, making use of built-in unification.

If we now change the explicit unification in the body of this clause to an implicit unification in the head, we actually obtain the propositional meta-interpreter again! That is, while this program is read declaratively as a meta-interpreter for propositional programs, it nevertheless operates procedurally as an interpreter of first-order clauses (fig. 3.14).

Exercise 3.17. Draw the SLD-tree for the query ?-prove(is_bird(X)), given the following clauses:
                                          is_bird(X):-has_feathers(X),lays_eggs(X).
                is_fish(X):-has_gills(X),lays_eggs(X).
                has_feathers(tweety).
                lays_eggs(tweety).

Note that this meta-interpreter is able to handle only ‘pure’ Prolog programs, without system predicates like cut or is, since there are no explicit clauses for such predicates.

A variety of meta-interpreters will be encountered in this book. Each of them is a variation of the above ‘canonical’ meta-interpreter in one of the following senses:

(i)   application of a different search strategy;

(ii)  application of a different proof procedure;

(iii) enlargement of the set of clauses that can be handled;

(iv) extraction of additional information from the proof process.

The first variation will be illustrated in section 5.3, where the meta-interpreter adopts a breadth-first search strategy. In the same section, this meta-interpreter is changed to an interpreter for full clausal logic (iii). Different proof procedures are extensively used in Chapters 8 and 9. Here, we will give two example variations. In the first example, we change the meta-interpreter in order to handle general clauses by means of negation as failure (iii). All we have to do is to add the following clause:

prove(not A):-
not prove(A).

This clause gives a declarative description of negation as failure.

The second variation extracts additional information from the SLD proof procedure by means of a proof tree (iv). To this end, we need to make a slight change to the meta-interpreter given above. The reason for this is that the second clause of the original meta-interpreter breaks up the current resolvent if it is a conjunction, whereas in a proof tree we want the complete resolvent to appear.

% meta-interpreter with complete resolvent

prove_r(true):-!.

prove_r((A,B)):-!,
clause(A,C),
conj_append(C,B,D),
prove_r(D).

prove_r(A):-
clause(A,B),
prove_r(B).

%%% conj_append/3: see Appendix A.2
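The predicate conj_append/3 appends two conjunctions of atoms; one possible definition, given here as a sketch (the book’s own version can be found in Appendix A.2), is:

% conj_append(Xs,Ys,Zs) <- conjunction Zs is Ys appended to Xs
conj_append(true,Ys,Ys).
conj_append(X,Ys,(X,Ys)):-	% X a single atom
	not(X=true),
	not(X=(_,_)).
conj_append((X,Xs),Ys,(X,Zs)):-
	conj_append(Xs,Ys,Zs).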

We now extend prove_r/1 with a second argument, which returns the proof tree as a list of pairs p(Resolvent,Clause):

% display a proof tree
prove_p(A):-prove_p(A,P),write_proof(P).

% prove_p(A,P) <- P is proof tree of A
prove_p(true,[]):-!.
prove_p((A,B),[p((A,B),(A:-C))|Proof]):-!,
	clause(A,C),
	conj_append(C,B,D),
	prove_p(D,Proof).
prove_p(A,[p(A,(A:-B))|Proof]):-
	clause(A,B),
	prove_p(B,Proof).

write_proof([]):-
	write('...............[]'),nl.
write_proof([p(A,B)|Proof]):-
	write((:-A)),nl,
	write('.....|'),write('..........'),write(B),nl,
	write('.....|'),write('..................../'),nl,
	write_proof(Proof).

For instance, given the following clauses:

student_of(S,T):-teaches(T,C),follows(S,C).
teaches(peter,cs).
teaches(peter,ai).
follows(maria,cs).
follows(paul,ai).

and the query ?-prove_p(student_of(S,T)), the program writes the following proof trees:

:-student_of(maria,peter)

    |         student_of(maria,peter):-teaches(peter,cs),follows(maria,cs)

    |                   /

:-(teaches(peter,cs),follows(maria,cs))

    |         teaches(peter,cs):-true

    |                   /

:-follows(maria,cs)

    |         follows(maria,cs):-true

    |                   /

              []

:-student_of(paul,peter)

    |         student_of(paul,peter):-teaches(peter,ai),follows(paul,ai)

    |                   /

:-(teaches(peter,ai),follows(paul,ai))

    |         teaches(peter,ai):-true

    |                   /

:-follows(paul,ai)

    |         follows(paul,ai):-true

    |                   /

              []

Note that these are propositional proof trees, in the sense that all substitutions needed for the proof have already been applied. If we want to collect the uninstantiated program clauses in the proof tree, then we should make a copy of each clause before it is used in the proof:

prove_p((A,B),[p((A,B),Clause)|Proof]):-!,
clause(A,C),
copy_term((A:-C),Clause), % make copy of the clause
conj_append(C,B,D),
prove_p(D,Proof).

The predicate copy_term/2 makes a copy of a term, with all variables replaced by new ones. It is a built-in predicate in many Prolog interpreters, but could be defined by means of assert/1 and retract/1 (see Appendix A.2 for details).
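One classic definition along those lines is sketched below; the reserved functor '$copy'/1 is an assumption, chosen so as not to clash with other predicates. It works because asserting a clause stores it with fresh variables, so retracting it immediately yields a renamed copy:

copy_term(Term,Copy):-
	assert('$copy'(Term)),	% store a renamed copy in the database
	retract('$copy'(Copy)).	% fetch it back, unifying Copy with the copy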

3.9   A methodology of Prolog programming

At the end of this chapter, we spend a few words on the methodology of writing Prolog programs. Given a problem to solve, how do I obtain the program solving the problem? This is the fundamental problem of software engineering. Here, we can only scratch the surface of this question: we will concentrate on the subtask of writing relatively simple predicates which use no more than two other predicates.

Consider the following problem: define a predicate which, given a number n, partitions a list of numbers into two lists: one containing numbers smaller than n, and the other containing the rest. So we need a predicate partition/4:

% partition(L,N,Littles,Bigs) <- Littles contains numbers
%                               in L smaller than N,
%                               Bigs contains the rest

Since the only looping structure of Prolog is recursion, simple predicates like this will typically be recursive. This means that

(i)   there is a base case, and one or more recursive clauses;

(ii)  there is a recursion argument distinguishing between the base case and the recursive clauses.

For list predicates, the recursion argument is typically a list, and the distinction is typically between empty and non-empty lists. For the partition/4 predicate, the recursion argument is the first list. The base case is easily identified: the empty list is partitioned in two empty lists, no matter the value of N. This gives us the following skeleton:

partition([],N,[],[]).

partition([Head|Tail],N,?Littles,?Bigs):-
/* do something with Head */
partition(Tail,N,Littles,Bigs).

The question marks denote output arguments, whose relation to the variables in the recursive call still has to be decided. It should be noted that not all predicates are tail recursive, so it is not yet known whether the recursive call will indeed be last. Notice also that the output arguments in the recursive call have been given meaningful names, which is, in general, a good idea.

Once we have ‘chopped off’ the first number in the list, we have to do something with it. Depending on whether it is smaller than N or not, it has to be added to the Littles or the Bigs. Suppose Head is smaller than N:

partition([Head|Tail],N,?Littles,?Bigs):-
Head < N,
partition(Tail,N,Littles,Bigs)

Thus, Head must be added to Littles. In this case, it does not matter in which position it is added: obviously, the simplest way is to add it to the head of the list:

?Littles = [Head|Littles]

In such cases, where output arguments are simply constructed by unification, the unification is performed implicitly in the head of the clause (the fourth argument remains unchanged):

partition([Head|Tail],N,[Head|Littles],Bigs):-
Head < N,
partition(Tail,N,Littles,Bigs)

A second recursive clause is needed to cover the case that Head is larger than or equal to N, in which case it must be added to Bigs. The final program looks as follows:

% partition(L,N,Littles,Bigs) <- Littles contains numbers 
%                                in L smaller than N, 
%                                Bigs contains the rest
partition([],_N,[],[]).
partition([Head|Tail],N,[Head|Littles],Bigs):-
	Head < N,
	partition(Tail,N,Littles,Bigs).
partition([Head|Tail],N,Littles,[Head|Bigs]):-
	Head >= N,
	partition(Tail,N,Littles,Bigs).

The approach taken here can be formulated as a general strategy for writing Prolog predicates. The steps to be performed according to this strategy are summarised below:

(i)   write down a declarative specification;

(ii)  identify the recursion argument, and the output arguments;

(iii) write down a skeleton;

(iv) complete the bodies of the clauses;

(v)  fill in the output arguments.

Notice that step (iv) comprises most of the work, while the other steps are meant to make this work as easy as possible.

Exercise 3.18. Implement a predicate permutation/2, such that permutation(L,P) is true if P contains the same elements as the list L but (possibly) in a different order, following these steps. (One auxiliary predicate is needed.)

As a second example, consider the problem of sorting a list of numbers. The declarative specification is as follows:

% mySort(L,S) <- S is a sorted permutation of list L

Note that this specification can immediately be translated to Prolog:

mySort(L,S):-
permutation(L,S),
sorted(S).

This program first guesses a permutation of L, and then checks if the permutation happens to be sorted. Declaratively, this program is correct; procedurally, it is extremely inefficient since there are n! different permutations of a list of length n. Thus, we have to think of a more efficient algorithm.

The recursion and output arguments are easily identified as the first and second argument, respectively. The base case states that the empty list is already sorted, while the recursive clause states that a non-empty list is sorted by sorting its tail separately:

mySort([],[]).

mySort([Head|Tail],?Sorted):-
/* do something with Head */
mySort(Tail,Sorted).

It remains to decide what the relation is between ?Sorted, Head and Sorted. Obviously, Head cannot be simply added to the front of Sorted, but has to be inserted in the proper place. We thus need an auxiliary predicate insert/3, to add Head at the proper position in Sorted. Note that tail recursion is not applicable in this case, since we have to insert Head in an already sorted list. We thus arrive at the following definition:

mySort([],[]).
mySort([Head|Tail],WholeSorted):-
	mySort(Tail,Sorted),
	insert(Head,Sorted,WholeSorted).

In order to implement insert/3, we follow the same steps. The second argument is the recursion argument, and the third is the output argument. This gives the following skeleton:

insert(X,[],?Inserted).

insert(X,[Head|Tail],?Inserted):-
/* do something with Head */
insert(X,Tail,Inserted).

The base case is simple: ?Inserted = [X]. In the recursive clause, we have to compare X and Head. Suppose X is greater than Head:

insert(X,[Head|Tail],?Inserted):-
X > Head,
insert(X,Tail,Inserted)

We have to construct the output argument ?Inserted. Since X has already been properly inserted into Tail, it remains to add Head to the front of Inserted:

?Inserted = [Head|Inserted]

A third clause is needed if X is not greater than Head (note that this clause, being non-recursive, is a second base case):

insert(X,[Head|Tail],?Inserted):-
X =< Head

In this case, X should be added before Head:

?Inserted = [X,Head|Tail]

The complete program is given below:

insert(X,[],[X]).
insert(X,[Head|Tail],[Head|Inserted]):-
	X > Head,
	insert(X,Tail,Inserted).
insert(X,[Head|Tail],[X,Head|Tail]):-
	X =< Head.
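A quick check of the complete program (a sample query; any list of numbers will do):

?-mySort([3,1,2],S).

S = [1,2,3]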

Exercise 3.19. Implement an alternative to this sorting method by using the partition/4 predicate.

Further reading

There are many introductory and advanced textbooks on Prolog programming. (Bratko, 1990) is a particularly practical introduction. (Sterling & Shapiro, 1986) offers a slightly more advanced presentation. (Nilsson & Maluszynski, 1990) is one of the few books dealing with both the theoretical and practical aspects of programming in Prolog. (Ross, 1989) and (O’Keefe, 1990) discuss advanced issues in the practice of Prolog programming.

Those eager to learn more about the implementation of Prolog interpreters are referred to (Maier & Warren, 1988). (Bowen & Kowalski, 1982) is an early source on meta-programs in Logic Programming. The slogan Algorithm = Logic + Control was put forward by Kowalski (1979). A discussion of the relation between declarative and procedural programming can be found in (Kowalski, 1993).

K.A. Bowen & R.A. Kowalski (1982), ‘Amalgamating language and metalanguage in Logic Programming’. In Logic Programming, K.L. Clark & S. Tärnlund (eds.), Academic Press.

I. Bratko (1990), Prolog Programming for Artificial Intelligence, Addison-Wesley, second edition.

R.A. Kowalski (1979), ‘Algorithm = Logic + Control’, Communications of the ACM 22(7): 424–436.

R.A. Kowalski (1993), ‘Logic Programming’. In Encyclopedia of Computer Science, A. Ralston & E.D. Reilly (eds), pp. 778–783, Van Nostrand Reinhold, third edition.

D. Maier & D.S. Warren (1988), Computing with Logic: Logic Programming with Prolog, Benjamin/Cummings.

U. Nilsson & J. Maluszynski (1990), Logic, Programming and Prolog, John Wiley.

R.A. O’Keefe (1990), The Craft of Prolog, MIT Press.

P. Ross (1989), Advanced Prolog: Techniques and Examples, Addison-Wesley.

L.S. Sterling & E.Y. Shapiro (1986), The Art of Prolog, MIT Press.



[1] If we take Prolog’s procedural behaviour into account, there are alternatives to recursive loops such as the so-called failure-driven loop (see Exercise 7.5).

[2] It is often more convenient to read a clause in the opposite direction:
‘if somebody is a man and an adult then he is married or a bachelor’.

[3] □ is called the empty clause because it has an empty body and head, and is therefore not satisfiable by any interpretation.

[4] In relational clausal logic, ground terms are necessarily constants. However, this is not the case in full clausal logic, as we will see in section 2.3.

[5] We will have more to say about the generality of clauses in Chapter 9.

[6] For definite clauses this method of bottom-up model construction always yields the unique minimal model of the program.

[7] The efficiency and completeness of search strategies will be discussed in Chapters 5 and 6.

[8] Written this way to distinguish them from the arrows => and <=.

[9] This is not allowed by every Prolog interpreter.

[10] Since efficiency is an implementation issue, it is suggested that not is replaced by ! only in the final stage of program development.

[11] Recall the discussion about the order of a logic in section 2.5.

Additional material

Section 3.1

plist([]).
plist([H|T]):-p(H),plist(T).

p(1).
p(2).

Section 3.6

% fib(N,F) <- F is the N-th Fibonacci number
% inefficient doubly-recursive version
fib(1,1).
fib(2,1).
fib(N,F):-
	N>2,N1 is N-1,N2 is N-2,
	fib(N1,F1),fib(N2,F2),
	F is F1+F2.

% We can get a more efficient version 
% by solving a more general problem!

% fibn(N,Na,Nb,F) <- F is the N-th Fibonacci number
%                   in the sequence starting with Na, Nb
fibn(1,Na,_,Na).
fibn(2,_,Nb,Nb).
fibn(N,Na,Nb,F):-
	N>2, N1 is N-1,
	Nc is Na+Nb,
	fibn(N1,Nb,Nc,F).

fibn(N,F):-
	fibn(N,1,1,F).

Section 3.7

biglist(Low,High,L):-
    bagof(X,between(Low,High,X),L).
    
between(Low,_High,Low).
between(Low,High,Number):-
	Low < High,
	NewLow is Low+1,
	between(NewLow,High,Number).

Section 3.9

%%% Good generator, but not tail-recursive 
powerset([],[[]]).
powerset([H|T],PowerSet):-
	powerset(T,PowerSetOfT),             % generator (GOOD) 
	extend_pset(H,PowerSetOfT,PowerSet). % not tail-recursive
	
extend_pset(_,[],[]).
extend_pset(H,[List|MoreLists],[List,[H|List]|More]):-
	extend_pset(H,MoreLists,More).
	
%%% Bad generator, tail-recursive
powerset1([],[[]]).
powerset1([H|T],PowerSet):-
	extend_pset(H,PowerSetOfT,PowerSet), % generator (BAD)
	powerset1(T,PowerSetOfT).            % tail-recursive

%%% Good generator, tail-recursive
powerset2([],PowerSet,PowerSet).
powerset2([H|T],Acc,PowerSet):-
	extend_pset(H,Acc,Acc1),             % generator (GOOD) 
	powerset2(T,Acc1,PowerSet).          % tail-recursive

powerset2(Set,PowerSet):-powerset2(Set,[[]],PowerSet).
