Now, the validity of 9 crucially depends on the meaning of the phrase "to the north of"; in particular, on the fact that it expresses an asymmetric relation: if x is to the north of y, then y is not to the north of x.
Unfortunately, we can't express such rules in propositional logic: the smallest elements we have to play with are atomic propositions, and we cannot "look inside" these to talk about relations between individuals x and y. The best we can do in this case is capture a particular case of the asymmetry.
To say that Freedonia is not to the north of Sylvania, we write -FnS. That is, we treat "not" as equivalent to the phrase "it is not the case that", and so we can write the implication in 10 as SnF -> -FnS. How about giving a version of the complete argument? We will replace the first sentence of 9 by two formulas of propositional logic: SnF, and also the implication in 11, which expresses (rather poorly) our background knowledge of the meaning of "to the north of".
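Strings like these can be turned into logical expression objects in NLTK. A minimal sketch, assuming NLTK is installed:

```python
import nltk

# Print the boolean operators NLTK recognizes (negation -, conjunction &,
# disjunction |, implication ->, equivalence <->).
nltk.boolean_ops()

# Parse the implication "if SnF then it is not the case that FnS".
read_expr = nltk.sem.Expression.fromstring
expr = read_expr('SnF -> -FnS')
print(expr)  # (SnF -> -FnS)
```

The parser returns a structured expression object rather than a plain string, which is what the inference machinery below operates on.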
We'll write [A1, ..., An] / C to represent the claim that conclusion C can be inferred from assumptions [A1, ..., An]. This leads to [SnF, SnF -> -FnS] / -FnS as a representation of argument 9. By contrast, if FnS were true, this would conflict with our understanding that two objects cannot both be to the north of each other in any possible situation. Arguments can be tested for "syntactic validity" by using a proof system. We will say a little bit more about this later on in 3. Logical proofs can be carried out with NLTK's inference module, for example via an interface to the third-party theorem prover Prover9.
The inputs to the inference mechanism first have to be converted into logical expressions. Here's another way of seeing why the conclusion follows. The implication SnF -> -FnS is true just in case -SnF is true or -FnS is true. If SnF is true, then -SnF cannot also be true; a fundamental assumption of classical logic is that a sentence cannot be both true and false in a situation. Consequently, -FnS must be true. Recall that we interpret sentences of a logical language relative to a model, which is a very simplified version of the world.
A model for propositional logic needs to assign the values True or False to every possible formula.
We do this inductively: first, every propositional symbol is assigned a value, and then we compute the value of complex formulas by consulting the meanings of the boolean operators and combining the values of the parts. A Valuation is a mapping from basic expressions of the logic to their values. Here's an example. We initialize a Valuation with a list of pairs, each of which consists of a semantic symbol and a semantic value. The resulting object is essentially just a dictionary that maps logical expressions (treated as strings) to appropriate values. As we will see later, our models need to be somewhat more complicated in order to handle the more complex logical forms discussed in the next section; for the time being, just ignore the dom and g parameters in the following declarations.
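A minimal sketch, assuming NLTK is installed (the particular truth values chosen for P, Q and R are arbitrary):

```python
import nltk

# Map each propositional symbol to a truth value.
val = nltk.Valuation([('P', True), ('Q', True), ('R', False)])
print(val['P'])  # True

dom = set()               # domain of individuals; empty for propositional logic
g = nltk.Assignment(dom)  # assignment to individual variables; unused here
m = nltk.Model(dom, val)

# Evaluate some complex formulas against the model.
print(m.evaluate('(P & Q)', g))   # True
print(m.evaluate('-(P & Q)', g))  # False
print(m.evaluate('(P & R)', g))   # False
print(m.evaluate('(P | R)', g))   # True
```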
Now let's initialize a model m that uses val. Every model comes with an evaluate method, which will determine the semantic value of logical expressions, such as formulas of propositional logic; of course, these values depend on the initial truth values we assigned to propositional symbols such as P, Q and R. Your Turn: Experiment with evaluating different formulas of propositional logic. Does the model give the values that you expected? Up until now, we have been translating our English sentences into propositional logic. Because we are confined to representing atomic sentences with letters like P and Q, we cannot dig into their internal structure.
In effect, we are saying that there is nothing of logical interest in dividing atomic sentences into subjects, objects and predicates. However, this seems wrong: if we want to formalize arguments such as 9, we have to be able to "look inside" basic sentences. As a result, we will move beyond Propositional Logic to something more expressive, namely First-Order Logic.
This is what we turn to in the next section. In the remainder of this chapter, we will represent the meaning of natural language expressions by translating them into first-order logic. Not all of natural language semantics can be expressed in first-order logic, but it is a good choice for computational semantics: it is expressive enough to represent a good deal, and on the other hand, there are excellent systems available off the shelf for carrying out automated inference in first-order logic.
Our next step will be to describe how formulas of first-order logic are constructed, and then how such formulas can be evaluated in a model. First-order logic keeps all the boolean operators of Propositional Logic.
But it adds some important new mechanisms. To start with, propositions are analyzed into predicates and arguments, which takes us a step closer to the structure of natural languages. The standard construction rules for first-order logic recognize terms, such as individual variables and individual constants, and predicates, which take differing numbers of arguments. For example, Angus walks might be formalized as walk(angus), and Angus sees Bertie as see(angus, bertie).
We will call walk a unary predicate, and see a binary predicate. The symbols used as predicates do not have intrinsic meaning, although it is hard to remember this. Returning to one of our earlier examples, there is no logical difference between 13a and 13b. By itself, first-order logic has nothing substantive to say about lexical semantics (the meaning of individual words), although some theories of lexical semantics can be encoded in first-order logic. Whether an atomic predication like see(angus, bertie) is true or false in a situation is not a matter of logic, but depends on the particular valuation that we have chosen for the constants see, angus and bertie.
For this reason, such expressions are called non-logical constants. By contrast, logical constants such as the boolean operators always receive the same interpretation in every model for first-order logic. It is often helpful to inspect the syntactic structure of expressions of first-order logic, and the usual way of doing this is to assign types to expressions.
Following the tradition of Montague grammar, we will use two basic types: e is the type of entities, while t is the type of formulas, i.e., expressions that denote truth values. Given these two basic types, we can form complex types for function expressions. Logical expressions can be processed with type checking. Although the type-checker will try to infer as many types as possible, in this case it has not managed to fully specify the type of walk, since its result type is unknown. To help the type-checker, we need to specify a signature, implemented as a dictionary that explicitly associates types with non-logical constants.
Although the type <e,<e,t>> is the type of something which combines first with an argument of type e to make a unary predicate, we represent binary predicates as combining directly with their two arguments. For example, the predicate see in the translation of Angus sees Cyril will combine with its arguments to give the result see(angus, cyril).
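The curried structure is still visible if we pull the expression apart; a sketch, assuming NLTK is installed:

```python
import nltk

read_expr = nltk.sem.Expression.fromstring
expr = read_expr('see(angus, cyril)')

# Internally, the binary predicate combines with one argument at a time:
# see(angus, cyril) is see(angus) applied to cyril.
print(expr.function)   # see(angus)
print(expr.argument)   # cyril
```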
In first-order logic, arguments of predicates can also be individual variables such as x, y and z. In NLTK, we adopt the convention that variables of type e are all lowercase.
Individual variables are similar to personal pronouns like he, she and it, in that we need to know about the context of use in order to figure out their denotation. One way of interpreting the pronoun in 14 is by pointing to a relevant individual in the local context. Another way is to supply a textual antecedent for the pronoun he, for example by uttering 15a prior to 14. Here, we say that he is coreferential with the noun phrase Cyril. As a result, 14 is semantically equivalent to 15b.
15a. Cyril is Angus's dog.
15b. Cyril disappeared.

Consider by contrast the occurrence of he in 16a. In this case, it is bound by the indefinite NP a dog, and this is a different relationship than coreference.
If we replace the pronoun he by a dog, the result 16b is not semantically equivalent to 16a:

16a. Angus had a dog but he disappeared.
16b. Angus had a dog but a dog disappeared.

Corresponding to 17a, we can construct an open formula 17b with two occurrences of the variable x (we ignore tense to simplify exposition):

17a. He is a dog and he disappeared.
17b. dog(x) & disappear(x)

By closing 17b with an existential quantifier, we obtain a formula meaning "at least one entity is a dog and disappeared", i.e., A dog disappeared.
The universally quantified counterpart is 20a, glossed as 20b:

20a. all x.(dog(x) -> disappear(x))
20b. Everything has the property that if it is a dog, it disappears.
20c. Every dog disappeared.

Although 20a is the standard first-order logic translation of 20c, the truth conditions aren't necessarily what you expect.
The formula says that if some x is a dog, then x disappears — but it doesn't say that there are any dogs. So in a situation where there are no dogs, 20a will still come out true. Now you might argue that every dog disappeared does presuppose the existence of dogs, and that the logic formalization is simply wrong. But it is possible to find other examples which lack such a presupposition.
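This vacuous truth can be checked directly in a model whose domain contains no dogs; a sketch, assuming NLTK is installed (the domain and valuation here are invented for illustration):

```python
import nltk

# One individual, b, who is neither a dog nor has disappeared.
dom = {'b'}
val = nltk.Valuation([('dog', set()), ('disappear', set())])
g = nltk.Assignment(dom)
m = nltk.Model(dom, val)

# With no dogs in the model, "every dog disappeared" is vacuously true,
# while "some dog disappeared" is false.
print(m.evaluate('all x.(dog(x) -> disappear(x))', g))    # True
print(m.evaluate('exists x.(dog(x) & disappear(x))', g))  # False
```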