COS 333: Chapter 16, Part 1

Willem van Heerden
6 Sept 2020 · 42:45

Summary

TL;DR: The lecture introduces logic programming, emphasizing its importance in exams and urging students to practice Prolog code. It covers the theoretical foundations, focusing on predicate calculus, which is essential for understanding Prolog's construction. The non-procedural nature of logic programming is highlighted, contrasting it with imperative and functional programming. The lecture also discusses the inefficiency of logic programming for certain tasks, such as sorting, due to the potential need to generate and test all permutations.

Takeaways

  • πŸ“š The lecture introduces Chapter 16 on logic programming and Prolog, emphasizing its importance for exams and tests.
  • 🎯 Students are expected to write Prolog code and understand the complexity of practical logic programming tasks.
  • πŸ” The theoretical underpinnings of logic programming, specifically predicate calculus, are essential to grasp Prolog programming.
  • πŸ“ Logic programming is based on a declarative paradigm, contrasting with the procedural nature of imperative and functional programming languages.
  • πŸ€– The inferencing engine in logic programming languages like Prolog automatically handles the procedure to achieve desired results based on logical rules.
  • πŸ“ˆ Predicate calculus, including proposition formation, relationships, and symbolic logic, is the core concept underlying logic programming.
  • πŸ”‘ First-order predicate calculus is the specific form of symbolic logic used in logic programming, involving constants, variables, and compound terms.
  • πŸ” Atomic and compound propositions, along with logical operators, are used to construct the logic within Prolog programs.
  • πŸ”„ The resolution principle and unification process are key to inferring new facts from existing axiomatic propositions in logic programming.
  • πŸ“‰ The use of clausal form and Horn clauses simplifies the resolution process in logic programming, making it more computationally efficient.
  • πŸš€ The non-procedural nature of logic programming allows for simpler program writing by focusing on the desired outcomes rather than the steps to achieve them.

Q & A

  • What is the focus of Chapter 16 in the lecture series?

    - Chapter 16 focuses on logic programming, the Prolog programming language, and its theoretical underpinnings. It includes an introduction to logic programming, predicate calculus, and an overview of how logic programming works.

  • Why is Chapter 16 considered important for students?

    - Chapter 16 is important because it carries significant weight in the second semester test and the exam, with many marks dedicated to it. Students are expected to write Prolog code and understand the complexity of programs presented in practical questions and example programs.

  • What is the fundamental concept that logic programming is based on?

    - The fundamental concept that logic programming is based on is predicate calculus, which is used to prove theorems and construct Prolog programs.

  • What is the difference between logic programming and imperative or functional programming?

    - Logic programming is declarative and non-procedural, meaning programmers specify what results should look like without detailing the procedure to achieve those results. This contrasts with imperative and functional programming, which are more focused on the steps or procedures to produce results.

  • What is a proposition in the context of logic programming?

    - A proposition is a simple logical statement that may be true or untrue. It consists of one or more objects and defines relationships among them; these objects are abstract entities that the logic system reasons about.

  • Can you explain the concept of symbolic logic in logic programming?

    - Symbolic logic in logic programming allows the expression of propositions, relationships between propositions, and the inference of new propositions from existing ones using logical rules. It forms the basis for the logic programming paradigm.

  • What are the two types of terms used to represent objects in propositions within first-order predicate calculus?

    - The two types of terms used to represent objects in propositions are constants, which represent specific objects, and variables, which represent different objects at different times and are used within rules.

  • What is the significance of clausal form in first-order predicate calculus?

    - Clausal form is a standard form used to express propositions in first-order predicate calculus. It simplifies the expression of logical propositions and is designed to reduce the ambiguity that can arise from multiple ways of writing the same compound proposition.

  • What is the resolution process in logic programming, and why is it used?

    - The resolution process is an inference principle used in logic programming to infer new propositions from given ones. It computes these new propositions by matching and unifying variables in propositions, allowing for the derivation of conclusions from premises.

  • What are Horn clauses, and how do they simplify the resolution process?

    - Horn clauses are a restricted form of expressions in first-order predicate calculus that simplifies the resolution process. They can be either headed, with a single atomic proposition as the consequent, or headless, with an empty consequent, and are used to state facts or rules in a simplified manner.

  • Why might a logic programming approach to sorting a list be considered inefficient?

    - A logic programming approach to sorting a list might be inefficient because it could potentially generate and test every possible permutation of the list to find a sorted version. This can be a very time-consuming process compared to traditional sorting algorithms.
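
The generate-and-test behaviour described in this last answer can be sketched in Python (an illustrative model of the idea, not actual Prolog inferencing):

```python
from itertools import permutations

def naive_sort(xs):
    # Generate-and-test: try permutations of xs until one satisfies
    # the "sorted" property. This is O(n! * n) in the worst case,
    # which is why a naive logic-programming sort is inefficient.
    for perm in permutations(xs):
        if all(perm[i] <= perm[i + 1] for i in range(len(perm) - 1)):
            return list(perm)

print(naive_sort([3, 1, 2]))  # [1, 2, 3]
```

A real Prolog system explores the permutations through backtracking rather than an explicit loop, but the amount of work is comparable.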

Outlines

00:00

πŸ“˜ Introduction to Logic Programming and Prolog

This paragraph introduces Chapter 16, focusing on logic programming and Prolog. It emphasizes the importance of this chapter for the second semester test and final exam, noting that students are expected to write Prolog code. The lecturer outlines the topics to be covered, including the theoretical foundations of logic programming, predicate calculus, and specifics of the Prolog language. The paragraph also explains that logic programming is a form of declarative programming, where programmers specify desired results rather than the procedures to achieve them, and introduces the concept of an inferencing engine that deduces results from logical statements.

05:03

πŸ” Delving into Predicate Calculus and Symbolic Logic

The second paragraph delves into predicate calculus, a core concept underlying logic programming. It defines propositions and their role in logic programming, explaining how they can represent objects and relationships. The paragraph further explores symbolic logic, detailing how it allows the expression of propositions, relationships between them, and the inference of new propositions based on existing ones. First-order predicate calculus is highlighted as the specific system used in logic programming, with objects represented by constants and variables within propositions.

10:06

πŸ“š Propositions and Their Structures in Logic Programming

This paragraph examines the structure of propositions in logic programming, distinguishing between atomic and compound propositions. Atomic propositions are the simplest, formed by compound terms that represent mathematical relations. The paragraph provides examples of how these terms are constructed using functors and parameters. It also discusses the two forms propositions can take: as facts or queries, which ask the logic system to prove or disprove their validity.

15:06

πŸ”„ Understanding Logical Operators and Quantifiers

The fourth paragraph explains the logical operators used to connect atomic propositions in compound propositions, such as negation, conjunction, disjunction, equivalence, and implication. It also introduces quantifiers, which allow for the introduction of variables into propositions, either universally or existentially. Examples are provided to illustrate how these quantifiers can be used to express general truths or the existence of specific instances.

20:07

πŸ“‰ Clausal Form and Resolution Principle in Predicate Calculus

The focus of this paragraph is on clausal form, a standardized way to express propositions in predicate calculus to reduce ambiguity. It discusses the use of Horn clauses, which simplify the resolution process, a fundamental inference principle for deriving new propositions from existing ones. The paragraph also explains the unification process, which is necessary when variables are present in propositions, and how instantiation and backtracking work in the resolution process.
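
The unification step mentioned above can be sketched in Python. This toy version handles only flat terms like those in the lecture's examples and omits the occurs check; the term encoding (tuples, uppercase-initial strings for variables) is an assumption for illustration:

```python
def is_var(t):
    # Prolog convention: variable names start with an uppercase letter.
    return isinstance(t, str) and t[:1].isupper()

def unify(a, b, subst=None):
    # Return a substitution (dict) that makes a and b identical,
    # or None if no such substitution exists.
    subst = dict(subst or {})
    if is_var(a):
        if a in subst:
            return unify(subst[a], b, subst)
        subst[a] = b          # instantiate the variable
        return subst
    if is_var(b):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple):
        # Compound terms: functor and arity must match.
        if len(a) != len(b) or a[0] != b[0]:
            return None
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return subst if a == b else None

# Unifying father(X, joe) with father(peter, joe) instantiates X:
print(unify(('father', 'X', 'joe'), ('father', 'peter', 'joe')))
# {'X': 'peter'}
```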

25:08

πŸ“ Theoretical Foundation of Logic Programming

This paragraph discusses the philosophical underpinnings of logic programming, contrasting it with imperative programming. It highlights the declarative and non-procedural nature of logic programming, where the focus is on what the result should be rather than how to compute it. The paragraph also touches on the simplicity of semantics in logic programming languages compared to imperative ones, and the implications of context independence in logic programs.

30:10

πŸ”„ Non-Procedural Approach to Sorting in Logic Programming

The seventh paragraph illustrates the non-procedural nature of logic programming with a sorting example. Instead of detailing an algorithm, as in imperative programming, logic programming describes the characteristics of a sorted list. The paragraph explains how a sorted list can be defined in terms of an unsorted list, emphasizing the declarative approach where the program specifies the desired outcome without dictating the process to achieve it.
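
In predicate-calculus style, this declarative description of sorting reads roughly as follows (the notation here is illustrative, not the lecture's exact slide):

```latex
\mathrm{sort}(\mathit{old},\mathit{new}) \Leftarrow
    \mathrm{permute}(\mathit{old},\mathit{new}) \land \mathrm{sorted}(\mathit{new})
\\
\mathrm{sorted}(\mathit{list}) \Leftarrow
    \forall j \,\bigl(1 \le j < n \Rightarrow \mathit{list}(j) \le \mathit{list}(j{+}1)\bigr)
```

That is, new is a sorted version of old if it is a permutation of old and its elements are in non-decreasing order (n being the length of the list); nothing is said about how to find such a permutation.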

35:13

🚧 Inefficiency in Logic Programming: The Sorting Example

The final paragraph addresses a potential drawback of logic programming: inefficiency. Using the sorting example, it explains that a logic programming language like Prolog might generate and test every possible permutation of a list to find a sorted version, which is not practical. This highlights the challenge of expressing efficient solutions in logic programming, where the inferencing process may not always follow the most efficient path to a solution.

Keywords

πŸ’‘Logic Programming

Logic programming is a type of programming paradigm that is based on formal logic. It is declarative rather than procedural, meaning that the programmer specifies what the program should accomplish rather than how to accomplish it. In the context of the video, logic programming is the main theme, with a focus on the Prolog programming language. The script discusses the theoretical underpinnings of logic programming, its difference from imperative and functional programming, and the use of predicates and inference engines to derive results.

πŸ’‘Prolog

Prolog, short for 'programming in logic', is a widely-used logic programming language that is central to the video's discussion. It is noted for its use of predicates and its inference engine that automatically deduces results from given logical statements. The script emphasizes the importance of understanding Prolog for the course, as it is a significant part of the curriculum and will be featured in practical exercises and exams.

πŸ’‘Predicate Calculus

Predicate calculus, also known as first-order logic, is a foundation of logic programming discussed in the script. It extends propositional logic to include quantified variables, allowing for more complex logical expressions. The video explains that predicate calculus is used to construct Prolog programs and to prove theorems, which is a core concept in logic programming. An example given in the script is using predicate calculus to express relationships and properties, like 'Mary is female' or 'Mary is married to Joe'.

πŸ’‘Inference Engine

An inference engine is a key component in logic programming that applies logical reasoning to derive new conclusions from a set of asserted facts and rules. In the script, it is mentioned that Prolog programs are declarative, and the inference engine works behind the scenes to produce results based on the logical statements provided by the programmer. This engine is crucial for the execution of Prolog programs and for generating outputs from the logical predicates defined by the programmer.
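
A single inference step of the kind described here can be sketched in Python, using the lecture's father-implies-male rule (the fact set and encoding are illustrative):

```python
# Facts from the lecture's examples, encoded as tuples.
facts = {('father', 'peter', 'joe'), ('female', 'mary')}

def infer(facts):
    # One forward-chaining pass of the rule:
    # if X is the father of someone, then X is male.
    derived = set(facts)
    for fact in facts:
        if fact[0] == 'father':
            derived.add(('male', fact[1]))
    return derived

print(('male', 'peter') in infer(facts))  # True
```

A real inference engine applies many such rules repeatedly (and Prolog in particular works backwards from a query), but each step is this same pattern of matching rules against known facts.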

πŸ’‘Declarative Programming

Declarative programming is a paradigm where the programmer expresses the logic of the program without describing its control flow. The script explains that logic programming languages, including Prolog, are declarative, focusing on the 'what' rather than the 'how'. This is in contrast to imperative programming, which details step-by-step procedures. Declarative programming languages like Prolog allow for writing programs that are more about specifying the conditions to satisfy rather than the steps to perform.

πŸ’‘Propositions

In the context of the video, a proposition is a fundamental concept in logic programming and predicate calculus. It is a statement that can be true or false and forms the basis of logical reasoning. The script discusses two types of propositions: atomic, which are the simplest form, and compound, which are more complex and consist of multiple atomic propositions connected by logical operators. Examples from the script include propositions like 'Peter is the father of Joe' and 'Peter is male'.

πŸ’‘Resolution

Resolution is an inference principle used in logic programming for automated theorem proving. The script explains that it allows for the derivation of new propositions from given ones by matching and unifying the antecedents and consequents of propositions. The process is illustrated in the script with an example involving propositions about relationships and implications, leading to the inference of new logical conclusions.

πŸ’‘Unification

Unification is a process in logic programming that involves finding a substitution for variables that makes two propositions identical. It is essential when performing resolution with variables, as it allows the matching process to succeed. The script mentions unification in the context of the resolution process, explaining how it assigns temporary values to variables to facilitate the logical deduction of new facts.

πŸ’‘Horn Clauses

Horn clauses are a restricted form of logical expressions used in logic programming, particularly in Prolog. The script explains that they simplify the resolution process by limiting the structure of rules to two forms: headed and headless. Headed horn clauses have an implication with a single atomic proposition on the left and multiple atomic propositions on the right, while headless horn clauses have the consequent part empty, typically used to state facts.
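
The two Horn clause forms can be written schematically as follows (the father fact is an illustrative example, not one from the script):

```latex
\text{Headed:}\quad B \Leftarrow A_1 \land A_2 \land \cdots \land A_n
\\
\text{Headless:}\quad \mathrm{father}(\mathrm{bob},\mathrm{jake})
```

A headed clause states that B holds whenever all of A_1 through A_n hold; a headless clause simply asserts a fact unconditionally.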

πŸ’‘Non-Procedural

Non-procedural refers to the nature of logic programming where programs do not state how a result is computed but rather what the result should look like. In the script, this concept is highlighted to contrast with procedural programming, where step-by-step algorithms are defined. The video uses the example of sorting a list in Prolog, where instead of implementing a sorting algorithm, the programmer describes the characteristics of a sorted list, and the inference engine finds a solution that matches this description.

Highlights

Introduction to Chapter 16 on logic programming and Prolog, emphasizing its importance in the curriculum and exams.

Expectation for students to write Prolog code and complete practical exercises to understand logic programming complexity.

Theoretical underpinnings of logic programming languages, focusing on Prolog, and its declarative nature.

Predicate calculus as the core concept of logic programming, with a simplified introduction for Prolog understanding.

The role of an inferencing engine in logic programming to produce results from symbolic logic.

Difference between logic programming and imperative or functional programming in terms of specifying results versus procedures.

Explanation of propositions as simple logical statements and their role in logic programming.

Introduction to symbolic logic and its three components: propositions, relationships, and inference rules.

First-order predicate calculus as the specific symbolic logic system used in logic programming.

Use of constants and variables in propositions within first-order predicate calculus.

Atomic and compound propositions construction and their significance in logic programming.

Logical operators for connecting atomic propositions and their use in compound propositions.

Quantifiers in first-order predicate calculus for introducing variables within propositions.

Standardization of propositions into clausal form to simplify the resolution process in logic programming.

Resolution principle as an inference process for deriving new propositions from given ones.

Unification process for variables during the resolution process to allow matching and inference.

Use of Horn clauses to simplify the expression of propositions and improve resolution efficiency.

Philosophy of logic programming as a declarative, non-procedural approach to programming.

Illustration of sorting a list in logic programming versus imperative programming to demonstrate the declarative nature.

Potential inefficiency in logic programming due to the inferencing process generating all permutations to find a solution.

Transcripts

play00:01

we will now be moving on to chapter 16

play00:04

which deals with logic programming and

play00:07

the prologue

play00:08

programming language and this is the

play00:10

first

play00:11

of three lectures that will dedicate to

play00:14

these topics as with chapter 15

play00:17

where we covered functional programming

play00:20

and scheme

play00:21

this is a very important chapter

play00:24

and there will be a lot of marks

play00:26

dedicated to chapter

play00:28

16 in the second semester test

play00:31

as well as in the exam so please pay

play00:35

attention to this chapter

play00:37

once again i will definitely be asking

play00:40

questions that will require you to write

play00:42

prologue code

play00:44

so be sure that you spend enough time

play00:46

reviewing this chapter

play00:48

and also once the practical for

play00:51

logic programming has been released

play00:53

please be

play00:54

sure that you do that practical and that

play00:58

you can write programs of

play01:01

the level of complexity that

play01:04

is presented within those practical

play01:07

questions

play01:08

as well as the example programs that are

play01:11

provided in

play01:12

this chapter

play01:15

these are the topics that we'll be

play01:17

discussing in today's

play01:19

lecture so we will be looking

play01:22

at the theoretical underpinnings behind

play01:25

logic programming languages

play01:27

and prologue in particular and then

play01:30

in the second and third lectures

play01:34

of this chapter we will be looking at

play01:37

specifics related to the prologue

play01:40

programming language so we will begin

play01:42

then with an introduction

play01:44

to logic programming we'll then

play01:48

look at predicate calculus and do a

play01:50

quick introduction

play01:52

on that and we will only be looking at

play01:54

predicate calculus

play01:56

on the level that is required in order

play01:58

to understand

play02:00

how prologue programs are constructed

play02:03

so this won't go to the kind of level

play02:06

that it might be

play02:07

covered in a mathematics course for

play02:09

example

play02:11

we'll then look at how predicate

play02:12

calculus is used to

play02:14

prove theorems and this is the

play02:17

core concept that logic programming is

play02:20

based on

play02:21

and then we'll finish off with a general

play02:23

overview on

play02:25

logic programming and how it works

play02:28

this will then lay the foundation for

play02:30

the

play02:31

next part of the discussion which will

play02:33

start in the next lecture

play02:35

where we will actually look at practical

play02:38

logic programming

play02:39

and the prologue programming language

play02:43

now when we talk about logic programming

play02:46

languages

play02:47

we are generally referring to the

play02:50

prologue programming language which

play02:52

we'll be focusing on

play02:53

in the next few lectures there are other

play02:57

logic programming languages that exist

play02:59

within this paradigm however they're not

play03:01

very widely used

play03:03

and so in general when we refer to logic

play03:07

programming we're referring either to

play03:09

prologue in its original formation

play03:11

or a dialect of prologue so logic

play03:15

programming then is also often referred

play03:17

to

play03:17

as declarative programming and these

play03:21

programming languages consequently are

play03:23

often referred to then as declarative

play03:25

programming languages so programs then

play03:29

written in a logic programming language

play03:32

are expressed in the form of symbolic

play03:35

logic which we'll be focusing on

play03:37

in this lecture and then behind the

play03:40

scenes

play03:41

there is what is referred to as an

play03:44

inferencing

play03:45

engine which uses a logical inferencing

play03:48

process

play03:48

in order to produce results so what this

play03:52

thing

play03:52

means is that our programming is

play03:55

declarative in nature

play03:56

rather than procedural and what this

play03:59

means

play04:00

is that in our programs we specify what

play04:03

the results should look like but we

play04:06

don't specify

play04:07

in detail the procedure that should be

play04:09

followed to produce those results

play04:12

because the procedure is handled

play04:14

entirely by

play04:15

the logical inferencing engine that runs

play04:18

behind the scenes

play04:20

so this is therefore a very drastic

play04:23

departure from

play04:24

the kind of philosophy that we have

play04:27

followed with imperative

play04:28

programming languages as well as

play04:31

functional programming languages

play04:36

we will now move our discussion onto

play04:38

predicate calculus

play04:39

which is the core concept that underlies

play04:43

logic programming but before we can

play04:46

discuss predicate calculus itself

play04:49

we need to define a number of terms

play04:52

so the first concept we'll look at is

play04:54

the proposition

play04:56

a proposition is a simple logical

play04:59

statement

play05:00

that may be true or untrue

play05:03

and propositions are the things that

play05:06

logic languages

play05:07

reason about so proposition then

play05:10

consists

play05:10

of one or more objects and then

play05:14

it also defines relationships among

play05:17

objects

play05:18

now these objects are abstract things

play05:21

that the logic system reasons about

play05:25

they don't have intrinsic meaning other

play05:27

than to

play05:28

us as programmers so we may for example

play05:32

have an object called mary and we

play05:35

as humans may then associate mary with

play05:38

a specific person however as far as

play05:41

a logic programming language goes mary

play05:44

is simply an abstract object that it

play05:47

reasons

play05:48

about an example of a proposition

play05:51

related to mary

play05:53

may then be that mary is female for

play05:56

example

play05:57

and this then specifies a relationship

play06:00

to other objects that may not be female

play06:03

for example

play06:05

we can also create a proposition such

play06:08

as mary is married to joe for example

play06:13

the there are then two objects that are

play06:16

incorporated within this proposition

play06:19

mary and joe

play06:20

and the relationship that is specified

play06:22

is that they

play06:23

are married once again this relationship

play06:26

is an abstract notion so

play06:30

female and married have a

play06:33

meaning for us as humans but to the

play06:36

logic

play06:36

programming language these are just

play06:38

abstract relationships

play06:40

that it reasons about

play06:44

next we have the concept of symbolic

play06:47

logic

play06:48

so a symbolic logic system allows us to

play06:51

specify

play06:52

three things which are the basic

play06:54

building blocks

play06:55

of formal logic so first of all the

play06:58

symbolic logic system allows us

play07:00

to express propositions and we've spoken

play07:02

about propositions a moment ago

play07:05

so a proposition might be something like

play07:08

peter is the father of joe or

play07:11

peter is male then a symbolic logic

play07:16

system also allows us to express

play07:17

relationships between propositions

play07:20

so we can link the previous two example

play07:22

propositions

play07:23

using a logical and so we can say

play07:26

peter is the father of joe and

play07:30

peter is also male and then

play07:33

finally a symbolic logic system allows

play07:36

us to describe how new propositions can

play07:38

be

play07:38

inferred from other propositions so

play07:42

we would use simple rules to describe

play07:45

these kinds of inferences

play07:47

and we might say something like if x is

play07:50

the father of

play07:51

someone else then it implies

play07:55

that x must also be male

play07:59

so first order predicate calculus then

play08:02

is a particular form of symbolic logic

play08:05

and it is the symbolic logic system that

play08:08

is used for logic programming so

play08:11

we will from this point on and

play08:14

throughout the remainder of chapter 15

play08:17

focus on

play08:18

first order predicate calculus

play08:22

now we've spoken about objects within

play08:26

a symbolic logic system such as first

play08:28

order predicate calculus

play08:30

a little earlier in this lecture now

play08:33

within

play08:34

propositions objects are represented

play08:37

by simple terms simple terms can take on

play08:41

two forms so firstly we have constants

play08:43

and secondly we have

play08:45

variables so a constant then is a symbol

play08:49

in other words some kind of name that

play08:51

represents

play08:52

a specific object within the domain that

play08:55

we are reasoning about

play08:57

so if we go back to my previous example

play09:00

peter

play09:01

and joe are both examples of constants

play09:05

once again to us as humans they

play09:07

represent specific

play09:09

people with specific characteristics

play09:11

however as far

play09:12

as first order predicate calculus goes

play09:15

these are just

play09:16

abstract objects that the system

play09:19

reasons then secondly we have

play09:23

variables now it's very important to

play09:25

understand that these

play09:26

variables are not like variables

play09:29

in imperative programming languages like

play09:31

c plus

play09:32

so they don't represent memory spaces

play09:35

that we can assign values to

play09:38

instead they are symbols so

play09:41

once again names that represent

play09:44

different objects

play09:45

at different times and we will use

play09:48

variables within rules

play09:50

so if we go back to my previous example

play09:54

we specified a rule

play09:57

which stated that if x is the father of

play10:01

somebody else

play10:02

then x must also be male

play10:05

so x in this case then is a variable it

play10:09

represents

play10:10

any object that satisfies the

play10:13

original condition that x must be the

play10:16

father of somebody else

play10:18

and then we can conclude something from

play10:21

that particular condition

play10:27

constants and variables can be used

play10:29

within

play10:30

propositions recall that a proposition

play10:34

is a logical statement of something that

play10:37

might be true

play10:38

or not true so we'll turn our attention

play10:43

once again to propositions but look at

play10:45

them in some more detail

play10:47

and there are two types of propositions

play10:50

that we

play10:51

can construct firstly atomic

play10:53

propositions

play10:54

and then secondly compound propositions

play10:57

so we'll look first of all at atomic

play11:00

propositions which are simpler

play11:02

and then in a moment we'll get on to

play11:05

compound

play11:06

propositions so atomic propositions then

play11:09

are the simplest kinds of propositions

play11:12

that we can construct

play11:14

and they are represented by means of

play11:17

what we call compound

play11:18

terms so a compound term

play11:21

represents a mathematical relation

play11:25

and it is written like a mathematical

play11:27

function

play11:30

so let's take a closer look at these

play11:33

compound

play11:33

terms that are used to define atomic

play11:36

propositions

play11:38

a compound term is composed of two parts

play11:41

so firstly we have the functor which is

play11:44

simply the

play11:44

name of the relation then following the

play11:48

functor we have a pair of parentheses

play11:50

and within the parentheses we have an

play11:52

ordered list of parameters

play11:54

these parameters together form a tuple

play11:57

so if there's only one parameter then we

play11:59

have a one tuple

play12:00

there two parameters we have a two tuple

play12:02

and so on and so forth

play12:04

there's no limit on the number of

play12:07

parameters that a compound term

play12:09

can have although a compound term must

play12:12

have

play12:13

at least one parameter present

play12:16

So let's look at some examples of compound terms that define atomic propositions. Starting with the first one, we see that the functor in this case is student. Note that our functors always start with lowercase letters, and we will be using this convention from now on, because Prolog requires functors to be named with a word that starts with a lowercase letter. Then we can see that we have a set of parentheses, and there's only one parameter contained within the parentheses, so this means we're dealing with a 1-tuple.

john, in this case, is a constant; in other words, it represents a specific object that we can reason about. Note also that our constants start with lowercase letters, and again this is a convention that we will be following, because Prolog enforces this kind of naming. So if we look at our first compound term, we are essentially specifying some sort of property or characteristic for the constant john. Now again, whatever meaning is associated with the relation and the constant john is attached by humans. We would typically interpret this as "john is a student", but as far as our logic system goes, there is no intrinsic meaning associated with either: john could represent anything (it's just a generic object that is reasoned about), and student could represent any kind of characteristic or property associated with john, so it may represent something entirely different, like "john is male", as far as the logic system is concerned.

In the next three examples we again have a functor in each case, and in all three cases the functor is like. We can see that we have two parameters in each case, separated by a comma. Once again, all of the parameters are constants; in other words, they represent individual, unique objects that we can reason about. Because these are 2-tuples, we are specifying some kind of relationship between the two parameters. As humans we would typically interpret the first as "seth likes osx", but once again, as far as the logic system is concerned, there's no specific intrinsic meaning associated with any of this. This relation could mean that osx likes seth, or that seth is a subclass of osx, or that the two are related in some other kind of way. The logic system doesn't really care what this relation is; it's simply an abstract relation between these two constants, and seth and osx just represent abstract objects that are reasoned about. We would interpret the second example as "nick likes windows", and the third as "jim likes linux".

Propositions such as the atomic propositions that we've just discussed can be stated in two forms. Firstly, a proposition can be a fact: this is a proposition that is assumed to be true; in other words, it's some kind of axiomatic truth that our logic system assumes. Secondly, a proposition can represent a query: in this case we are presenting the proposition to our logic system or logic programming language, and we are asking the system to determine the truth of the proposition. For example, if we have the atomic proposition student(john), then we can read it either as a fact or as a query. If we read it as a fact, we are stating that john is a student, and this is something that our logic system can assume to be true. As a query, we are asking a question related to this proposition: we are asking "is john a student?", or more accurately, we are asking the logic system or the logic programming language "can you prove that john is in fact a student?"
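The fact/query distinction can be sketched in ordinary Python (not Prolog; just an illustration of the idea). Here facts are stored as a set of tuples, and a query simply checks whether a proposition is provable from them; the names `facts` and `query` are our own, not part of any logic system.

```python
# A tiny illustration (Python, not Prolog) of facts versus queries:
# facts are propositions the system assumes true; a query asks whether
# a proposition can be shown true from those assumed facts.
facts = {
    ("student", "john"),       # fact: student(john)
    ("likes", "seth", "osx"),  # fact: likes(seth, osx)
}

def query(*proposition):
    """Return True if the proposition is among the assumed facts."""
    return tuple(proposition) in facts

# query("student", "john") succeeds; query("student", "jim") fails,
# because nothing lets the system prove it.
```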

So far we have looked at atomic propositions. We'll now move on to the second kind of proposition that we can define, namely compound propositions. Compound propositions are more complex than atomic propositions, because they contain two or more atomic propositions as components. These atomic propositions are then connected by means of logical operators.

This table summarizes the logical operators that can be used to connect atomic propositions. First of all we have the negation symbol; in the example of its use, a would be an atomic proposition, and we would read this as "not a". Conjunctions are represented by the inverted-cup symbol (∧): here we have two atomic propositions, a and b, connected with a conjunction, which we read as an "and", so we would have "a and b". A disjunction uses the cup symbol (∨) and is read as an "or", so here we have "a or b". Then we have equivalence, represented by the triple-lined equals symbol (≑); with it connecting a and b, we would read this as "a is equivalent to b". Finally, we have our implication symbols, which can point in either direction, right or left. So over here we have "a implies b"; in a more familiar form, we could read that as "if a is the case, then b is also the case". We can also write implications the other way around: here we have "b implies a", which we could read as "if b is the case, then a is also the case".
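Since these connectives are just truth functions, their behaviour can be checked with plain Python booleans; in particular, the implication "a implies b" is equivalent to "(not a) or b". This is only a truth-table sketch, not logic-programming code, and the function names are our own.

```python
# Truth-functional sketch of the logical operators (Python, not Prolog).
def negation(a):        return not a         # not a
def conjunction(a, b):  return a and b       # a and b
def disjunction(a, b):  return a or b        # a or b
def equivalence(a, b):  return a == b        # a is equivalent to b
def implication(a, b):  return (not a) or b  # a implies b

# An implication is false only when the antecedent is true
# and the consequent is false; every other row holds.
```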

Now variables, which we've mentioned briefly before, can also appear within propositions. The only way that we can introduce a variable is by using what is called a quantifier. We have two quantifiers: the universal quantifier and the existential quantifier. The symbols used here are an inverted A (βˆ€) to represent the universal quantifier, and a reversed E (βˆƒ) to represent the existential quantifier. For the universal quantifier, we would read the example notation as "for all X, P is true". For the existential quantifier, we would read the example notation as "there exists at least one value for X such that P is true".

Here are two concrete examples of how we would actually use the universal and existential quantifiers with variables. First we have the universal quantifier, and the way we would read this is: for all X, if X is a woman, then X must also be human. This is a universal truth that holds for all objects: if they satisfy the left condition, then the right proposition also holds. Then we have an example of the use of the existential quantifier, which we would read as: there exists at least one X such that mary is the mother of X and X is male. In other words, we are specifying here that mary has at least one male child.
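Over a finite set of objects, the two quantifiers behave like Python's built-in `all` and `any`; the following sketch (the domain and predicate sets are invented purely for illustration) checks both example propositions from the lecture.

```python
# Quantifiers over a finite domain (illustrative Python, not Prolog).
domain = ["alice", "mary", "tom"]
woman  = {"alice", "mary"}
human  = {"alice", "mary", "tom"}
mother = {("mary", "tom")}   # mother(mary, tom)
male   = {"tom"}

# Universal: for all X, woman(X) implies human(X).
# Implication is encoded as (not antecedent) or consequent.
forall_holds = all((x not in woman) or (x in human) for x in domain)

# Existential: there exists at least one X such that
# mother(mary, X) and male(X).
exists_holds = any(("mary", x) in mother and x in male for x in domain)
```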

So we've now discussed all of the components of first-order predicate calculus. However, it turns out that if we use these components to write propositions, there are a large number of different ways that we can write the same compound proposition. This is of course not a problem for mathematicians; however, in a computational system, such as the inferencing engine in a logic programming language, this can be an issue because of the additional ambiguity that it introduces. In order to counteract this, we use a standard form for all of our propositions, and this form is called clausal form. In clausal form the existential quantifier is not used. The universal quantifier is used, but only when variables are involved, and therefore its use is implicit: we never explicitly write down the universal quantifier. We also only need the and, or, and left-implication operators; the other logical operators are not required.

If we write a compound proposition that involves an implication, then this is the standard form that we will use. The A's and B's here are terms; in other words, they are atomic propositions. On the right-hand side of the implication we have a series of A's, all connected by means of logical ands; the implication points to the left; and on the left-hand side we have a series of B's connected by logical ors. The way that we interpret this is: if all of the A's are true, then at least one of the B's must also be true. The terminology that we use for the two sides of such an implication is antecedent for the right side and consequent for the left side, because the left side is the consequence of the right side being true.

Here is an example of a concrete proposition, involving an implication, which we could express using clausal form. The way that we would read this is: if bob likes fish, and trout is a fish, then this implies that bob also likes trout.
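The clausal-form reading ("if all of the A's are true, then at least one of the B's must also be true") can be checked mechanically over a set of terms assumed true. This is a Python sketch with invented names, not how an inferencing engine actually works.

```python
# Clausal form sketch: B1 or B2 or ... <- A1 and A2 and ...
# (Python illustration only; names are our own.)
def clause_holds(b_terms, a_terms, true_terms):
    """If every antecedent term is true, at least one consequent term must be."""
    if all(a in true_terms for a in a_terms):
        return any(b in true_terms for b in b_terms)
    return True  # antecedent not satisfied, so the implication holds vacuously

true_terms = {"likes(bob, fish)", "fish(trout)", "likes(bob, trout)"}
# likes(bob, trout) <- likes(bob, fish) and fish(trout)
ok = clause_holds(["likes(bob, trout)"],
                  ["likes(bob, fish)", "fish(trout)"],
                  true_terms)
```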

Now, if we only use first-order predicate calculus to express propositions, we have a system that is not of much practical use to us. What we really want to be able to do is to infer facts from propositions that we already know about. These known propositions are axiomatic facts, and we assume the truth of these facts. Resolution is an inference principle which allows us to infer propositions, where we compute these new propositions from given propositions.

Over here we have a simple example of the resolution process, just to illustrate the concept, and we will assume that we are given these two propositions, which are assumed to hold. The first proposition states that if joanne is the mother of jake, then it implies that joanne is also older than jake. The second proposition states that if joanne is older than jake, then it implies that joanne is also wiser than jake. Now we can apply the resolution process, given these propositions that we assume, and derive a new proposition that states: if joanne is the mother of jake, then it implies that joanne is also wiser than jake. What's actually happened here is that a matching process has been performed: we've matched the consequent of the first proposition with the antecedent of the second proposition (you can see that those are the same), and this then allows us to infer that the antecedent of the first proposition implies the consequent of the second proposition.
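The matching step in this example (the consequent of one rule matches the antecedent of another, yielding a new rule) can be sketched directly. This is a deliberately simplified Python illustration of the single-match case, not a full resolution engine; the function name `resolve` is our own.

```python
# Resolution sketch (Python): a rule is a pair (antecedent, consequent).
# Matching the consequent of rule1 with the antecedent of rule2
# lets us infer a new rule from rule1's antecedent to rule2's consequent.
def resolve(rule1, rule2):
    antecedent1, consequent1 = rule1
    antecedent2, consequent2 = rule2
    if consequent1 == antecedent2:        # the matching step
        return (antecedent1, consequent2)
    return None                           # no match, nothing inferred

rule1 = ("mother(joanne, jake)", "older(joanne, jake)")
rule2 = ("older(joanne, jake)", "wiser(joanne, jake)")
derived = resolve(rule1, rule2)
# derived is ("mother(joanne, jake)", "wiser(joanne, jake)")
```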

Now, variables can of course appear within propositions, and if this is the case we need to use what is called the unification process. The unification process tries to find values for variables in propositions that will allow the matching process performed by resolution to succeed. An instantiation, then, is an assignment of a temporary value to a variable in order to allow unification to succeed. After we've instantiated a variable with a value, the matching process performed by resolution may fail; in that case we need to backtrack, which means that we instantiate our variable with a different value and then begin the matching process from the beginning again. We may need to backtrack several times, in other words trying different instantiations of values for our variable, until eventually the matching process succeeds and resolution concludes.

Now, what's important to understand here is that the variables we use in first-order predicate calculus are not variables as they are used in imperative programming languages. These variables only receive values during the unification process, by means of instantiation, and in a logic programming language this instantiation process is performed by the inferencing engine in the background; the instantiation is not performed by the programmer themselves.
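Instantiation with backtracking can be mimicked by trying each candidate value for a variable in turn until matching succeeds. This is a deliberately naive Python sketch (real Prolog unification is far more general); the helper name `find_instantiation` and the candidate values are invented for illustration.

```python
# Naive instantiate-and-backtrack sketch (Python, not real unification).
def find_instantiation(candidates, matches):
    """Try each candidate value for the variable; backtrack on failure."""
    for value in candidates:
        if matches(value):   # matching succeeded: resolution can proceed
            return value
        # matching failed: "backtrack" and try the next instantiation
    return None              # no instantiation makes the match succeed

facts = {("mother", "mary", "tom")}

# Query: mother(mary, X) -- for which X does the match succeed?
result = find_instantiation(
    ["alice", "bob", "tom"],
    lambda x: ("mother", "mary", x) in facts,
)
# "alice" and "bob" fail and are backtracked past; "tom" succeeds.
```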

In first-order predicate calculus we want to be able to prove theorems. A theorem essentially is a statement that we make, and we want to be able to prove the truth of this statement. Resolution can be used to prove theorems. We have hypotheses, which are a set of propositions that state a theorem; we can, in other words, have multiple propositions that are part of our theorem. We then have a goal, which is the negation of our theorem, and it is stated as a proposition. We can then use proof by contradiction in order to prove our theorem: a theorem is proved if we can find an inconsistency in the goal, in other words in the negation of the theorem. Essentially, if we can find a contradiction to the goal, then, because the goal is the negation of the theorem, we have proved the theorem to be true.

Now, we saw previously in this lecture that we adopt clausal form for expressing our propositions. The reason we did this is that there are a lot of different ways that we can express logical propositions, and so using clausal form we have a simplified, standardized way to describe and express our propositions within a first-order predicate calculus system.

Now, even if we use clausal form, the resolution process that we've just discussed is often not very efficient, and therefore not practical. One way that we can address this problem is by assuming an even more restricted form in which to express our propositions, and this then simplifies the resolution process drastically. In this restricted representation we use Horn clauses, and Horn clauses can have only two forms.

First of all, we have headed Horn clauses. These are implications, with a single atomic proposition on the left, which is the consequent, and then possibly multiple atomic propositions on the right, linked by logical ands. Over here we have an example of a headed Horn clause, and we would read it as: if bob likes fish, and trout is a fish, then this implies that bob also likes trout.

The second kind of Horn clause is a headless Horn clause, where essentially the left side of the implication, in other words the consequent, is empty. Over here we have an example of a headless Horn clause, and we would simply read it as: bob is the father of jake. Headless Horn clauses are typically used to state facts that we know about the environment or the world that we are reasoning about.

Most, but not all, propositions can be stated as Horn clauses. In our case, for logic programming, Horn clauses are sufficient for what we need to be able to express.
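The two Horn-clause forms support a very simple inference sketch: headless clauses are plain facts, headed clauses have one consequent and a conjunction of antecedents, and we repeatedly fire any rule whose antecedents are all known until nothing new can be derived. This forward-chaining loop is our own illustrative Python, not how Prolog's engine (which works backwards from a goal) actually proceeds.

```python
# Horn-clause sketch (Python): headless clauses are facts; headed clauses
# pair one consequent with a list of antecedents. Repeatedly fire rules
# whose antecedents are all known until no new fact can be derived.
facts = {"likes(bob, fish)", "fish(trout)"}                 # headless clauses
rules = [
    ("likes(bob, trout)", ["likes(bob, fish)", "fish(trout)"]),  # headed clause
]

changed = True
while changed:
    changed = False
    for consequent, antecedents in rules:
        if consequent not in facts and all(a in facts for a in antecedents):
            facts.add(consequent)   # the implication fires: infer the consequent
            changed = True

# facts now also contains "likes(bob, trout)"
```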

So we've now looked at first-order predicate calculus and its various components, and we've also looked at the resolution process, as well as the unification process, which occurs when we perform resolution in the presence of variables. We're now ready to move on to logic programming, but at this point we're only going to talk about the overarching philosophy that underlies logic programming; we won't be looking at any actual logic programming code. In the next two lectures we'll be moving on to the Prolog programming language, and there we will look at practical logic programming.

We've already mentioned that logic programming languages are declarative programming languages, and this is because the semantics of these languages is declarative in nature. What this means is that there's a simple way to determine the meaning of each statement within a logic program, and the statement meaning is also context independent: we can determine the meaning of a statement by simply looking at that statement on its own, without having to refer to any other statements.

Now, this is not the case in an imperative programming language such as C++ or Java. Let's say, for example, that we have an arithmetic statement in C++ that involves variables. In order to understand the meaning of the statement, in other words the result produced by the statement, we need to know what the values of the variables are; but the only way we can determine the variables' values is by understanding the entire execution of the program up to that point. This means that statements in an imperative programming language are not context independent, because understanding them requires understanding of previous statements that have been executed. In a logic programming language, we should be able to just look at a statement on its own and understand its meaning, without having to refer to any previous statements that have been executed.

Now, this also implies that the order of statements in a logic program should not be important, and as we'll see in the next two lectures, this isn't strictly speaking true. So logic programming languages such as Prolog are not entirely context independent, but they are more context independent than imperative programming languages are. What this means is that the semantics of a logic programming language will be much simpler than the semantics of an imperative programming language.

Now, programming in a logic programming language is non-procedural in nature. What this means is that the programs we write don't state how a result is to be computed, but rather what form the result takes. We use predicate calculus, which is our descriptive mechanism, for specifying our programs, and then the resolution process is used in order to get a result. The resolution process is executed by the inferencing engine that runs in the background of a logic programming language. This, in essence, allows the programmer to focus on the meaning of their programs, rather than on the actual mechanisms for achieving those meanings, and so this should in theory make programs far easier to write in a logic programming language.

In order to illustrate the non-procedural nature of programs implemented in a logic programming language, we'll look at a fairly simple example that is commonly used to illustrate these concepts. The problem that we will look at is how to sort a list of values.

If we use an imperative programming language like C++ or Java, we have to use a procedural approach. What this means is that we need to write an algorithm that will sort the list, and our computer will then execute the steps of the algorithm. Now, there are a number of drawbacks associated with this approach. First of all, we need to decide on a sorting algorithm. As you should recall from your data structures course, there are quite a number of different sorting algorithms, and each one has various benefits and drawbacks, so there are trade-offs that we need to consider before we choose a sorting algorithm. We also have to consider things like the time complexity of the algorithm, and the resource usage (for example, the memory) that the sorting algorithm will require. Once we've selected a sorting algorithm, we actually need to implement it, and as you should also recall, sorting algorithms are relatively complex, so this allows quite a lot of scope for bugs and errors in our implementation. All of this requires a lot of effort on the part of the programmer.

Now, in a logic programming language we follow a non-procedural approach. What this means is that we describe the characteristics that a sorted list has, and not the process that will be followed in terms of rearranging the elements contained in the list. Over here we have an example of how we would express the nature of a sorted list, in terms of an unsorted list, in a logic programming language. We will assume that a list contains n elements, and that we can index these elements starting at index 1 and ending at index n. The first thing we need to do is specify what a sorted list looks like. We can say that a list is sorted if, for all values of j, where j acts as our index within the list and takes on the values 1 up to 1 less than n, the value contained in the list at index j is less than or equal to the value contained in the list at index j + 1. What we are essentially expressing here is that the list must be a monotonically increasing (more precisely, non-decreasing) sequence of values: in other words, each next value is either the same as the previous value or larger than the previous value.
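The condition just stated, for all j from 1 to n - 1, list(j) ≀ list(j + 1), translates directly into executable form. A small Python check (0-based indexing rather than the lecture's 1-based indexing):

```python
# Declarative sorted-list test: every element is <= its successor.
def is_sorted(lst):
    n = len(lst)
    # for all j in 1..n-1 (0-based: 0..n-2): lst[j] <= lst[j+1]
    return all(lst[j] <= lst[j + 1] for j in range(n - 1))

# A non-decreasing list satisfies the condition; repeated values are fine.
```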

So now we're ready to express what a sorted list looks like in terms of an unsorted list. Here, old list is our unsorted list, and new list is a sorted version of that list. What we are specifying is that new list is a sorted version of old list if new list is a permutation of the old list (in other words, a reordering of the values contained in the old list), and new list is sorted according to our previous definition of what a sorted list looks like. This is then a relatively compact expression of what a sorted list looks like. However, what I would like you to consider at this point (pause the video to think about this for a moment) is what drawback is associated with this kind of expression of how to sort a list.

Essentially, we need to think about how Prolog will actually go about sorting this list. What will happen is that Prolog, or any other logic programming language, will generate a permutation of our old list and then test to see whether this permutation is sorted. If this permutation is not sorted, then it generates another permutation of the old list, and again checks to see whether this permutation is sorted. What this means is that we may potentially run through every possible permutation of the old list before we happen to stumble on a sorted version of that list. This is obviously a very inefficient approach to sorting. So this is one of the major drawbacks associated with logic programming in general: it's often fairly easy to express the characteristics of a solution, but the inferencing process that runs in the background may follow a very inefficient procedure in order to arrive at a solution.
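The generate-and-test behaviour described above, produce a permutation, check whether it is sorted, and repeat, can be made explicit in Python. With n elements there are n! permutations in the worst case, which is exactly the inefficiency the lecture warns about; this sketch is an illustration of the behaviour, not of Prolog's actual implementation.

```python
# Generate-and-test "sort": enumerate permutations of the old list
# until one satisfies the declarative sorted-list test.
from itertools import permutations

def is_sorted(lst):
    return all(lst[j] <= lst[j + 1] for j in range(len(lst) - 1))

def permutation_sort(old_list):
    for candidate in permutations(old_list):  # up to n! candidates
        if is_sorted(candidate):              # test each permutation
            return list(candidate)

# Sorting even a short list may test many permutations before one
# happens to satisfy the condition -- hence the inefficiency.
```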

Right, so that then concludes our discussion of logic programming from an overall philosophical perspective. In the next lecture we will move on to actual, concrete logic programming in the Prolog programming language.
