COS 333: Chapter 16, Part 1
Summary
TL;DR: The lecture introduces logic programming, emphasizing its importance in exams and urging students to practice Prolog code. It covers the theoretical foundations, focusing on predicate calculus, which is essential for understanding how Prolog programs are constructed. The non-procedural nature of logic programming is highlighted, contrasting it with imperative and functional programming. The lecture also discusses the inefficiency of logic programming for certain tasks, such as sorting, due to the potential need to generate and test all permutations.
Takeaways
- 📚 The lecture introduces Chapter 16 on logic programming and Prolog, emphasizing its importance for exams and tests.
- 🎯 Students are expected to write Prolog code and understand the complexity of practical logic programming tasks.
- 🔍 The theoretical underpinnings of logic programming, specifically predicate calculus, are essential to grasp Prolog programming.
- 📝 Logic programming is based on a declarative paradigm, contrasting with the procedural nature of imperative and functional programming languages.
- 🤖 The inferencing engine in logic programming languages like Prolog automatically handles the procedure to achieve desired results based on logical rules.
- 📈 Predicate calculus, including proposition formation, relationships, and symbolic logic, is the core concept underlying logic programming.
- 🔑 First-order predicate calculus is the specific form of symbolic logic used in logic programming, involving constants, variables, and compound terms.
- 🔍 Atomic and compound propositions, along with logical operators, are used to construct the logic within Prolog programs.
- 🔄 The resolution principle and unification process are key to inferring new facts from existing axiomatic propositions in logic programming.
- 📉 The use of clausal form and Horn clauses simplifies the resolution process in logic programming, making it more computationally efficient.
- 🚀 The non-procedural nature of logic programming allows for simpler program writing by focusing on the desired outcomes rather than the steps to achieve them.
Q & A
What is the focus of Chapter 16 in the lecture series?
-Chapter 16 focuses on logic programming, the Prolog programming language, and its theoretical underpinnings. It includes an introduction to logic programming, predicate calculus, and an overview of how logic programming works.
Why is Chapter 16 considered important for students?
-Chapter 16 is important because it carries significant weight in the second semester test and the exam, with many marks dedicated to it. Students are expected to write Prolog code and understand the complexity of programs presented in practical questions and example programs.
What is the fundamental concept that logic programming is based on?
-The fundamental concept that logic programming is based on is predicate calculus, which is used to prove theorems and construct Prolog programs.
What is the difference between logic programming and imperative or functional programming?
-Logic programming is declarative and non-procedural, meaning programmers specify what results should look like without detailing the procedure to achieve those results. This contrasts with imperative and functional programming, which are more focused on the steps or procedures to produce results.
What is a proposition in the context of logic programming?
-A proposition is a simple logical statement that may be true or untrue. It consists of objects and defines relationships among those objects, which are abstract entities that the logic system reasons about.
Can you explain the concept of symbolic logic in logic programming?
-Symbolic logic in logic programming allows the expression of propositions, relationships between propositions, and the inference of new propositions from existing ones using logical rules. It forms the basis for the logic programming paradigm.
What are the two types of terms used to represent objects in propositions within first-order predicate calculus?
-The two types of terms used to represent objects in propositions are constants, which represent specific objects, and variables, which represent different objects at different times and are used within rules.
What is the significance of clausal form in first-order predicate calculus?
-Clausal form is a standard form used to express propositions in first-order predicate calculus. It simplifies the expression of logical propositions and is designed to reduce the ambiguity that can arise from multiple ways of writing the same compound proposition.
What is the resolution process in logic programming, and why is it used?
-The resolution process is an inference principle used in logic programming to infer new propositions from given ones. It computes these new propositions by matching and unifying variables in propositions, allowing for the derivation of conclusions from premises.
What are Horn clauses, and how do they simplify the resolution process?
-Horn clauses are a restricted form of expressions in first-order predicate calculus that simplifies the resolution process. They can be either headed, with a single atomic proposition as the consequent, or headless, with an empty consequent, and are used to state facts or rules in a simplified manner.
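For concreteness, a minimal sketch of the two kinds of Horn clauses in Prolog syntax, using the lecture's fish/trout example (the predicate names are illustrative, not code from the lecture):
```prolog
% Headless Horn clause: a fact, stated as an axiomatic truth.
father(bob, jake).

% Headed Horn clause: a single atomic proposition as the consequent (head)
% and a conjunction of atomic propositions as the antecedent (body).
% Read as: bob likes trout if bob likes fish and trout is a fish.
likes(bob, trout) :- likes(bob, fish), fish(trout).
```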
Why might a logic programming approach to sorting a list be considered inefficient?
-A logic programming approach to sorting a list might be inefficient because it could potentially generate and test every possible permutation of the list to find a sorted version. This can be a very time-consuming process compared to traditional sorting algorithms.
Outlines
📘 Introduction to Logic Programming and Prolog
This paragraph introduces Chapter 16, focusing on logic programming and Prolog. It emphasizes the importance of this chapter for the second semester test and final exam, noting that students are expected to write Prolog code. The lecturer outlines the topics to be covered, including the theoretical foundations of logic programming, predicate calculus, and specifics of the Prolog language. The paragraph also explains that logic programming is a form of declarative programming, where programmers specify desired results rather than the procedures to achieve them, and introduces the concept of an inferencing engine that deduces results from logical statements.
🔍 Delving into Predicate Calculus and Symbolic Logic
The second paragraph delves into predicate calculus, a core concept underlying logic programming. It defines propositions and their role in logic programming, explaining how they can represent objects and relationships. The paragraph further explores symbolic logic, detailing how it allows the expression of propositions, relationships between them, and the inference of new propositions based on existing ones. First-order predicate calculus is highlighted as the specific system used in logic programming, with objects represented by constants and variables within propositions.
📚 Propositions and Their Structures in Logic Programming
This paragraph examines the structure of propositions in logic programming, distinguishing between atomic and compound propositions. Atomic propositions are the simplest, represented by compound terms that express mathematical relations. The paragraph provides examples of how these terms are constructed from a functor and a tuple of parameters. It also discusses the two forms in which a proposition can be stated: as a fact, assumed to be true, or as a query, which asks the logic system to prove it.
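As a rough illustration (reusing the constants from the lecture's examples, with the functors written in Prolog's lowercase convention), atomic propositions stated as facts might look like this:
```prolog
% Each atomic proposition is a compound term: a functor followed by a
% parenthesised tuple of parameters; functors and constants are lowercase.
student(john).        % a 1-tuple stating a property of the constant john
like(seth, osx).      % 2-tuples stating relations between two constants
like(nick, windows).
like(jim, linux).
```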
🔄 Understanding Logical Operators and Quantifiers
The fourth paragraph explains the logical operators used to connect atomic propositions in compound propositions, such as negation, conjunction, disjunction, equivalence, and implication. It also introduces quantifiers, which allow for the introduction of variables into propositions, either universally or existentially. Examples are provided to illustrate how these quantifiers can be used to express general truths or the existence of specific instances.
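In standard notation, the lecture's two quantifier examples can be sketched roughly as follows (symbol choice varies by textbook; the double arrow is used here for implication):
```latex
% Universal quantifier: for all X, if X is a woman then X is human.
\forall X\,(\mathit{woman}(X) \Rightarrow \mathit{human}(X))

% Existential quantifier: there exists at least one X such that
% mary is the mother of X and X is male.
\exists X\,(\mathit{mother}(\mathit{mary}, X) \land \mathit{male}(X))
```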
📉 Clausal Form and Resolution Principle in Predicate Calculus
The focus of this paragraph is on clausal form, a standardized way to express propositions in predicate calculus to reduce ambiguity. It discusses the use of Horn clauses, which simplify the resolution process, a fundamental inference principle for deriving new propositions from existing ones. The paragraph also explains the unification process, which is necessary when variables are present in propositions, and how instantiation and backtracking work in the resolution process.
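A sketch of the general clausal-form pattern described here, with the consequent on the left of the implication and the antecedent on the right, as in the lecture:
```latex
% Clausal form: if all of the A terms are true,
% then at least one of the B terms must be true.
B_1 \lor B_2 \lor \dots \lor B_n \;\subset\; A_1 \land A_2 \land \dots \land A_m
```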
📝 Theoretical Foundation of Logic Programming
This paragraph discusses the philosophical underpinnings of logic programming, contrasting it with imperative programming. It highlights the declarative and non-procedural nature of logic programming, where the focus is on what the result should be rather than how to compute it. The paragraph also touches on the simplicity of semantics in logic programming languages compared to imperative ones, and the implications of context independence in logic programs.
🔄 Non-Procedural Approach to Sorting in Logic Programming
The seventh paragraph illustrates the non-procedural nature of logic programming with a sorting example. Instead of detailing an algorithm, as in imperative programming, logic programming describes the characteristics of a sorted list. The paragraph explains how a sorted list can be defined in terms of an unsorted list, emphasizing the declarative approach where the program specifies the desired outcome without dictating the process to achieve it.
🚧 Inefficiency in Logic Programming: The Sorting Example
The final paragraph addresses a potential drawback of logic programming: inefficiency. Using the sorting example, it explains that a logic programming language like Prolog might generate and test every possible permutation of a list to find a sorted version, which is not practical. This highlights the challenge of expressing efficient solutions in logic programming, where the inferencing process may not always follow the most efficient path to a solution.
Keywords
💡Logic Programming
💡Prolog
💡Predicate Calculus
💡Inference Engine
💡Declarative Programming
💡Propositions
💡Resolution
💡Unification
💡Horn Clauses
💡Non-Procedural
Highlights
Introduction to Chapter 16 on logic programming and Prolog, emphasizing its importance in the curriculum and exams.
Expectation for students to write Prolog code and complete practical exercises to understand logic programming complexity.
Theoretical underpinnings of logic programming languages, focusing on Prolog, and its declarative nature.
Predicate calculus as the core concept of logic programming, with a simplified introduction for Prolog understanding.
The role of an inferencing engine in logic programming to produce results from symbolic logic.
Difference between logic programming and imperative or functional programming in terms of specifying results versus procedures.
Explanation of propositions as simple logical statements and their role in logic programming.
Introduction to symbolic logic and its three components: propositions, relationships, and inference rules.
First-order predicate calculus as the specific symbolic logic system used in logic programming.
Use of constants and variables in propositions within first-order predicate calculus.
Atomic and compound propositions construction and their significance in logic programming.
Logical operators for connecting atomic propositions and their use in compound propositions.
Quantifiers in first-order predicate calculus for introducing variables within propositions.
Standardization of propositions into clausal form to simplify the resolution process in logic programming.
Resolution principle as an inference process for deriving new propositions from given ones.
Unification process for variables during the resolution process to allow matching and inference.
Use of Horn clauses to simplify the expression of propositions and improve resolution efficiency.
Philosophy of logic programming as a declarative, non-procedural approach to programming.
Illustration of sorting a list in logic programming versus imperative programming to demonstrate the declarative nature.
Potential inefficiency in logic programming due to the inferencing process generating all permutations to find a solution.
Transcripts
we will now be moving on to chapter 16
which deals with logic programming and
the prolog
programming language and this is the
first
of three lectures that we will dedicate to
these topics as with chapter 15
where we covered functional programming
and scheme
this is a very important chapter
and there will be a lot of marks
dedicated to chapter
16 in the second semester test
as well as in the exam so please pay
attention to this chapter
once again i will definitely be asking
questions that will require you to write
prolog code
so be sure that you spend enough time
reviewing this chapter
and also once the practical for
logic programming has been released
please be
sure that you do that practical and that
you can write programs of
the level of complexity that
is presented within those practical
questions
as well as the example programs that are
provided in
this chapter
these are the topics that we'll be
discussing in today's
lecture so we will be looking
at the theoretical underpinnings behind
logic programming languages
and prolog in particular and then
in the second and third lectures
of this chapter we will be looking at
specifics related to the prolog
programming language so we will begin
then with an introduction
to logic programming we'll then
look at predicate calculus and do a
quick introduction
on that and we will only be looking at
predicate calculus
on the level that is required in order
to understand
how prolog programs are constructed
so this won't go to the kind of level
that it might be
covered in a mathematics course for
example
we'll then look at how predicate
calculus is used to
prove theorems and this is the
core concept that logic programming is
based on
and then we'll finish off with a general
overview on
logic programming and how it works
this will then lay the foundation for
the
next part of the discussion which will
start in the next lecture
where we will actually look at practical
logic programming
and the prolog programming language
now when we talk about logic programming
languages
we are generally referring to the
prolog programming language which
we'll be focusing on
in the next few lectures there are other
logic programming languages that exist
within this paradigm however they're not
very widely used
and so in general when we refer to logic
programming we're referring either to
prolog in its original formation
or a dialect of prolog so logic
programming then is also often referred
to
as declarative programming and these
programming languages consequently are
often referred to then as declarative
programming languages so programs then
written in a logic programming language
are expressed in the form of symbolic
logic which we'll be focusing on
in this lecture and then behind the
scenes
there is what is referred to as an
inferencing
engine which uses a logical inferencing
process
in order to produce results so what this
then
means is that our programming is
declarative in nature
rather than procedural and what this
means
is that in our programs we specify what
the results should look like but we
don't specify
in detail the procedure that should be
followed to produce those results
because the procedure is handled
entirely by
the logical inferencing engine that runs
behind the scenes
so this is therefore a very drastic
departure from
the kind of philosophy that we have
followed with imperative
programming languages as well as
functional programming languages
we will now move our discussion onto
predicate calculus
which is the core concept that underlies
logic programming but before we can
discuss predicate calculus itself
we need to define a number of terms
so the first concept we'll look at is
the proposition
a proposition is a simple logical
statement
that may be true or untrue
and propositions are the things that
logic languages
reason about so proposition then
consists
of one or more objects and then
it also defines relationships among
objects
now these objects are abstract things
that the logic system reasons about
they don't have intrinsic meaning other
than to
us as programmers so we may for example
have an object called mary and we
as humans may then associate mary with
a specific person however as far as
a logic programming language goes mary
is simply an abstract object that it
reasons
about an example of a proposition
related to mary
may then be that mary is female for
example
and this then specifies a relationship
to other objects that may not be female
for example
we can also create a proposition such
as mary is married to joe for example
there are then two objects that are
incorporated within this proposition
mary and joe
and the relationship that is specified
is that they
are married once again this relationship
is an abstract notion so
female and married have a
meaning for us as humans but to the
logic
programming language these are just
abstract relationships
that it reasons about
next we have the concept of symbolic
logic
so a symbolic logic system allows us to
specify
three things which are the basic
building blocks
of formal logic so first of all the
symbolic logic system allows us
to express propositions and we've spoken
about propositions a moment ago
so a proposition might be something like
peter is the father of joe or
peter is male then a symbolic logic
system also allows us to express
relationships between propositions
so we can link the previous two example
propositions
using a logical and so we can say
peter is the father of joe and
peter is also male and then
finally a symbolic logic system allows
us to describe how new propositions can
be
inferred from other propositions so
we would use simple rules to describe
these kinds of inferences
and we might say something like if x is
the father of
someone else then it implies
that x must also be male
so first order predicate calculus then
is a particular form of symbolic logic
and it is the symbolic logic system that
is used for logic programming so
we will from this point on and
throughout the remainder of chapter 16
focus on
first order predicate calculus
now we've spoken about objects within
a symbolic logic system such as first
order predicate calculus
a little earlier in this lecture now
within
propositions objects are represented
by simple terms simple terms can take on
two forms so firstly we have constants
and secondly we have
variables so a constant then is a symbol
in other words some kind of name that
represents
a specific object within the domain that
we are reasoning about
so if we go back to my previous example
peter
and joe are both examples of constants
once again to us as humans they
represent specific
people with specific characteristics
however as far
as first order predicate calculus goes
these are just
abstract objects that the system
reasons about then secondly we have
variables now it's very important to
understand that these
variables are not like variables
in imperative programming languages like
c++
so they don't represent memory spaces
that we can assign values to
instead they are symbols so
once again names that represent
different objects
at different times and we will use
variables within rules
so if we go back to my previous example
we specified a rule
which stated that if x is the father of
somebody else
then x must also be male
so x in this case then is a variable it
represents
any object that satisfies the
original condition that x must be the
father of somebody else
and then we can conclude something from
that particular condition
constants and variables can be used
within
propositions recall that a proposition
is a logical statement of something that
might be true
or not true so we'll turn our attention
once again to propositions but look at
them in some more detail
and there are two types of propositions
that we
can construct firstly atomic
propositions
and then secondly compound propositions
so we'll look first of all at atomic
propositions which are simpler
and then in a moment we'll get on to
compound
propositions so atomic propositions then
are the simplest kinds of propositions
that we can construct
and they are represented by means of
what we call compound
terms so a compound term
represents a mathematical relation
and it is written like a mathematical
function
so let's take a closer look at these
compound
terms that are used to define atomic
propositions
a compound term is composed of two parts
so firstly we have the functor which is
simply the
name of the relation then following the
functor we have a pair of parentheses
and within the parentheses we have an
ordered list of parameters
these parameters together form a tuple
so if there's only one parameter then we
have a one tuple
if there are two parameters we have a two tuple
and so on and so forth
there's no limit on the number of
parameters that a compound term
can have although a compound term must
have
at least one parameter present
so let's look then at some examples of
compound terms that define
atomic propositions and starting with
the first
one we see that the functor in this case
is student now note
that our functors always start with
lowercase letters
and we will be using this convention
from now on
because prolog requires functors to
be named with a word that starts with a
lowercase
letter then we can see that
we have a set of parentheses and there's
only one parameter
contained within the parentheses so this
means we're dealing with a
one tuple now
john then in this case is a constant so
in other words
it represents a specific object that we
can
reason about note also that
our constants start with lowercase
letters
and again this is a convention that we
will be following
because prolog enforces this kind of
naming so if we then
look at our first compound term we are
essentially specifying some sort of
property
or characteristic for the constant
john now again whatever meaning is
associated with the
relation and the constant john
these are attached by humans
so we would in other words typically
interpret this
as john is a student but as far
as our logic system goes
there is no intrinsic meaning associated
with either so
john could represent anything it's just
a generic
object that is reasoned about and
student could
represent any kind of characteristic or
property
associated with john so it may represent
something entirely different like john
is male
for example as far as the logic system
is concerned so in the next three
examples over here we again have a
functor in each case and in all three
cases the functor is
like and we can see then that we have
two parameters in each case separated by
a comma
once again all of the parameters
are constants so in other words they
represent
individual unique objects that we can
reason about
so because these are then two tuples we
are specifying some kind of relationship
between the two parameters now of course
as humans we would typically interpret
this
as seth likes os x
but once again as far as the logic
system is concerned
there's no specific intrinsic meaning
associated with any of this
so this relation could then mean that os
x likes seth or seth is
a subclass of os x or the two
are related in some other kind of way as
far as the logic system is concerned it
doesn't really
care what this relation is it's just
simply
an abstract relation between these two
constants
and once again of course seth and osx
just represent abstract objects that are
reasoned about so then in our third
example over here we would interpret
this as nick likes
windows and in this case in the third
example jim
likes linux
so propositions such as the atomic
propositions that we've just discussed
can be stated in two forms firstly a
proposition
can be a fact so this is then a
proposition
that is assumed to be true in other
words it's some kind of axiomatic truth
that our logic system assumes
secondly a proposition can represent a
query
and in this case we are then presenting
the proposition
to our logic system or logic programming
language
and we are asking the system to
determine
the truth of the proposition so for
example
if we have the atomic proposition
student
john then we can read this either as a
fact or as a query
so we read it as a fact we are then
stating that john
is a student and this is something that
our logic system
can assume to be true
as a query we are asking a question
related to this proposition so we are
asking
is john a student or more accurately we
are asking the
logic system or the logic programming
language can you prove
that john is in fact a student
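(A minimal sketch of how this distinction surfaces in Prolog syntax, anticipating the next lectures; the predicate name follows the lecture's example.)
```prolog
% Stated in a program, the proposition is a fact the system assumes true:
student(john).

% Stated at the interactive prompt, the same proposition is a query,
% asking the system to prove it (it answers true or false):
% ?- student(john).
```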
so so far we have looked at atomic
propositions
we'll now move on to the second kind of
proposition that we can define
namely compound propositions so
compound propositions are more complex
than atomic propositions
and this is because they contain two or
more
atomic propositions as components
these atomic propositions then are
connected by means
of logic operators
this table summarizes the logical
operators
that can be used to connect atomic
propositions
so first of all we have the negation
symbol
and this is an example of its use so
here a would be
an atomic proposition and we would then
read this
as not a conjunctions
are represented by this inverted cup
symbol
so here we have two atomic propositions
a and b
and they are connected with a
conjunction we read a conjunction as an
and so we would have a and b
a disjunction uses the cup symbol
and this is read as an or so over here
we have
a or b then we have
equivalence represented by this triple
lined
equals symbol and here we have it then
connecting
a and b so we would read this as a is
equivalent to b and then finally we have
our implication symbols which can point
in either direction right or left
so over here we have a implies b
in a more familiar form we could read
that
as if a is the case then b
is also the case we can also write
implications the other way around
so over here we have b implies a
and we could also read that as if b is
the case
then a is also the case
now variables which we've mentioned
briefly before
can also appear within propositions
the only way that we can introduce a
variable is
by using what is called a quantifier
so we have two quantifiers the universal
quantifier and the existential
quantifier
the symbols used here are an inverted a
to represent the universal quantifier
and
a reversed e to represent the
existential quantifier
so this is an example of the kind of
notation we would use
for the universal quantifier we would
read this
as for all x p is
true then for the existential quantifier
here we have an example of the notation
that we would use
and we would read this as there exists
at least one value for x such
that p is true so
here are two concrete examples then of
how we would actually use the universal
and existential quantifiers with
variables
so over here we have the universal
quantifier
and the way we would read this is that
for all x where x
is a woman it also implies
that x must be human
so this is a universal truth that holds
for all
objects if they satisfy the left
condition over here
then the right proposition
also holds here we have an example of
the use
of the existential quantifiers so we
would read this as
there exists at least one x
such that mary is the mother of x
and x is also male
so in other words we are specifying here
that
mary has at least one male child
so we've now discussed all of the
components
of first order predicate calculus
however it turns out that if we use
these components
to write propositions then
there are a large number of different
ways that we can write
the same compound proposition
now this is of course not a problem for
mathematicians
however in a computational system
such as an inferencing engine in a logic
programming language
this can be an issue because of the
additional ambiguity that it introduces
so in order to counteract this we use a
standard form
for all of our propositions and this
form is
called clausal form so in clausal form
the
existential quantifier is not used
the universal quantifier is used however
it is only used when
variables are involved and therefore its
use
is implicit so we never explicitly write
down
the universal quantifier we also
only need to use the and or and left
implication
operators the other logical operators
are
not required now if we write a compound
proposition
that involves an implication then this
is the standard form
that we will use so the a's
and b's here then are terms in other
words
they are atomic propositions
so we can see then that on the
right hand side of the implication we
have a series of
a's and they are all connected by means
of logical ands
the implication points to the left
and then on the left hand side we have a
series of b's
that are connected by logical ors
so the way that we would then interpret
this is if all of the a's
are true then at least one of the b's
must also be true now the terminology
that we use
for the two sides of such an implication
of for the right side we use the term
antecedent and then for the left side we
use the term consequent and this is
because
the left side is the consequence
of the right side being true
so here's then an example of
a concrete proposition
which we could express using clausal
form that
involves an implication and the way that
we would read this
is if bob likes fish
and trout is a fish then this
implies that bob also likes
trout
now if we only use first order predicate
calculus
to express propositions we have a
system that is not of much practical use
to us
so what we really want to be able to do
is to infer
facts from propositions that we already
know about now these known propositions
are then
axiomatic facts and we assume the
truth of these facts resolution then
is an inference principle which allows
us to infer propositions
where we compute these new propositions
from
given propositions so over here we have
a simple example of the resolution
process
just to illustrate the concept and we
will assume that we
are given these two propositions over
here
which are assumed to hold
so the first proposition then states
that if joanne
is the mother of jake then it implies
that joanne is also older than jake
the second proposition states that if
joanne
is older than jake then it implies
that joanne is also wiser than jake
so now we can apply the resolution
process
given these propositions that we assume
and we can then derive a new proposition
that states that if joanne is the
mother of jake then it implies that
joanne
is also wiser than jake now what's
actually happened here
is that a matching process has been
performed
so what we've done is we've matched the
consequent
of the first proposition with the
antecedent of the second proposition
and you can see that those are the same
and this then allows us to infer
that the antecedent of the first
proposition implies the consequent
of the second proposition
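(A sketch of the same inference written as Prolog-style clauses; the names follow the lecture's example, and the derived clause is what resolution produces, not something the programmer writes.)
```prolog
% The two given propositions, written as head :- body:
older(joanne, jake) :- mother(joanne, jake).
wiser(joanne, jake) :- older(joanne, jake).

% Resolution matches the consequent of the first clause with the
% antecedent of the second and infers the new proposition:
%   wiser(joanne, jake) :- mother(joanne, jake).
```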
now variables can of course appear
within propositions
and if this is the case we need to use
what is called the unification process
so the unification process tries to find
values
for variables in propositions that will
allow the
matching process performed by resolution
to succeed so an instantiation then
is an assignment of a temporary value to
a variable
in order to allow unification to succeed
so after we've instantiated a variable
with a value
then this matching process performed by
resolution
may fail in that case we need to
backtrack
which means that we instantiate our
variable
with a different value and then we begin
the matching process
from the beginning again we may need to
backtrack
several times in other words trying
different instantiations
of values for our variable until
eventually the matching process succeeds
and resolution concludes
now what's important to understand here
is that
the variables that we use in first order
predicate calculus
are not variables as they are used
in imperative programming languages so
these variables only receive values
during the
unification process by means of
instantiation
and in a logic programming language this
instantiation process
is performed by the inferencing engine
in the background
the instantiation is not performed by
the programmer themselves
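(A small sketch of what instantiation looks like from the programmer's side in Prolog, reusing the earlier like/2 examples; the engine, not the programmer, chooses the bindings.)
```prolog
% Facts the system assumes to be true:
like(seth, osx).
like(nick, windows).
like(jim, linux).

% A query containing a variable (variables start with an uppercase letter).
% The inferencing engine instantiates X during unification, and backtracks
% to try other instantiations if a match fails:
% ?- like(X, windows).
% X = nick.
```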
in first order predicate calculus we
want to be able to prove
theorems so a theorem essentially is a
statement that we make and we want to be
able to prove the truth
of this statement so resolution then can
be used
to prove theorems so
we have hypotheses then and hypotheses
are a set of propositions
that state a theorem so we can have in
other words
multiple propositions that are part of
our theorem we then have a goal
which is the negation of our theorem
and it is stated as a proposition
and then we can use proof by
contradiction in order to prove our
theorem
so a theorem is proved if we can find an
inconsistency in the goal in other words
the negation of the theorem
so essentially if we can find a
contradiction
to the goal then because the goal is the
negation
of the theorem we have then proved the
theorem to be true now we saw previously
in this lecture that we adopt clausal
form
for expressing our propositions the
reason we did this
is because there are a lot of different
ways that we can
express logical propositions and so
using clausal form we have a simplified
standardized way
to describe and express our propositions
within a
first order predicate calculus system
now even if we use clausal form
the resolution process that we've just
discussed is often not
very efficient and therefore not
practical so one way that we can address
this problem
is by assuming an even more restricted
form in which we express our
propositions
and this then simplifies the resolution
process
drastically so in this restricted
representation
we use horn clauses and horn clauses
can have only two forms so first of
all we have headed horn clauses
and these are implications
and we have a single atomic proposition
on the left which is the consequent
and then we can have multiple atomic
propositions
on the right linked by logical
and so over here then we have an
example of a headed horn clause and we
would read this headed horn clause
as if bob likes fish
and trout is a fish then this
implies that bob also likes trout
the second kind of horn clause is a
headless horn clause
and essentially here the left side
of the implication in other words the
consequent
is empty so over here we have an example
of a headless horn clause and we would
simply read this
as bob is the father of
jake now headless horn clauses are
typically used
to state facts that we know about the
environment or the world that we are
reasoning about
so most but not all propositions can be
stated as
horn clauses in our case
for logic programming horn clauses are
sufficient for what we need to be able
to express
so we've now looked at first order
predicate calculus
as well as its various components and
we've also looked at the
resolution process as well as the
unification process
which occurs when we perform resolution
in the presence
of variables we're now ready to move
on to logic programming but at this
point we're only going to talk about the
overarching philosophy
that underlies logic programming we
won't be looking at
any actual logic programming code
but in the next two lectures we'll be
moving on to the prolog
programming language and there we will
look at
practical logic programming
so we've already mentioned that logic
programming languages
are declarative programming languages
and this
is because the semantics of these
languages
is declarative in nature so what this
means
is that there's a simple way to
determine the meaning
of each statement within a logic
program and the statement meaning is
also
context independent so what this means
is that we can determine the meaning of
a statement by simply looking at that
statement on its own
without having to refer to any other
statements
now this is not the case in an
imperative programming language such as
c++ or java so let's say for example
that we have a statement in c++
and this is an arithmetic statement
that
involves variables so in order to
understand
the meaning of the statement in other
words the result produced by the
statement
we need to know what the values of the
variables
are but the only way that we can
determine
what the variables values are is by
understanding the entire execution of
the program
up to that point so this then means that
statements
in an imperative programming language
are not context independent because
understanding them
requires understanding of previous
statements that have been
executed in a logic programming language
we should be able to just look at a
statement on its own
and understand the meaning without
having to refer to
any previous statements that have
been executed
now this also implies that the order of
statements
in a logic program should not be
important and as we'll see in the next
two lectures
this isn't strictly speaking true so
this means that logic programming
languages such as prolog are not
entirely context independent but they
are more context independent than
imperative programming languages are
so what this then means is that the
semantics
of a logic programming language will be
much simpler than the semantics of
an imperative programming language
now programming in a logic programming
language
is non-procedural in nature so what this
means
is that programs which we write don't
state how a result is to be computed
but rather what form the result takes
so we then use predicate calculus which
is our descriptive mechanism for
specifying
our programs and then the resolution
process is used in order to get a result
so the resolution process then is
executed by the
inferencing engine that runs in the
background
of a logic programming language
this then in essence allows the
programmer to focus
on the meaning of their programs rather
than the actual
mechanisms for achieving those meanings
and so this should in theory then make
programs far easier to write
in a logic programming language
so in order to illustrate the
non-procedural nature
of programs that are implemented in a
logic
programming language we'll look at a
fairly
simple example that is fairly commonly
used to illustrate these concepts
the problem that we will look at is how
to
sort a list of values so if we use
an imperative programming language like
c++
or java we have to use a procedural
approach so what this means is we need
to write
an algorithm that will sort the list and
then
our computer will execute the steps of
the algorithm
now there are a number of drawbacks
associated with this approach
first of all we need to decide on a
sorting algorithm
so as you should recall from your data
structures course
there are quite a number of different
sorting algorithms that exist
and each one has various benefits and
drawbacks so there are trade-offs that
we need
to consider before we choose
a sorting algorithm we also have to
consider
things like the time complexity of the
algorithm
and also the resource usage
for example the memory usage that
the sorting algorithm will require
so once we've then selected a sorting
algorithm we actually need
to implement it and as you should also
recall
sorting algorithms are relatively
complex so this
allows for quite a lot of scope in terms
of
bugs and errors in our implementation
so all of this then requires a lot of
effort
on the part of the programmer now in
a logic programming language we follow a
non-procedural approach
so what this means then is that we
describe
the characteristics that a sorted list
has
and not the process that will be
followed
in terms of rearranging the elements
contained in the list
so over here we have an example of how
we would express the
nature of a sorted list in terms
of an unsorted list in a logic
programming language
so here we will then assume that a list
contains
n elements and that we can index these
elements
starting at index 1 and ending at index
n so the first thing we need to do is
we need to specify
what a sorted list looks like
so we can say then that a list
is sorted if for all values of
j where j is then acting as our index
within the list
which then takes on the values 1 up to
1 less than
n then the value contained in the list
at index j is less than
or equal to the value contained in the
list at index j
plus 1. so what we are essentially
expressing here
is that the list must be a monotonically
increasing set of values so in other
words
each next value is either the same as
the previous value
or larger than the previous value
so now we're ready to express what a
sorted list looks like in terms of an
unsorted list so here old list
is our unsorted list and new list is
a sorted version of that list so what we
are specifying here then
is that new list is a sorted version of
old list
if new list is a permutation
of the old list in other words a
reordering of the values contained in
the old list
and our new list is
sorted according to our previous
definition of what a
sorted list looks like so this is then
a relatively compact expression of what
a sorted list looks like however
what i would like you to consider at
this point
and pause the video to think about this
for a moment
is what drawback then is associated with
this kind of expression of how to
sort a list
so essentially we need to think about
how prolog will actually then go about
sorting this list and essentially
what will happen is that prolog or any
other logic programming language
will generate a permutation
of our old list and then it will test
to see whether this permutation is
sorted
if this permutation is not sorted then
it generates
another permutation of the old list and
then it again
checks to see whether this permutation
is sorted
so what this means is we may potentially
run
through every possible permutation of
the old list
before we happen to stumble on a sorted
version
of that list this is obviously then
a very inefficient approach to sorting
so this is one of the major drawbacks
then associated with logic programming
in general
is that it's often fairly
easy to express the characteristics of a
solution
but the inferencing process that runs in
the background
may follow a very inefficient procedure
in order to arrive at a solution
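(To make this concrete, a minimal generate-and-test sketch of the idea in Prolog. permutation/2 is provided by common Prolog systems such as SWI-Prolog's lists library; sorted/1 and naive_sort/2 are illustrative names, not code from the lecture.)
```prolog
% A list is sorted if each element is less than or equal to the next.
sorted([]).
sorted([_]).
sorted([X, Y | Rest]) :- X =< Y, sorted([Y | Rest]).

% NewList is a sorted version of OldList if it is a permutation of
% OldList that satisfies sorted/1. The inferencing engine may generate
% and test many permutations before stumbling on a sorted one.
naive_sort(OldList, NewList) :-
    permutation(OldList, NewList),
    sorted(NewList).

% Example query:
% ?- naive_sort([3, 1, 2], L).
% L = [1, 2, 3].
```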
right so that then concludes our
discussion on
logic programming from an overall
philosophical perspective
and in the next lecture we will move on
to actual concrete logic programming
in the prolog programming language