COS 333: Chapter 7, Part 1
Summary
TL;DR: In the previous lecture, we completed our discussion of data types. This lecture begins with an introduction to expressions and assignment statements, focusing primarily on arithmetic expressions, overloaded operators, and type conversions. Key topics include the evaluation order of operators and operands, operator precedence, and associativity rules. The lecture also covers implementation details in Ruby, Scheme, and Common Lisp, as well as conditional expressions in C-based languages. Additionally, it discusses operand evaluation, operator overloading, and type conversions, concluding with common errors in expressions and their implications.
Takeaways
- The lecture introduces Chapter 7, focusing on expressions and assignment statements, emphasizing the importance of understanding details specific to different programming languages.
- Expressions are the fundamental means of specifying computations in programming languages, including arithmetic and logical tests, with evaluation order depending on operator precedence and associativity rules.
- Arithmetic expressions are central to programming, with operators and operands defining the computation, and the use of parentheses to modify evaluation order.
- The design of arithmetic expressions in programming languages involves considerations of operator precedence, associativity, operand evaluation order, and the allowance of user-defined operator overloading.
- Operator overloading, both built-in and user-defined, can aid readability but may also lead to confusion and errors if not implemented carefully.
- Type conversions, both narrowing and widening, are crucial for mixed-mode expressions, with the latter generally being safer due to the preservation of value magnitude.
- Errors in expressions can arise from type checking, coercion, inherent arithmetic limitations, and computer arithmetic limitations like overflow and underflow.
- The lecture touches on how different programming languages handle expressions, such as Ruby implementing all operators as methods, allowing for overrides, and Scheme treating operators as function calls.
- The importance of operand evaluation order is highlighted, noting that it can affect the efficiency of a program and the potential for side effects.
- Assignment statements, which are central to imperative programming languages, model the pipelining concept of the von Neumann architecture, connecting memory and the processing unit.
- The upcoming lecture will continue the discussion on expressions, covering relational and boolean expressions, short-circuit evaluation, and conclude with assignment statements.
Q & A
What is the main focus of the lecture on chapter 7?
-The lecture on chapter 7 primarily focuses on expressions and assignment statements, including a detailed discussion on arithmetic expressions, operator overloading, and type conversions.
Why is it important to pay attention to details when discussing expressions in different programming languages?
-It is important to pay attention to details because specific programming languages may implement certain features differently, which can affect how expressions are evaluated and behave in those languages.
What are the two main kinds of expressions mentioned in the script?
-The two main kinds of expressions mentioned are arithmetic expressions, which perform mathematical computations, and relational and boolean expressions, which allow for constructing logical tests and dealing with their outcomes.
How does the von Neumann architecture relate to imperative programming languages?
-Imperative programming languages are closely modeled on the von Neumann architecture, which features a shared memory for both instructions and data, and a pipeline connecting memory and the processing unit, reflecting the central role of assignment statements and variables in these languages.
What is the significance of operator precedence in evaluating arithmetic expressions?
-Operator precedence defines the order in which adjacent operators of different precedence levels are evaluated in an arithmetic expression, ensuring that operations are performed in a structured and predictable manner.
What are the typical associativity rules for binary operators in programming languages?
-The typical associativity rules for binary operators use left-to-right associativity, meaning that operations at the same precedence level are performed from left to right, except for power operators which usually have right-to-left associativity.
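As a concrete check of these rules, here is a small Python snippet (Python's `**` power operator is right-associative, like the power operators described in the answer above):

```python
# Python's ** operator is right-associative, so a ** b ** c
# is grouped as a ** (b ** c), not (a ** b) ** c.
a, b, c = 2, 3, 2

right_assoc = a ** (b ** c)   # 2 ** 9 = 512
left_assoc = (a ** b) ** c    # 8 ** 2 = 64

print(a ** b ** c == right_assoc)  # True: matches right-to-left grouping
print(a ** b ** c == left_assoc)   # False for these values
```

Evaluating left to right would compute `(a ** b) ** c`, which is a different value, which is why right-to-left grouping is the conventional choice for exponentiation.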
How does Ruby implement its operators differently from other programming languages?
-In Ruby, all operators are implemented as methods, which allows for the possibility of overriding them by the programmer, thus changing how the operators behave, unlike in languages such as C++ or Java.
Why is operator overloading considered both beneficial and problematic in programming languages?
-Operator overloading can be beneficial as it allows for more natural expressions of operations, especially with user-defined data structures. However, it can also be problematic as it may lead to a loss of readability, difficulty in understanding the operator's purpose without knowing the operand types, and potential for nonsensical operations if not implemented carefully.
What are the two types of type conversions discussed in the script, and how do they differ in safety?
-The two types of type conversions are narrowing conversions, which are considered unsafe because they can result in a loss of value magnitude, and widening conversions, which are generally safe as they preserve the value's magnitude, though potentially with some loss of accuracy.
Why might a programming language choose to use widening conversions over narrowing conversions in expressions?
-Widening conversions are generally preferred over narrowing conversions because they are safe as they preserve the magnitude of the original value, even if some accuracy might be lost, whereas narrowing conversions can result in a significant change or loss of the original value.
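The difference can be illustrated in Python, which applies widening coercion automatically in mixed-mode expressions but requires narrowing conversions to be written explicitly:

```python
# Mixed-mode expression: the int operand 1 is implicitly widened to 1.0.
total = 1 + 2.5
print(total)              # 3.5, a float

# Explicit narrowing conversion: the fractional part is discarded.
narrowed = int(3.9)
print(narrowed)           # 3: magnitude information is lost

# Widening generally preserves magnitude, but can still lose accuracy:
big = 2 ** 53 + 1
print(float(big) == big)  # False: a 64-bit float has only 53 mantissa bits
```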
What are some of the errors that can arise due to the limitations of arithmetic in expressions?
-Errors due to the limitations of arithmetic in expressions can include division by zero, which results in an undefined value, and overflow or underflow, which occur when a computation produces a value that is outside the representable range of the type.
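Both failure modes are easy to reproduce in Python, where integer division by zero raises an exception and floating-point overflow saturates to infinity:

```python
import math

# Division by zero: Python reports it as a runtime error.
try:
    result = 1 // 0
except ZeroDivisionError:
    result = None            # the value is undefined, so record no value

# Floating-point overflow: the computed value exceeds the float range.
overflowed = 1e308 * 10
print(result)                # None
print(math.isinf(overflowed))  # True
```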
Outlines
Introduction to Chapter 7: Expressions and Assignment Statements
This paragraph sets the stage for the lecture on Chapter 7, which focuses on expressions and assignment statements. It emphasizes the familiarity of the material to the audience but also the importance of paying attention to details specific to different programming languages. The lecturer intends to cover arithmetic expressions, overloaded operators, and type conversions, highlighting the need to understand the order of operator and operand evaluation. The discussion is grounded in the context of imperative programming languages and their relation to the von Neumann architecture, including the concepts of memory, variables, and the pipeline connecting memory and the processing unit.
Delving into Arithmetic Expressions and Their Evaluation
The second paragraph delves into the specifics of arithmetic expressions, which are fundamental to programming as they represent calculations. It explains the components of arithmetic expressions, including operators, operands, parentheses, and function calls. The paragraph also introduces design issues related to arithmetic expressions, such as operator precedence and associativity rules, operand evaluation order, and the implications of operand side effects. The discussion on user-defined operator overloading and type mixing in expressions is set to be explored later in the lecture, with an emphasis on the importance of these elements in high-level programming language design.
Operator Precedence and Associativity in Expressions
This paragraph discusses the importance of operator precedence and associativity in determining the evaluation order of arithmetic expressions. It outlines the typical precedence levels found in modern high-level programming languages, starting with parentheses at the highest level, followed by unary operators, exponentiation, multiplication and division, and finally addition and subtraction at the lowest level. Associativity rules are also explained, which dictate the order of evaluation for operators of the same precedence level. The paragraph also touches on the unique case of the APL programming language, which has only one precedence level with right-to-left associativity for all operators.
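These precedence levels can be checked directly. The Python snippet below follows the hierarchy described above, with one caveat worth noting: in Python, unary minus actually binds more loosely than the `**` power operator, a deviation from the typical ordering:

```python
print(2 + 3 * 4)     # 14: multiplication outranks addition
print((2 + 3) * 4)   # 20: parentheses override precedence
print(2 * 3 ** 2)    # 18: exponentiation outranks multiplication
print(-2 ** 2)       # -4: in Python, ** binds tighter than unary minus
```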
Exploring Expression Implementations in Various Programming Languages
The fourth paragraph takes a detour to explore how expressions are implemented in different programming languages. It highlights Ruby's approach where all operators are implemented as methods, allowing for easy overriding by programmers. The paragraph contrasts this with Scheme and Common Lisp, where arithmetic and logic operators are implemented as explicitly called subprograms, with operator precedence built into the function call structure. The discussion also touches on dynamic typing in Scheme and its effect on operator overloading, emphasizing the importance of understanding the underlying mechanisms of expression evaluation in various languages.
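The Scheme form of `a + b * c`, namely `(+ a (* b c))`, can be mimicked in Python with explicit function calls from the standard `operator` module, which makes the point that precedence is encoded in the nesting rather than in operator rules:

```python
import operator

a, b, c = 2, 3, 4

# Scheme/Common Lisp would write a + b * c as (+ a (* b c)).
# The inner application must be evaluated before the outer one can run,
# so no precedence table is needed.
result = operator.add(a, operator.mul(b, c))
print(result)  # 14, same as the infix expression a + b * c
```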
Understanding Conditional Expressions and Their Impact
This paragraph introduces conditional expressions, also known as ternary operators, which are used to prevent errors such as division by zero. It explains the syntax and evaluation process of conditional expressions in C-based programming languages, highlighting their brevity and efficiency compared to longer-hand notation. The paragraph encourages the audience to consider the impact of supporting conditional expressions on programming language evaluation criteria, particularly in terms of readability and conciseness.
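Python offers the same construct with a different spelling; the C expression `average = (count == 0) ? 0 : sum / count` corresponds to the sketch below (`average`, `total`, and `count` are illustrative names):

```python
def average(total, count):
    # The conditional expression guards against division by zero:
    # the division is only evaluated when the condition is false.
    return 0 if count == 0 else total / count

print(average(10, 4))  # 2.5
print(average(10, 0))  # 0
```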
Operand Evaluation Order and Operator Overloading
The sixth paragraph discusses the order in which operands are evaluated in expressions and the concept of operator overloading. It outlines the typical evaluation sequence starting with variables, followed by constants, and then parenthesized expressions. The paragraph also addresses the potential issues with operator overloading, such as loss of readability and compiler error detection, and encourages the audience to consider the implications of overloading the ampersand and asterisk operators in C and C++.
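Why operand order matters once side effects are involved can be shown with a short Python example. Python itself guarantees left-to-right operand evaluation; in a language that leaves the order unspecified, the same expression could log these calls in either order:

```python
calls = []

def left():
    calls.append("left")
    return 1

def right():
    calls.append("right")
    return 2

# Both operands are function calls with a side effect (appending to calls),
# so the evaluation order becomes observable.
result = left() + right()
print(result, calls)  # 3 ['left', 'right'] under left-to-right evaluation
```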
User-Defined Operator Overloading and Type Mixing
This paragraph explores user-defined operator overloading in high-level programming languages like C++, C#, and F#, which allows programmers to implement their own meanings for operators. While this can enhance readability, it also introduces potential issues such as the need to understand operand types to determine the operation's outcome and the possibility of multiple sensible implementations for the same operation. The paragraph also delves into the topic of type mixing in expressions and the need for type conversions, including both narrowing and widening conversions.
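A minimal sketch of user-defined overloading in Python, which (like Ruby) dispatches operators to methods; the `Vec2` class is an illustrative name, not from the lecture. The same `+` symbol now adds vectors componentwise, which reads naturally but only makes sense once the reader knows the operand types:

```python
class Vec2:
    """Illustrative 2-D vector with a user-defined + operator."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    # a + b dispatches to a.__add__(b), so user code can redefine +.
    def __add__(self, other):
        return Vec2(self.x + other.x, self.y + other.y)

v = Vec2(1, 2) + Vec2(3, 4)
print(v.x, v.y)  # 4 6
```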
Coercion, Type Conversions, and Their Pitfalls
The eighth paragraph examines the process of coercion, which is an implicit type conversion that allows mixed-mode expressions to be evaluated. It discusses the impact of coercion on compiler type error detection and the preference for widening conversions over narrowing conversions due to safety concerns. The paragraph also touches on explicit type conversions, or casting, and the potential for errors due to arithmetic limitations and computer arithmetic constraints like overflow and underflow.
Conclusion and Preview of Upcoming Topics
In the final paragraph, the lecturer wraps up the current lecture and provides a preview of the topics to be covered in the next session. The upcoming lecture will continue the discussion on Chapter 7, focusing on relational and boolean expressions, short-circuit evaluation, and assignment statements. This sets the stage for further exploration of expression evaluation and the fundamental constructs of programming languages.
Keywords
Expressions
Assignment Statements
Arithmetic Expressions
Operator Precedence
Associativity
Operand Evaluation
Operator Overloading
Type Conversions
Conditional Expressions
Error Handling
Memory and Variables
Highlights
Introduction to Chapter 7 on expressions and assignment statements, emphasizing the importance of details in programming language implementations.
Explanation of expressions as fundamental means of computation in programming languages, including arithmetic and relational/boolean expressions.
Discussion on the order of operator and operand evaluation in expressions, linked to the von Neumann architecture.
The concept of assignment statements as an abstraction of the pipelining concept in computer architecture.
Deep dive into arithmetic expressions, their significance in the development of early programming languages like FORTRAN.
Design issues in arithmetic expressions, including operator precedence and associativity rules.
Ruby's unique implementation of operators as methods, allowing for operator overloading.
Scheme and Common Lisp's approach to expressions as explicitly called sub-programs, contrasting with operator precedence in other languages.
Conditional expressions (ternary operators) in C-based languages, their syntax and use case to prevent division by zero.
Operand evaluation order and its impact on the evaluation of expressions, especially with functions having side effects.
Operator overloading in languages like C++ and its potential impact on readability and error detection.
Type conversions, distinguishing between narrowing and widening conversions, and their safety.
Mixed mode expressions and the need for coercion, with implications for compiler type error detection.
Explicit type conversions (casting) in C-based languages and their potential risks.
Error types arising from arithmetic limitations, such as division by zero and overflow/underflow.
Anticipatory guidance for the next lecture, covering relational and boolean expressions, short circuit evaluation, and assignment statements.
Transcripts
in the previous lecture we finished our
discussion on chapter 6 which dealt with
data types
in this lecture we will begin our
discussion on chapter 7 which deals with
expressions and assignment statements
and then there will be one additional
lecture in which we will finish this
chapter off
now some of the material that we'll be
covering in this lecture and the next
lecture will be relatively familiar to
you
but please as we go through this
discussion pay attention to details
related to specific programming
languages that possibly differ in terms
of how they implement certain features
compared to the programming languages
that you are familiar with
so these are the topics that we'll be
discussing in today's lecture
we'll begin with a quick introduction
then we'll move on to arithmetic
expressions which will constitute the
main part of our discussion
then we'll look at overloaded operators
and how those are dealt with in a number
of programming languages and then
finally we'll look at type conversions
which we've already touched on in the
previous chapters' lectures
so let's begin with a quick introduction
to what we'll be talking about in this
and the next lecture
so we'll start off by looking at
expressions and then in the next lecture
we'll move on to assignments
now what are expressions well they're
basically the fundamental means of
specifying any kind of computation
within a programming language so any
sort of calculation that the computer
needs to perform will be expressed by
means of an expression
now we have different kinds of
expressions
obviously the two main kinds are
arithmetic expressions which perform
some kind of mathematical computation
and then we have relational and boolean
expressions which allow us to construct
logical tests and then deal with the
outcomes of those tests somehow
so in order to understand how
expressions are evaluated we need to be
familiar with the order of operator and
operand evaluation
essentially this deals with what order
will the various operations in an
expression be performed in
now we spoke earlier on in this course
about how imperative programming
languages
are very closely modeled on the von
neumann architecture that modern
computers use
so there we said that very central to
the idea of imperative programming
languages and within the von neumann
architecture is the notion of memory
and in the von neumann architecture we
have both instructions and data that are
loaded into
a shared memory
and so we said that variables then are
abstract representations of areas or
cells within this shared memory
now also we then have the notion of a
pipeline which connects the memory and
the processing unit of the computer
together the processing unit will
actually then perform the calculations
that need to be performed whether these are
arithmetic or logical in nature
and then the pipeline transmits data
from memory to be computed on and also
then transmits results that have been
produced by the processing unit back
into memory for storage
so we said that assignment then is an
abstract modeling of this pipelining
concept within a von neumann
architecture-based computer
so
the essence then of imperative
programming languages to a large extent
is based on the dominant role of
assignment statements which of course
then require variables to be defined as
well
let's now turn our attention to
arithmetic expressions in particular
which we'll spend most of this lecture
discussing
so as we saw earlier on in this course
arithmetic evaluation was one of the
primary motivating factors behind the
development of the very first
programming languages and we only have
to look at the very first high-level
programming language namely fortran in
order to see this
so as we know fortran was developed for
scientific applications
and essentially all of these
applications required some sort of
mathematical processing so very early on
in the development of high-level
programming languages there was a focus
on
coming up with a way to write out
arithmetic expressions and to evaluate
them computationally
so arithmetic expressions then consist
of a number of things first and foremost
we have operators and operands so the
operators express the actual arithmetic
operation that is performed such as
addition or multiplication and we
typically need some kind of symbol or
possibly a group of symbols to represent
these operators
then we have operands so these are the
objects or quantities that are operated
on so for example if we're adding two
values together then we have a left
operand and a right operand and those
are the
expressions for the two values that we
will be adding together
then we optionally have parentheses
which can be used to modify the order of
evaluation in an arithmetic expression
and then optionally as well we
potentially have function calls and
function calls are where things begin to
get a little bit more complex
as with any programming language feature
there are a number of design issues that
we need to consider when providing
support for arithmetic expressions in a
high-level programming language
so the first two design issues that need
to be considered are very closely
related to each other
and they involve for each operator
asking the question what precedence
rules apply and what associativity rules
apply now of course these questions must
both be answered for every operator that
the programming language supports
next we need to consider the order of
operand evaluation now we mentioned this
in chapter 15
and won't be going into it in a lot of
detail here but operand evaluation order
may be based on operator precedence and
associativity rules but we may also
evaluate operands out of order
in the interest of more efficient
execution of a program written in our
high level programming language
then
the next question is are operand
evaluation side effects restricted or
not and if they are restricted in what
way are they restricted this we also
discussed in chapter 15 and so we'll
only very briefly mention it here
you can refer to chapter 15 for further
details on functional side effects
then the next question
is user defined operator overloading
allowed or is it not and if it is
allowed then how is it actually
implemented we'll be talking about that
towards the end of this lecture
and then what type mixing is allowed in
expressions and how will this type
mixing be dealt with which we will talk
about right at the end of this lecture
so we'll begin our discussion on
arithmetic expressions by looking at
operators
now operators can be broadly classified
according to how many operands the
operator has
so we have unary operators if the
operator has only one single operand
this would be for example negating a
value where we only have one operand
which is the value that is being negated
or potentially increment or decrement
operations where we are incrementing or
decrementing a particular value which
would also then be the single operand
for the operator
next we have binary operators so these
have two operands operations like
addition subtraction multiplication and
division are all binary operators
usually here we differentiate between a
left operand and a right operand
then we have ternary operators which
have three operands
and there aren't many of these but we'll
look at them a little bit later on in
this lecture
and in theory it is possible then to
create operators with even more operands
but typically in practice we don't see
anything above ternary operators
we now get to the first of our design
issues related to arithmetic expressions
namely for a particular operator what
are the associated precedence rules
now a high-level programming language
will define a set of precedence rules
and then it will group operators on each
of these levels
so we have then operators that fall on
the same precedence level and we have
operators that fall on higher or lower
precedence levels relative to another
precedence level
now the typical precedence levels for
arithmetic operators that we see in
modern high level programming languages
is that at the highest level we have
parentheses
then one level lower we have the unary
operators whatever those might be that
are supported by the programming
language in question
then we have the asterisk asterisk or
hat operator
this is a power operator so these
are both binary operators and they raise
the left operand to the power of the
right operand
and these operators in whatever form
they appear are supported in languages
like fortran ruby ada and visual basic
then on the next lowest level we have
multiplication and division operators
and finally on the lowest level we have
addition and subtraction operators
so the operator precedence rules then
define the order in which adjacent
operators in other words operators that
appear next to each other in an
arithmetic expression
but are of different precedence levels
are then evaluated now typically the way
that these operator precedence rules are
defined
is that operators that appear higher up
in the hierarchy of precedence levels
will be evaluated first and then the
lower
level
operators on lower precedence levels
will be evaluated until eventually the
lowest precedence level is reached
so for example if we have an arithmetic
expression which is a mixture of
multiplication division addition and
subtraction operators we would then
evaluate the multiplication and division
operators first and then after those
have been evaluated then we would
evaluate the addition and subtraction
operators
the next design issue that we need to
consider is what are the associativity
rules that are linked to operators
so operator associativity rules define
the order in which adjacent operators
with the same precedence level are
evaluated once again when we're talking
about adjacent operators here we're
talking about operators that appear next
to each other
in the same arithmetic expression
so for example if we consider the
precedence levels that we looked at on
the previous slide
then an example of where operator
associativity rules come into play is if
we have an arithmetic expression where
addition and subtraction operators
appear next to each other
operator associativity comes into play
here because addition and subtraction
lie on the same precedence level
now in general operator associativity
can be left to right which we would
usually call left associative or right
to left which we would call right
associative
now typical associativity rules that we
see in practice
would usually use left to right
associativity when we are talking about
binary operators
and the exception here would come in if
the programming language supports a
power operator such as asterisk
asterisk or the hat operator and in this
case the associativity would be right to
left in order to see why the
associativity would be right to left in
the case of the power operator all
you need to do is write down a simple
expression such as
a raised to the power of b raised to the
power of c and then consider does it
make sense to evaluate it from left to
right or right to left
now sometimes unary operators will
associate right to left but this depends
largely on the unary operator
some do
oftentimes associate from left to right
as well and it also depends on the
specific programming language that we
are considering so this would come into
play in fortran and the c based
languages both of which have a number of
unary operators
now the apl programming language is
different to all other
programming languages because it has
only one precedence level and all of
the operators associate from right to
left
so what i would like you to do at this
point is to pause the video
and try to answer
which language evaluation criteria would
be affected and how would they be
affected by apl only having one
precedence level and all of the
operators associating in the same way
from right to left
now finally parentheses are also
supported in a number of high-level
programming languages and parentheses
can be used in order to override the
precedence and associativity rules in
general as you saw on the previous slide
parentheses occupy the highest
precedence level and are therefore
always evaluated first so we can then
modify how our expressions will be
evaluated by expressly putting in
parentheses for what needs to be
evaluated first
now we'll take a quick detour and look
at how expressions are implemented in
ruby
so in ruby all operators are implemented
as methods and this includes arithmetic
relational and assignment operators
the array indexing operator
the bit level shift operators and the
bitwise logical operators
so this is somewhat different to
what you will have seen in programming
languages that you are familiar with
such as c plus plus or java
now because these operators are methods
it means all of them can actually be
overridden by the programmer and this
allows the programmer then to modify
how the operators will behave
so what i would like you to do at this
point is to pause the video
and try to determine what the impact
will be on the various language
evaluation criteria
that we've been using throughout this
course
we'll continue our brief detour by
looking at how scheme and common lisp
implement their expressions
so in both scheme and common lisp all
arithmetic and logic operators are
implemented as explicitly called sub
programs
this we looked at in chapter 15 and i
won't reiterate that at great length
here
so for example if we have this
expression a plus b multiplied by c
then this is how the expression would be
written in scheme or common lisp
so here we have an application of the
multiplication function which is applied
to two parameters namely b
and c
the result that this function
application yields then becomes the
second parameter
for the addition function and that then
works
on the first parameter namely a and in
the second parameter which is the result
of the application of the multiplication
function
so
plus and asterisk then are the names of
functions
what's also important to note here which
we also discussed in chapter 15
is that operator precedence is built
into the way that these expressions are
written in scheme and common lisp
so for example in a programming language
such as c plus plus or java
we would for this expression need to
know that multiplication has a higher
precedence level and is therefore
performed before
addition
that is not the case in scheme and
common lisp where we simply look at the
evaluation of our functions so before we
can evaluate the plus function we must
evaluate the multiplication function so
that we can get a value for the second
parameter of the addition function
so in other words
associativity
and precedence are built into the way
that expressions are written in these
languages
scheme is also dynamically typed so
because of this operator overloading
doesn't make sense so what i would like
you to do at this point is to pause the
video and try to explain why dynamic
typing has this effect
next we'll take a quick look at
conditional expressions which are
supported in these c based programming
languages now over here we have an
example of the use of a conditional
expression
we have an assignment that is taking
place and this assignment is to the
variable average what appears on the
right hand side of the assignment
operator is the conditional expression
now conditional expressions are
sometimes referred to as ternary
operators because there are three
operands
so the symbols associated with the
ternary operator
are the question mark symbol and the
colon symbol
now what appears to the left of the
question mark is the first operand and
this must be a boolean expression of
some sort so it must evaluate to a true
or a false value
this we refer to as the condition of the
conditional expression or ternary
operator so if this condition evaluates
to true then the entire
conditional expression or ternary
operator evaluates to what appears
between the question mark and the colon
in this case zero
if the condition is false then the
entire conditional expression or ternary
operator will evaluate to what appears
to the right of the colon
so what we have here then is a situation
where we will test the value of count
now if count is in fact zero then the
value zero will be assigned to average
if count is not equal to zero then we
will assign to average the result of sum
divided by count
so in other words this conditional
expression is preventing a division by
zero if the denominator count has a
value of zero
now we can write out the conditional
expression with the assignment in a
longer hand format which we have over
here we can see exactly the same
evaluation takes place here so we test
our condition
is count equivalent to zero if it is in
fact zero then we assign zero to average
otherwise we assign sum divided by count
to
average
so what i would like you to do at this
point then is to pause the video and try
to answer what effect support for
conditional expressions has
on the programming language evaluation
criteria that we've used through this
course
what you should focus on
is how brief a conditional expression is
compared to the longer hand notation
and this will then inform your answer
Now we get to the third of our design issues, which relates to operand evaluation order; in other words, given a specific operator, what order will the operands be evaluated in?

Operand evaluation order typically works in the following fashion. First of all, we deal with variables, and dealing with a variable will always involve fetching the value for that variable from memory. Next, we deal with constants; this will sometimes be a fetch from memory, and sometimes it will be a machine language instruction that directly encodes the value. This largely depends on what kind of constant we are dealing with, in other words, whether the value is statically or dynamically bound to the constant. If the value is dynamically bound, then typically we will be working with a fetch from memory; if the value is statically bound, then we may possibly be working with a fetch from memory, but we may also be working with a machine language instruction that directly encodes the value, since it can be determined prior to runtime.
In the third place, we evaluate parenthesized expressions: for each parenthesized expression, we evaluate all of the contained operands and operators first, and of course there may be nested parenthesized expressions which will also need to be evaluated.

As far as any further evaluation goes, it doesn't matter what order the operands are evaluated in, unless we have a situation where one of the operands, or both, are functions with side effects. We covered this situation in chapter 15, so I won't reiterate it here, but please refer to chapter 15 if you need a recap on these concepts.
The next design issue relates to whether operator overloading is allowed in a programming language. So what is operator overloading? Very simply, it is the use of an operator for more than one purpose.

Now, some kinds of operator overloading are very common and safe, and are in fact provided by the programming language itself. For example, the addition operator can be used for both int and float addition. Here we are using exactly the same operator, the plus operator, for adding two integer values together, but we use the same operator for adding two float values together. This is considered safe because the semantics, the meaning associated with adding integers and adding float values, is essentially the same.
Some built-in operator overloadings are more problematic. A good example of this is the ampersand operator in C and C++. There are in fact three different uses for the ampersand operator. Firstly, the ampersand can be used as a binary operator, in which case it represents a bitwise logical AND operation. However, it can also be used as a unary operator in two different contexts. In a declaration, an ampersand indicates that we are declaring a reference; for instance, if we have a declaration like int &a, then we are defining a reference a, which is a reference to an integer value. The ampersand is also used as the address-of operator; for example, if we are assigning to a pointer and we have a variable b, then we can get the address of b by typing &b. This gives us an address, which we can then assign to a pointer.
Obviously, these multiple uses of the ampersand operator can be a problem, mostly because they lead to a loss of readability. Essentially, every time a programmer encounters an ampersand operator, they have to decode what the operator means in that particular context. The code would be more readable if there were a separate symbol for each of these three different operations.

We also have a loss of compiler error detection. For example, if we are typing an expression that uses the ampersand as a binary operator, but we're typing quickly and forget the first operand, then this error will go undetected and the ampersand will simply be treated as a unary operator. This may lead to a compilation error elsewhere, or possibly even a runtime error, depending on the nature of our program code.
So what I would like you to do now is to pause the video and consider the asterisk operator in C and C++, which has a fairly similar problem associated with it. What you should do is determine what the different uses of the asterisk operator are, and what problems these can then lead to.
Now, some high-level programming languages, such as C++, C#, and F#, support user-defined overloaded operators. These are overloadings of existing operators that are implemented by the programmer themselves. This can greatly aid readability when the overloadings are done sensibly.

For example, let's consider a situation where we have a data structure that is implemented using some kind of object, and we want to allow addition of this data structure to another instance of the same data structure. We could write an add method, but if our programming language supports user-defined overloaded operators, we could instead define an addition operator that allows these data structures to be added to one another. The addition operator is a much more natural way of expressing this addition operation.
However, there are a number of potential problems associated with overloaded operators. Firstly, very simply, the programmer can define nonsensical operations; for example, when overloading an addition operator, they might implement it as a multiplication. More subtly, readability can also suffer even when user-defined overloaded operators are used in a sensible fashion. This relates to the fact that, in order to understand what an operator does, the programmer must find the types of the operands. So if we just have an addition that is taking place, they need to determine the types of the left and right operands in order to figure out what will actually happen when this addition executes.
Also, there may be multiple different sensible operator implementations. For example, if we are defining a list data structure, then an addition between two lists may mean a concatenation, or it may mean a pairwise addition between the elements of the two lists. Both of these implementations are equally sensible, so this may require the programmer to actually look at the definition of the addition operator in order to determine which operation will be performed when the addition is executed.
Finally, we get to the last of our design issues, which is what kind of type mixing is allowed within expressions. Before we can consider this question, though, we need to look at type conversions. There are two kinds of type conversions: narrowing conversions and widening conversions.

With a narrowing conversion, an object is converted to a type that cannot include all of the values of the original type. This is not always safe, because a value's magnitude may be lost. An example of this is a conversion from a float to an int. For some of the values that a float can take on, we can get at least an approximate representation in integer form, but as we know, the range of float values is much wider than that of int values. So there are values outside the range of an integer that cannot be safely converted to an int, and their magnitude will be lost in that case. In other words, we will get a grossly different value if we convert a float value that is outside of this range to an integer value.
Widening conversions, on the other hand, convert an object to a type that includes at least approximations of all of the original type's values. These conversions are generally safe because at least the value's magnitude is preserved, but we may potentially lose some accuracy. An example of this is when we convert an int to a float value. Because the range of int values falls within the much wider range of float values, we can represent every int value as a float; however, we may not be able to represent all of them accurately, because some whole numbers can only be represented as float values that are approximately, but not exactly, equal to the whole value. So, generally speaking, narrowing conversions are considered unsafe, while widening conversions are considered to be generally safe.
Now, a mixed-mode expression is an expression that has operands of different types. For example, if we have an expression such as a + b, where the type of a is int and the type of b is float, then a + b is considered to be a mixed-mode expression. The reason this matters is that the addition can't take place as is: at the machine level we can only add two integer values to each other, or two float values to each other, but there isn't a machine-level operation that can add an int to a float.

So in this case, a coercion needs to take place. A coercion is an implicit, or automatic, type conversion of operands, and this is what allows a mixed-mode expression to be evaluated. In our example over here, we need to perform a conversion of either a or b: we either convert a to a float, in which case we are adding two float values together, or we convert b to an int, in which case we are adding two integer values to each other.
Now, in general, coercion decreases a compiler's type error detection ability. So what I would like you to do is pause the video once more at this point and try to answer what kinds of errors can't be detected.

Practically speaking, many programming languages will coerce all numeric types in expressions where a coercion is necessary, and they will generally use widening conversions. What I would like you to do is to pause the video at this point and explain why widening conversions are used rather than narrowing conversions.
Now, in Ada there is almost no coercion that takes place within expressions. For example, there is no integer-to-float multiplication; instead, the programmer needs to explicitly convert either the integer to a float, or the float to an integer, before the multiplication can take place. So what I would like you to do at this point is pause the video and try to determine what this implies, particularly in the context of the programming language evaluation criteria that we've been using so far in this course.

In ML and F#, there are in fact no coercions at all in expressions, and so this has a similar implication to what you will have answered in the case of Ada; we just have a more extreme version of it.
I briefly mentioned explicit type conversions on the previous slide. Explicit type conversions are programmer-specified type conversions, and they may be either narrowing or widening conversions. The compiler might, depending on the programming language, give a warning if a narrowing conversion is performed and the conversion will significantly change the value of an operand.

In the C-based programming languages, this explicit conversion is referred to as casting, and here we have an example of a cast being performed: we have a variable named angle, and we specify the type before the variable name in parentheses, which will convert angle into an integer value. F#, which is a functional programming language, uses a syntax similar to a function call; this performs the same kind of conversion, where sum will be converted to a floating-point type.
We'll finish this lecture off by looking at the kinds of errors that can arise when we deal with expressions. We've already looked at errors that can occur when we perform type checking and coercion, and we won't go into these any further at this point.

There are also errors that are inherent to the limitations of arithmetic. For example, division by zero produces an undefined result, and if we attempt to perform a computation with an undefined result, then we'll get some kind of error. What I would like you to do at this point is to pause the video and try to think of other errors that could occur in expressions due to the inherent limitations of arithmetic.
Finally, we have errors that can occur due to the limitations of computer arithmetic. For example, positive or negative overflow can occur, and this happens when a computation produces a value larger in magnitude than the type can actually represent. We also have underflow, which occurs when we produce a nonzero result closer to zero than the smallest value a particular type can represent; this typically occurs with floating-point values.

Generally, these kinds of errors manifest as run-time faults, and this may then require some kind of exception handling mechanism to be in place. We'll get to exception handling later on in the course.
All right, so that concludes our discussion for this lecture. In the next lecture we will continue with this chapter: we'll be looking at relational and boolean expressions, as well as short-circuit evaluation, and then we'll finish off the chapter with a discussion of assignment statements.